
Xuefeng Jiang^{1,2,∗}, Fangyuan Wang^{1,∗}, Rongzhang Zheng^{1,∗}, Han Liu^{1},

Yixiong Huo^{1}, Jinzhang Peng^{1}, Lu Tian^{1}, Emad Barsoum^{1}

^{∗}Xuefeng Jiang, Fangyuan Wang and Rongzhang Zheng contributed equally.
^{1}Advanced Micro Devices (AMD), Inc. {xuefeng.jiang, fangyuan.wang, rongzhang.zheng, han.liu, yixiong.huo, jinz.peng, lu.tian, emad.barsoum}@amd.com
^{2}Institute of Computing Technology, Chinese Academy of Sciences and University of Chinese Academy of Sciences. {jiangxuefeng21b}@ict.ac.cn

###### Abstract

Precise localization is of great importance for the autonomous parking task, since it serves the downstream planning and control modules and thus significantly affects overall system performance. In parking scenarios, dynamic lighting, sparse textures, and unstable global positioning system (GPS) signals pose challenges for most traditional localization methods. To address these difficulties, we propose VIPS-Odom, a novel semantic visual-inertial odometry framework for underground autonomous parking, which adopts tightly-coupled optimization to fuse measurements from multi-modal sensors and solve the odometry. VIPS-Odom integrates parking slots detected from the synthesized bird's-eye-view (BEV) image with traditional feature points in the frontend, and conducts tightly-coupled optimization in the backend with joint constraints introduced by measurements from the inertial measurement unit, wheel speed sensor, and parking slots. We develop a multi-object tracking framework to robustly track parking slots' states. To demonstrate the superiority of our method, we equip an electric vehicle with the related sensors and build an experimental platform based on ROS2. Extensive experiments demonstrate the efficacy and advantages of our method over other baselines in parking scenarios.

## I Introduction

Simultaneous localization and mapping (SLAM) is a significant task in many fields, including virtual reality and autonomous driving. In the automotive field, SLAM modules aim to achieve precise and real-time localization so that other modules like planning and control can respond in time. Many popular visual-inertial localization methods [1] obtain feature points from camera images, incorporate other sensors such as the inertial measurement unit (IMU) and wheel speed sensor (WSS), and achieve state-of-the-art performance.

However, as shown in Fig 1, traditional visual methods suffer from unstable feature tracking, which is often caused by light reflection and frequently moving objects. These methods usually cannot extract enough effective feature points in texture-less parking environments with dynamic lighting, which brings challenges to the autonomous parking task, including automated parking assistance (APA) and automated valet parking (AVP). APA aims to park the vehicle into a nearby vacant parking slot, while AVP focuses on long-distance navigation around the parking lot to find a vacant parking slot. Both tasks have high requirements on localization precision. In practice, insufficient and unstable feature points in complicated parking environments cause accumulated drifts, and even failures in extreme cases [10]. Though there are methods to mitigate accumulated drifts, like loop closure detection [5], designing localization methods with high precision and stability in such complicated scenarios remains an open problem [10].

To improve localization performance in parking scenarios, a feasible strategy is to enable and fuse more sensors' measurements to accomplish more accurate and robust localization. In recent years, it has become a trend to exploit semantic elements from the scene to optimize the localization estimation of the vehicle [27, 26, 28]. The primary advantage of these methods is that they compensate for the weakness of traditional feature point detectors and provide more constraints for localization estimation.

In this work, we propose VIPS-Odom, a semantic visual-inertial optimization-based odometry system that reduces accumulated localization errors by dynamically incorporating detected parking slot corner points as extra robust feature points in the SLAM frontend, while simultaneously exploiting parking slot observations as semantic constraints in the SLAM backend optimization. Meanwhile, different from previous works, we robustly track and maintain all parking slots' states under a multi-object tracking framework. Our method achieves higher localization precision than other baseline methods for autonomous parking. Our main technical contributions can be summarized as follows:

- 1.
To tackle difficulties in the underground environment, we propose to exploit parking slots as extra high-level semantic observations to assist SLAM. To the best of our knowledge, VIPS-Odom is the first to provide a holistic solution that fuses the parking slot observation into both the SLAM frontend and backend in an explicit way.

- 2.
We equip an electric vehicle with related sensors as our experimental platform and conduct extensive real-world experiments over both long-distance and short-distance trajectories in different underground parking lots, which is more extensive than previous related works for autonomous parking scenarios.

- 3.
Experimental results reflect the superiority and stability of VIPS-Odom against other baselines, including inertial-based EKF, traditional visual-inertial optimization-based VINS, semantic visual-inertial optimization-based VISSLAM and LiDAR-based A-LOAM.

## II Related Work

### II-A Multi-sensor SLAM

There are several well-known localization methods, including the Extended Kalman filter (EKF) [16], MSCKF [31], VINS [1], ORB-SLAM [29] and LOAM [25, 23]. EKF [16] is a classical loosely-coupled framework for fusing measurements from various sensors. A-LOAM [23] calculates the odometry via LiDAR point cloud registration [25], which requires an expensive LiDAR sensor; this high cost makes LiDAR an impractical choice for standard L2 parking applications. For commercial-grade autonomous parking systems, it is preferable to use the camera, inertial measurement unit (IMU) and wheel speed sensor (WSS) instead of the expensive LiDAR. Thus, visual-inertial SLAM (VI-SLAM) is considered more promising for commercial use. According to how they fuse multi-sensor measurements, VI-SLAM systems can be categorized into loosely-coupled and tightly-coupled methods. The former updates the pose estimation with measurements from various sensors independently, while the latter tightly associates those measurements to jointly optimize the odometry, which often leads to more precise localization [26]. VINS [1] is a representative state-of-the-art open-source method, which utilizes the front-view camera and IMU to jointly optimize odometry in a sliding window composed of several keyframes. VI-SLAM systems depend on low-level visual feature points or directly utilize the intensity of pixels. However, both low-level visual features and image pixels suffer from instability and inconsistency when navigating parking environments [28, 26].
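To make the loosely-coupled idea concrete, the sketch below (our own toy example, not code from any cited system) runs a minimal 1D EKF that propagates a [position, velocity] state with IMU accelerations and then corrects the velocity with a single WSS reading; all noise levels are illustrative.

```python
import numpy as np

# Minimal 1D loosely-coupled EKF sketch: the state [position, velocity] is
# propagated with IMU acceleration, then corrected independently with a WSS
# speed measurement, mirroring how a loosely-coupled filter treats each
# sensor's measurement on its own.

def predict(x, P, accel, dt, q=1e-3):
    """Propagate state with IMU acceleration over dt seconds."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-acceleration motion model
    x = F @ x + np.array([0.5 * accel * dt**2, accel * dt])
    P = F @ P @ F.T + q * np.eye(2)            # inflate covariance by process noise
    return x, P

def update_wss(x, P, speed, r=1e-2):
    """Correct the velocity component with a wheel-speed measurement."""
    H = np.array([[0.0, 1.0]])                 # WSS observes velocity only
    y = speed - (H @ x)[0]                     # innovation
    S = (H @ P @ H.T)[0, 0] + r
    K = (P @ H.T / S).ravel()                  # Kalman gain
    x = x + K * y
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for _ in range(100):                           # 1 s of IMU data at 100 Hz
    x, P = predict(x, P, accel=1.0, dt=0.01)   # constant 1 m/s^2
x, P = update_wss(x, P, speed=1.0)             # one WSS correction
```

In a tightly-coupled system, by contrast, raw measurements from all sensors enter one joint optimization rather than separate, per-sensor update steps.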

### II-B Semantic SLAM for Autonomous Parking

There are some attempts to incorporate semantic features into VI-SLAM systems. SLAM++ [18] is an early attempt to build a real-time object-oriented SLAM system which constructs an explicit graph of objects. [17] proposes a SLAM system where semantic objects in the environment are incorporated into a bundle-adjustment framework. Mask-SLAM [2] improves feature point selection with the assistance of semantic segmentation to exclude unstable objects. These methods usually consider semantic features from conventional objects to assist the SLAM task. However, they may not be suitable in complicated parking environments, since parking lots are decorated with scene-specific, unconventional semantic markings (like parking slot lines [26], parking slot numbers [27] and bumper lines [28]).

For autonomous parking scenarios, AVP-SLAM [28] is a representative semantic SLAM system, which includes a mapping procedure and a localization procedure. In the mapping procedure, it constructs a semantic map including parking lines and other semantic elements segmented by a modified U-Net [3]. Based on this constructed map and inertial sensors, poses of the vehicle are estimated in the localization procedure in a loosely-coupled way. However, localization precision drops if the constructed map is not frequently updated, since the appearance of the parking lot changes over time.

Different from AVP-SLAM, there are some attempts to utilize tightly-coupled approaches by exploiting semantic objects. VISSLAM [26] is the first to exploit parking slots as semantic features in the SLAM backend optimization. However, it utilizes DeepPS from Matlab to detect parking slots (PSs) and resorts to a hard-match based strategy to match previously observed PSs, which makes it prone to mismatching. To build a more robust SLAM system for parking and overcome the limitations of VISSLAM, we build our VIPS-Odom system, utilize the ubiquitous YOLOv5 [6] for real-time PS detection, and flexibly and robustly track and maintain all PS states via a multi-object tracking framework instead of the hard-match strategy. To overcome the difficulties of parking environments, we further adopt the observed PSs as robust semantic constraints in the SLAM backend and add their corner points to the frontend to achieve more stable and robust localization. We additionally add wheel encoder measurements into the backend optimization to further improve localization accuracy. We conduct a more diverse comparison than the two most relevant works, AVP-SLAM and VISSLAM, as shown in Tab.I.

## III Methodology

### III-A Sensor Equipment & Calibration

Multi-modal sensor data of VIPS-Odom include visual feature points derived from corner detection, IMU and WSS measurements between two consecutive keyframes [1], and parking slot (PS) observations from BEV images.

Sensor Configuration. Our system (in Fig.2) includes an inertial measurement unit (IMU, mounted at the center of the vehicle), a wheel speed sensor (WSS, mounted near the vehicle's right rear wheel), and four surround-view fisheye cameras mounted at the front, rear, left, and right sides. Images are recorded with a resolution of $1920\times 1080$ pixels. The BEV images, with a resolution of $576\times 576$ pixels, cover an area of $11.32m\times 11.32m$. A $360^{\circ}$ LiDAR is also mounted on the vehicle roof, producing point cloud data to support LiDAR-SLAM evaluation for A-LOAM [23].

Sensor Calibration. Since various sensor measurements are tightly coupled in VIPS-Odom, we carefully calibrate related parameters. We utilize an open-source toolbox [21] to optimize the intrinsics of the four fisheye cameras. For extrinsic calibration, we follow [20] to refine the camera-IMU and IMU-WSS transformations from coarse initializations.

### III-B BEV Image Generation

We adopt inverse perspective mapping (IPM) [28] to construct the bird's-eye-view (BEV) image from the four surround-view fisheye cameras. The relationship among the body, image and BEV coordinates can be formulated as:

$\begin{bmatrix}x_{v}\\ y_{v}\\ 1\end{bmatrix}=T_{c}^{v}\,\pi^{-1}\left(\begin{bmatrix}u_{f}\\ v_{f}\\ 1\end{bmatrix}\right)=H_{b}^{v}\begin{bmatrix}u_{b}\\ v_{b}\\ 1\end{bmatrix}$ | (1) |

where $\pi^{-1}(\cdot)$ is the inverse of the fisheye projection model, $T_{c}^{v}$ is the extrinsic matrix of the camera with respect to the body coordinate, and $H_{b}^{v}$ denotes the transformation from the BEV coordinate to the body coordinate. $[x_{v},y_{v}]^{\top}$, $[u_{f},v_{f}]^{\top}$ and $[u_{b},v_{b}]^{\top}$ stand for locations in the body, fisheye camera and BEV coordinates respectively.
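As a sketch of how the homography in Eq. 1 maps a BEV pixel into the body frame, the code below builds an illustrative $H_{b}^{v}$ consistent with a 576×576 BEV image covering 11.32 m × 11.32 m centered on the vehicle; the axis conventions and scale are our assumptions, not the calibrated parameters.

```python
import numpy as np

# Illustrative mapping of a BEV pixel [u_b, v_b, 1] to body-frame metric
# coordinates via a homography H_b^v (Eq. 1). The scale assumes a 576x576
# BEV image covering 11.32 m x 11.32 m centered on the vehicle; the axis
# signs are an assumed convention, and the true H_b^v comes from calibration.
m_per_px = 11.32 / 576.0
H_b_v = np.array([
    [0.0, -m_per_px, 11.32 / 2.0],   # assumed: image row grows opposite to x_v
    [-m_per_px, 0.0, 11.32 / 2.0],   # assumed: image col grows opposite to y_v
    [0.0, 0.0, 1.0],
])

def bev_to_body(u_b, v_b):
    p = H_b_v @ np.array([u_b, v_b, 1.0])
    return p[:2] / p[2]               # normalize homogeneous coordinates

center = bev_to_body(288.0, 288.0)    # the BEV image center maps to the body origin
```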

### III-C Optimization Formulation

Generally, VIPS-Odom is a tightly-coupled optimization-based SLAM system which integrates measurements from multi-modal sensors. The overall framework of VIPS-Odom is shown in Fig.3. Herein we illustrate the optimization formulation of VIPS-Odom. $\mathcal{Z}$ denotes the visual feature points captured in the front-view fisheye camera, $\mathcal{O}$ denotes the observations of parking slot centers in the world frame, and $\mathcal{M}$ and $\mathcal{W}$ denote the IMU and WSS measurements. We aim to solve for the poses $\mathcal{T}$, the map keypoints $\mathcal{P}$ matched with $\mathcal{Z}$, and the locations of PS centers $\mathcal{L}$. The optimization objective can be defined as follows:

$\small\{\mathcal{L},\mathcal{T},\mathcal{P}\}^{*}=\arg\max_{\mathcal{L},\mathcal{T},\mathcal{P}}p(\mathcal{L},\mathcal{T},\mathcal{P}|\mathcal{O},\mathcal{Z},\mathcal{M},\mathcal{W}).$ | (2) |

Our main target is to get more precise estimation of $\mathcal{T}$ via multi-sensor observation. Following [26], we reformulate $p$ with Bayes’ theorem [9] as follows:

$\small p(\mathcal{L},\mathcal{T},\mathcal{P}|\mathcal{O},\mathcal{Z},\mathcal{M},\mathcal{W})\propto p(\mathcal{L},\mathcal{T},\mathcal{P})\,p(\mathcal{O},\mathcal{Z},\mathcal{M},\mathcal{W}|\mathcal{L},\mathcal{T},\mathcal{P}).$ | (3) |

We further factorize $p$ by separating $\mathcal{O}$ from other modalities since visual keypoints $\mathcal{Z}$ and PS observations $\mathcal{O}$ are obtained from two independent sensor measurements:

$\displaystyle p(\mathcal{L},\mathcal{T},\mathcal{P}|\mathcal{O},\mathcal{Z},\mathcal{M},\mathcal{W})$

$\displaystyle\propto p(\mathcal{L})p(\mathcal{T},\mathcal{P})p(\mathcal{O}|\mathcal{L},\mathcal{T},\mathcal{P})p(\mathcal{Z},\mathcal{M},\mathcal{W}|\mathcal{L},\mathcal{T},\mathcal{P})$

$\displaystyle=p(\mathcal{L})p(\mathcal{T},\mathcal{P})p(\mathcal{O}|\mathcal{L},\mathcal{T})p(\mathcal{Z},\mathcal{M},\mathcal{W}|\mathcal{T},\mathcal{P})$ | (4) |

$\displaystyle=\underbrace{p(\mathcal{T})p(\mathcal{P})p(\mathcal{Z}|\mathcal{T},\mathcal{P})p(\mathcal{M},\mathcal{W}|\mathcal{T})}_{\text{visual-inertial term}}\underbrace{p(\mathcal{L})p(\mathcal{O}|\mathcal{L},\mathcal{T})}_{\text{PS term}},$

where the first bracket relates to visual features and IMU/WSS motion data, and the remainder denotes the PS related term. Given the above analysis, we can perform optimization by jointly minimizing the visual reprojection error, the inertial motion error, and the PS error. VIPS-Odom deals with low-level geometric/motion data and higher-level semantic information simultaneously, which facilitates more stable and accurate localization for parking scenarios. The backend optimization is discussed in Sec.III-F.
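Taking the negative log of Eq. 4 turns the MAP problem into a sum of squared residuals. A toy 1D version (ours, purely illustrative) shows the mechanics: a scalar "pose" is estimated by jointly minimizing a visual-inertial residual and a PS registration residual, each weighted by its measurement confidence.

```python
# Toy 1D MAP estimation: the negative log-likelihood of Eq. 4 becomes a
# weighted least-squares problem. A scalar pose t is constrained by a
# visual-inertial "measurement" and a PS registration "measurement"; in 1D
# the minimizer is the information-weighted average (closed form).
z_vi, w_vi = 2.00, 1.0 / 0.04   # visual-inertial estimate, info = 1/variance
z_ps, w_ps = 2.10, 1.0 / 0.01   # PS registration estimate (more confident)

def cost(t):
    return w_vi * (t - z_vi) ** 2 + w_ps * (t - z_ps) ** 2

t_star = (w_vi * z_vi + w_ps * z_ps) / (w_vi + w_ps)  # closed-form minimizer

# gradient check: the derivative of the joint cost vanishes at t_star
grad = 2 * w_vi * (t_star - z_vi) + 2 * w_ps * (t_star - z_ps)
```

In the real system the residuals are vector-valued and nonlinear, so the minimizer is found iteratively (e.g. by a Gauss-Newton-type solver) instead of in closed form.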

### III-D Parking Slot (PS) Detection & Management

PS Detection. Different from the feature line detection in AVP-SLAM [28] and the Matlab-based model in VISSLAM [26], we exploit a CNN for PS detection, since there are previous successful attempts at using CNNs to detect PSs [8, 11]. Following [11], we develop a light-weight PS detector based on YOLOv5 [6] to detect PSs in BEV images and predict the state of each PS (i.e., occupied or vacant, see Fig.3). Detected PS states later assist autonomous parking planning in choosing a vacant PS. Training details are introduced in Sec.IV-A.

PS Management. We utilize the classic multi-object tracking framework SORT [15] to manage PS states. Parking slots are originally detected in the BEV image frame; detected PS corner points are then converted to the body frame of the ego vehicle (see Eq.1). According to the timestamp at which a PS is detected in the BEV images, we fetch the nearest vehicle pose among the keyframes in the sliding window (see Sec.III-F), and the pose of the PS is converted from the body frame to the world frame. The world frame is initialized at the beginning of the trajectory (see Sec.III-A). Each parking slot can be viewed as a rectangle located at a certain position in the world frame. The slots are also plotted on the constructed map and the BEV images (see Fig.7), which assists downstream tasks in autonomous parking. To assign detections to existing parking slots, we update the tracker status timely. Each target's bounding box geometry is estimated by predicting its new location in the world frame with a Kalman filter [14]. The assignment cost matrix is then computed as the intersection-over-union (IoU) distance between each detection and all predicted bounding boxes of existing targets, and the assignment is solved with the Hungarian algorithm [4, 15]. Additionally, a minimum IoU is imposed to reject assignments where the detection-to-target overlap is less than $0.3$. A matched PS detection shares the same id as the existing PS. We also implement and evaluate the hard match association method used in VISSLAM [26], which pre-defines thresholds for previous PS association and new PS creation; the related analysis is discussed in Sec.IV-F.
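The association step above can be sketched as follows (our own illustration, not the paper's code): axis-aligned world-frame boxes, an IoU cost, an optimal assignment, and the 0.3 minimum-IoU gate. SORT solves the assignment with the Hungarian algorithm; for the handful of slots visible per frame, brute force over permutations behaves identically.

```python
import numpy as np
from itertools import permutations

# SORT-style association sketch for parking slots: boxes are (x1, y1, x2, y2)
# in the world frame; we maximize total IoU between predicted tracks and new
# detections, rejecting pairs whose overlap is below the 0.3 gate.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def associate(tracks, dets, min_iou=0.3):
    """Return {track_idx: det_idx} maximizing total IoU, gated at min_iou."""
    n = min(len(tracks), len(dets))
    best, best_score = {}, -1.0
    for perm in permutations(range(len(dets)), n):
        pairs = {t: d for t, d in zip(range(len(tracks)), perm)
                 if iou(tracks[t], dets[d]) >= min_iou}
        score = sum(iou(tracks[t], dets[d]) for t, d in pairs.items())
        if score > best_score:
            best, best_score = pairs, score
    return best

tracks = [np.array([0.0, 0.0, 2.5, 5.0]), np.array([3.0, 0.0, 5.5, 5.0])]
dets = [np.array([3.1, 0.1, 5.6, 5.1]), np.array([0.1, 0.0, 2.6, 5.0])]
match = associate(tracks, dets)   # track 0 -> det 1, track 1 -> det 0
```

Matched detections inherit the track's PS id; unmatched detections spawn new tracks, exactly as in the SORT pipeline.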

### III-E Frontend Integration with PS

For frontend visual odometry, visual features are detected via the Shi-Tomasi algorithm [7] and we maintain roughly one hundred feature points for each new image. Visual features are tracked by KLT sparse optical flow [12]. Note that this feature point number is much smaller than in other optimization-based SLAM methods, because parking lots are often texture-less with dynamic lighting [28], which implies fewer detectable feature points. Correspondingly, traditional SLAM methods like ORB-SLAM [29] usually show unsatisfying performance in parking scenarios [13].

As shown in Fig.4, we introduce detected parking slot (PS) corner points as extra robust feature points to mitigate the aforementioned issue. We integrate fisheye camera images from multiple perspectives into one BEV image through IPM (Sec.III-B). The points on each BEV image correspond one-to-one to points on the original fisheye images; therefore, each PS feature point in BEV space is paired with a pixel in the front-view fisheye camera through a transformation matrix. After being predicted by the PS detector (see Sec.III-D), these PS feature points in the BEV image frame can be inversely projected into the camera frame as extra feature points according to the transformation between the BEV coordinate and the fisheye camera coordinate (see Sec.III-B). To maintain higher-quality and more robust feature points, we dynamically merge these extra PS feature points into the original Shi-Tomasi visual feature point sequence. These extra feature points benefit localization in the challenging texture-less environment of the parking scene. 2D features are first undistorted and then projected to a unit sphere after passing outlier rejection, following VINS [1]. Keyframe selection is conducted in the SLAM frontend and we follow two commonly-used criteria in VINS [1], mainly according to the parallax between the current frame and previous frames. We also conduct experiments to examine how the number of feature points affects our system's precision in Sec.IV-D.

### III-F Backend Optimization with PS

We elaborate on the backend non-linear optimization that exploits multi-modal sensors' measurements. Based on VINS [1], VIPS-Odom performs optimization within a sliding window. The overall optimization objective can be roughly divided into a visual-inertial term and a PS term (see Sec.III-C).

Visual-inertial Term. The visual-inertial term can be constructed via the formulation of VINS, and we further exploit the wheel speed sensor (WSS) to enrich inertial measurements for complicated environments. Specifically, in each sliding window, VIPS-Odom maintains a set of $K$ keyframes $f_{0},\ldots,f_{K-1}$ to perform backend optimization. Each keyframe $f_{k}$ is associated with a timestamp $t_{k}$ and a camera pose $\mathcal{T}^{k}=[R_{k},t_{k}]\in\mathbb{SE}(3)$. The motion (change of orientation, position and velocity) between two consecutive keyframes can be determined by either pre-integrated IMU/WSS data or visual odometry [1]. Referring to [24], we synthesize measurements of the IMU and WSS by pre-integration. The overall visual-inertial optimization function is formulated as follows:

$\small\mathcal{F}(\mathbf{x})=\sum_{p}\sum_{j}{\mathbf{e}_{r}^{p,j}}^{T}\mathbf{W}_{r}\mathbf{e}_{r}^{p,j}+\sum_{k=0}^{K-2}{\mathbf{e}_{s}^{k}}^{T}\mathbf{\Sigma}_{k,k+1}^{-1}\mathbf{e}_{s}^{k}+\mathbf{e}_{m}^{T}\mathbf{e}_{m},$ | (5) |

where $\mathcal{F}(\mathbf{x})$ denotes the visual-inertial backend loss function, $p$ represents the index of observed feature points (or landmarks), and $j$ is the index of the image on which landmark $p$ appears. $K$ is the number of images in the sliding window. $\mathbf{e}_{r}^{p,j}$ is the bundle adjustment [22] reprojection residual, $\mathbf{e}_{s}^{k}$ represents the inertial residual, and $\mathbf{e}_{m}$ denotes the marginalization residual. $\mathbf{W}_{r}$ is the uniform information matrix for all reprojection terms and $\boldsymbol{\Sigma}_{k,k+1}$ denotes the forward propagation of covariance [1, 24]. The vector $\mathbf{x}$ incorporates the motion states of keyframes and each landmark's inverse depth [1], which can be formulated as:

$\mathbf{x}=\begin{bmatrix}\mathbf{x}_{0},\mathbf{x}_{1},\ldots,\mathbf{x}_{K-1},\lambda_{0},\lambda_{1},\ldots,\lambda_{m-1}\end{bmatrix},$ | (6) |

$\mathbf{x}_{k}=\begin{bmatrix}\mathbf{p}_{k}^{w},\mathbf{v}_{k}^{w},\mathbf{q}_{k}^{w},\mathbf{b}_{a_{k}},\mathbf{b}_{\boldsymbol{\omega}_{k}}\end{bmatrix},$ | (7) |

where $\mathbf{x}_{k}$ is the state of the $k$-th keyframe, $\lambda_{0},\ldots,\lambda_{m-1}$ denote the inverse depths of the landmarks in the camera frame, $\mathbf{p}_{k}^{w},\mathbf{v}_{k}^{w},\mathbf{q}_{k}^{w}$ denote the position, velocity and orientation of the $k$-th frame (which correlate to $\mathcal{T}^{k}$), and $\mathbf{b}_{a_{k}},\mathbf{b}_{\boldsymbol{\omega}_{k}}$ denote the IMU accelerometer and gyroscope biases corresponding to the $k$-th image. Other details of our optimization procedure are basically in accordance with VINS [1].

PS Term. We incorporate PSs into the backend explicitly. In each sliding window, since each PS with the same id is associated with multiple observations, it naturally forms a registration constraint between each observed PS and its maintained state in the world frame. Each maintained state and its PS observations are associated via the PS id provided by our PS management module (discussed in Sec.III-D). Our PS term is illustrated in Fig.5. The registration error of the parking slot $i$ observed at keyframe $k$ can thus be defined as:

$\mathbf{e}_{reg}^{k,i}=\mathcal{T}^{k}\mathcal{L}^{i,k}-\mathcal{O}_{i},$ | (8) |

where $\mathcal{O}_{i}$ denotes the currently maintained location state of the PS (id=$i$) in the world frame, $\mathcal{T}^{k}$ denotes the vehicle pose of the $k$-th keyframe (composed of the position and orientation in Eq.7) in the world frame, and $\mathcal{L}^{i,k}$ denotes the observation of the PS (id=$i$) in the body frame. The PS error term is derived as:

$\mathcal{G}(\mathbf{x})=\sum\limits_{k=1}^{K}\sum\limits_{i\in\mathcal{S}}\rho\left(\alpha_{k}^{i}\,(\mathbf{e}_{reg}^{k,i})^{T}\mathbf{e}_{reg}^{k,i}\right),$ | (9) |

where $\mathcal{S}$ is the set of PSs seen in this sliding window and $\rho(\cdot)$ denotes the robust kernel function [32]. $\alpha_{k}^{i}$ is a reweighting term based on the $k$-th keyframe's observation of a certain parking slot (id=$i$). If the $i$-th PS is not observed in the $k$-th keyframe, $\alpha_{k}^{i}$ is $0$. The reason to add this reweighting term is that, due to perspective and lens distortions, the outer regions of generated BEV images are often stretched or compressed, which introduces more errors to PS detection (see Fig.5). For the $k$-th keyframe (supposing it incorporates $N_{k}$ PSs), we first compute the pixel distance $d_{k}^{i}$ (normalized by half of the BEV image size, see Sec.III-A) between the center point of the observed $i$-th PS and the center point of the BEV image in the BEV image frame:

$\alpha_{k}^{i}=\frac{N_{k}e^{-d_{k}^{i}}}{\sum_{i=1}^{N_{k}}e^{-d_{k}^{i}}}.$ | (10) |

Given the above, $\mathcal{F}(\mathbf{x})$ covers the visual-inertial constraint term while $\mathcal{G}(\mathbf{x})$ provides a semantic registration constraint term, and their combination follows the formulation in Sec.III-C.
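The reweighting of Eq. 10 can be sketched as below (our own illustration): PS observations near the BEV image center receive larger weights, since the outer BEV regions are more distorted; the 288 px half-size matches a 576×576 BEV image.

```python
import math

# Sketch of the PS reweighting term of Eq. 10: a softmax over negative
# normalized center distances, scaled by N_k so that the weights of the N_k
# observed PSs in a keyframe sum to N_k (i.e., average to 1).
def ps_weights(center_dists_px, half_size_px=288.0):
    d = [dist / half_size_px for dist in center_dists_px]  # normalized distances
    n_k = len(d)
    denom = sum(math.exp(-di) for di in d)
    return [n_k * math.exp(-di) / denom for di in d]

# Three PSs: one near the BEV center, two near the distorted border.
w = ps_weights([30.0, 250.0, 270.0])   # central PS gets the largest weight
```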

## IV Experiments

### IV-A Experiment Setup

We deploy our system on an electric vehicle along with a workstation running Ubuntu, and all modules communicate via ROS2 [30]. The CPU and GPU are an i7-8700 CPU@3.20GHz and an A5000 respectively. The backend optimization is implemented with the Ceres Solver [32] and we adopt the commonly-used Cauchy function as the robust kernel (see Eq.9). All experiments are repeated three times and averaged. Our experiments are conducted in two parking lots (see Fig.6).

Baselines. Our baselines include four classical methods which are suitable for deployment on a commercial-grade vehicle: (i) EKF [16]: a classic loosely-coupled approach to fusing IMU and WSS measurements. (ii) A-LOAM [23]: a LiDAR-based method which utilizes ICP [25] with a point-to-edge distance. (iii) VISSLAM [26]: VISSLAM utilizes the IMU, feature points and matched parking slots to jointly optimize the odometry. Since its source code is not released, we refer to VINS [1] to process IMU and camera measurements, and additionally add the PS measurements into the backend optimization. (iv) VINS [1]: VINS provides a tightly-coupled optimization-based approach to fusing IMU measurements and feature points from the front-view fisheye camera. In addition, we refer to [24] to add WSS measurements into VINS to improve the accuracy and stability of localization in parking environments. We compare these methods in Tab.II. The Geometric Map is constructed from feature points, the Point Cloud Map from LiDAR point clouds, and the Semantic Map from both feature points and parking slots in our method.

Method | Sensor | Map | PS
---|---|---|---
EKF [16] | IMU + WSS | $\times$ | $\times$
A-LOAM [23] | LiDAR | Point Cloud | $\times$
VISSLAM [26] | Camera + IMU | Semantic | $\checkmark$
VINS [1] | Camera + IMU + WSS | Geometric | $\times$
Ours | Camera + IMU + WSS | Semantic | $\checkmark$

PS Detector. Our PS detector is a YOLOv5 based network, trained with nearly 50k BEV images collected in underground and outdoor parking lots for better generalization. Our PSDet module views each parking slot (PS) as a planar object with a corresponding human-annotated label (i.e., PS shape, PS occupation state, 4 corner point locations and related visibility states), see Fig.3, 5 and 7. We only keep PS detection results with output probabilities greater than 50%. The training lasts for 28 epochs with a fixed batch size of 64. The trained model is then deployed for online inference via TensorRT [33].

Evaluation Trajectories. We collect a comprehensive group of short-distance trajectories (APA scenarios) and another group of long-distance trajectories (AVP scenarios) in two real-world parking lots (see Tab.IV and Fig.6; $\#1$ denotes the first lot and $\#2$ the other). The vehicle moves at about 4 km/h for short-distance trajectories, and about 10 km/h for long-distance ones. During each trajectory, we record all sensors' measurements into ROS bags [30] for offline evaluation. Tab.III shows the lengths and durations of the trajectories.

Ground Truth Acquisition & Metric. We use the Root Mean Square Error (RMSE), a commonly used metric for measuring trajectory errors. Most methods use the GPS/RTK signal as ground truth in open outdoor areas, but it is not suitable for underground environments with poor coverage of GPS/RTK measurements. Therefore, we utilize a motion capture system to acquire trajectories' ground truth, following previous works [19].
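For reference, the trajectory RMSE can be computed as below (our sketch; the time-association and rigid alignment between estimated and ground-truth trajectories, usually needed in practice, are omitted for brevity).

```python
import math

# Trajectory RMSE: root mean square of per-timestamp Euclidean position
# errors between the estimated trajectory and the ground truth.
def trajectory_rmse(estimated, ground_truth):
    assert len(estimated) == len(ground_truth)
    sq_errs = [
        sum((e - g) ** 2 for e, g in zip(p_est, p_gt))
        for p_est, p_gt in zip(estimated, ground_truth)
    ]
    return math.sqrt(sum(sq_errs) / len(sq_errs))

est = [(0.0, 0.0), (1.1, 0.0), (2.0, 0.2)]   # hypothetical 2D positions
gt = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
err = trajectory_rmse(est, gt)               # sqrt((0 + 0.01 + 0.04) / 3)
```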

### IV-B Experimental Results

We compare our method with the four baseline methods on various parking trajectories, including short-distance and long-distance parking scenarios. Experimental results are shown in Tab.IV. The corresponding parking trajectories are shown in Fig.6.

For short-distance scenarios, our VIPS-Odom achieves better overall localization performance. Merging parking slot features into the visual-inertial odometry effectively reduces localization error. Meanwhile, A-LOAM achieves comparable results with ours, though LiDAR is much more costly than our sensor configuration, which reflects the potential of optimization-based methods with low-cost sensors. For long-distance scenarios, the optimization-based methods VISSLAM, VINS and VIPS-Odom achieve better localization performance. VISSLAM encountered a failure case on one long-distance trajectory; we analyze the reason in Sec.IV-F. With semantic constraints provided by PSs, our VIPS-Odom has lower localization error than VINS, which indicates the drifts are efficiently reduced by incorporating PSs.

### IV-C Ablation Study

In this work, we fuse parking slot (PS) information into both the SLAM frontend and backend, and herein we conduct an ablation study to investigate the role of PS information in each. The experimental results are shown in Tab.V. Based on these results, we find that incorporating PSs in the frontend benefits localization on short-distance parking trajectories, while incorporating PSs in the backend reduces accumulated drift over long-distance trajectories and improves localization precision. In short-distance parking scenarios (near the target parking slot), the parking slot features supplemented in the frontend improve the overall quality of the feature points, helping the algorithm to better track feature points and estimate poses in the backend. In long-distance scenarios, the vehicle cruises at a relatively steady speed; here, adding parking slots as extra semantic objects in the backend optimization is particularly important to reduce long-term accumulated drifts.

### IV-D Sensitivity Analysis

When deployed in practice, optimization-based SLAM methods have two important hyper-parameters: the keyframe number of each sliding window and the feature point number of tracked images. Herein we conduct a sensitivity analysis on these two hyper-parameters to compare the robustness of our VIPS-Odom against the optimization-based method VINS [1]. Related results are shown in Tab.VI. For feature point number selection, we choose different numbers of maximum feature points in the front-view camera to investigate the impact on localization accuracy. With the assistance of PSs, our VIPS-Odom is more robust to the selection of feature point number in long-distance trajectories. We also observe that VIPS-Odom is slightly less robust than VINS in short-distance trajectories. Meanwhile, the selection of keyframe number in the optimization sliding window is significant in optimization-based methods. We choose different numbers of keyframes in the backend optimization to investigate the impact on localization accuracy. As the results show, our VIPS-Odom achieves better robustness to keyframe number selection. This helps VIPS-Odom generalize better to more scenarios.

### IV-E Qualitative Evaluation and Real-time Analysis

With the Rviz toolkit, we visualize one long-distance trajectory (#1) and provide a qualitative evaluation of our VIPS-Odom in Fig.7. Our real-time PS detection module detects parking slots in the current frame, and our PS management module tracks all observed parking slots under a multi-object tracking framework. With this tracking framework, we robustly track all PS states to handle real-time PS observations, so the well-tracked PS information (Fig.7(a)) is considered in the backend. Regarding frequency, IMU/WSS readings arrive at 100Hz and the fisheye cameras at 20Hz. BEV image generation and PS detection on the generated BEV images run at 10Hz. The odometry is solved in a timely manner at 25Hz.

### IV-F PS Association Analysis

We compare the performance (RMSE errors) of our SORT-based PS association method against the hard match policy used in VISSLAM [26]. In both cases, backend optimization is performed by VIPS-Odom for a fair comparison. Our PS association maintains PS states in a SORT-based real-time multi-object tracking framework, whereas the hard match policy strictly matches against the averaged center points of PS observations, which is prone to noisy observations over long distances. We showcase the experimental results in Tab. VIII. For the long-distance Round (#1) trajectory, the hard match policy suffers from severe PS mismatches, creates numerous spurious new parking slots in the matching module, and fails to form reliable PS factors (see III-F). These experiments demonstrate the flexibility and benefit of our PS association method over the hard match policy.

| Trajectory | Hard Match Policy | SORT |
|---|---|---|
| Short-distance (Mean) | 0.165 | 0.126 |
| Long-distance Round (#1) | fail | 1.09 |
| Long-distance Round (#2) | 1.49 | 1.27 |
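The association step compared above can be sketched as a one-to-one assignment between predicted track centres and freshly detected slot centres. A minimal stdlib-only illustration, with assumed function names and an assumed 1.5 m gate; a real implementation would use the Hungarian algorithm [4] on the Kalman-predicted states instead of brute force:

```python
import itertools
import math

def center_distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def associate(tracks, detections, gate=1.5):
    # tracks, detections: lists of (x, y) slot centres in the odometry frame.
    # For brevity this brute force assumes len(tracks) <= len(detections);
    # the Hungarian algorithm [4] solves the general case in polynomial time.
    if not tracks or not detections:
        return []
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(detections)), len(tracks)):
        cost = sum(center_distance(tracks[i], detections[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    # Gate out implausible pairs (the 1.5 m threshold is an assumed value);
    # unmatched detections would later spawn new tracks.
    return [(i, j) for i, j in enumerate(best)
            if center_distance(tracks[i], detections[j]) < gate]
```

Unlike a hard match against averaged centre points, gated assignment tolerates the drifting, noisy observations that accumulate over long trajectories.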

### IV-G Backend PS reweighting Analysis

Herein we conduct an ablation study on reweighting the PS error terms in Eq. 9. We set $\alpha_{k}^{i}$ in Eq. 9 to a constant $1$ to disable the reweighting terms. From the experimental results, we find that the reweighting terms help improve the localization accuracy.
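Since Eq. 9 is not reproduced here, the following sketch only illustrates the role of the $\alpha_{k}^{i}$ weights: each PS residual is scaled by a per-slot confidence, and setting every weight to $1$ recovers the ablated, unweighted cost. The specific weight formula and function names below are hypothetical:

```python
def ps_residual(observed, estimated):
    # Squared distance between observed and estimated slot centres (m^2).
    return (observed[0] - estimated[0]) ** 2 + (observed[1] - estimated[1]) ** 2

def reweighted_cost(observations, estimates, counts):
    """Sum of PS residuals, each scaled by a confidence weight.

    `counts[i]` is how often slot i has been tracked; the weight formula
    is a hypothetical stand-in for the alpha_k^i terms of Eq. 9."""
    cost = 0.0
    for obs, est, n in zip(observations, estimates, counts):
        alpha = n / (n + 1.0)   # assumed confidence weight in [0, 1)
        cost += alpha * ps_residual(obs, est)
    return cost
```

Down-weighting rarely seen slots this way keeps a few noisy, short-lived observations from dominating the backend cost, which matches the accuracy gain observed in the ablation.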

## V Conclusion

In this work, we propose VIPS-Odom, a novel tightly-coupled SLAM system that fuses measurements from an inertial measurement unit, a wheel speed sensor, and four surround-view fisheye cameras to achieve high-precision, stable localization. We detect parking slots from BEV images in real time, robustly maintain the observed parking slots' states, and simultaneously exploit parking slot observations as both frontend visual features and backend optimization factors to improve localization precision. We also develop an experimental platform with the related sensor suite for evaluating the performance of VIPS-Odom. Extensive real-world experiments covering both short-distance and long-distance parking scenarios demonstrate the effectiveness and stability of our method against other baseline methods. For future work, we consider improving VIPS-Odom in the following aspects: (1) our system does not include a loop detection module, since loop detection is not necessary for all short-distance APA scenarios and for some long-distance AVP scenarios; we can add a loop detection module to further improve performance in long-distance scenarios. (2) Besides parking slots, we can incorporate more semantic objects into our system.

## References

- [1] Qin T, Li P, Shen S. Vins-mono: A robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
- [2] Kaneko M, Iwami K, Ogawa T, et al. Mask-SLAM: Robust feature-based monocular SLAM by masking using semantic segmentation[C]//Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2018: 258-266.
- [3] Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation[C]//Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18. Springer International Publishing, 2015: 234-241.
- [4] Jonker R, Volgenant T. Improving the Hungarian assignment algorithm[J]. Operations research letters, 1986, 5(4): 171-175.
- [5] Labbe M, Michaud F. Online global loop closure detection for large-scale multi-session graph-based SLAM[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2014.
- [6] Jocher G, et al. YOLOv5, 2021.
- [7] Shi J. Good features to track[C]//1994 Proceedings of IEEE conference on computer vision and pattern recognition. IEEE, 1994: 593-600.
- [8] Zhang L, Huang J, Li X, et al. Vision-based parking-slot detection: A DCNN-based approach and a large-scale benchmark dataset[J]. IEEE Transactions on Image Processing, 2018, 27(11): 5350-5364.
- [9] Bernardo J M, et al. Bayesian theory[M]. John Wiley & Sons, 2009.
- [10] Xiao J, Zhou Z, Yi Y, et al. A survey on wireless indoor localization from the device perspective[J]. ACM Computing Surveys (CSUR), 2016, 49(2): 1-31.
- [11] Li W, Cao L, Yan L, et al. Vacant parking slot detection in the around view image based on deep learning[J]. Sensors, 2020, 20(7): 2138.
- [12] Lucas B D, Kanade T. An iterative image registration technique with an application to stereo vision[C]//IJCAI’81: 7th international joint conference on Artificial intelligence. 1981, 2: 674-679.
- [13] Etxeberria-Garcia, Mikel, et al. ”Visual Odometry in challenging environments: an urban underground railway scenario case.” IEEE Access 10 (2022): 69200-69215.
- [14] Kalman R E. A new approach to linear filtering and prediction problems[J]. 1960.
- [15] Wojke N, Bewley A, Paulus D. Simple online and realtime tracking with a deep association metric[C]//2017 IEEE international conference on image processing (ICIP). IEEE, 2017: 3645-3649.
- [16] Huang S, Dissanayake G. Convergence and consistency analysis for extended Kalman filter based SLAM[J]. IEEE Transactions on robotics, 2007, 23(5): 1036-1049.
- [17] Frost D, Prisacariu V, Murray D. Recovering stable scale in monocular SLAM using object-supplemented bundle adjustment[J]. IEEE Transactions on Robotics, 2018, 34(3): 736-747.
- [18] Salas-Moreno, Renato F., et al. ”Slam++: Simultaneous localisation and mapping at the level of objects.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2013.
- [19] Schubert D, et al. The TUM VI benchmark for evaluating visual-inertial odometry[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 1680-1687.
- [20] Urban S, Leitloff J, Hinz S. Improved wide-angle, fisheye and omnidirectional camera calibration[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2015, 108: 72-79.
- [21] Scaramuzza D, et al. A toolbox for easily calibrating omnidirectional cameras[C]//2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2006: 5695-5701.
- [22] Triggs, Bill, et al. ”Bundle adjustment—a modern synthesis.” Vision Algorithms: Theory and Practice: International Workshop on Vision Algorithms Corfu, Greece, September 21–22, 1999 Proceedings. Springer Berlin Heidelberg, 2000.
- [23] Qin T, Cao S. A-LOAM. https://github.com/HKUST-Aerial-Robotics/A-LOAM, 2019.
- [24] Liu, Jinxu, Wei Gao, and Zhanyi Hu. ”Visual-inertial odometry tightly coupled with wheel encoder adopting robust initialization and online extrinsic calibration.” 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019.
- [25] Zhang J, Singh S. LOAM: Lidar odometry and mapping in real-time[C]//Robotics: Science and systems. 2014, 2(9): 1-9.
- [26] Shao, Xuan, et al. ”A tightly-coupled semantic SLAM system with visual, inertial and surround-view sensors for autonomous indoor parking.” Proceedings of the 28th ACM International Conference on Multimedia. 2020.
- [27] Cui, Li, et al. ”Monte-Carlo localization in underground parking lots using parking slot numbers.”//IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021.
- [28] Qin, Tong, et al. ”Avp-slam: Semantic visual mapping and localization for autonomous vehicles in the parking lot.”//IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
- [29] Mur-Artal R, Montiel J M M, Tardos J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE transactions on robotics, 2015, 31(5): 1147-1163.
- [30] Quigley M, et al. ROS: an open-source Robot Operating System[C]//ICRA workshop on open source software. 2009, 3(3.2).
- [31] Mourikis, Anastasios I., et al. ”A multi-state constraint Kalman filter for vision-aided inertial navigation.” Proceedings of IEEE international conference on robotics and automation. IEEE, 2007.
- [32] Agarwal S, Mierle K. Ceres solver: Tutorial & reference[J]. Google Inc, 2012, 2(72): 8.
- [33] Vanholder H. Efficient inference with tensorrt[C]//GPU Technology Conference. 2016, 1(2).