
Surrounding Objects Detection and Tracking for Autonomous Driving Using LiDAR and Radar Fusion

Abstract

Radar and LiDAR are two environmental sensors commonly used in autonomous vehicles. LiDAR is accurate in determining an object's position but significantly less accurate than radar in measuring its velocity; radar, in turn, measures velocity more accurately but localizes objects less precisely because of its lower spatial resolution. To compensate for the low detection accuracy, incomplete target attributes and poor environmental adaptability of a single sensor such as radar or LiDAR, this paper presents an effective method for high-precision detection and tracking of the targets surrounding an autonomous vehicle. By employing the Unscented Kalman Filter, radar and LiDAR information is effectively fused to obtain high-precision position and velocity estimates of the targets around the autonomous vehicle. Finally, real-vehicle tests are carried out under various driving scenarios. The experimental results show that the proposed sensor fusion method detects and tracks the targets around the vehicle with high accuracy; compared with a single sensor, it has obvious advantages and can improve the intelligence level of autonomous cars.

1 Introduction

An autonomous vehicle is an intelligent car that relies mainly on its on-board computer and sensor systems to drive itself. Autonomous cars integrate automatic control, system architecture, artificial intelligence, visual computing and many other technologies [1]. They are a highly developed product of computer science, pattern recognition and intelligent control technology, as well as an important indicator of a country's scientific research strength and industrial level [2], with broad application prospects in national defense and the national economy. In recent years, Advanced Driver Assistance Systems (ADAS) such as lane departure warning, adaptive cruise control (ACC) and automatic parking [3, 4] have been applied in car design. To a certain extent they realize self-driving, bring convenience to people's lives, and play an indispensable role in reducing accidents and providing a safe and comfortable driving experience [5, 6]. With the continuous development of science and technology, intelligent driving technology means that self-driving vehicles are no longer out of reach on the road [7,8,9].

An autonomous vehicle is inseparable from environmental perception, decision planning and motion control. As one of the keys to autonomous driving, environmental perception technology gives the car eyes through a variety of on-board sensors, allowing it to accurately perceive the surrounding environment and ensuring the safety and reliability of driving [10]. At present, the most commonly used on-board sensors are LiDAR, radar and vision cameras, each of which has its own advantages and disadvantages.

LiDAR has good directivity, high measuring precision and is not affected by road clutter. According to its structure, LiDAR can be divided into single-line (two-dimensional) and multi-line (three-dimensional) types. Multi-line LiDAR has a certain pitch angle and can realize surface scanning, but it is relatively expensive. LiDAR has difficulty detecting at very close range, can be affected by the surrounding environment and weather, and is prone to crosstalk: it cannot always judge whether a received pulse was emitted by itself, so the shape of the object may not be determined correctly. Radar ranges and measures speed using the Doppler frequency shift, radiating millimeter-wavelength electromagnetic waves to detect the target. The wave reflected by the target is received, and the information in the echo is analyzed to obtain the distance and relative velocity of the target. Radar has strong penetration in bad weather and good temperature stability.

However, the accuracy of radar over its detection range is directly restricted by losses in its frequency band, it cannot perceive the categories of surrounding targets, and it cannot accurately model all surrounding obstacles. Both LiDAR and radar can sense nearby targets: LiDAR is accurate in determining objects' positions but significantly less accurate in measuring their velocities, whereas radar is more accurate in measuring velocities but less accurate in determining positions because of its lower spatial resolution. Moreover, the targets perceived by different sensors may conflict with each other.

Our method combines LiDAR and radar sensor data: through data fusion, the targets around the autonomous vehicle are perceived with both accurate position information and accurate velocity information. This improves environmental perception accuracy, provides effective data for the decision making and control of the autonomous car, and thus reduces accidents.

In recent years, the fusion of LiDAR and radar has become a hot topic for researchers at home and abroad. In 2017, Kwon et al. [11] proposed a detection scheme for partially occluded pedestrians based on occluded depth in LiDAR-radar sensor fusion, verifying that fused LiDAR and radar data can effectively handle partially occluded targets. In 2019, Lee et al. [12] developed a geometric-model-based 2D LiDAR/radar sensor fusion method for tracking surrounding vehicles and confirmed that the proposed fusion system improves estimation performance by reflecting the characteristics of each sensor. In 2020, Kim et al. [24] designed an Extended Kalman Filter (EKF) for vehicle position tracking using reliability functions of radar and LiDAR; the study confirmed that the accuracy of distance measurements was improved by the LiDAR and radar sensor fusion, and that the filter formulation reflecting distance errors was more accurate. In the same year, Farag proposed road-object tracking for autonomous driving using LiDAR and radar fusion, describing in detail a real-time road-object detection and tracking method for autonomous cars [13].

Information fusion is the study of efficient methods for automatically or semi-automatically transforming information from different sources and different points in time into a representation that provides effective support for human or automated decision making. In this paper, radar and LiDAR, which are relatively mature in engineering applications, are used as environmental sensing sensors to solve the detection and tracking problems of surrounding targets of autonomous vehicles through sensor information fusion, so as to improve the stability, reliability and environmental adaptability of the environmental sensing system.

The rest of the paper is organized as follows. In Section 2, we provide an overview of related work. In Section 3, we derive our algorithms: we first discuss the fusion of radar and LiDAR, and then present our modifications of the Unscented Kalman Filter (UKF) architecture to leverage the fused data. In Section 4, we discuss our results and finally give a summary and an outlook on future work.

2 Related Work

In terms of specific sensors, we use a Delphi ESR millimeter-wave radar produced by Delphi Corporation of the United States and a RoboSense RS-LiDAR-16 16-beam LiDAR from Shenzhen. Radar is a sensor that uses electromagnetic wave reflection to measure the distance, azimuth and velocity of a target. It is accurate and reliable in measuring distance and velocity, but poorer at resolving azimuth. The Delphi 77G ESR operates in the 77 GHz band; it calculates the distance to the target object from the time needed to transmit an electromagnetic wave into the environment and receive the reflected wave, and it obtains the velocity of the target from the frequency shift of the received reflection. With this radar, 64 targets can be acquired at the same time, each described by parameters such as its longitudinal distance, transverse angle and longitudinal speed. The radar outputs the target sequence directly, but during detection it sometimes produces false detections: although there is no target within the detection range, the radar reports that one is present. The Delphi 77G ESR is shown in Figure 1.
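For illustration only, the sketch below shows how such a per-scan radar target report could be represented in software; the class and field names are ours and do not correspond to Delphi's actual CAN signal names.

```python
from dataclasses import dataclass

@dataclass
class RadarTarget:
    """One of the up-to-64 targets reported by the ESR per scan (illustrative)."""
    track_id: int          # target slot index
    range_m: float         # longitudinal distance to the target
    angle_deg: float       # transverse (azimuth) angle of the target
    range_rate_mps: float  # longitudinal (radial) speed of the target

    def is_valid(self) -> bool:
        # A zero-range slot is commonly treated as empty, which also helps
        # filter out the false detections mentioned above.
        return self.range_m > 0.0
```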

Figure 1

Delphi 77G ESR


In addition, the system characteristics, scanning range, accuracy and other performance parameters of Delphi 77G ESR are shown in Table 1.

Table 1 ESR technical parameters

The LiDAR works by emitting laser beams that complete a 360° scan counterclockwise in the horizontal direction. When a laser spot hits a target, the reflected beam passes through the optical receiving system, is detected by the optical detector, is mixed with the original laser beam and converted into an electrical signal, and, after the filter and amplifier, is input to the digital signal processor; after processing, the target point cloud is output to the computer. As the distance to an obstacle increases, the spacing between two adjacent LiDAR scan lines increases. We use the RoboSense RS-LiDAR-16 from Shenzhen. The RS-LiDAR-16, launched by RoboSense, is China's first 16-beam miniature LiDAR and a world-leading product of its kind. As a solid-state hybrid LiDAR, it integrates 16 laser/detector pairs in a compact housing that spins rapidly and sends out high-frequency laser beams to continuously scan the surrounding environment. Advanced digital signal processing and ranging algorithms compute point cloud data and the reflectivity of objects, enabling the machine to "see" the world and providing reliable data for localization, navigation and obstacle avoidance. The RS-LiDAR-16 is installed on our autonomous vehicle, as shown in Figure 2. The related technical parameters are given in Table 2.

Figure 2

RS-LiDAR-16 installed on our autonomous vehicle

Table 2 RS-LiDAR-16 LiDAR technical parameters

In this paper, the peripheral target detection and tracking method based on radar and LiDAR information fusion is implemented as follows: first, grid-based clustering is performed on the LiDAR point cloud data, targets are selected according to the rectangular bounding boxes of the clustering results, and the centroid position of each target is identified; the radar outputs its target sequence directly; then, the UKF is used to fuse the radar and LiDAR targets.
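The paper does not list code for this pipeline; the following is a minimal sketch of the grid-based clustering and centroid-extraction step, assuming NumPy and SciPy are available. The cell size, the 8-connectivity labelling and the function name are illustrative choices, not the authors' exact implementation.

```python
import numpy as np
from scipy import ndimage

def cluster_lidar_points(points_xy: np.ndarray, cell: float = 0.2, min_pts: int = 5):
    """Grid-based clustering of LiDAR points projected to the x-y plane.

    points_xy : (N, 2) array of x/y coordinates in the vehicle frame.
    Returns a list of (centroid_xy, bbox_min_xy, bbox_max_xy) per cluster.
    """
    # Rasterise the points into a boolean occupancy grid.
    mins = points_xy.min(axis=0)
    idx = np.floor((points_xy - mins) / cell).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1]] = True

    # Connected occupied cells (8-connectivity) form one cluster.
    labels, n = ndimage.label(grid, structure=np.ones((3, 3)))
    point_labels = labels[idx[:, 0], idx[:, 1]]

    clusters = []
    for k in range(1, n + 1):
        pts = points_xy[point_labels == k]
        if len(pts) < min_pts:        # drop sparse clutter
            continue
        clusters.append((pts.mean(axis=0), pts.min(axis=0), pts.max(axis=0)))
    return clusters
```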

3 Methodology

In this section, we describe our approach for the fusion of radar and LiDAR. The sensing system is designed by considering the performance of both radar and LiDAR. Their different detection ranges are shown in Figure 3: the blue sector represents the LiDAR's field of view and the purple sector represents the radar's; targets detected by LiDAR are represented by orange rectangles and those detected by radar by blue circles. The detection angle of the radar is small, ±10° to each side, but its detection distance is relatively long, up to 180 m. The LiDAR has a maximum measuring distance of 150 m with a measurement accuracy of ±2 cm and up to 300000 points per second; its horizontal field of view is 360° and its vertical field of view is −15° to 15°. By combining the advantage of LiDAR in position perception with the advantage of radar in measuring target speed, the information fusion system based on LiDAR and radar obtains more accurate target position and speed information and effectively improves the perception accuracy of the targets around the autonomous vehicle. Moreover, our results show that the method based on LiDAR and radar information fusion can avoid system failure caused by the failure of any single sensor and improves the robustness of the system. The following sections explain each part in more detail.
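As a rough illustration of these overlapping fields of view, the sketch below checks which sensor can nominally observe a point given in the host-vehicle frame; the constants come from the ranges quoted above and the function name is ours.

```python
import math

# Nominal sensing limits quoted in this section (illustrative constants).
RADAR_MAX_RANGE_M = 180.0
RADAR_HALF_FOV_DEG = 10.0
LIDAR_MAX_RANGE_M = 150.0

def sensors_covering(px: float, py: float) -> set:
    """Return which sensors can nominally observe a point (px, py)
    in the host-vehicle frame (x forward, y left)."""
    rng = math.hypot(px, py)
    bearing = math.degrees(math.atan2(py, px))
    covering = set()
    if rng <= LIDAR_MAX_RANGE_M:                     # 360 deg horizontal FOV
        covering.add("lidar")
    if rng <= RADAR_MAX_RANGE_M and abs(bearing) <= RADAR_HALF_FOV_DEG:
        covering.add("radar")
    return covering
```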

Figure 3

Detection range of radar and LiDAR (the yellow rectangle target is the target detected by LiDAR, and the blue circle target is the target detected by radar)

3.1 Moving Object Model

The motion state of a target around the autonomous vehicle is uncertain, so we establish a consistent motion model for it [14,15,16,17] that simplifies the target's actual motion: the target is assumed to move either along a straight line or with a constant turn rate and constant speed, as shown in Figure 4. The blue model is our autonomous car and the orange model is the peripheral target. At some moment, the target object is moving near the host vehicle. The longitudinal and lateral displacements of the target object relative to the host vehicle are \(P_{x}\) and \(P_{y}\); they are expressed in the host vehicle's coordinate system, in which the host vehicle is the origin, the x axis points forward and the y axis points to the left of the vehicle. The speed of the host vehicle is v1 and the speed of the target object is v2; \(\delta_{{2}}\) is the heading angle of the target object and \(\dot{\delta }_{{2}}\) is its yaw rate. The motion state of the target object is then described by the vector x in Eq. (1), and the change of this state, \(\dot{x}\), is expressed by the differential equation \(g(x)\) in Eq. (1), where \(\dot{P}_{x}\) and \(\dot{P}_{y}\) are the rates of change of position, \(\dot{v}_{2}\) is the rate of change of velocity, and \(\dot{\delta }_{2}\), \(\ddot{\delta }_{2}\) are the yaw rate and yaw acceleration. In addition, the relationship between the rate of change of longitudinal position \(\dot{P}_{x}\) and the velocity v2 of the target object is given by Eq. (2):

$$x = \left[ \begin{gathered} P_{x} \hfill \\ P_{y} \hfill \\ v_{2} \hfill \\ \delta_{2} \hfill \\ \dot{\delta }_{2} \hfill \\ \end{gathered} \right],\;\dot{x} = g(x) = \left[ \begin{gathered} \dot{P}_{x} \hfill \\ \dot{P}_{y} \hfill \\ \dot{v}_{2} \hfill \\ \dot{\delta }_{2} \hfill \\ \ddot{\delta }_{2} \hfill \\ \end{gathered} \right],$$
(1)
$$\dot{P}_{x} = v_{2x} = v_{2} \cdot \cos \left( {\delta_{2} } \right).$$
(2)
Figure 4

Moving object model

From Eqs. (1) and (2), the differential Eq. (3) describing the change of the target object's motion state can be deduced:

$$\left[ \begin{gathered} \dot{P}_{x} \hfill \\ \dot{P}_{y} \hfill \\ \dot{v}_{2} \hfill \\ \dot{\delta }_{2} \hfill \\ \ddot{\delta }_{2} \hfill \\ \end{gathered} \right] = \left[ \begin{gathered} v_{2} \cdot \cos \left( {\delta_{2} } \right) \\ v_{2} \cdot \sin \left( {\delta_{2} } \right) \\ 0 \\ \dot{\delta }_{2} \\ 0 \\ \end{gathered} \right].$$
(3)

Suppose that discrete time step k corresponds to time \(t_{k}\) and discrete time step k + 1 corresponds to time \(t_{k + 1}\), and denote the difference between \(t_{k + 1}\) and \(t_{k}\) by Δt. The prediction model is then obtained by integrating \(\dot{x}\) over time, \(x_{k + 1} = x_{k} + \int {\dot{x}\,{\text{d}}t}\):

$$x_{k + 1} = x_{k} + \int_{{t_{k} }}^{{t_{k + 1} }} {\left[ \begin{gathered} \dot{P}_{x} (t) \hfill \\ \dot{P}_{y} (t) \hfill \\ \dot{v}_{2} (t) \hfill \\ \dot{\delta }_{2} (t) \hfill \\ \ddot{\delta }_{2} (t) \hfill \\ \end{gathered} \right]} \,{\text{d}}t = x_{k} + \left[ \begin{gathered} \int_{{t_{k} }}^{{t_{k + 1} }} {v_{2} (t) \cdot \cos (\delta_{2} (t)){\text{d}}t} \\ \int_{{t_{k} }}^{{t_{k + 1} }} {v_{2} (t) \cdot \sin (\delta_{2} (t)){\text{d}}t} \\ 0 \\ \dot{\delta }_{k} \Delta t \\ 0 \\ \end{gathered} \right] = x_{k} + \left[ \begin{gathered} \frac{{v_{k} }}{{\dot{\delta }_{k} }}\left( {\sin \left( {\delta_{k} + \dot{\delta }_{k} \Delta t} \right) - \sin \left( {\delta_{k} } \right)} \right) \\ \frac{{v_{k} }}{{\dot{\delta }_{k} }}\left( { - \cos \left( {\delta_{k} + \dot{\delta }_{k} \Delta t} \right) + \cos \left( {\delta_{k} } \right)} \right) \\ 0 \\ \dot{\delta }_{k} \Delta t \\ 0 \\ \end{gathered} \right].$$
(4)

It is assumed that the turn rate (\(\dot{\delta }_{2}\)) and velocity (\(v_{2}\)) remain unchanged between time steps. Because the process noise \(v_{k}\) appears in our state equation, we augment the state variable space with \(v_{k}\) when generating prediction points; this predictive noise comprises an acceleration component and a yaw (angular) acceleration component, as given in Eq. (5):

$$v_{k} = \left[ \begin{gathered} v_{a,k} \hfill \\ v_{{\ddot{\delta },k}} \hfill \\ \end{gathered} \right],\;v_{a,k} \sim N(0,\sigma_{a}^{2} ),\;v_{{\ddot{\delta }_{2} ,k}} \sim N(0,\sigma_{{\ddot{\delta }_{2} }}^{2} ).$$
(5)

If predictive noise is considered, the state of the target is described as:

$$x_{k + 1} = x_{k} + \left[ \begin{gathered} \frac{{v_{k} }}{{\dot{\delta }_{k} }}\left( {\sin \left( {\delta_{k} + \dot{\delta }_{k} \Delta t} \right) - \sin \left( {\delta_{k} } \right)} \right) \hfill \\ \frac{{v_{k} }}{{\dot{\delta }_{k} }}\left( { - \cos \left( {\delta_{k} + \dot{\delta }_{k} \Delta t} \right) + \cos \left( {\delta_{k} } \right)} \right) \hfill \\ 0 \hfill \\ \dot{\delta }_{k} \Delta t \hfill \\ 0 \hfill \\ \end{gathered} \right] + \left[ \begin{gathered} \frac{1}{2}v_{a,k} \cos \left( {\delta_{k} } \right)\left( {\Delta t} \right)^{2} \hfill \\ \frac{1}{2}v_{a,k} \sin \left( {\delta_{k} } \right)\left( {\Delta t} \right)^{2} \hfill \\ v_{a,k} \Delta t \hfill \\ \frac{1}{2}v_{{\ddot{\delta },k}} \Delta t^{2} \hfill \\ v_{{\ddot{\delta },k}} \Delta t \hfill \\ \end{gathered} \right].$$
(6)

There is a special case to consider: when \(\dot{\delta }_{2}\) = 0, the expressions for (\(P_{x}\), \(P_{y}\)) in our state transition function involve a division by zero. In this case the tracked vehicle is actually traveling in a straight line, so the update of (\(P_{x}\), \(P_{y}\)) becomes:

$$P_{{x_{k + 1} }} = P_{{x_{k} }} + \cos (\delta_{k} ) \cdot v_{k} \cdot \Delta t,$$
(7)
$$P_{{y_{k + 1} }} = P_{{y_{k} }} + \sin (\delta_{k} ) \cdot v_{k} \cdot \Delta t.$$
(8)

Having generated the prediction (sigma) points, we need to predict the next state of the object, since the object moves according to the process model. The calculation is based on the state transition function: we simply insert each prediction point into the process model of Eq. (6).
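For concreteness, the following NumPy sketch implements the process model of Eq. (6), falling back to the straight-line case of Eqs. (7) and (8) when the yaw rate is near zero; the state ordering follows Eq. (1), and the function name and noise arguments are illustrative.

```python
import numpy as np

def ctrv_predict(x: np.ndarray, nu_a: float, nu_ddelta: float, dt: float) -> np.ndarray:
    """Propagate the state x = [px, py, v, delta, delta_dot] over dt,
    with noise samples nu_a (acceleration) and nu_ddelta (yaw acceleration),
    as in Eq. (6)."""
    px, py, v, d, dd = x

    if abs(dd) > 1e-6:
        # General case: constant turn rate and constant speed.
        px += v / dd * (np.sin(d + dd * dt) - np.sin(d))
        py += v / dd * (-np.cos(d + dd * dt) + np.cos(d))
    else:
        # Special case of Eqs. (7) and (8): straight-line motion.
        px += v * np.cos(d) * dt
        py += v * np.sin(d) * dt

    # Noise contribution from the last term of Eq. (6).
    px += 0.5 * nu_a * np.cos(d) * dt ** 2
    py += 0.5 * nu_a * np.sin(d) * dt ** 2
    v += nu_a * dt
    d += dd * dt + 0.5 * nu_ddelta * dt ** 2
    dd += nu_ddelta * dt

    return np.array([px, py, v, d, dd])
```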

3.2 Sensor Fusion Using UKF

The Kalman filter [18,19,20] is a linear filter that fuses multiple sources of uncertain information to obtain an optimal state estimate. It performs very well in continuously changing linear systems: even though the system is subject to disturbances, the Kalman filter can estimate the actual state quite accurately and make reasonable predictions of the future motion. The prerequisite of the Kalman filter is that the system is a linear Gaussian system; generally speaking, Gaussian noise remains Gaussian after a linear state transition. If the prerequisite of linearity is not satisfied, Kalman filtering is no longer applicable.

The basic Kalman filter can only be applied to systems that conform to Gaussian distributions, but not all real systems do. In addition, a Gaussian distribution propagated through a nonlinear system is no longer Gaussian. In that case an Extended Kalman Filter or an Unscented Kalman Filter must be used instead. The Extended Kalman Filter handles nonlinearity through local linearization: the nonlinear prediction and observation equations are differentiated and linearized by means of tangent lines, that is, a first-order Taylor expansion is performed at the mean value.

The EKF and the KF [21,22,23] have the same algorithmic structure: both describe the posterior probability density in Gaussian form, obtained from the Bayesian recurrence formulas. The difference is that in the EKF both the state transition matrix and the observation matrix used to propagate the covariance are Jacobian matrices of the state. In the prediction step [11, 24,25,26], the EKF uses \(F_{k}\), the Jacobian matrix of f; in the update step, it uses \(H_{k}\), the Jacobian matrix of h. The EKF thus linearizes the model through a Taylor expansion to obtain the mean and variance of the prediction model, whereas the UKF [27] computes them through the unscented transformation, an approximation method for calculating the moments of nonlinear random variables. By sampling sigma points according to certain rules and assigning them weights, the mean and variance can be approximated. Moreover, because the unscented transformation approximates the statistical moments with high accuracy, the UKF can reach the accuracy of a second-order EKF.
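As a sketch of the unscented transformation used here, with the state augmented by the process noise of Eq. (5), sigma points and their weights can be generated as follows; the scaling parameter λ = 3 − n is a common textbook choice and is not necessarily the one used by the authors.

```python
import numpy as np

def augmented_sigma_points(x: np.ndarray, P: np.ndarray,
                           std_a: float, std_ddelta: float):
    """Generate 2n+1 sigma points for the state augmented with the
    process noise of Eq. (5); x is 5-dimensional, so n = 7 here."""
    n = x.size + 2
    lam = 3 - n

    # Augmented mean and covariance (the noise has zero mean).
    x_aug = np.concatenate([x, np.zeros(2)])
    P_aug = np.zeros((n, n))
    P_aug[:x.size, :x.size] = P
    P_aug[x.size, x.size] = std_a ** 2
    P_aug[x.size + 1, x.size + 1] = std_ddelta ** 2

    # Matrix square root of (lambda + n) * P_aug via Cholesky factorisation.
    A = np.linalg.cholesky((lam + n) * P_aug)

    sigma = np.zeros((2 * n + 1, n))
    sigma[0] = x_aug
    for i in range(n):
        sigma[i + 1] = x_aug + A[:, i]
        sigma[n + i + 1] = x_aug - A[:, i]

    # Sigma-point weights: lam/(lam+n) for the mean point, 1/(2(lam+n)) otherwise.
    w = np.full(2 * n + 1, 1.0 / (2 * (lam + n)))
    w[0] = lam / (lam + n)
    return sigma, w
```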

We have two sensors, a LiDAR and a radar. The RS-LiDAR-16 measures the position coordinates \((P_{x} ,P_{y} )\) of the target object. The Delphi ESR measures, in the host vehicle's coordinate system, the distance L between the target object and the host vehicle:

$$L = \sqrt {P_{x}^{2} + P_{y}^{2} } .$$
(9)

The radar also measures the angle \(\delta_{2}\) between the target object and the x-axis, and the rate of change \(\dot{L}\) of the relative distance between the target object and the host vehicle. The measurement model of the LiDAR is linear, and its measurement matrix is:

$$H_{L} = \left[ {\begin{array}{*{20}c} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ \end{array} } \right].$$
(10)

The prediction is mapped to the LiDAR measurement space, that is:

$$H_{L} \vec{x} = (P_{x} ,P_{y} )^{{\text{T}}} .$$
(11)

The prediction mapping of radar to the measurement space is nonlinear, and its expression is:

$$h(x) = \left( {\begin{array}{*{20}c} L \\ {\delta_{2} } \\ {\dot{L}} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {\sqrt {P_{x}^{2} + P_{y}^{2} } } \\ {\arctan \frac{{P_{y} }}{{P_{x} }}} \\ {\frac{{P_{x} \cdot v_{2x} + P_{y} \cdot v_{2y} }}{{\sqrt {P_{x}^{2} + P_{y}^{2} } }}} \\ \end{array} } \right).$$
(12)

There are bound to be errors between the predicted data and the actual sensor data, so we need to relate the state space vector to the data available from the sensor through the measurement function h(x). Since the radar measurement function h(x) is nonlinear, we again use prediction points: each predicted sigma point \(x_{k + 1|k}\) is substituted into \(Z_{k + 1} = h(x_{k + 1|k} ) + w_{k + 1}\) to obtain its value in the measurement space. Then, from the predicted values and their weights, the predicted measurement mean and measurement covariance are obtained, as shown in Eqs. (13) and (14):

$$z_{{k + 1{|}k}} = \sum\limits_{i = 0}^{{2n_{\sigma } }} {w_{i} Z_{{k + 1{|}k,i}} } ,$$
(13)
$$\begin{aligned} S_{{k + 1{|}k}}& = \sum\limits_{i = 0}^{{2n_{\sigma } }} {w_{i} (Z_{{k + 1{|}k,i}} - z_{{k + 1{|}k}} )} (Z_{{k + 1{|}k,i}} - z_{{k + 1{|}k}} )^{{\text{T}}} \hfill \\&\quad + E\{ w_{k} \cdot w_{k}^{{\text{T}}} \} , \hfill \\ \end{aligned}$$
(14)

where \(w_{i}\) is the sigma-point weight and \(w_{k}\) is the noise in the measurement model, so that \(E\{ w_{k} \cdot w_{k}^{{\text{T}}} \}\) is the measurement noise covariance.

Finally, the Kalman gain \(K_{k + 1|k}\) and the cross-correlation matrix \(T_{k + 1|k}\) between the predicted state and the predicted measurement are calculated, and the state and covariance are updated, as shown in Eqs. (15) and (16):

$$K_{k + 1|k} = T_{k + 1|k} S_{k + 1|k}^{ - 1} ,$$
(15)
$$x_{k + 1|k + 1} = x_{k + 1|k} + K_{k + 1|k} (z_{k + 1} - z_{k + 1|k} ).$$
(16)
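Putting the measurement prediction, innovation covariance, cross-correlation, Kalman gain and state update derived above together, a minimal NumPy sketch of the radar update step could look as follows; the variable names and the weight handling follow the standard UKF convention rather than the authors' exact code.

```python
import numpy as np

def radar_update(x_pred, P_pred, sigma_pred, weights, z, R):
    """UKF radar update step.

    sigma_pred : (2n+1, 5) predicted sigma points x_{k+1|k}
    weights    : (2n+1,)   sigma-point weights w_i
    z          : measured [L, bearing, L_dot] from the radar
    R          : 3x3 radar measurement noise covariance E{w w^T}
    """
    # Map each sigma point into the radar measurement space via h(x).
    px, py, v, d, _ = sigma_pred.T
    L = np.maximum(np.sqrt(px ** 2 + py ** 2), 1e-6)
    Z = np.stack([L,
                  np.arctan2(py, px),
                  (px * v * np.cos(d) + py * v * np.sin(d)) / L], axis=1)

    # Predicted measurement mean and innovation covariance S.
    z_pred = weights @ Z
    dZ = Z - z_pred                     # bearing wrap-around ignored for brevity
    S = (weights[:, None] * dZ).T @ dZ + R

    # Cross-correlation T between state and measurement deviations.
    dX = sigma_pred - x_pred
    T = (weights[:, None] * dX).T @ dZ

    # Kalman gain, state and covariance update.
    K = T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ S @ K.T
    return x_new, P_new
```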

Figure 5 presents the LiDAR and radar data fusion scheme employing the UKF. The resulting predicted sigma points are used to compute the state mean and covariance matrices.

Figure 5

LiDAR and radar data fusion using UKF

4 Results

In this section, the results of the radar and LiDAR fusion are validated; they are divided into detection results and result analysis. The experimental platform is an Intel dual-core processor with 4 GB of memory, the operating system is Ubuntu 16.04 LTS, and the software framework is ROS (Robot Operating System). A test vehicle, shown in Figure 6, was used both for recording data and for evaluating the sensor fusion and tracking performance in practice. The sensor fusion runs at 20 Hz, synchronized with the radar.
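The paper does not describe its ROS node structure; assuming a standard ROS 1 Python node, the 20 Hz radar-synchronized fusion loop could be approximated with an approximate-time synchronizer as sketched below. The topic names and the radar message type are placeholders, not the actual driver interfaces.

```python
import rospy
import message_filters
from sensor_msgs.msg import PointCloud2
# Hypothetical radar message type; substitute the type published by your radar driver.
from fusion_msgs.msg import RadarTrackArray

def fused_callback(cloud: PointCloud2, tracks: RadarTrackArray):
    # Cluster the LiDAR cloud, associate clusters with radar tracks,
    # and run the UKF prediction/update for each fused target (omitted here).
    rospy.loginfo("lidar stamp %s, radar stamp %s",
                  cloud.header.stamp, tracks.header.stamp)

if __name__ == "__main__":
    rospy.init_node("lidar_radar_fusion")
    lidar_sub = message_filters.Subscriber("/rslidar_points", PointCloud2)
    radar_sub = message_filters.Subscriber("/esr_tracks", RadarTrackArray)
    # Pair LiDAR and radar messages whose timestamps differ by less than 50 ms,
    # so the fusion effectively runs at the 20 Hz radar rate.
    sync = message_filters.ApproximateTimeSynchronizer(
        [lidar_sub, radar_sub], queue_size=10, slop=0.05)
    sync.registerCallback(fused_callback)
    rospy.spin()
```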

Figure 6

Our self-driving car

Our method has been evaluated on our autonomous vehicle. Data are collected on a test section whose distances were measured in advance and are regarded as ground truth. We designed a test route containing six working conditions for lateral and longitudinal tracking of the targets around the autonomous vehicle: (1) approaching, (2) moving away, (3) left turn, (4) right turn, (5) left curve, (6) right curve. At the same time, the target tracking time is recorded to facilitate a visual comparison of the tracking performance of LiDAR, radar and the fused data. Figure 7 shows the interface of our algorithm during the real-vehicle test: the yellow and red point clouds are LiDAR point clouds, the white rectangular boxes are the surrounding targets detected after clustering the LiDAR point cloud, the bright yellow square boxes are the targets tracked by radar, and the velocity of each tracked target is displayed on it in real time.

Figure 7

Our algorithm performing real-vehicle verification

The lateral and longitudinal tracking experiments in all six scenarios recorded about 80‒90 s of video. Our fusion method is lightweight and runs in real time. Once the tracking target on the road is determined, our self-driving car drives 10‒80 m longitudinally behind the target and within −10 to 10 m of it laterally, so as to track the target vehicle in front. The multiple tracked targets in each test were grouped according to their ID numbers and then averaged.

Figures 8 and 9 show the results of longitudinal and lateral tracking of a car by the intelligent vehicle, respectively. In Figure 8 it can be seen intuitively that the longitudinal target tracking provided by LiDAR is very stable, with only small fluctuations, whereas the longitudinal distance observed by the radar fluctuates noticeably. As can be seen in Figure 9, the lateral target tracking provided by LiDAR is relatively stable, while the radar fluctuates sharply when the lateral distance becomes too large, owing to the limitation of its field angle. In line with our initial expectations, radar and LiDAR each have their advantages and disadvantages in both lateral and longitudinal tracking.

Figure 8

Vertical tracking results of autonomous vehicle

Figure 9

Horizontal tracking results of autonomous vehicle

Table 3 compares the final detection and tracking results of several popular algorithms. As shown, our approach performs best in both accuracy and detection rate among these models.

Table 3 A comparison of several popular algorithms with our approach

Our fusion approach is especially suitable for small networks, where the improvement in test speed is more obvious.

It can also be seen from Figures 8, 9 and Table 3 that the detection and tracking results of our fusion approach are closer to the ground-truth values, which demonstrates the effectiveness of the fusion algorithm proposed in this paper.

5 Conclusion and Future Work

(1) Based on the study of single-sensor environment perception technology, a peripheral target detection and tracking method based on UKF fusion of LiDAR and radar information is proposed.

(2) Our fusion method combines the millimeter-wave radar and the LiDAR sensor in the vehicle-mounted environment.

(3) The UKF nonlinear data fusion method is introduced to match the observed values, thus realizing target detection and tracking based on the millimeter-wave radar and LiDAR, which effectively alleviates the problem of incomplete attributes of peripheral targets perceived by a single sensor.

(4) Real-vehicle tests of tracking peripheral target motion under six common conditions were carried out, verifying the effectiveness of the fusion algorithm, which can effectively improve the intelligence level of the autonomous vehicle.

References

1. C Yang, F Yao, M Zhang, et al. Adaptive backstepping terminal sliding mode control method based on recurrent neural networks for autonomous underwater vehicle. Chinese Journal of Mechanical Engineering, 2018, 31(6): 228-243.


  2. Y Cai, L Dai, H Wang, et al. Pedestrian motion trajectory prediction in intelligent driving from far shot first-person perspective video. IEEE Transactions on Intelligent Transportation Systems, 2021, doi: https://doi.org/10.1109/TITS.2021.3052908.


  3. Z Liu, Y Cai, L Chen, et al. Vehicle license plate recognition method based on deep convolution network in complex road scene. Proceedings of the Institution of Mechanical Engineers Part D Journal of Automobile Engineering, 2019, 233(9): 2284-2292.


  4. Y Q Zhao, H Q Li, F Lin, et al. Estimation of road friction coefficient in different road conditions based on vehicle braking dynamics. Chinese Journal of Mechanical Engineering, 2017, 30(4): 982-990.


  5. Z Liu, Y Cai, H Wang, et al. Robust target recognition and tracking of self-driving cars with radar and camera information fusion under severe weather conditions. IEEE Transactions on Intelligent Transportation Systems, 2021, doi: https://doi.org/10.1109/TITS.2021.3059674.


  6. H Hajri, M C Rahal. Real time LiDAR and Radar High-Level Fusion for Obstacle Detection and Tracking with evaluation on a ground truth. International Journal of Mechanical & Mechatronics Engineering, 2018.

7. J He, F Gao. Mechanism, actuation, perception, and control of highly dynamic multilegged robots: A review. Chinese Journal of Mechanical Engineering, 2020, 33: 79, https://doi.org/10.1186/s10033-020-00485-9.


  8. M Merchant, C Haas, J Schroder, et al. High-latitude wetland mapping using multidate and multisensor Earth observation data: a case study in the Northwest Territories. Journal of Applied Remote Sensing, 2020, 14(3).

  9. G Sun, K J Ranson. Modeling LiDAR and radar returns of forest canopies for data fusion. Geoscience and Remote Sensing Symposium, IEEE, 2002.

  10. K Na, J Byun, M Roh, et al. Fusion of multiple 2D LiDAR and RADAR for object detection and tracking in all directions. International Conference on Connected Vehicles & Expo, IEEE, 2015.

  11. S K Kwon, E Hyun, J H Lee, et al. Detection scheme for a partially occluded pedestrian based on occluded depth in lidar-radar sensor fusion. Optical Engineering, 2017, 56(11):1.


  12. H Lee, H Chae, K Yi. A geometric model based 2D LiDAR/Radar sensor fusion for tracking surrounding vehicles - ScienceDirect. IFAC-PapersOnLine, 2019, 52(8): 130-135.


  13. W Farag. Road-objects tracking for autonomous driving using lidar and radar fusion. Intelligent Decision Technologies, 2020, 15(3): 1-14.


  14. Y Cai, L Dai, H Wang, et al. DLnet with training task conversion stream for precise semantic segmentation in actual traffic scene. IEEE Transactions on Neural Networks and Learning Systems, doi: https://doi.org/10.1109/TNNLS.2021.3080261.

15. W Huang. Application of LiDAR in perception of autonomous driving environment. Microcontrollers & Embedded Systems, 2016.

  16. K Kidono, T Naito, J Miura. Reliable pedestrian recognition combining high-definition LIDAR and vision data. International IEEE Conference on Intelligent Transportation Systems, IEEE, 2012.

  17. S K Kwon, E Hyun, J H Lee, et al. A low-complexity scheme for partially occluded pedestrian detection using LiDAR-RADAR sensor fusion. IEEE International Conference on Embedded & Real-time Computing Systems & Applications, IEEE, 2016.

  18. M H Daraei, A Vu, R Manduchi. Region segmentation using LiDAR and Camera – eScholarship. IEEE International Conference on Intelligent Transportation Systems, IEEE, 2017.

19. E A Wan, R van der Merwe. The unscented Kalman filter. In: Kalman Filtering and Neural Networks. John Wiley & Sons, Inc., 2002.

  20. E Kraft. A quaternion-based unscented Kalman filter for orientation tracking. Sixth International Conference of Information Fusion, Proceedings of the IEEE, 2003.

  21. E N Chatzi, A W Smyth. The unscented Kalman filter and particle filter methods for nonlinear structural system identification with non‐collocated heterogeneous sensing. Structural Control & Health Monitoring, 2010, 16(1): 99-123.


22. B Stenger, P R S Mendonça, R Cipolla. Model-based hand tracking using an unscented Kalman filter. Proceedings of the British Machine Vision Conference (BMVC 2001), Manchester, UK, 10-13 September 2001.

23. X Ning, J Fang. An autonomous celestial navigation method for LEO satellite based on unscented Kalman filter and information fusion. Aerospace Science and Technology, 2007, 11(2-3): 222-228.


  24. T Kim, T H Park. Extended Kalman filter (EKF) design for vehicle position tracking using reliability function of radar and LiDAR. Sensors, 2020, 20(15): 4126.


  25. J Hollinger, B Kutscher, R Close. Fusion of LiDAR and radar for detection of partially obscured objects. SPIE Defense + Security, 2015.

  26. T N Ranjan, A Nherakkol, G Navelkar. Navigation of autonomous underwater vehicle using extended Kalman filter. Trends in Intelligent Robotics - 13th FIRA Robot World Congress, FIRA 2010, Bangalore, India, September 15-17, 2010, 2010.

  27. A Khitwongwattana, T Maneewarn. Extended Kalman filter with adaptive measurement noise characteristics for position estimation of an autonomous vehicle. IEEE/ASME International Conference on Mechtronic & Embedded Systems & Applications, IEEE, 2008.


Acknowledgements

Not applicable.

Authors’ Information

Ze Liu, born in 1990, received his M.S. degree from Jiangsu University, China. He is pursuing his Ph.D. degree at Jiangsu University, China. His research interests include sensor fusion, deep learning, and intelligent vehicles.

Yingfeng Cai, born in 1985, received B.S., M.S., and Ph.D. degrees from School of Instrument Science and Engineering, Southeast University, China, respectively. In 2013, she joined Automotive Engineering Research Institute, Jiangsu University, where now, she is working as a professor. Her research interests include computer vision, intelligent transportation systems, and intelligent automobiles.

Hai Wang, born in 1983, received B.S., M.S., and Ph.D. degrees from School of Instrument Science and Engineering, Southeast University, China, respectively. In 2012, he joined School of Automotive and Traffic Engineering, Jiangsu University, where now, he is working as an associate professor. His research interests include computer vision, intelligent transportation systems, and intelligent vehicles. He has published more than 50 papers in the field of machine vision-based environment sensing for intelligent vehicles.

Long Chen, born in 1958, received his Ph.D. degree in Vehicle Engineering from Jiangsu University, China, in 2002. His research interests include intelligent automobiles and vehicle control systems.

Funding

Supported by National Natural Science Foundation of China (Grant Nos. U20A20333, 61906076, 51875255, U1764257, U1762264), Jiangsu Provincial Natural Science Foundation of China (Grant Nos. BK20180100, BK20190853), Six Talent Peaks Project of Jiangsu Province (Grant No. 2018-TD-GDZB-022), China Postdoctoral Science Foundation (Grant No. 2020T130258), Jiangsu Provincial Key Research and Development Program of China (Grant No. BE2020083-2).

Author information


Contributions

YC was in charge of the whole trial; ZL wrote the manuscript; HW and LC assisted with sampling and laboratory analyses. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yingfeng Cai.

Ethics declarations

Competing Interests

The authors declare no competing financial interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Liu, Z., Cai, Y., Wang, H. et al. Surrounding Objects Detection and Tracking for Autonomous Driving Using LiDAR and Radar Fusion. Chin. J. Mech. Eng. 34, 117 (2021). https://doi.org/10.1186/s10033-021-00630-y

