An Obstacle Detection and Distance Measurement Method for Sloped Roads Based on VIDAR

Hindawi Journal of Robotics, Volume 2022, Article ID 5264347, 18 pages. https://doi.org/10.1155/2022/5264347

Research Article

Guoxin Jiang, Yi Xu, Xiaotong Gong, Shanshang Gao, Xiaoqing Sang, Ruoyu Zhu, Liming Wang, and Yuqiong Wang
School of Transportation and Vehicle Engineering, Shandong University of Technology, Zibo 255000, China
Correspondence should be addressed to Yi Xu; xuyisdut@163.com
Received 6 January 2022; Accepted 18 March 2022; Published 15 April 2022
Academic Editor: Arturo Buscarino
Copyright © 2022 Guoxin Jiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Environmental perception systems can provide information on the environment around a vehicle, which is key to active vehicle safety systems. However, these systems underperform in cases of sloped roads. Real-time obstacle detection using monocular vision is a challenging problem in this situation. In this study, an obstacle detection and distance measurement method for sloped roads based on the Vision-IMU based detection and range method (VIDAR) is proposed. First, the road images are collected and processed. Then, the road distance and slope information provided by a digital map is input into VIDAR to detect and eliminate false obstacles (i.e., those for which no height can be calculated). The movement state of the obstacle is determined by tracking its lowest point. Finally, experimental analysis is carried out through simulation and real-vehicle experiments. The results show that the proposed method has higher detection accuracy than YOLO v5s in a sloped road environment and is not susceptible to interference from false obstacles. The most prominent contribution of this research work is to describe a sloped road obstacle detection method that is capable of detecting all types of obstacles without prior knowledge, meeting the need for real-time and accurate detection of obstacles on sloped roads.

1. Introduction

With increasing public attention to the field of traffic safety, the automobile industry is developing in the direction of intelligence, with many studies on autonomous driving by engineers and scientific researchers. Autonomous driving does not refer to a single technological field; rather, it is a product of the development and integration of automotive electronics, intelligent control, and breakthroughs related to the Internet of Things [1, 2]. The principle is that autonomous driving systems obtain information on the vehicle and the surrounding environment through an environmental perception system. Then, the information is analyzed and processed by the processor, and the obstacle information in front of the vehicle is detected. Combined with the vehicle dynamics model, obstacle avoidance path planning and lateral control of the vehicle are realized [3–7].

Environmental perception systems, which need to perform functions such as object classification, detection, segmentation, and distance estimation, have become a key component of autonomous vehicles. These systems can not only provide important traffic parameters for autonomous driving but also perceive surrounding obstacles, such as stationary or moving objects, including roadblocks, pedestrians, and other elements [8]. During the vehicle's movement, radar (laser, millimeter wave), infrared, and vision sensors are used to collect environmental information to determine whether a target is in a safe area [9–11]. However, the price of infrared sensors and radars is relatively high, and most of them are limited to advanced vehicles [12]. Compared with other sensor systems, monocular vision requires only one camera to capture images and analyze scenes, thereby reducing the cost of detection solutions. Moreover, the camera can work at a high frame rate and provide rich information from long distances under good lighting and favorable weather conditions [13]; therefore, detection methods based on machine vision are being more and more widely adopted.
Machine learning can be used to achieve object classification for vision-based obstacle detection [14, 15]. However, traditional machine learning methods can only detect known types of obstacles (see Figure 1). If the vehicle cannot accurately detect an obstacle of unknown type, a traffic accident is very likely to occur. This situation is not conducive to the safe driving of the vehicle; therefore, in this study, we propose an unsupervised learning-based obstacle detection method, which allows the detection of both known- and unknown-type obstacles in complex environments.

Traditional obstacle detection methods, such as motion compensation [16–18] and optical flow methods [19–22], allow the detection of obstacles of different shapes moving at various speeds. However, these methods require the extraction and matching of a large number of object points, which increases the computational load. Therefore, in this study, we adopt a Vision-IMU (inertial measurement unit)-based detection and ranging method, abbreviated as VIDAR, which can realize fast matching and feature point processing of the detection area and improve obstacle detection speed and effectiveness.

VIDAR is an obstacle detection method developed for horizontal roads. When obstacles and the test vehicle are located on different slopes, there is imaging parallax, which leads to false obstacles being detected as real ones, resulting in a large measurement error and thereby reducing detection accuracy. To cope with the impact of slope changes, in this study we take the slope of the road into account when establishing the model and analyze the specific situation according to the positional relationship between the detected vehicle and the obstacle. We thus propose an obstacle detection and distance measurement method for sloped roads based on VIDAR. In the proposed method, slope and distance information are provided by digital maps [23–26].

The rest of this study is structured as follows: in Section 2, we review the research on obstacle detection and visual ranging. In Section 3, the conversion process from world coordinates to camera coordinates and the ranging principle of VIDAR are introduced. In Section 4, the detection process of real obstacles on sloped roads is outlined and the ranging and speed measurement models are established. Simulated and real experiments are presented in Section 5, and the experimental results are compared with the detection results of YOLO v5s to demonstrate the detection accuracy of the proposed method. In Section 6, the proposed method and our findings are summarized, and the study is concluded.

2. Related Work

Obstacle detection still forms one of the most significant research foci in the development of intelligent vehicles. With the improvement and optimization of monocular vision, obstacle detection based on monocular vision has attracted the attention of researchers. Most of the research on the detection of obstacles using monocular vision is based on the optimization of machine vision and digital image processing to improve the accuracy and speed of detection. S. Wang proposed a novel image classification framework that integrates a convolutional neural network (CNN) and a kernel extreme learning machine to distinguish the categories of extracted features, thus improving the performance of image classification [27]. Nguyen proposed an improved framework based on the fast response neural network (Fast R-CNN). The basic convolution layer of Fast R-CNN was formed using the MobileNet architecture, and the classifier was formed using the deep separable convolution structure of the MobileNet architecture, which improved the accuracy of vehicle detection [28]. Yi proposed an improved YOLO v3 neural network model, which introduced the concept of Faster R-CNN's anchor box and used a multiscale strategy, thus greatly improving the robustness of the network in small object detection [29]. Wang K.W. proposed an efficient fully convolutional neural network, which could predict the occluded part of the road by analyzing foreground objects and the existing road layout, thereby improving the performance of the neural network [30]. Although the above methods improved the accuracy of obstacle detection, they require a large amount of sample data for network training, and the range of samples must cover all obstacle types; otherwise, the obstacles cannot be detected.
Monocular ranging pertains to the use of a single camera to capture images and perform distance calculations. Zhang et al. used a stereo camera system to compute a disparity map and use it for obstacle detection. They applied different computer vision methods to filter the disparity map and remove noise in detected obstacles, and used a monocular camera in combination with the histogram of oriented gradients and support vector machine algorithms to detect pedestrians and vehicles [31]. Tkocz studied the ranging and positioning of a robot in motion, considering the scale ambiguity of monocular cameras; however, only experimental research has been done on the speed and accuracy of the measurement [32]. Meng C. designed a distance measurement system based on a fitting method, where a linear relationship between the pixel value and the real distance is established according to the pixel position of the vehicle in the imaging plane coordinate system, thus realizing adaptive vehicle distance measurement under monocular vision [33]. Zhe proposed a method for detecting vehicles ahead, which combined machine learning and prior knowledge to detect vehicles based on the horizontal edge of the candidate area [34]. These methods were only used for the measurement of distance to other vehicles and are not applicable to other types of obstacles.

Rosero proposed a method for sensor calibration and obstacle detection in an urban environment, in which data from radar, 3D LIDAR, and stereo camera sensors were fused to detect obstacles and determine their shape [35]. Garnett used a radar to determine the approximate location of obstacles and then used bounding box regression to achieve accurate positioning and identification [36]. Caltagirone proposed a novel LIDAR-camera fusion fully convolutional network and achieved state-of-the-art performance on the KITTI road benchmark [37]. Although sensor fusion methods reduce the processing load and achieve improved detection accuracy, these methods are based on flat roads and are not suitable for complex sloped road environments.

Figure 1: Fast R-CNN. Normal cars are detected, but the overturned car and the box are not detected.

To solve the above problems, we propose an obstacle detection and distance measurement method for sloped roads based on VIDAR. This method does not require a priori knowledge of the scene and uses the road slope information provided by a digital map and the vehicle driving state provided by an IMU to construct distance measurement and speed measurement models, which allow the detection of obstacles in real time, as well as the distance and movement state of the obstacles.
3. Methodology

The obstacle detection model of VIDAR is based on the pinhole camera model, which can accurately calculate the distance between vehicles and obstacles.

3.1. Coordinate Transformation. The camera maps the coordinate points of the three-dimensional world onto the two-dimensional imaging plane. This imaging principle is consistent with the pinhole model, so camera imaging can be described by the pinhole model. To determine the correspondence between an object point and its image point, we must establish the coordinate systems needed by the vision system, including the world coordinate system, the camera coordinate system, the imaging plane coordinate system, and the pixel coordinate system. The transformation process from the world coordinate system to the pixel coordinate system is shown in Figure 2.

Pixel coordinates (u, v) and image plane coordinates (x, y) lie on the same plane, and their axes are parallel. The position of the image plane origin in the pixel coordinate system is (u₀, v₀). Both the world and the camera coordinate systems are 3D coordinate systems, which are associated through the camera. According to the principle of pinhole imaging, the camera coordinate system can be obtained through a transformation of the coordinate axes of the world coordinate system, so the conversion relation between the two coordinate systems must be deduced. The conversion equation from the world to the pixel coordinate system is

$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \left( R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T \right) = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \left( R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T \right), \qquad (1)$$

where R and T are the external parameters. The internal and external parameters can be obtained through camera calibration.

Figure 2: Transformation between coordinate systems: world coordinates (Xw, Yw, Zw) → camera coordinates (Xc, Yc, Zc) → imaging plane coordinates (x, y) → pixel coordinates (u, v).
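To make the projection in equation (1) concrete, the following minimal Python sketch maps a world point to pixel coordinates. The intrinsic and extrinsic values used here are illustrative assumptions, not the calibration reported in the paper.

```python
import numpy as np

# Minimal sketch of equation (1): projecting a world point into pixel coordinates.
# All parameter values below are illustrative, not the calibration used in the paper.

def world_to_pixel(Xw, R, T, f, dx, dy, u0, v0):
    """Project a 3D world point Xw (shape (3,)) into pixel coordinates (u, v)."""
    Xc = R @ Xw + T                        # world -> camera coordinates
    K = np.array([[f / dx, 0.0,    u0],    # intrinsic matrix, ax = f/dx, ay = f/dy
                  [0.0,    f / dy, v0],
                  [0.0,    0.0,    1.0]])
    uv1 = K @ Xc                           # homogeneous pixel coordinates scaled by Zc
    return uv1[:2] / uv1[2]                # divide by Zc to obtain (u, v)

# Example with assumed values
R = np.eye(3)                              # no rotation between world and camera
T = np.array([0.0, 0.0, 5.0])              # world origin 5 m in front of the camera
point = np.array([1.0, -0.5, 10.0])        # an arbitrary world point
print(world_to_pixel(point, R, T, f=0.006, dx=4.2e-6, dy=4.2e-6, u0=960, v0=540))
```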
3.2. Obstacle Ranging Method. The obstacle ranging principle is also based on the pinhole model. For convenience of expression, we installed the camera on a test vehicle, and a vehicle on a sloped road was regarded as the obstacle. The feature points of the obstacle were detected, and the lowest point was taken as the intersection point between the obstacle and the road surface (see Figure 3). In the case of normal detection by the system, the camera collects image information, and by processing this information, feature points in the image can be extracted. By measuring the distance of a feature point, it can be determined whether the obstacle on which the feature point is located has a height. For real obstacles, tracking the feature point at the lowest position makes it possible to calculate the moving speed of the obstacle, judge its motion state, and provide data support for the safe driving of the vehicle. As long as the camera can capture images normally, all obstacles in the captured scene can be detected. The number of detected obstacles is related to the number of extracted feature points.

Let f be the effective focal length of the camera, ϑ the pitch angle, μ the pixel size, and h the mounting height of the camera, with the camera center taken as the optical center of the lens. Let (x₀, y₀) be the coordinate origin of the imaging plane coordinate system, and (x, y) the coordinates of the intersection of the obstacle and the road plane in the image plane coordinate system. The horizontal distance between the camera and the obstacle can be obtained using

$$d = \frac{h}{\tan\left(\vartheta + \arctan\left[\left(y - y_0\right)\mu / f\right]\right)}. \qquad (2)$$

Figure 3: Schematic diagram of the obstacle ranging model (to visualize the detection principle, the proportions shown in the figure are not true to scale).

4. Research Approach

In the traditional VIDAR model, it is assumed that the test vehicle and obstacles are on the same plane. However, when the test vehicle and the obstacles are on roads with different slopes, this causes a deviation of the distance measurement. In order to enhance the visual detection accuracy and expand the application scenarios of visual ranging, in this study we take the slope into account and establish an obstacle detection model for the sloped road.

4.1. Establishment of the Distance Measurement Model. The sloped road mentioned in this study refers to a road where the test vehicle and the obstacles are not on the same slope. When measuring distance, this situation can be simplified into two models. The distance models between the camera and obstacles located in front of the test vehicle on a sloped road are shown in Figure 4. Let the light blue line be the auxiliary line, and the red dot on the obstacle be any detected object point. Let C_i be a point on the road's surface, C_i′ be the image point of C_i on the sloped road's surface, C_i″ be the intersection point where C_iC_i′ extended meets the imaginary horizontal plane, and S′ be the distance from the camera to the beginning of the road slope change. Let d_ii be the horizontal distance between the camera and C_i′, and d_i be the horizontal distance between the camera and C_i″.

Using triangle similarity, equation (3) can be obtained through the geometric relationships shown in Figure 4:

$$\begin{cases} \dfrac{h_1}{d_{ii} - S'} = \tan\alpha, \\[2mm] \dfrac{h + k h_1}{d_{ii}} = \tan\theta_i, \\[2mm] \theta_i = \vartheta + \arctan\dfrac{\left(y_0 - y_i\right)\mu}{f}, \end{cases} \qquad (3)$$

where α is the road slope and h₁ is the vertical offset of the object point from the imaginary horizontal plane. The expression for d_ii is further derived as

$$d_{ii} = \frac{S'\tan\alpha + k h}{\tan\alpha + k\tan\theta_i}. \qquad (4)$$

When the slope of the road where the obstacle is located is larger than that of the road where the test vehicle is located, k = −1. In the opposite case, k = 1.

Figure 4: Diagram of the distance models. (a) Situation 1: the test vehicle on a flat road and obstacles on an uphill road. (b) Situation 2: the test vehicle on a flat road and obstacles on a downhill road.
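The ranging relations in equations (2)–(4) can be sketched in a few lines. The function names and numeric values below are our own illustrative assumptions, and the sign factor k follows the convention stated above; this is a sketch, not the authors' implementation.

```python
import math

# Sketch of the ranging model in equations (2)-(4). The sign convention k follows the
# text: k = -1 when the obstacle's road slope is larger than the test vehicle's,
# k = +1 otherwise. All numeric values below are illustrative.

def ray_angle(y_pixel, y0, mu, f, pitch):
    """theta_i: angle of the camera ray below the horizontal (equations (2)-(3))."""
    return pitch + math.atan((y_pixel - y0) * mu / f)

def distance_flat(h, theta_i):
    """d_i: horizontal distance if the object point lay on the imaginary flat road."""
    return h / math.tan(theta_i)

def distance_sloped(h, theta_i, alpha, S, k):
    """d_ii: horizontal distance to the object point on the sloped road (equation (4))."""
    return (S * math.tan(alpha) + k * h) / (math.tan(alpha) + k * math.tan(theta_i))

# Example: camera 1.6 m high, 13 degree slope beginning 8 m ahead of the camera
theta = ray_angle(y_pixel=620, y0=540, mu=4.2e-6, f=0.006, pitch=math.radians(2.0))
print(distance_flat(1.6, theta), distance_sloped(1.6, theta, math.radians(13), S=8.0, k=1))
```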
*e first point of the obstacle on the image plane is A. Let C be a point on the road’s surface, C be the image As the camera moves with the test vehicle, and the y axis on ′ ″ point of C on the sloped road’s surface, C be the inter- the image plane moves from axis y to the axis y , we obtain 1 2 ′ ″ section point where CC extends to the imaginary horizontal the point B of the obstacle on the image plane. A is the ′ ′ plane, and S be the distance from the camera to the be- intersection, where AA extends to the imaginary horizontal ginning of road slope change. Let d be the horizontal plane, and accordingly for B . Δ d is the movement distance h Journal of Robotics 5 C′ i ° S′ C″ dii di (a) S′ C″ ° i di C′ dii (b) Figure 4: Diagram of distance models. (a) Situation 1: the test vehicle on a flat road and obstacles on an uphill road. (b) Situation 2: the test vehicle on a flat road and obstacles on a downhill road. of the camera (i.e., the test vehicle), d is the horizontal Let the speeds of the detecting vehicles and obstacles be v ″ ′ distance from the camera to A , d is the same, d is the and v , respectively. When the imaging point of the road’s 2 11 horizontal distance from the camera to A , and accordingly intersection point and the obstacle’s object point passes for d . through the camera, the relationships between h , v, v , L 22 v A d and d can be calculated using equation (4). *e and L are as follows: 11 22 B relationship between d and d can be approximated as 11 22 ⎧ ⎪ d � d + Δ d, but the real relationship is � L , 11 22 ⎪ ⎪ tan α + bθ d � d + Δ d + Δl. If d ≠ d + Δ d, the object points are ⎪ 1 11 22 11 22 ⎪ not on the road surface. Using this method, it can be de- termined whether the obstacle has a height (i.e., it is a real ⎪ � L , obstacle). ⎪ B tan α + bθ 􏼁 (5) 4.3. Special Case of Obstacle Detection. A special case should ⎪ ′ L − L � v t, ⎪ A B be excluded during obstacle detection. When the test vehicle and the obstacles are moving at the same time, the imaging point of the camera light on the road surface through an Δ d � vt. object point of the obstacle coincides with each other. VIDAR is unable to detect obstacles in this case. When the slope of the road where the obstacle is located *e diagrams of obstacle detection in complex envi- is larger than that of the test vehicle, b � 1, while b � −1 in ronments are shown in Figure 6. Let L be the distance the opposite case. (along the road where the obstacle is located) between the *erefore, VIDAR can be used in all cases except when highest point of the obstacle and the object point of the road ′ L − L � v t, and A B surface when the test vehicle is moving for the first time. ′ v � v/Δ d(h /tan(α + bθ ) − h /tan(α + bθ )). *erefore, v 1 v 2 Similarly, L is the distance when the vehicle moves for the the proposed method using a monocular camera to detect second time. *e letters in Figure 6 have the same meaning obstacles on sloped roads is convenient and feasible. *e as the letters above. detection process only includes tracking and calculating the hv h1 hv 1 6 Journal of Robotics y1 y2 Obstacle’s Imagine Point B′ A′ First Imaging A″ Point 2 1 B″ Δd d22 Δl d11 (a) y1 y2 Obstacle’s Imagine Point First Imaging Point S 1 A″ 2 B″ Δd d2 B′ d1 A′ Δl d22 d11 (b) Figure 5: Schematic diagram of stationary obstacle imaging. (a) Situation 1: the test vehicle moving on a flat road and stationary obstacle on an uphill road. (b) Situation 2: the test vehicle moving on a flat road and stationary obstacle on a downhill road. 
4.3. Special Case of Obstacle Detection. A special case should be excluded during obstacle detection. When the test vehicle and the obstacle are moving at the same time, the imaging point of the camera ray through an object point of the obstacle may coincide with the imaging point of the corresponding point on the road surface in both images. VIDAR is unable to detect obstacles in this case.

The diagrams of obstacle detection in complex environments are shown in Figure 6. Let L_A be the distance (along the road where the obstacle is located) between the highest point of the obstacle and the object point of the road surface when the test vehicle moves for the first time. Similarly, L_B is the distance when the vehicle moves for the second time. The letters in Figure 6 have the same meaning as above. Let the speeds of the detecting vehicle and the obstacle be v and v′, respectively. When the imaging points of the road's intersection point and the obstacle's object point pass through the camera on the same ray, the relationships between h_v, v, v′, L_A, and L_B are as follows:

$$\begin{cases} \dfrac{h_v}{\tan\left(\alpha + b\theta_1\right)} = L_A, \\[2mm] \dfrac{h_v}{\tan\left(\alpha + b\theta_2\right)} = L_B, \\[2mm] L_A - L_B = v' t, \\[1mm] \Delta d = v t, \end{cases} \qquad (5)$$

where h_v is the height of the obstacle's object point. When the slope of the road where the obstacle is located is larger than that of the road where the test vehicle is located, b = 1, while b = −1 in the opposite case.

Therefore, VIDAR can be used in all cases except when L_A − L_B = v′t with v′ = (v/Δd)[h_v/tan(α + bθ₁) − h_v/tan(α + bθ₂)]. The proposed method of using a monocular camera to detect obstacles on sloped roads is therefore convenient and feasible. The detection process only involves tracking and calculating the position of the object point, which shortens the detection time and reduces computational resource consumption.

Figure 6: Schematic diagram of obstacles and camera imaging in complex environments. (a) Situation 1: the test vehicle driving on a flat road and dynamic obstacles on an uphill road. (b) Situation 2: the test vehicle driving on a flat road and dynamic obstacles on a downhill road.

4.4. Speed Measuring Model of the Sloped Road Obstacle. Obstacles are imaged on the camera's photosensitive element. By extracting and calculating the feature points of the collected obstacle images, we can identify the feature points that are not on the road surface, that is, the feature points whose height is not zero. The object points with nonzero height are morphologically processed to obtain the obstacle areas. The movement state of the obstacles can then be determined by tracking the lowest point of each obstacle and calculating its speed.

When the test vehicle is moving, the obstacles, the camera, and the lowest point of the road are imaged (see Figure 7). At this time, the horizontal distance between the lowest point of the obstacle and the camera can be expressed as d_ii. Let A be the image plane point corresponding to the lowest point of the obstacle at time t, and B the corresponding point at t + Δt. The relationship between d₁₁, d₂₂, and Δd is as follows:

$$\frac{\left|d_{11} - \left(d_{22} + \Delta d\right)\right|}{\cos\alpha} = v' \cdot \Delta t, \qquad (6)$$

where Δd = v·Δt, with v being the speed of the test vehicle. When d₁₁ = d₂₂ + v·Δt, the obstacle is stationary; otherwise, it is moving with a speed of

$$v' = \frac{\left|d_{11} - d_{22} - v \cdot \Delta t\right|}{\Delta t \cdot \cos\alpha}. \qquad (7)$$

Figure 7: Schematic diagram of the camera and the lowest point on the road. (a) Situation 1: the test vehicle moving on a flat road and dynamic obstacles on an uphill road. (b) Situation 2: the test vehicle moving on a flat road and dynamic obstacles on a downhill road.
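The speed model of equations (6) and (7) reduces to a few lines of code. The tolerance and the numeric example below are assumptions chosen for illustration only.

```python
import math

# Sketch of the speed model in equations (6)-(7): if the tracked lowest point does not
# satisfy d11 = d22 + v*dt, the obstacle is moving, and its speed along the slope is
# recovered from the residual divided by cos(alpha). Values are illustrative.

def obstacle_speed(d11, d22, v_ego, dt, alpha, tol=1e-3):
    """Return (is_moving, speed) for the obstacle whose lowest point moved d11 -> d22."""
    residual = abs(d11 - d22 - v_ego * dt)
    if residual <= tol:
        return False, 0.0
    return True, residual / (dt * math.cos(alpha))

# Test vehicle at 6.9 m/s (about 25 km/h), frames 0.2 s apart, 7 degree slope:
print(obstacle_speed(d11=15.80, d22=14.30, v_ego=6.9, dt=0.2, alpha=math.radians(7)))
# -> (True, ~0.60 m/s)
```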
4.5. Obstacle Detection on Sloped Roads Using VIDAR. In this study, an obstacle detection and distance measurement method for sloped roads based on VIDAR is proposed, which can quickly judge and eliminate false obstacles (those without height) and at the same time identify real obstacles and judge their movement state. The detection process is as follows (see Figure 8).

Step 1. Update camera parameters using the IMU:
(1) Calibration of the camera's initial internal and external parameters: the camera's parameters, such as the focal length f, mounting height h, pixel size μ, and pitch angle ϑ, are obtained through calibration.
(2) Data acquisition: the camera is used to collect images and the IMU is used to collect inertial data. The acquisition frequency of the IMU is higher than that of the camera.
(3) Update of camera parameters: the frequency relationship between the IMU and the camera is established, and the camera parameters at time t are calculated periodically according to the inertial data.

Step 2. Obtain the road information: acquire the road slope α and the distance S′ from the test vehicle to the sloped road using the digital map.

Step 3. Regional background extraction:
(1) Two consecutive images taken during the running of the test vehicle form the total obstacle detection area B_i (see Figures 9(a) and 9(b)).
(2) The lane line is detected and the image within the lane line is extracted as G_i.
(3) Machine learning is used to process the images and to detect and classify specific types of obstacles. The area set F_i of known types of obstacles is obtained, where F_i = {f₁, f₂, . . ., f_k} and k is the number of known obstacles.
(4) The known obstacle area F_i is eliminated from the total obstacle detection area G_i, and the background area N_i (N_i = G_i − F_i) is extracted as the data to be processed by VIDAR.

Step 4. Image processing and obstacle detection:
(1) Object points are extracted from the background areas N_i and N_{i+1} of two consecutive images. With N_i as the background region template map and N_{i+1} as the background region real-time map, the matching regions M_i and M_{i+1} are obtained using a fast image region matching method based on region feature extraction, as shown in Figure 9(c).
(2) The object point set P_i of matching area M_{i+1} is extracted, as shown in Figure 9(d).
(3) The distance between the test vehicle and each object point is calculated. The horizontal distance between the camera and the imaged object point on the imaginary road is d_i = h/tan θ_i. The horizontal distance between the camera and the imaged object point on the real road is d_ii = (S′·tan α + kh)/(tan α + k·tan θ_i). The calculation process of d_ii is shown in Figure 10: first, the pixel coordinates of the object points are obtained through the transformation of the coordinate axes; then the slope information is obtained through Step 2; finally, the distance is obtained through the ranging model.
(4) The object points with height in set P_i are extracted (see Figure 9(e)). d_ii and d_{i+1,i+1} are calculated as the vehicle moves continuously. If d_ii = d_{i+1,i+1} + Δd, the object points are on the road surface (without height), so these object points p_ij are eliminated. If d_ii ≠ d_{i+1,i+1} + Δd, the object points are not on the road surface (i.e., they have nonzero height). These object points are extracted to obtain the object point set P_i′.
(5) Morphological processing is applied to the image of the object point set P_i′ (Figure 9(f)). The target image is E_i and the structural element is B_i, which is used to apply a closing operation on E_i and obtain C connected regions. The real obstacle region O_i is thus obtained, where O_i = E_i · B_i = (E_i ⊕ B_i) ⊖ B_i and O_i = (o_{i,1}, o_{i,2}, . . ., o_{i,c}).
(6) Edge detection of real obstacles is performed, as shown in Figure 9(g).
(7) According to the detection result of (6), the lowest object point of each obstacle area is extracted, as shown in Figure 9(h). The lowest object point set P_i″ constitutes the obstacle area.
(8) Each object point in P_i″ is tracked during the movement of the test vehicle.
(9) The movement state of the obstacles is obtained. The movement speed of the obstacle on which an object point is located can be obtained by tracking each object point in P_i″. If |d_{i+1,i+1} − d_{i+2,i+2}| = v_{i+1}·Δt, the obstacle on which these object points are located is static. If |d_{i+1,i+1} − d_{i+2,i+2}| ≠ v_{i+1}·Δt, the obstacle is moving with an instantaneous speed v_{i+1}′ = |d_{i+1,i+1} − d_{i+2,i+2} − v_{i+1}·Δt|/(Δt·cos α_i).

Figure 8: Flow chart of the sloped road obstacle detection method based on VIDAR (see the steps above for details).
Figure 9: Obstacle detection based on VIDAR and a digital map. (a) Two images collected during the movement of the vehicle. (b) Lane line detection on the image and extraction of the detection range. (c) Feature point detection and matching on the images within the range. (d) Extraction of feature points. (e) Determination of whether the extracted feature points have height. (f) Removal of feature points without height. (g) Morphological processing of feature points. (h) Tracking of the lowest point of obstacles.
Figure 10: Flow chart of the horizontal distance calculation.

The proposed obstacle detection method can be used to detect real obstacles in complex environments and determine their movement state, which helps vehicles take timely measures and avoid accidents.
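The per-frame loop of Steps 3 and 4 can be sketched as follows. ORB feature matching is used here in place of the paper's fast region matching method, and all function and parameter names are our own; this is a simplified illustration under those assumptions, not the authors' implementation.

```python
import math
import cv2
import numpy as np

# Sketch of the per-frame VIDAR loop from Steps 3-4. It assumes the background region
# (lane area minus known obstacles) has already been cropped out of both frames, and
# swaps in ORB feature matching for the paper's region-matching method.

def ray_angle(v_pixel, v0, mu, f, pitch):
    return pitch + math.atan((v_pixel - v0) * mu / f)

def sloped_distance(h, theta_i, alpha, S, k):
    return (S * math.tan(alpha) + k * h) / (math.tan(alpha) + k * math.tan(theta_i))

def detect_candidate_obstacles(bg_prev, bg_curr, cam, road, delta_d, tol=0.1):
    """Match feature points between two background images and keep those with height."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(bg_prev, None)
    kp2, des2 = orb.detectAndCompute(bg_curr, None)
    if des1 is None or des2 is None:
        return []
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    candidates = []
    for m in matches:
        v_prev = kp1[m.queryIdx].pt[1]   # image row of the point in the first frame
        v_curr = kp2[m.trainIdx].pt[1]   # image row of the point in the second frame
        d_prev = sloped_distance(cam["h"],
                                 ray_angle(v_prev, cam["v0"], cam["mu"], cam["f"], cam["pitch"]),
                                 road["alpha"], road["S"], road["k"])
        # after the vehicle advances by delta_d, the distance to the slope start shrinks too
        d_curr = sloped_distance(cam["h"],
                                 ray_angle(v_curr, cam["v0"], cam["mu"], cam["f"], cam["pitch"]),
                                 road["alpha"], road["S"] - delta_d, road["k"])
        if abs(d_prev - (d_curr + delta_d)) > tol:   # height test from Section 4.2
            candidates.append(kp2[m.trainIdx].pt)    # keep points judged to be off the road
    return candidates

# Example call with assumed parameters:
# cam = {"h": 1.60, "f": 0.006, "mu": 4.2e-6, "v0": 540, "pitch": math.radians(1.2)}
# road = {"alpha": math.radians(7), "S": 12.0, "k": 1}
# pts = detect_candidate_obstacles(prev_gray, curr_gray, cam, road, delta_d=0.35)
```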
5. Experiment and Evaluation

The proposed method can be used for obstacle detection in complex environments with improved accuracy, as well as for distance and speed measurement of obstacles. Obstacle detection and distance measurement were implemented in Matlab, and all experiments were performed on a desktop PC with an Intel(R) Xeon(R) Silver 4210 CPU.

5.1. Simulation Experiment. In this study, experimental equipment was used to simulate a detection environment so as to verify the detection effect for obstacles on sloped roads based on VIDAR. The experimental equipment included a test vehicle equipped with an OV5640 camera unit and a JY61p IMU (Figure 11(a)), vehicle scale models (Figure 11(b)), bottle caps and paper (Figure 11(c)), and a simulated sloped road (Figure 11(d)). The test vehicle was used to analyze the road environment and detect its own driving state, the scaled vehicle models were used to simulate known obstacles, and the bottle caps and paper were used to simulate unknown obstacles. The road slope was set to 13°.

The bottle cap was taken as a real obstacle of unknown type, and the paper pasted on the simulated road was taken as a pseudo-obstacle of unknown type. The angular velocity and acceleration data of the vehicle were obtained by the IMU installed on the vehicle. The quaternion method is used to solve the camera attitude, and the pitch angle of the camera is updated accordingly. The velocity data are used to calculate the horizontal distance between the vehicle and the obstacle. The height of the obstacle is determined from the change of the distance before and after the movement, so as to decide whether the detected obstacle is a real obstacle. The video collected using the OV5640 camera comprised an image sequence at 12 FPS, which was used for obstacle detection.

Figure 11: Equipment for the simulation experiment. (a) Test vehicle (movable platform with camera unit and IMU). (b) Vehicle scale models. (c) Bottle caps and paper. (d) Simulated sloping road.
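A small sketch of the attitude update mentioned above: the pitch angle used by the ranging model is recovered from an orientation quaternion. The assumption that the IMU reports a unit quaternion in (w, x, y, z) order and that a ZYX Euler convention applies is ours, not the paper's.

```python
import math

# Sketch of the IMU attitude update in Section 5.1: recover the camera pitch from the
# orientation quaternion (w, x, y, z). A ZYX (yaw-pitch-roll) convention is assumed.

def pitch_from_quaternion(w, x, y, z):
    """Pitch angle in radians from a unit quaternion, ZYX Euler convention."""
    s = 2.0 * (w * y - z * x)
    s = max(-1.0, min(1.0, s))      # clamp to avoid domain errors from sensor noise
    return math.asin(s)

# Example: a quaternion for a 5 degree pitch rotation about the y axis
q = (math.cos(math.radians(2.5)), 0.0, math.sin(math.radians(2.5)), 0.0)
print(math.degrees(pitch_from_quaternion(*q)))   # ~5.0
```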
The results obtained using the original VIDAR and the VIDAR for sloped roads are shown in Figure 12, and the test results of the simulation experiment are summarized in Table 1.

Figure 12: Comparison of the obstacle detection effect in the simulated environment. (a) Original VIDAR. (b) VIDAR on sloped roads.

Table 1: Test results of the simulation experiment.
Obstacle | i-th movement of the test vehicle | Detection distance (cm) | Test vehicle moving distance (cm) | Whether it has height | Whether to exclude
1 | 1 | 16.5 | 3 | No | Yes
1 | 2 | 19.5 | | |
2 | 1 | 18.5 | 3 | Yes | No
2 | 2 | 21.2 | | |
3 | 1 | 24.5 | 3 | Yes | No
3 | 2 | 27.0 | | |

It can be seen from Figure 12 that the original VIDAR can detect unknown types of obstacles such as bottle caps, but it also detects false obstacles as real obstacles, resulting in low obstacle detection accuracy. The obstacle detection method for sloped roads based on VIDAR, however, can eliminate false obstacles, which makes up for the wrong detection of unknown types of obstacles on sloped roads; therefore, compared with the original VIDAR, the proposed method can detect obstacles more accurately.

5.2. Real Environment Experiment. In the real environment, purely electric vehicles were used as test vehicles (see Figure 13). As a sensor, the camera can adapt to complex environments and collect environmental information in real time (only the left camera was used). The camera was installed at a height of 1.60 m. The IMU used for locating the test vehicle and reading its movement state in real time was installed at the bottom of the test vehicle. GPS was used for accurate location positioning. Through the combination of GPS and IMU, the real-time position information of the test vehicle and the obstacles can be obtained, and then the trajectory information of vehicles and obstacles can be derived. A digital map was used to obtain accurate road information such as distance and slope. A computing unit was used to process the data in real time.

Figure 13: Schematic diagram of the test vehicle (camera, computing unit with digital map, and GPS + IMU).

Accurate calibration of the camera parameters was a prerequisite for the whole experiment and is a very important task for obstacle detection methods. In this paper, Zhang Zhengyou's camera calibration method was adopted to calibrate the DaYing camera. First, the camera was fixed to capture images of a checkerboard at different positions and angles. Then, the key points of the checkerboard were selected and used to establish a relationship equation. Finally, the internal parameter calibration was realized. The camera calibration result is shown in Figure 14.

Figure 14: Schematic diagram of the camera calibration results (reprojection errors; overall mean error: 0.07 pixels).

Camera distortion includes radial distortion, thin lens distortion, and centrifugal distortion. The superposition of the three kinds of distortion results in a nonlinear distortion, whose model can be expressed in the image coordinate system as

$$\begin{cases} \delta_x(x, y) = s_1 x\left(x^2 + y^2\right) + 2 p_1 x y + p_2\left(3x^2 + y^2\right) + k_1 x\left(x^2 + y^2\right), \\[1mm] \delta_y(x, y) = s_2 y\left(x^2 + y^2\right) + 2 p_2 x y + p_1\left(x^2 + 3y^2\right) + k_2 y\left(x^2 + y^2\right), \end{cases} \qquad (8)$$

where s₁ and s₂ are the centrifugal distortion coefficients, k₁ and k₂ are the radial distortion coefficients, and p₁ and p₂ are the distortion coefficients of the thin lenses. Because the centrifugal distortion of the camera is not considered in this study, the internal reference matrix of the camera can be expressed as

$$M = \begin{bmatrix} 5.9774 \times 10^{3} & 0 & 949.8843 \\ 0 & 5.9880 \times 10^{3} & 357.0539 \\ 0 & 0 & 1 \end{bmatrix}. \qquad (9)$$
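In practice, such a correction is often applied with OpenCV's standard radial-tangential distortion model rather than with equation (8) directly; the sketch below uses the calibrated intrinsic matrix of equation (9), while the distortion coefficients are placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Sketch of applying the calibrated intrinsics from equation (9) to correct lens
# distortion. OpenCV's standard radial-tangential model is used here in place of the
# paper's formulation, and the distortion coefficients below are placeholders.

K = np.array([[5.9774e03, 0.0,       949.8843],
              [0.0,       5.9880e03, 357.0539],
              [0.0,       0.0,       1.0]])
dist = np.array([-0.12, 0.05, 0.001, 0.0005, 0.0])   # [k1, k2, p1, p2, k3], assumed values

def undistort_frame(frame):
    """Return the distortion-corrected image used by the ranging model."""
    return cv2.undistort(frame, K, dist)

# corrected = undistort_frame(cv2.imread("frame_0001.png"))
```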
The calibration of the camera's external parameters can be calculated by taking the edge object points of the lane lines. The calibration results are shown in Table 2.

Table 2: Calibration results of the camera's external parameters.
External parameter type | Parameter size (°)
Pitch angle | 1.20
Yaw angle | 3.85
Rotation angle | 2.37

Since the images in public data sets were all collected by other cameras, and different camera parameters affect the accuracy of ranging, we used the VIDAR-Slope database (Figure 15), the images of which were collected using a DaYing camera. The collection frequency was 77 frames/min, and there are 2270 images in total. The experiment and image collection took place at Shandong University of Technology's driving school and experimental building. We selected the downhill section of the parking lot for the experiment. During obstacle detection, the test vehicle moved at a constant speed of 25 km/h.

Figure 15: The VIDAR-Slope database (part of the data).

The detection results of YOLO v5s and of the method proposed in this study are shown in Figure 16. The accuracy of obstacle detection was measured through the numbers of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). Let a_i be an obstacle that is correctly classified as a positive example, b_i an obstacle that is wrongly classified as a positive example, c_i an obstacle that is correctly classified as a negative example, and d_i an obstacle that is incorrectly identified as a negative example. Then TP = Σ_{i=1}^{n} a_i, FP = Σ_{i=1}^{n} b_i, TN = Σ_{i=1}^{n} c_i, and FN = Σ_{i=1}^{n} d_i.

The YOLO series is a representative target detection framework based on deep learning. There are four versions of the target detection network, namely YOLO v5s, YOLO v5m, YOLO v5l, and YOLO v5x. Among them, YOLO v5s is the smallest and fastest, so we chose it for the comparative experiments. Comparing the two methods, it can be seen that the stability of the proposed method is higher than that of YOLO v5s. YOLO v5s lacks training on unknown types of obstacles and will consequently offer reduced safety when used in realistic vehicle situations. The proposed obstacle detection method, however, does not require training and can detect all types of obstacles, thus ensuring the effectiveness of its detection results on sloped roads. The total number of obstacles in the target area of the VIDAR-Slope database was 9526. The results of YOLO v5s and the proposed method are shown in Table 3.

In the analysis of the results, Accuracy (A), Recall (R), and Precision (P) were used as evaluation indices for the two obstacle detection methods, calculated through the following equations:

$$A = \frac{TP + TN}{TP + TN + FP + FN}, \qquad (10)$$

$$R = \frac{TP}{TP + FN}, \qquad (11)$$

$$P = \frac{TP}{TP + FP}. \qquad (12)$$

The Accuracy, Recall, and Precision of YOLO v5s and of the method proposed in this study are shown in Table 4.

Figure 16: Comparison of partial test results of YOLO v5s and the proposed method. (a) YOLO v5s can only detect known-type obstacles. (b) The obstacle detection method proposed in this article detects both pedestrians (known type) and boxes (unknown type).
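Equations (10)–(12) follow directly from the confusion counts; the short sketch below reuses the proposed method's counts from Table 3 as an example.

```python
# Sketch of the evaluation indices in equations (10)-(12), computed from the confusion
# counts (TP, FP, TN, FN) accumulated over the test set.

def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def recall(tp, fn):
    return tp / (tp + fn)

def precision(tp, fp):
    return tp / (tp + fp)

# Example with the proposed method's counts from Table 3:
tp, fp, tn, fn = 9124, 402, 530, 485
print(f"R = {recall(tp, fn):.2%}, P = {precision(tp, fp):.2%}")   # ~94.95%, ~95.78%
```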
Table 3: Obstacle detection results on sloped roads for YOLO v5s and the proposed method.
Detection method | Input value | TP | FP | TN | FN
YOLO v5s | 9526 | 6716 | 2808 | 915 | 1214
Proposed method | 9526 | 9124 | 402 | 530 | 485

Table 4: Evaluation indices of YOLO v5s and the proposed method.
Detection method | A (%) | R (%) | P (%)
YOLO v5s | 81.73 | 84.68 | 70.50
Proposed method | 90.36 | 94.94 | 95.77

The experimental results in Tables 3 and 4 show that, due to vehicle fluctuations and other factors, misjudgment or missed detection may occur during vehicle movement. Compared with YOLO v5s, the accuracy of the obstacle detection method proposed in this study is increased by 8% and its precision is increased by 26.4%, which demonstrates its improved obstacle detection capability on sloped roads.

In terms of detection accuracy, we also compared our method with other commonly used target detection methods. The detection results are shown in Table 5. It is evident that the proposed obstacle detection method achieves an accuracy higher than state-of-the-art methods.

Table 5: Detection accuracy of commonly used target detection methods and the proposed method.
Detection method | A (%)
Fast R-CNN | 76.53
SSD | 71.28
YOLO | 78.95
SSD 300 | 73.96
YOLO v5s | 81.73
Proposed method | 90.36

The real-time capability of obstacle detection refers to the ability to process every collected image frame in time. In terms of detection speed, YOLO v5s and the proposed method were used to process the 2270 images, and the respective average obstacle detection times were calculated. The results are shown in Table 6.

Table 6: Average detection times of YOLO v5s and the proposed method.
Detection method | Detection time (s)
YOLO v5s | 0.164
Proposed method | 0.201

Since the average detection time of the proposed method is 0.201 s, in order to ensure the detection of obstacles under normal driving conditions, the speed of the detected vehicle must be less than or equal to the ratio of the detection distance to the average detection time. Compared with YOLO v5s, the method proposed in this study saves the training step on a data set. The modified method first uses machine learning to detect obstacles of known types, but it then needs to process the feature points of obstacles of unknown types, so its final detection time is longer than that of YOLO v5s; nevertheless, it can still meet the demand for real-time detection.
In order to verify the reliability of the distance measurement method proposed in this study and the feasibility of its practical application, we carried out a set of obstacle detection experiments. We first used a fixed camera to take pictures of the real road environment ahead and recorded the process. The result of the IMU data processing is shown in Figure 17. We then selected several frames of images during the approach of the obstacle for processing. Finally, the distance between the camera and the obstacles in front was calculated, and the detection result is shown in Figure 18. The comparison results are shown in Table 7.

Figure 17: IMU data processing result (speed and distance of the test vehicle).

Figure 18: Obstacle distance detection. (a) 2021 8 16 10 0 32.825. (b) 2021 8 16 10 0 34.327. (c) 2021 8 16 10 0 35.766. (d) 2021 8 16 10 0 38.630.

Table 7: Distance measurement results based on monocular vision.
Time | Slope (°) | Obstacle | Actual distance (m) | Measured distance (m) | Distance difference (m) | Error (%)
2021 8 16 10 0 32.825 | 7.35 | 1 | 13.793 | 13.926 | 0.133 | 0.964
 | | 2 | 13.609 | 13.800 | 0.191 | 1.403
 | | 3 | 12.651 | 12.637 | 0.014 | 0.111
 | | 4 | 14.198 | 14.052 | 0.146 | 1.028
2021 8 16 10 0 34.327 | 7.21 | 1 | 13.719 | 13.826 | 0.107 | 0.780
 | | 2 | 13.499 | 13.594 | 0.095 | 0.704
 | | 3 | 12.265 | 12.350 | 0.085 | 0.693
 | | 4 | 14.270 | 14.091 | 0.179 | 1.254
2021 8 16 10 0 35.766 | 7.10 | 1 | 13.532 | 13.422 | 0.110 | 0.813
 | | 2 | 13.358 | 13.196 | 0.162 | 1.213
 | | 3 | 12.309 | 12.131 | 0.178 | 1.446
 | | 4 | 13.998 | 13.875 | 0.123 | 0.879
2021 8 16 10 0 38.630 | 6.88 | 1 | 13.321 | 13.228 | 0.093 | 0.698
 | | 2 | 13.102 | 13.089 | 0.013 | 0.099
 | | 3 | 12.037 | 11.931 | 0.106 | 0.881
 | | 4 | 13.713 | 13.655 | −0.058 | 0.423

Analyzing the difference between the actual and measured distances, it was found that the difference mostly lay between 0.013 m and 0.191 m. This deviation is caused by slight changes in the posture of the vehicle. This study carried out the distance measurement experiments based on the VIDAR obstacle detection method and the use of a digital map. The experimental results show that the error of this method is less than 2% at short distances (<20 m), and the distance measurement effect is better than that reported by Guo Lei. Moreover, existing vision-based ranging requirements call for a measurement error of less than 5% [36]. Therefore, in terms of the distance measurement results, the vision-based ranging algorithm proposed in this article meets the requirements for measurement accuracy and can achieve accurate distance measurement of obstacles.
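As a quick check of how the last two columns of Table 7 are obtained, the distance difference is the absolute deviation between the measured and actual distances, and the error is that deviation expressed as a percentage of the actual distance:

```python
# Relation between the "Distance difference" and "Error" columns of Table 7:
# difference = |measured - actual| and error = difference / actual * 100.

def ranging_error(actual_m, measured_m):
    diff = abs(measured_m - actual_m)
    return diff, 100.0 * diff / actual_m

# First row of Table 7 (obstacle 1 at time 2021 8 16 10 0 32.825):
print(ranging_error(13.793, 13.926))   # -> (0.133, ~0.96 %)
```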
6. Conclusion

In this study, an obstacle detection method based on VIDAR is applied to complex environments, avoiding the drawback of machine learning methods that can only detect known obstacles. Moreover, by integrating slope information into the VIDAR detection method, real obstacles can be detected on sloped roads, and distance and speed measurement of obstacles can be realized, which has important research value for autonomous vehicles and active safety systems. It can be seen from the results that the proposed method is effective in improving the accuracy and speed of obstacle detection and can meet the requirements of obstacle detection in complex environments. Obstacle detection in complex road environments is the basis for the safe driving of vehicles; therefore, obstacle avoidance path planning and speed control based on obstacle detection are our future research directions.

Data Availability
Data are available on request to the corresponding author.

Conflicts of Interest
The authors declare that they have no conflicts of interest.

Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grant 51905320, the China Postdoctoral Science Foundation under Grants 2018M632696 and 2018M642684, the Shandong Key R&D Plan Project under Grant 2019GGX104066, the Shandong Province Major Science and Technology Innovation Project under Grant 2019JZZY010911, and the SDUT and Zibo City Integration Development Project under Grant 2017ZBXC133.

References
[1] S. Grigorescu, B. Trasnea, T. Cocias, and G. Macesanu, "A survey of deep learning techniques for autonomous driving," Journal of Field Robotics, vol. 37, no. 3, pp. 362–386, 2020.
[2] H. Fujiyoshi, T. Hirakawa, and T. Yamashita, "Deep learning-based image recognition for autonomous driving," IATSS Research, vol. 43, no. 4, pp. 244–252, 2019.
[3] I.-S. Weon and S.-G. Lee, "Environment recognition based on multi-sensor fusion for autonomous driving vehicles," Journal of Institute of Control, Robotics and Systems, vol. 25, no. 2, pp. 125–131, 2019.
[4] Z. Sun, Q. Zhao, N. Zhang, B. Zhu, and Wang, "Intelligent vehicle multi-objective tracking based on particle swarm algorithm," Forest Engineering, vol. 36, no. 4, pp. 70–75, 2020.
[5] M. Bucolo, A. Buscarino, C. Famoso, L. Fortuna, and M. Frasca, "Control of imperfect dynamical systems," Nonlinear Dynamics, vol. 98, no. 4, pp. 2989–2999, 2019.
[6] L. Xiong, X. Xia, Y. Lu et al., "IMU-based automated vehicle slip angle and attitude estimation aided by vehicle dynamics," Sensors, vol. 19, no. 8, p. 1930, 2019.
[7] M. Shang, B. Rosenblad, and R. Stern, "A novel asymmetric car following model for driver-assist enabled vehicle dynamics," IEEE Transactions on Intelligent Transportation Systems, 2022.
[8] X. Zhao, P. Sun, Z. Xu, H. Min, and H. Yu, "Fusion of 3D LIDAR and camera data for object detection in autonomous vehicle applications," IEEE Sensors Journal, vol. 20, no. 9, pp. 4901–4913, 2020.
[9] W. Huang, Z. Zhang, W. Li, and J. Tian, "Moving object tracking based on millimeter-wave radar and vision sensor," Journal of Applied Science and Engineering, vol. 21, no. 4, pp. 609–614, 2018.
[10] C.-C. Lin, W.-L. Mao, T.-W. Chang, C.-Y. Chang, and S. S. S. Abdullah, "Fast obstacle detection using 3D-to-2D LiDAR point cloud segmentation for collision-free path planning," Sensors and Materials, vol. 32, no. 7, pp. 2365–2374, 2020.
[11] A. Dairi, F. Harrou, M. Senouci, and Y. Sun, "Unsupervised obstacle detection in driving environments using deep-learning-based stereovision," Robotics and Autonomous Systems, vol. 100, pp. 287–301, 2018.
[12] H. Zhu, K.-V. Yuen, L. Mihaylova, and H. Leung, "Overview of environment perception for intelligent vehicles," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 10, pp. 2584–2601, 2017.
[13] W. Song, Y. Yang, M. Fu, Y. Li, and M. Wang, "Lane detection and classification for forward collision warning system based on stereo vision," IEEE Sensors Journal, vol. 18, no. 12, pp. 5151–5163, 2018.
[14] L. Sun, K. Yang, X. Hu, W. Hu, and K. Wang, "Real-time fusion network for RGB-D semantic segmentation incorporating unexpected obstacle detection for road-driving images," IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5558–5565, 2020.
[15] Y. Qian, W. Zhou, J. Yan, W. Li, and L. Han, "Comparing machine learning classifiers for object-based land cover classification using very high resolution imagery," Remote Sensing, vol. 7, no. 1, pp. 153–168, 2015.
[16] Y. Yu, L. Kurnianggoro, and K.-H. Jo, "Comparing machine learning classifiers for object-based land cover classification using very high resolution imagery," International Journal of Control, Automation and Systems, vol. 17, no. 7, pp. 1866–1874, 2019.
[17] K. Dokka, P. R. MacNeilage, G. C. DeAngelis, and D. E. Angelaki, "Multisensory self-motion compensation during object trajectory judgments," Cerebral Cortex, vol. 25, no. 3, pp. 619–630, 2015.
[18] Z. Li, S. E. Dosso, and D. Sun, "Motion-compensated acoustic localization for underwater vehicles," IEEE Journal of Oceanic Engineering, vol. 41, no. 4, pp. 840–851, 2016.
[19] P. Agrawal, R. Kaur, V. Madaan, and M. Babu, "Moving object detection and recognition using optical flow and eigen face using low resolution video," Recent Patents on Computer Science, vol. 11, no. 2, pp. 1–10, 2018.
[20] J. Cho, Y. Jung, D.-S. Kim, S. Lee, and Y. Jung, "Moving object detection based on optical flow estimation and a Gaussian mixture model for advanced driver assistance systems," Sensors, vol. 19, no. 14, p. 3217, 2019.
[21] X. Zhao, F. Pu, Z. Wang, H. Chen, and Z. Xu, "Detection, tracking, and geolocation of moving vehicle from UAV using monocular camera," IEEE Access, vol. 7, pp. 101160–101170, 2019.
[22] L. Chengmei, B. Hongyang, G. Hongwei, and L. Huaju, "Moving object detection and tracking based on improved optical flow method," Chinese Journal of Scientific Instrument, vol. 39, pp. 249–256, 2018.
[23] I. Yang and W. H. Jeon, "Development of lane-level location data exchange framework based on high-precision digital map," Journal of Digital Contents Society, vol. 19, no. 8, pp. 1617–1623, 2018.
[24] M. ElMikaty and T. Stathaki, "Detection of cars in high-resolution aerial images of complex urban environments," IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 10, pp. 5913–5924, 2017.
[25] J. Li, H. Cheng, H. Guo, and S. Qiu, "Survey on artificial intelligence for vehicles," Automotive Innovation, vol. 1, no. 1, pp. 2–14, 2018.
[26] K. Choi, J. K. Suhr, and H. G. Jung, "Map-matching-based cascade landmark detection and vehicle localization," IEEE Access, vol. 7, pp. 127874–127894, 2019.
[27] S. H. Wang and X. X. Li, "A real-time monocular vision-based obstacle detection," in Proceedings of the 2020 6th International Conference on Control, Automation and Robotics (ICCAR), pp. 695–699, Singapore, April 2020.
[28] H. Nguyen, "Improving YOLO v5 framework for fast vehicle detection," Mathematical Problems in Engineering, vol. 2019, Article ID 3808064, 11 pages, 2019.
[29] Z. Yi, S. Yongliang, and Z. Jun, "An improved tiny-YOLO v3 pedestrian detection algorithm," Optik, vol. 183, pp. 17–23, 2019.
[30] K. Wang, F. Yan, B. Zou, L. Tang, Q. Yuan, and C. Lv, "Occlusion-free road segmentation leveraging semantics for autonomous vehicles," Sensors, vol. 19, no. 21, p. 4711, 2019.
[31] M. Kristan, V. Sulic Kenk, S. Kovacic, and J. Pers, "Fast image-based obstacle detection from unmanned surface vehicles," IEEE Transactions on Cybernetics, vol. 46, no. 3, pp. 641–654, 2016.
[32] M. Tkocz and K. Janschek, "Vehicle speed measurement based on binocular stereovision system," Journal of Intelligent & Robotic Systems, vol. 80, no. 3, pp. 475–489, 2015.
[33] C. Meng, H. Bao, Y. Ma, X. Xu, and Y. Li, "Visual Meterstick: preceding vehicle ranging using monocular vision based on the fitting method," Symmetry, vol. 11, no. 9, p. 1081, 2019.
[34] T. Zhe, L. Huang, Q. Wu, J. Zhang, C. Pei, and L. Li, "Inter-vehicle distance estimation method based on monocular vision using 3D detection," IEEE Transactions on Vehicular Technology, vol. 69, no. 5, pp. 4907–4919, 2020.
[35] L. A. Rosero and F. S. Osório, "Calibration and multi-sensor fusion for on-road obstacle detection," in Proceedings of the 2017 Latin American Robotics Symposium (LARS) and 2017 Brazilian Symposium on Robotics (SBR), pp. 1–6, Curitiba, Brazil, November 2017.
[36] N. Garnett, S. Silberstein, and S. Oron, "Real-time category-based and general obstacle detection for autonomous driving," in Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 198–205, Venice, Italy, October 2017.
[37] L. Caltagirone, M. Bellone, L. Svensson, and M. Wahde, "LIDAR-camera fusion for road detection using fully convolutional neural networks," Robotics and Autonomous Systems, vol. 111, pp. 125–131, 2019.


Abstract

Hindawi Journal of Robotics Volume 2022, Article ID 5264347, 18 pages https://doi.org/10.1155/2022/5264347 Research Article An Obstacle Detection and Distance Measurement Method for Sloped Roads Based on VIDAR Guoxin Jiang , Yi Xu , Xiaotong Gong , Shanshang Gao , Xiaoqing Sang , Ruoyu Zhu , Liming Wang , and Yuqiong Wang School of Transportation and Vehicle Engineering, Shandong University of Technology, Zibo 255000, China Correspondence should be addressed to Yi Xu; xuyisdut@163.com Received 6 January 2022; Accepted 18 March 2022; Published 15 April 2022 Academic Editor: Arturo Buscarino Copyright © 2022 Guoxin Jiang et al. *is is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Environmental perception systems can provide information on the environment around a vehicle, which is key to active vehicle safety systems. However, these systems underperform in cases of sloped roads. Real-time obstacle detection using monocular vision is a challenging problem in this situation. In this study, an obstacle detection and distance measurement method for sloped roads based on Vision-IMU based detection and range method (VIDAR) is proposed. First, the road images are collected and processed. *en, the road distance and slope information provided by a digital map is input into the VIDAR to detect and eliminate false obstacles (i.e., those for which no height can be calculated). *e movement state of the obstacle is determined by tracking its lowest point. Finally, experimental analysis is carried out through simulation and real-vehicle experiments. *e results show that the proposed method has higher detection accuracy than YOLO v5s in a sloped road environment and is not susceptible to interference from false obstacles. *e most prominent contribution of this research work is to describe a sloped road obstacle detection method, which is capable of detecting all types of obstacles without prior knowledge to meet the needs of real-time and accurate detection of slope road obstacles. segmentation, and distance estimation, have become a key 1. Introduction component of autonomous vehicles. *ese systems can not With increasing public attention to the field of traffic safety, only provide important traffic parameters for autonomous the automobile industry is developing in the direction of driving but also perceive surrounding obstacles, such as intelligence, with many studies on autonomous driving by stationary or moving objects, including roadblocks, pedes- engineers and scientific researchers. Autonomous driving trians, and other elements [8]. During the vehicle’s move- does not refer to a single technological field, but it is a ment, radar (laser, millimeter wave), infrared and vision product of the development and integration of automotive sensors are used to collect environmental information to determine whether a target is in a safe area [9–11]. However, electronics, intelligent control, and breakthroughs related to the Internet of *ings [1, 2]. *e principle is that autono- the price of infrared sensors and radars is relatively high, and mous driving systems obtain information on the vehicle and most of them are limited to advanced vehicles [12]. Com- the surrounding environment through an environmental pared with other sensor systems, monocular vision requires perception system. 
Machine learning can be used to achieve object classification for vision-based obstacle detection [14, 15]. However, traditional machine learning methods can only detect known types of obstacles (see Figure 1). If the vehicle cannot accurately detect an obstacle of unknown type, a traffic accident is very likely to occur, which is not conducive to safe driving; therefore, in this study we propose an unsupervised-learning-based obstacle detection method that allows the detection of both known- and unknown-type obstacles in complex environments.

Figure 1: Fast R-CNN. Normal cars are detected, but the overturned car and the box are not detected.

Traditional obstacle detection methods, such as motion compensation [16-18] and optical flow methods [19-22], allow the detection of obstacles of different shapes moving at various speeds. However, these methods require the extraction and matching of a large number of object points, which increases the computational load. Therefore, in this study we adopt a Vision-IMU (inertial measurement unit) based detection and ranging method, abbreviated as VIDAR, which realizes fast matching and feature point processing in the detection area and improves the obstacle detection speed and effectiveness.

VIDAR is an obstacle detection method developed for horizontal roads. When the obstacles and the test vehicle are located on different slopes, there is an imaging parallax, which leads to false obstacles being detected as real ones and results in large measurement errors, thereby affecting the detection accuracy. To cope with the impact of slope changes, in this study the slope of the road is taken into account when the model is established, and the specific situation is analyzed according to the positional relationship between the test vehicle and the obstacle. We thus propose an obstacle detection and distance measurement method for sloped roads based on VIDAR, in which slope and distance information are provided by digital maps [23-26].

The rest of this study is structured as follows: in Section 2, we review the research on obstacle detection and visual ranging. In Section 3, the conversion process from world coordinates to camera coordinates and the ranging principle of VIDAR are introduced. In Section 4, the detection process for real obstacles on sloped roads is outlined and the ranging and speed measurement models are established. Simulated and real experiments are presented in Section 5, and the experimental results are compared with the detection results of YOLO v5s to demonstrate the detection accuracy of the proposed method. In Section 6, the proposed method and our findings are summarized, and the study is concluded.
2. Related Work

Obstacle detection remains one of the most significant research foci in the development of intelligent vehicles. With the improvement and optimization of monocular vision, obstacle detection based on monocular vision has attracted the attention of researchers. Most research on the detection of obstacles using monocular vision is based on the optimization of machine vision and digital image processing to improve the accuracy and speed of detection. S. Wang proposed a novel image classification framework that integrates a convolutional neural network (CNN) and a kernel extreme learning machine to distinguish the categories of extracted features, thus improving the performance of image classification [27]. Nguyen proposed an improved framework based on Fast R-CNN, in which the basic convolution layers were formed using the MobileNet architecture and the classifier was formed using its depthwise separable convolution structure, improving the accuracy of vehicle detection [28]. Yi proposed an improved YOLO v3 neural network model, which introduced the anchor-box concept of Faster R-CNN and used a multiscale strategy, thus greatly improving the robustness of the network in small-object detection [29]. Wang K. W. proposed an efficient fully convolutional neural network, which predicts the occluded part of the road by analyzing foreground objects and the existing road layout, thereby improving the performance of the network [30]. Although the above methods improved the accuracy of obstacle detection, they require a large number of training samples, and the range of samples must cover all obstacle types; otherwise, the obstacles cannot be detected.

Monocular ranging pertains to the use of a single camera to capture images and perform distance calculations. Zhang et al. used a stereo camera system to compute a disparity map and use it for obstacle detection; they applied different computer vision methods to filter the disparity map and remove noise in the detected obstacles, and used a monocular camera in combination with the histogram of oriented gradients and support vector machine algorithms to detect pedestrians and vehicles [31]. Tkocz studied the ranging and positioning of a robot in motion, considering the scale ambiguity of monocular cameras, although only experimental research on the speed and accuracy of measurement was carried out [32]. Meng C. designed a distance measurement system based on a fitting method, in which a linear relationship between the pixel value and the real distance is established according to the pixel position of the vehicle in the imaging plane, thus realizing adaptive vehicle distance measurement under monocular vision [33]. Zhe proposed a method for detecting vehicles ahead, which combined machine learning and prior knowledge to detect vehicles based on the horizontal edge of the candidate area [34]. These methods were only used for measuring the distance to other vehicles and are not applicable to other types of obstacles.

Rosero proposed a method for sensor calibration and obstacle detection in an urban environment, in which data from radar, 3D LIDAR, and stereo camera sensors were fused to detect obstacles and determine their shape [35]. Garnett used a radar to determine the approximate location of obstacles and then used bounding box regression to achieve accurate positioning and identification [36]. Caltagirone proposed a novel LIDAR-camera fusion fully convolutional network and achieved state-of-the-art performance on the KITTI road benchmark [37]. Although sensor fusion methods reduce the processing load and achieve improved detection accuracy, they are designed for flat roads and are not suitable for complex sloped road environments.
To solve the above problems, we propose an obstacle detection and distance measurement method for sloped roads based on VIDAR. This method does not require a priori knowledge of the scene; it uses the road slope information provided by a digital map and the vehicle driving state provided by an IMU to construct distance and speed measurement models, which allow obstacles to be detected in real time, together with their distance and movement state.

3. Methodology

The obstacle detection model of VIDAR is based on the pinhole camera model, which can accurately calculate the distance between the vehicle and obstacles.

3.1. Coordinate Transformation. The camera maps coordinate points of the three-dimensional world onto the two-dimensional imaging plane. This imaging principle is consistent with the pinhole model, so camera imaging can be described by the pinhole model. To determine the correspondence between an object point and its image point, the coordinate systems needed by the vision system must be established, including the world coordinate system, the camera coordinate system, the imaging plane coordinate system, and the pixel coordinate system. The transformation process from the world coordinate system to the pixel coordinate system is shown in Figure 2.

Figure 2: Transformation between coordinate systems (world coordinates (Xw, Yw, Zw), camera coordinates (Xc, Yc, Zc), imaging plane coordinates (x, y), pixel coordinates (u, v)).

The pixel coordinates (u, v) and the image plane coordinates (x, y) lie in the same plane, and their axes are parallel. The position of the image-plane origin in the pixel coordinate system is (u_0, v_0). Both the world and the camera coordinate systems are 3D coordinate systems, which are associated through the camera. According to the pinhole imaging principle, the camera coordinate system can be obtained through a transformation of the coordinate axes of the world coordinate system, so the conversion relation between the two coordinate systems must be deduced. The conversion equation from the world to the pixel coordinate system is

\[ Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \left( R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T \right) = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \left( R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T \right), \tag{1} \]

where R and T are the external parameters. The internal and external parameters can be obtained through camera calibration.
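As a minimal illustration of equation (1), the following Python sketch (not part of the original paper) projects a world point into pixel coordinates; the intrinsic values a_x, a_y, u_0, v_0 and the extrinsic R, T used in the example are placeholders, not the calibration results reported in Section 5.

import numpy as np

def world_to_pixel(p_world, R, T, ax, ay, u0, v0):
    """Project a 3D world point to pixel coordinates with the pinhole model of
    equation (1): Z_C [u, v, 1]^T = K (R p_world + T)."""
    p_cam = R @ p_world + T                 # world -> camera coordinates
    K = np.array([[ax, 0.0, u0],
                  [0.0, ay, v0],
                  [0.0, 0.0, 1.0]])         # intrinsic matrix (a_x = f/d_x, a_y = f/d_y)
    uvw = K @ p_cam                         # homogeneous pixel coordinates, scaled by Z_C
    return uvw[:2] / uvw[2]                 # divide by Z_C to obtain (u, v)

# Illustrative usage with placeholder calibration values.
R = np.eye(3)                               # assumed: camera axes aligned with world axes
T = np.array([0.0, -1.6, 0.0])              # assumed: camera 1.6 m above the world origin
u, v = world_to_pixel(np.array([0.0, 0.0, 20.0]), R, T, ax=1200.0, ay=1200.0, u0=640.0, v0=360.0)
print(f"pixel coordinates: ({u:.1f}, {v:.1f})")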
3.2. Obstacle Ranging Method. The obstacle ranging principle is also based on the pinhole model. For convenience of expression, the camera is installed on a test vehicle and a vehicle on a sloped road is regarded as the obstacle. The feature points of the obstacle are detected, and the lowest point is taken as the intersection point between the obstacle and the road surface (see Figure 3). During normal operation, the camera collects image information, and by processing this information the feature points in the image can be extracted. By measuring the distance of a feature point, it can be determined whether the obstacle on which the feature point is located has a height. For real obstacles, tracking the feature point at the lowest position allows the moving speed of the obstacle to be calculated and its motion state to be judged, providing data support for the safe driving of the vehicle. As long as the camera can capture images normally, all obstacles in the captured scene can be detected; the number of detected obstacles is related to the number of extracted feature points.

Figure 3: Schematic diagram of the obstacle ranging model (to visualize the detection principle, the proportions in the figure are not realistic).

Let f be the effective focal length of the camera, z the pitch angle, μ the pixel size, and h the mounting height of the camera, with the camera center taken as the optical center of the lens. Let (x_0, y_0) be the coordinate origin of the imaging plane coordinate system and (x, y) the coordinates of the intersection of the obstacle and the road plane in the image plane coordinate system. The horizontal distance between the camera and the obstacle can then be obtained from

\[ d = \frac{h}{\tan\!\left( z + \arctan\!\left( \frac{(y - y_0)\,\mu}{f} \right) \right)}. \tag{2} \]

4. Research Approach

In the traditional VIDAR model, it is assumed that the test vehicle and the obstacles are on the same plane. When the test vehicle and the obstacles are on roads with different slopes, this assumption causes a deviation in the distance measurement. In order to enhance the visual detection accuracy and expand the application scenarios of visual ranging, in this study the slope is taken into account and an obstacle detection model for sloped roads is established.

4.1. Establishment of the Distance Measurement Model. The sloped road considered in this study refers to a road where the test vehicle and the obstacles are not on the same slope. For distance measurement, this situation can be simplified into two models, shown in Figure 4; the light blue line is an auxiliary line, and the red dot on the obstacle is an arbitrary detected object point. Let C be a point on the road surface, C' its image point on the sloped road surface, and C'' the intersection point where CC' extended meets the imaginary horizontal plane; S' is the distance from the camera to the beginning of the road slope change, d_i is the horizontal distance between the camera and C'', and d_ii is the horizontal distance between the camera and C'.

Figure 4: Diagram of the distance models. (a) Situation 1: the test vehicle on a flat road and obstacles on an uphill road. (b) Situation 2: the test vehicle on a flat road and obstacles on a downhill road.

Using triangle similarity, the following relationships can be obtained from the geometry shown in Figure 4:

\[ \begin{cases} \dfrac{h_1}{d_{ii} - S'} = \tan\alpha, \\[6pt] \dfrac{h + k h_1}{d_{ii}} = \tan\theta_i, \\[6pt] \theta_i = z + \arctan\!\left( \dfrac{(y_0 - y_i)\,\mu}{f} \right), \end{cases} \tag{3} \]

where h_1 denotes the vertical offset of C' from the imaginary horizontal plane. The expression for d_ii then follows as

\[ d_{ii} = \frac{S' \tan\alpha + k h}{\tan\alpha + k \tan\theta_i}. \tag{4} \]

When the slope of the road where the obstacle is located is larger than that of the test vehicle, k = -1; in the opposite case, k = 1.
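A compact sketch of equations (2)-(4) is given below, assuming angles in radians and SI units; the function and parameter names, and the example values, are illustrative and are not taken from the paper.

import math

def pixel_ray_angle(y, y0, mu, f, z):
    """theta = z + arctan((y - y0) * mu / f): angle of the ray through image row y
    (used in equations (2) and (3))."""
    return z + math.atan((y - y0) * mu / f)

def horizontal_distance_flat(h, theta):
    """Equation (2): distance to a road point when camera and point share one plane."""
    return h / math.tan(theta)

def horizontal_distance_sloped(h, theta, alpha, s_prime, k):
    """Equation (4): distance to a point on a slope of angle alpha starting s_prime
    metres ahead; k = -1 for an uphill obstacle road, k = +1 otherwise."""
    return (s_prime * math.tan(alpha) + k * h) / (math.tan(alpha) + k * math.tan(theta))

# Illustrative values (not from the paper's experiments).
theta = pixel_ray_angle(y=420.0, y0=360.0, mu=3.75e-6, f=4.0e-3, z=math.radians(1.2))
print(horizontal_distance_flat(h=1.6, theta=theta))
print(horizontal_distance_sloped(h=1.6, theta=theta, alpha=math.radians(7.0), s_prime=20.0, k=-1))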
4.2. Determination of the Real Obstacle on Sloped Roads. In the process of the test vehicle's movement, road images are collected twice. The imaging diagram for stationary obstacles is shown in Figure 5. Let A' and B' be points on the road surface, and A and B be the corresponding image points. The first image point of the obstacle on the image plane is A. As the camera moves with the test vehicle and the y axis of the image plane moves from y1 to y2, we obtain the image point B of the obstacle. A'' is the intersection point where AA' extended meets the imaginary horizontal plane, and B'' is defined accordingly. Δd is the movement distance of the camera (i.e., of the test vehicle), d_2 is the horizontal distance from the camera to A'' and d_1 the corresponding distance to B'', and d_11 is the horizontal distance from the camera to A', with d_22 defined accordingly for B'.

Figure 5: Schematic diagram of stationary obstacle imaging. (a) Situation 1: the test vehicle moving on a flat road and a stationary obstacle on an uphill road. (b) Situation 2: the test vehicle moving on a flat road and a stationary obstacle on a downhill road.

d_11 and d_22 can be calculated using equation (4). For an object point on the road surface, the relationship between d_11 and d_22 can be approximated as d_11 = d_22 + Δd, whereas for a point with height the real relationship is d_11 = d_22 + Δd + Δl. If d_11 ≠ d_22 + Δd, the object points are not on the road surface. Using this method, it can be determined whether the obstacle has a height, that is, whether it is a real obstacle.
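The decision rule of this subsection reduces to comparing d_11 against d_22 + Δd. A minimal sketch, assuming a small tolerance to absorb measurement noise (the paper does not state a tolerance), is:

def has_height(d11, d22, delta_d, tol=0.05):
    """Section 4.2 test: an object point on the road surface satisfies
    d11 = d22 + delta_d (the camera displacement); a larger residual indicates
    nonzero height, i.e. a real obstacle point. The 0.05 m tolerance is assumed."""
    return abs(d11 - (d22 + delta_d)) > tol

# Illustrative values: the camera advanced 0.5 m between the two frames.
print(has_height(d11=18.80, d22=18.30, delta_d=0.5))   # point on the road -> False
print(has_height(d11=18.80, d22=17.60, delta_d=0.5))   # parallax residual -> True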
4.3. Special Case of Obstacle Detection. A special case should be excluded during obstacle detection. When the test vehicle and the obstacle are moving at the same time, the imaging points of the camera rays through an object point of the obstacle and through the corresponding road point can coincide; VIDAR is unable to detect obstacles in this case. The diagrams of obstacle detection in complex environments are shown in Figure 6. Let L_A be the distance, along the road where the obstacle is located, between the highest point of the obstacle and the object point of the road surface when the test vehicle moves for the first time; similarly, L_B is this distance when the vehicle moves for the second time. The letters in Figure 6 have the same meaning as above.

Figure 6: Schematic diagram of obstacles and camera imaging in complex environments. (a) Situation 1: test vehicle driving on a flat road and dynamic obstacles on an uphill road. (b) Situation 2: test vehicle driving on a flat road and dynamic obstacles on a downhill road.

Let the speeds of the test vehicle and the obstacle be v and v', respectively. When the imaging point of the road's intersection point and the obstacle's object point passes through the camera, the relationships between h_v, v, v', L_A, and L_B are

\[ \begin{cases} \dfrac{h_v}{\tan(\alpha + b\theta_1)} = L_A, \\[6pt] \dfrac{h_v}{\tan(\alpha + b\theta_2)} = L_B, \\[4pt] L_A - L_B = v' t, \\[2pt] \Delta d = v t. \end{cases} \tag{5} \]

When the slope of the road where the obstacle is located is larger than that of the test vehicle, b = 1, while b = -1 in the opposite case. Therefore, VIDAR can be used in all cases except when L_A - L_B = v't and v' = (v/Δd)(h_v/tan(α + bθ_1) - h_v/tan(α + bθ_2)). The proposed method using a monocular camera to detect obstacles on sloped roads is therefore convenient and feasible. The detection process only involves tracking and calculating the position of the object point, which shortens the detection time and reduces computational resource consumption.

4.4. Speed Measuring Model of the Sloped Road Obstacle. Obstacles are imaged on the camera's photosensitive element. By extracting and processing the feature points of the collected obstacle images, we can identify the feature points that are not on the road surface, that is, the feature points whose height is not zero. The object points with nonzero height are morphologically processed to obtain the obstacle areas. The movement state of an obstacle can then be determined by tracking the lowest point of its area and calculating its speed.

When the test vehicle is moving, the obstacle, the camera, and the lowest point of the road form images (see Figure 7). The horizontal distance between the lowest point of the obstacle and the camera can be expressed as d_ii. Let A be the image plane point corresponding to the lowest point of the obstacle at time t and B the corresponding point at time t + Δt. The relationship between v', d_11, d_22, and Δd is

\[ \frac{\left| d_{11} - \left( d_{22} + \Delta d \right) \right|}{\cos\alpha} = v' \cdot \Delta t, \tag{6} \]

where Δd = v·Δt and v is the speed of the test vehicle. When d_11 = d_22 + v·Δt, the obstacle is stationary; otherwise, it is moving with a speed of

\[ v' = \frac{\left| d_{11} - d_{22} - v \cdot \Delta t \right|}{\Delta t \cdot \cos\alpha}. \tag{7} \]

Figure 7: Schematic diagram of the camera and the lowest point on the road. (a) Situation 1: the test vehicle moving on a flat road and dynamic obstacles on an uphill road. (b) Situation 2: the test vehicle moving on a flat road and dynamic obstacles on a downhill road.
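Equations (6) and (7) translate directly into a small helper; the numerical values in the example are assumptions for illustration only.

import math

def obstacle_speed(d11, d22, v_vehicle, dt, alpha):
    """Equations (6)-(7): instantaneous speed of the obstacle from the tracked
    lowest point; returns 0 when d11 = d22 + v*dt (stationary obstacle)."""
    residual = abs(d11 - d22 - v_vehicle * dt)
    return residual / (dt * math.cos(alpha))

# Illustrative (assumed) values: test vehicle at 6.94 m/s (25 km/h), 7 degree slope,
# 1/12 s between frames.
v_prime = obstacle_speed(d11=18.80, d22=17.90, v_vehicle=6.94, dt=1/12, alpha=math.radians(7.0))
print(f"obstacle speed: {v_prime:.2f} m/s")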
4.5. Obstacle Detection on Sloped Roads Using VIDAR. The proposed obstacle detection and distance measurement method for sloped roads based on VIDAR can quickly judge and eliminate false obstacles (those without height) while identifying real obstacles and judging their movement state. The detection process is as follows (see Figure 8).

Figure 8: Flow chart of the sloped road obstacle detection method based on VIDAR (see the following for the specific steps).

Step 1. Update the camera parameters using the IMU:
(1) Calibration of the camera's initial internal and external parameters: the camera parameters, such as the focal length f, mounting height h, pixel size μ, and pitch angle z, are obtained through calibration.
(2) Data acquisition: the camera is used to collect images and the IMU is used to collect inertial data. The acquisition frequency of the IMU is higher than that of the camera.
(3) Update of the camera parameters: the frequency relationship between the IMU and the camera is established, and the camera parameters at time t are calculated periodically from the inertial data.

Step 2. Obtain the road information: acquire the road slope α and the distance S' from the test vehicle to the sloped road from the digital map.

Step 3. Regional background extraction:
(1) Two consecutive images are taken as the total obstacle detection area B_i during the running of the test vehicle (see Figures 9(a) and 9(b)).
(2) The lane lines are detected and the image within the lane lines is extracted as G_i.
(3) Machine learning is used to process the images and to detect and classify specific types of obstacles. The area set F_i of known obstacle types is obtained, where F_i = {f_1, f_2, ..., f_k} and k is the number of known obstacles.
(4) The known obstacle area F_i in the total detection area G_i is eliminated, and the background area N_i (N_i = G_i - F_i) is extracted as the VIDAR data to be examined.

Step 4. Image processing and obstacle detection:
(1) Object points are extracted from the background areas N_i and N_{i+1} of two consecutive images. With N_i as the background region template map and N_{i+1} as the background region real-time map, the matching regions M_i and M_{i+1} are obtained using a fast image region matching method based on region feature extraction, as shown in Figure 9(c).
(2) The object point set P_i of the matching area M_{i+1} is extracted, as shown in Figure 9(d).
(3) The distance between the test vehicle and each object point is calculated. The horizontal distance between the camera and the imaged object point on the imaginary road is d_i = h / tan θ_i; the horizontal distance between the camera and the imaged object point on the real road is d_ii = (S' tan α + k h) / (tan α + k tan θ_i). The calculation process of d_ii is shown in Figure 10: first, the pixel coordinates of the object points are obtained through the transformation of the coordinate axes; then the slope information is obtained through Step 2; finally, the distance is obtained through the ranging model.
(4) The object points with height in the set P_i are extracted (see Figure 9(e)). d_ii and d_{i+1,i+1} are calculated as the vehicle moves continuously. If d_ii = d_{i+1,i+1} + Δd, the object points are on the road surface (they have no height), so these object points p_ij are eliminated. If d_ii ≠ d_{i+1,i+1} + Δd, the object points are not on the road surface (i.e., they have nonzero height) and are extracted to obtain the object point set P_i'.
(5) Morphological processing is applied to the image of the object point set P_i' (Figure 9(f)). With E_i the target image and B_i the structural element, a closing operation is applied to E_i to obtain C connected regions. The real obstacle region O_i is thus obtained, where O_i = E_i · B_i = (E_i ⊕ B_i) ⊖ B_i and O_i = (o_{i,1}, o_{i,2}, ..., o_{i,c}).
(6) Edge detection of the real obstacles is performed, as shown in Figure 9(g).
(7) According to the detection result of (6), the lowest object point of each obstacle area is extracted, as shown in Figure 9(h). The lowest object point set P_i'' constitutes the obstacle area.
(8) Each object point in P_i'' is tracked during the movement of the test vehicle.
(9) The movement state of the obstacles is obtained. The movement speed of the obstacle on which an object point is located can be obtained by tracking each object point in P_i''. If |d_{i+1,i+1} - d_{i+2,i+2}| = v_{i+1} · Δt, the obstacle on which these object points are located is static; if |d_{i+1,i+1} - d_{i+2,i+2}| ≠ v_{i+1} · Δt, the obstacle is moving with an instantaneous speed v'_{i+1} = |d_{i+1,i+1} - d_{i+2,i+2} - v_{i+1} · Δt| / (Δt · cos α_i).

Figure 9: Obstacle detection based on VIDAR and a digital map. (a) Two images collected during the movement of the vehicle. (b) Lane line detection and extraction of the detection range. (c) Feature point detection and matching on the images within the range. (d) Extraction of feature points. (e) Determination of whether the extracted feature points have height. (f) Removal of feature points without height. (g) Morphological processing of the feature points. (h) Tracking of the lowest point of the obstacles.

Figure 10: Flow chart of the horizontal distance calculation.

The proposed obstacle detection method can be used to detect real obstacles in complex environments and determine their movement state, which helps vehicles take timely measures and avoid accidents.
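The paper's implementation of Steps 3 and 4 is in Matlab, and its fast region matching method is not specified in detail. The following Python/OpenCV sketch is therefore only a rough illustration of the per-frame logic (feature matching, the Section 4.2 height test, and the closing operation of Step 4(5)); the choice of ORB features, the brute-force matcher, the tolerance, and the kernel size are all assumptions, not the authors' implementation.

import cv2
import numpy as np
import math

def ray_angle(y_pix, v0, mu, f, pitch):
    return pitch + math.atan((y_pix - v0) * mu / f)

def sloped_distance(h, theta, alpha, s_prime, k):
    return (s_prime * math.tan(alpha) + k * h) / (math.tan(alpha) + k * math.tan(theta))

def detect_height_points(img_prev, img_curr, cam, road, delta_d, tol=0.05):
    """Match feature points between two background images, compute the sloped-road
    distance of each match, and keep only points whose residual indicates height.
    cam = {"h", "v0", "mu", "f", "pitch"}; road = {"alpha", "s_prime", "k"}."""
    orb = cv2.ORB_create()                                   # feature detector (assumed choice)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    height_pts = []
    for m in matches:
        y1 = kp1[m.queryIdx].pt[1]
        y2 = kp2[m.trainIdx].pt[1]
        d_prev = sloped_distance(cam["h"], ray_angle(y1, cam["v0"], cam["mu"], cam["f"], cam["pitch"]),
                                 road["alpha"], road["s_prime"], road["k"])
        d_curr = sloped_distance(cam["h"], ray_angle(y2, cam["v0"], cam["mu"], cam["f"], cam["pitch"]),
                                 road["alpha"], road["s_prime"], road["k"])
        if abs(d_prev - (d_curr + delta_d)) > tol:           # Section 4.2 height test
            height_pts.append(kp2[m.trainIdx].pt)
    return height_pts

def obstacle_regions(mask):
    """Step 4(5): closing operation merging height points into obstacle regions."""
    kernel = np.ones((15, 15), np.uint8)                     # structural element (assumed size)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)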
5. Experiment and Evaluation

The proposed method can be used for obstacle detection in complex environments with improved accuracy, as well as for distance and speed measurement of obstacles. Obstacle detection and distance measurement were realized in Matlab, and all experiments were performed on a desktop PC with an Intel(R) Xeon(R) Silver 4210 CPU.

5.1. Simulation Experiment. In this study, experimental equipment was used to simulate a detection environment so as to verify the detection of obstacles on sloped roads based on VIDAR. The experimental equipment included a test vehicle equipped with an OV5640 camera unit and a JY61p IMU (Figure 11(a)), vehicle scale models (Figure 11(b)), bottle caps and paper (Figure 11(c)), and a simulated sloped road (Figure 11(d)). The test vehicle was used to analyze the road environment and detect its own driving state, the scaled vehicle models were used to simulate known obstacles, and the bottle caps and paper were used to simulate unknown obstacles. The road slope was set to 13°.

Figure 11: Equipment for the simulation experiment. (a) Test vehicle. (b) Vehicle scale models. (c) Bottle caps and paper. (d) Simulated sloping road.

The bottle cap was taken as a real obstacle of unknown type, and the paper pasted on the simulated road was taken as a pseudo-obstacle of unknown type. The angular velocity and acceleration data of the vehicle were obtained by the IMU installed on the vehicle. The quaternion method is used to solve the camera attitude and update the pitch angle of the camera. The velocity data are used to calculate the horizontal distance between the vehicle and the obstacle, and the height of the obstacle is calculated from the change of this distance before and after the movement, so as to determine whether the detected obstacle is a real obstacle.
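The attitude update is only outlined in the paper (quaternion attitude solution, then pitch update). The snippet below is a minimal sketch, assuming a unit quaternion in (w, x, y, z) order and pitch defined as the rotation about the lateral (y) axis; the JY61p's actual output convention may differ, so the axis mapping is an assumption.

import math

def pitch_from_quaternion(w, x, y, z):
    """Pitch angle recovered from a unit quaternion, as used to update the camera
    pitch from the IMU attitude (axis convention assumed)."""
    s = 2.0 * (w * y - z * x)
    s = max(-1.0, min(1.0, s))          # clamp for numerical safety
    return math.asin(s)

print(math.degrees(pitch_from_quaternion(0.9990, 0.0, 0.0436, 0.0)))  # about 5 degrees, illustrative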
The video collected using the OV5640 camera comprised an image sequence at 12 FPS, which was used for obstacle detection. The results obtained using the original VIDAR and VIDAR on sloped roads are shown in Figure 12, while the test results of the simulation experiment are summarized in Table 1.

Figure 12: Comparison of obstacle detection effect in the simulated environment. (a) Original VIDAR. (b) VIDAR on sloped roads.

Table 1: Test results of the simulation experiment.
Obstacle | Movement of the test vehicle (i-th) | Detection distance (cm) | Test vehicle moving distance | Whether it has height | Whether to exclude
1 | 1 | 16.5 | 3 | No | Yes
1 | 2 | 19.5 | | |
2 | 1 | 18.5 | 3 | Yes | No
2 | 2 | 21.2 | | |
3 | 1 | 24.5 | 3 | Yes | No
3 | 2 | 27.0 | | |

It can be seen from Figure 12 that the original VIDAR can detect unknown types of obstacles such as bottle caps, but it also detects false obstacles as real obstacles, resulting in low detection accuracy. The obstacle detection method for sloped roads based on VIDAR, in contrast, eliminates the false obstacles and thus makes up for the wrong detection of unknown types of obstacles on sloped roads; compared with the original VIDAR, the proposed method therefore detects obstacles more accurately.

5.2. Real Environment Experiment. In the real environment, purely electric vehicles were used as test vehicles (see Figure 13). As a sensor, the camera can adapt to complex environments and collect environmental information in real time (only the left camera was used). The camera was installed at a height of 1.60 m. The IMU used for locating the test vehicle and reading its movement state in real time was installed at the bottom of the test vehicle, and GPS was used for accurate positioning. Through the combination of GPS and IMU, the real-time position information of the test vehicle and obstacles can be obtained, and from it their trajectory information. A digital map was used to obtain accurate road information such as distance and slope, and a calculation unit was used to process the data in real time.

Figure 13: Schematic diagram of the test vehicle.

Accurate calibration of the camera parameters was a prerequisite for the whole experiment and is a very important task for obstacle detection methods. In this paper, Zhang Zhengyou's camera calibration method was adopted to calibrate the DaYing camera. First, the camera was fixed to capture images of a checkerboard at different positions and angles. Then, the key points of the checkerboard were selected and used to establish a relationship equation. Finally, the internal parameter calibration was realized. The camera calibration result is shown in Figure 14.

Figure 14: Schematic diagram of the camera calibration results (overall mean reprojection error: 0.07 pixels).

Camera distortion includes radial distortion, thin lens distortion, and centrifugal distortion. The superposition of the three kinds of distortion results in a nonlinear distortion, whose model can be expressed in the image coordinate system as

\[ \begin{cases} \delta_x(x, y) = s_1 x\,(x^2 + y^2) + 2 p_1 x y + p_2\,(3x^2 + y^2) + k_1 x\,(x^2 + y^2), \\[4pt] \delta_y(x, y) = s_2 y\,(x^2 + y^2) + 2 p_2 x y + p_1\,(x^2 + 3y^2) + k_2 y\,(x^2 + y^2), \end{cases} \tag{8} \]

where s_1 and s_2 are centrifugal distortion coefficients, k_1 and k_2 are radial distortion coefficients, and p_1 and p_2 are the distortion coefficients of thin lenses. Because the centrifugal distortion of the camera is not considered in this study, the internal reference matrix of the camera can be expressed as

\[ M = \begin{bmatrix} 5.9774 \times 10^{3} & 0 & 949.8843 \\ 0 & 5.9880 \times 10^{3} & 357.0539 \\ 0 & 0 & 1 \end{bmatrix}. \tag{9} \]

The camera's external parameters can be calibrated by taking the edge object points of the lane lines. The calibration results are shown in Table 2.

Table 2: Calibration results of the camera external parameters.
External parameter type | Parameter size
Pitch angle | 1.20
Yaw angle | 3.85
Rotation angle | 2.37
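The paper performs Zhang Zhengyou's checkerboard calibration (Figure 14 shows the calibration session). As a hedged illustration, the sketch below carries out the same procedure with OpenCV, which implements Zhang's method; the board geometry, square size, and image folder are assumptions and are not taken from the paper.

import glob
import cv2
import numpy as np

# Checkerboard geometry is an assumption (the paper does not state it).
BOARD = (9, 6)          # inner corners per row and column
SQUARE = 0.025          # square size in metres

# 3D corner positions of the board in its own plane (Z = 0).
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for path in glob.glob("checkerboard/*.jpg"):        # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD, None)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Zhang's method: recover the intrinsic matrix and distortion coefficients.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("intrinsic matrix:\n", K)
print("distortion coefficients:", dist.ravel())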
Since the images in public data sets were collected by other cameras, and different camera parameters affect the accuracy of ranging, we used the VIDAR-Slope database (Figure 15), whose images were collected using the DaYing camera. The collection frequency was 77 frames/min, and there are 2270 images in total. The experiment and image collection took place at Shandong University of Technology's driving school and experimental building, and the downhill section of the parking lot was selected for the experiment. In the process of obstacle detection, the test vehicle moved at a constant speed of 25 km/h.

Figure 15: The VIDAR-Slope database (part of the data).

The detection results of YOLO v5s and the method proposed in this study are shown in Figure 16. YOLO v5s lacks training on unknown types of obstacles and will consequently offer reduced safety when used in realistic vehicle situations, whereas the proposed obstacle detection method does not require training and can detect all types of obstacles, thus ensuring the effectiveness of its detection results on sloped roads.

Figure 16: Comparison of partial test results of YOLO v5s and the proposed method. (a) YOLO v5s can only detect known-type obstacles. (b) The obstacle detection method proposed in this article can detect pedestrians (known type) and boxes (unknown type).

The accuracy of obstacle detection was measured through the number of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). Let a_i be an obstacle that is correctly classified as a positive example, b_i an obstacle that is wrongly classified as a positive example, c_i an obstacle that is correctly classified as a negative example, and d_i an obstacle that is incorrectly identified as a negative example. Then TP = \sum_{i=1}^{n} a_i, FP = \sum_{i=1}^{n} b_i, TN = \sum_{i=1}^{n} c_i, and FN = \sum_{i=1}^{n} d_i.

The YOLO series is a representative deep-learning-based target detection framework. There are four versions of the target detection network, namely YOLO v5s, YOLO v5m, YOLO v5l, and YOLO v5x; among them, YOLO v5s is the smallest and fastest, so it was chosen for the comparative experiments. The total number of obstacles in the target area of the VIDAR-Slope database was 9526. The results of YOLO v5s and the proposed method are shown in Table 3.

Table 3: Obstacle detection results on sloped roads of YOLO v5s and the proposed method.
Detection method | Input value | TP | FP | TN | FN
YOLO v5s | 9526 | 6716 | 2808 | 915 | 1214
Proposed method | 9526 | 9124 | 402 | 530 | 485

In the analysis of the results, Accuracy (A), Recall (R), and Precision (P) were used as evaluation indices for the two obstacle detection methods, calculated through the following equations:

\[ A = \frac{TP + TN}{TP + TN + FP + FN}, \tag{10} \]
\[ R = \frac{TP}{TP + FN}, \tag{11} \]
\[ P = \frac{TP}{TP + FP}. \tag{12} \]

The Accuracy, Recall, and Precision of YOLO v5s and the method proposed in this study are shown in Table 4.

Table 4: Evaluation indices of YOLO v5s and the proposed method.
Detection method | A (%) | R (%) | P (%)
YOLO v5s | 81.73 | 84.68 | 70.50
Proposed method | 90.36 | 94.94 | 95.77
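Equations (10)-(12) can be wrapped in a small helper when reproducing this kind of evaluation; the counts in the usage line below are placeholders for illustration, not the values of Table 3.

def evaluation_indices(tp, fp, tn, fn):
    """Equations (10)-(12): Accuracy, Recall and Precision from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return accuracy, recall, precision

# Placeholder counts for illustration only.
a, r, p = evaluation_indices(tp=900, fp=40, tn=50, fn=45)
print(f"A = {a:.2%}, R = {r:.2%}, P = {p:.2%}")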
Comparing the two methods, it can be seen that the stability of the proposed method is higher than that of YOLO v5s. The experimental results in Tables 3 and 4 also show that, due to vehicle fluctuations and other factors, misjudgment or misdetection may occur during vehicle movement. Compared with YOLO v5s, the accuracy of the obstacle detection method proposed in this study is increased by 8% and its precision is increased by 26.4%, which demonstrates its improved obstacle detection capability on sloped roads.

In terms of detection accuracy, we also compared our method with other commonly used target detection methods. The detection results are shown in Table 5. It is evident that the proposed obstacle detection method achieves an accuracy higher than state-of-the-art methods.

Table 5: Accuracy of commonly used detection methods and the proposed method.
Detection method | A (%)
Fast R-CNN | 76.53
SSD | 71.28
Fast YOLO | 78.95
SSD 300 | 73.96
YOLO v5s | 81.73
Proposed method | 90.36

The real-time nature of obstacle detection refers to the ability to process every collected image frame in time. In terms of detection speed, YOLO v5s and the proposed method were used to process the 2270 images, and the respective average obstacle detection times were calculated. The results are shown in Table 6.

Table 6: Average detection times of YOLO v5s and the proposed method.
Detection method | Detection time (s)
YOLO v5s | 0.164
Proposed method | 0.201

Since the average detection time of the proposed method is 0.201 s, in order to ensure obstacle detection under normal driving conditions, the speed of the test vehicle must be less than or equal to the ratio of the detection distance to the average detection time. Compared with YOLO v5s, the method proposed in this study saves the training step on a data set. The proposed method first uses machine learning to detect obstacles of known types, but it also needs to process the feature points of obstacles of unknown types, so its final detection time is longer than that of YOLO v5s; it can, however, still meet the demands of real-time detection.
In order to verify the reliability of the distance measurement method proposed in this study and the feasibility of its practical application, a set of obstacle detection experiments was carried out. We first used a fixed camera to take pictures of the real road environment ahead and record the process; the result of the IMU data processing is shown in Figure 17. We then selected a few frames of images during the approach to the obstacle for processing. Finally, the distance between the camera and the obstacle in front was calculated; the detection result is shown in Figure 18, and the comparison results are shown in Table 7.

Figure 17: IMU data processing result (speed and distance of the test vehicle).

Figure 18: Obstacle distance detection. (a) 2021 8 16 10 0 32.825. (b) 2021 8 16 10 0 34.327. (c) 2021 8 16 10 0 35.766. (d) 2021 8 16 10 0 38.630.
Table 7: Distance measurement results based on monocular vision.
Time | Slope (°) | Obstacle | Actual distance (m) | Measured distance (m) | Distance difference (m) | Error (%)
2021 8 16 10 0 32.825 | 7.35 | 1 | 13.793 | 13.926 | 0.133 | 0.964
 | | 2 | 13.609 | 13.800 | 0.191 | 1.403
 | | 3 | 12.651 | 12.637 | 0.014 | 0.111
 | | 4 | 14.198 | 14.052 | 0.146 | 1.028
2021 8 16 10 0 34.327 | 7.21 | 1 | 13.719 | 13.826 | 0.107 | 0.780
 | | 2 | 13.499 | 13.594 | 0.095 | 0.704
 | | 3 | 12.265 | 12.350 | 0.085 | 0.693
 | | 4 | 14.270 | 14.091 | 0.179 | 1.254
2021 8 16 10 0 35.766 | 7.10 | 1 | 13.532 | 13.422 | 0.110 | 0.813
 | | 2 | 13.358 | 13.196 | 0.162 | 1.213
 | | 3 | 12.309 | 12.131 | 0.178 | 1.446
 | | 4 | 13.998 | 13.875 | 0.123 | 0.879
2021 8 16 10 0 38.630 | 6.88 | 1 | 13.321 | 13.228 | 0.093 | 0.698
 | | 2 | 13.102 | 13.089 | 0.013 | 0.099
 | | 3 | 12.037 | 11.931 | 0.106 | 0.881
 | | 4 | 13.713 | 13.655 | -0.058 | 0.423

Analyzing the difference between the actual and measured distances, it was found that the difference lay mostly between 0.013 m and 0.191 m. This deviation is caused by slight changes in the posture of the vehicle. This study carried out the distance measurement experiments based on the VIDAR obstacle detection method and the use of a digital map. The experimental results show that the error of this method is less than 2% at short distances (<20 m), and the distance measurement effect is better than that reported by Guo Lei. Moreover, existing vision-based ranging requirements call for a measurement error of less than 5% [36]. Therefore, from the distance measurement results, the vision-based ranging algorithm proposed in this article meets the requirements in measurement accuracy and can achieve accurate distance measurement to obstacles.

6. Conclusion

In this study, an obstacle detection method based on VIDAR is applied to complex environments, avoiding the drawback of machine learning methods that can only detect known obstacles. Moreover, by integrating slope information into the VIDAR detection method, real obstacles can be detected on sloped roads, and distance and speed measurement of obstacles can be realized, which has important research value for autonomous vehicles and active safety systems. The results show that the proposed method is effective in improving the accuracy and speed of obstacle detection and can meet the requirements of obstacle detection in complex environments. Obstacle detection in complex road environments is the basis for the safe driving of vehicles; therefore, obstacle avoidance path planning and speed control based on obstacle detection are our future research directions.

Data Availability

Data are available on request to the corresponding author.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 51905320, the China Postdoctoral Science Foundation under Grants 2018M632696 and 2018M642684, the Shandong Key R&D Plan Project under Grant 2019GGX104066, the Shandong Province Major Science and Technology Innovation Project under Grant 2019JZZY010911, and the SDUT and Zibo City Integration Development Project under Grant 2017ZBXC133.

References

[1] S. Grigorescu, B. Trasnea, T. Cocias, and G. Macesanu, "A survey of deep learning techniques for autonomous driving," Journal of Field Robotics, vol. 37, no. 3, pp. 362-386, 2020.
[2] H. Fujiyoshi, T. Hirakawa, and T. Yamashita, "Deep learning-based image recognition for autonomous driving," IATSS Research, vol. 43, no. 4, pp. 244-252, 2019.
[3] I.-S. Weon and S.-G. Lee, "Environment recognition based on multi-sensor fusion for autonomous driving vehicles," Journal of Institute of Control, Robotics and Systems, vol. 25, no. 2, pp. 125-131, 2019.
[4] Z. Sun, Q. Zhao, N. Zhang, B. Zhu, and Wang, "Intelligent vehicle multi-objective tracking based on particle swarm algorithm," Forest Engineering, vol. 36, no. 4, pp. 70-75, 2020.
[5] M. Bucolo, A. Buscarino, C. Famoso, L. Fortuna, and M. Frasca, "Control of imperfect dynamical systems," Nonlinear Dynamics, vol. 98, no. 4, pp. 2989-2999, 2019.
[6] L. Xiong, X. Xia, Y. Lu et al., "IMU-based automated vehicle slip angle and attitude estimation aided by vehicle dynamics," Sensors, vol. 19, no. 8, p. 1930, 2019.
[7] M. Shang, B. Rosenblad, and R. Stern, "A novel asymmetric car following model for driver-assist enabled vehicle dynamics," IEEE Transactions on Intelligent Transportation Systems, 2022.
[8] X. Zhao, P. Sun, Z. Xu, H. Min, and H. Yu, "Fusion of 3D LIDAR and camera data for object detection in autonomous vehicle applications," IEEE Sensors Journal, vol. 20, no. 9, pp. 4901-4913, 2020.
[9] W. Huang, Z. Zhang, W. Li, and J. Tian, "Moving object tracking based on millimeter-wave radar and vision sensor," Journal of Applied Science and Engineering, vol. 21, no. 4, pp. 609-614, 2018.
[10] C.-C. Lin, W.-L. Mao, T.-W. Chang, C.-Y. Chang, and S. S. S. Abdullah, "Fast obstacle detection using 3D-to-2D LiDAR point cloud segmentation for collision-free path planning," Sensors and Materials, vol. 32, no. 7, pp. 2365-2374, 2020.
[11] A. Dairi, F. Harrou, M. Senouci, and Y. Sun, "Unsupervised obstacle detection in driving environments using deep-learning-based stereovision," Robotics and Autonomous Systems, vol. 100, pp. 287-301, 2018.
[12] H. Zhu, K.-V. Yuen, L. Mihaylova, and H. Leung, "Overview of environment perception for intelligent vehicles," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 10, pp. 2584-2601, 2017.
[13] W. Song, Y. Yang, M. Fu, Y. Li, and M. Wang, "Lane detection and classification for forward collision warning system based on stereo vision," IEEE Sensors Journal, vol. 18, no. 12, pp. 5151-5163, 2018.
[14] L. Sun, K. Yang, X. Hu, W. Hu, and K. Wang, "Real-time fusion network for RGB-D semantic segmentation incorporating unexpected obstacle detection for road-driving images," IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5558-5565, 2020.
[15] Y. Qian, W. Zhou, J. Yan, W. Li, and L. Han, "Comparing machine learning classifiers for object-based land cover classification using very high resolution imagery," Remote Sensing, vol. 7, no. 1, pp. 153-168, 2015.
[16] Y. Yu, L. Kurnianggoro, and K.-H. Jo, "Comparing machine learning classifiers for object-based land cover classification using very high resolution imagery," International Journal of Control, Automation and Systems, vol. 17, no. 7, pp. 1866-1874, 2019.
[17] K. Dokka, P. R. MacNeilage, G. C. DeAngelis, and D. E. Angelaki, "Multisensory self-motion compensation during object trajectory judgments," Cerebral Cortex, vol. 25, no. 3, pp. 619-630, 2015.
[18] Z. Li, S. E. Dosso, and D. Sun, "Motion-compensated acoustic localization for underwater vehicles," IEEE Journal of Oceanic Engineering, vol. 41, no. 4, pp. 840-851, 2016.
[19] P. Agrawal, R. Kaur, V. Madaan, and M. Babu, "Moving object detection and recognition using optical flow and eigen face using low resolution video," Recent Patents on Computer Science, vol. 11, no. 2, pp. 1-10, 2018.
[20] J. Cho, Y. Jung, D.-S. Kim, S. Lee, and Y. Jung, "Moving object detection based on optical flow estimation and a Gaussian mixture model for advanced driver assistance systems," Sensors, vol. 19, no. 14, p. 3217, 2019.
[21] X. Zhao, F. Pu, Z. Wang, H. Chen, and Z. Xu, "Detection, tracking, and geolocation of moving vehicle from UAV using monocular camera," IEEE Access, vol. 7, pp. 101160-101170, 2019.
[22] L. Chengmei, B. Hongyang, G. Hongwei, and L. Huaju, "Moving object detection and tracking based on improved optical flow method," Chinese Journal of Scientific Instrument, vol. 39, pp. 249-256, 2018.
[23] I. Yang and W. H. Jeon, "Development of lane-level location data exchange framework based on high-precision digital map," Journal of Digital Contents Society, vol. 19, no. 8, pp. 1617-1623, 2018.
[24] M. ElMikaty and T. Stathaki, "Detection of cars in high-resolution aerial images of complex urban environments," IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 10, pp. 5913-5924, 2017.
[25] J. Li, H. Cheng, H. Guo, and S. Qiu, "Survey on artificial intelligence for vehicles," Automotive Innovation, vol. 1, no. 1, pp. 2-14, 2018.
[26] K. Choi, J. K. Suhr, and H. G. Jung, "Map-matching-based cascade landmark detection and vehicle localization," IEEE Access, vol. 7, pp. 127874-127894, 2019.
[27] S. H. Wang and X. X. Li, "A real-time monocular vision-based obstacle detection," in Proceedings of the 6th International Conference on Control, Automation and Robotics (ICCAR), pp. 695-699, Singapore, April 2020.
[28] H. Nguyen, "Improving Faster R-CNN framework for fast vehicle detection," Mathematical Problems in Engineering, vol. 2019, Article ID 3808064, 11 pages, 2019.
[29] Z. Yi, S. Yongliang, and Z. Jun, "An improved tiny-YOLO v3 pedestrian detection algorithm," Optik, vol. 183, pp. 17-23, 2019.
[30] K. Wang, F. Yan, B. Zou, L. Tang, Q. Yuan, and C. Lv, "Occlusion-free road segmentation leveraging semantics for autonomous vehicles," Sensors, vol. 19, no. 21, p. 4711, 2019.
[31] M. Kristan, V. Sulic Kenk, S. Kovacic, and J. Pers, "Fast image-based obstacle detection from unmanned surface vehicles," IEEE Transactions on Cybernetics, vol. 46, no. 3, pp. 641-654, 2016.
[32] M. Tkocz and K. Janschek, "Vehicle speed measurement based on binocular stereovision system," Journal of Intelligent & Robotic Systems, vol. 80, no. 3, pp. 475-489, 2015.
[33] C. Meng, H. Bao, Y. Ma, X. Xu, and Y. Li, "Visual Meterstick: preceding vehicle ranging using monocular vision based on the fitting method," Symmetry, vol. 11, no. 9, p. 1081, 2019.
[34] T. Zhe, L. Huang, Q. Wu, J. Zhang, C. Pei, and L. Li, "Inter-vehicle distance estimation method based on monocular vision using 3D detection," IEEE Transactions on Vehicular Technology, vol. 69, no. 5, pp. 4907-4919, 2020.
[35] L. A. Rosero and F. S. Osório, "Calibration and multi-sensor fusion for on-road obstacle detection," in Proceedings of the 2017 Latin American Robotics Symposium (LARS) and 2017 Brazilian Symposium on Robotics (SBR), pp. 1-6, Curitiba, Brazil, November 2017.
[36] N. Garnett, S. Silberstein, and S. Oron, "Real-time category-based and general obstacle detection for autonomous driving," in Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 198-205, Venice, Italy, October 2017.
[37] L. Caltagirone, M. Bellone, L. Svensson, and M. Wahde, "LIDAR-camera fusion for road detection using fully convolutional neural networks," Robotics and Autonomous Systems, vol. 111, pp. 125-131, 2019.
