
Fast Obstacle Detection Using 3D-to-2D LiDAR Point Cloud Segmentation for Collision-free Path Planning

Sensors and Materials, Vol. 32, No. 7 (2020) 2365–2374
MYU Tokyo, S & M 2265

Chien-Chou Lin,1,2* Wei-Lung Mao,3 Teng-Wen Chang,4 Chuan-Yu Chang,1,2 and Salah Sohaib Saleh Abdullah1

1 Dept. of Computer Science and Information Engineering, National Yunlin University of Science & Technology, No. 123, University Rd., Section 3, Douliou, Yunlin 64002, Taiwan, R.O.C.
2 Intelligence Recognition Industry Service Research Center, National Yunlin University of Science and Technology, No. 123, University Rd., Section 3, Douliou, Yunlin 64002, Taiwan, R.O.C.
3 Dept. of Electrical Engineering, National Yunlin University of Science & Technology, No. 123, University Rd., Section 3, Douliou, Yunlin 64002, Taiwan, R.O.C.
4 Dept. of Digital Media Design, National Yunlin University of Science & Technology, No. 123, University Rd., Section 3, Douliou, Yunlin 64002, Taiwan, R.O.C.

(Received November 5, 2019; accepted April 20, 2020)

*Corresponding author: e-mail: linchien@yuntech.edu.tw
https://doi.org/10.18494/SAM.2020.2810
ISSN 0914-4935 © MYU K.K. https://myukk.org/

Keywords: LiDAR, obstacle detection, Bug algorithm, collision-free path planning, point cloud segmentation

Whereas many existing computer vision algorithms based on color images work well in robot navigation, most of them are sensitive to illumination and the reflectance of objects. Furthermore, according to the Oren–Nayar reflectance model, the reflectance depends on the material and surface of objects. Therefore, different sensors, passive and active, are used simultaneously to scan objects of different materials. In this paper, an integrated sensor system, including light detection and ranging (LiDAR), a global positioning system (GPS) receiver, a gyroscope (Gyro), and a camera, is proposed for an autonomous vehicle. The GPS and Gyro are used for locating the robot and identifying its orientation, which are used for global path planning toward a goal. The camera is used for remote video monitoring. The LiDAR captures point clouds of the current environment for use in planning the local path. In this paper, a fast segmentation method is proposed for obstacle detection. The proposed method includes ground point removal, region of interest (ROI) detection, 3D-to-2D projection, and clustering by grids. The purpose of ROI detection is to determine whether points are candidate obstacle points. The experimental results show that the proposed segmentation method reduces the size of the point cloud and the computation complexity significantly. The integrated multisensor system is expected to be practically used in the field.

1. Introduction

Since autonomous robots are often used for navigating unknown or dangerous environments, multiple sensing devices are required for them to plan a collision-free path. Autonomous vehicles use various sensors to perceive the environment and then extract features to avoid obstacles. Common sensors are cameras, radars, ultrasonic sensors, global positioning system (GPS) receivers, light perception sensors, and infrared ray sensors. Each sensor has its advantages and disadvantages. For robot localization, the proposed methods can be divided into two types: indoor-based and outdoor-based methods.
While Wi-Fi, iBeacon, and Li-Fi are used in indoor environments, GPS or a location-based service (LBS) is usually used outdoors. Regarding the self-driving ability of autonomous vehicles, most applications of sensors focus on computer vision, which often uses visual images. Existing state-of-the-art approaches for robot navigation use passive and active sensors simultaneously. A passive sensor receives signals from the environment and can be used in most scenarios without additional facilities. Active sensors sense the environment through the reflected signals emitted by the sensors themselves. The main benefit of active sensors is that they are robust to the weather and the illumination of the environment. Thus, multitype sensors, which include passive sensors and active sensors, are used in robot navigation to scan objects of different materials.

Light detection and ranging (LiDAR) has recently been widely used in robot navigation since it can sense the surface of objects accurately. Recently, similarly to visual images, depth images have been widely used in self-driving applications since they are not affected by illumination. Therefore, in this paper, 3D images acquired by LiDAR are used for obstacle detection. However, a point cloud contains a large amount of data; e.g., over 200000 3D points are captured with a 10 frames/s scanning rate. Processing such massive data in real-time applications is a big challenge. Therefore, reducing the number of points and removing the background points rapidly are very important for most applications of LiDAR.

The main contribution of this paper is a small autonomous vehicle system with the ability to detect obstacles and plan a collision-free path simultaneously by using various sensors. The key technology is rapid obstacle detection by 3D-to-2D projection and clustering by grids.

The rest of this paper is organized as follows. Related works are reviewed in Sect. 2. In Sect. 3, the proposed algorithm is introduced, and its methods, including ground detection, segmentation, and path planning, are explained individually. The experimental results and conclusion are given in Sects. 4 and 5, respectively.

2. Related Research

Recently, point clouds have been widely used in many applications. Generally, an important procedure is to extract the foreground objects from a point cloud. Therefore, many segmentation approaches have been proposed. The goal of point cloud segmentation is to separate the 3D points into several individual objects and backgrounds. In self-driving, the ground points are usually considered as background. Since the number of points in a scene frame is extremely large and ground points are a large proportion of the point cloud, most approaches first remove ground points.(1–3) A simple way to identify ground points is to use the height of points. In Ref. 4, the points were projected onto a 2D plane to view the surrounding traffic. Some approaches(5,6) used voxel segmentation, in which the Euclidean space is discretized into multiple one-unit cubes and points are assigned to grid points. By grouping related voxels to form an object, points can be classified as object points or ground points. In Ref. 7, a line-based method in a polar grid map was proposed. The points were clustered into grid cells and the grid map was divided into sectors. Then, the ground was detected in every sector by line extraction.(8)
Some approaches used the gradient to detect ground points; the segmentation was based on the deviation of height z with lateral position y.(9–11) Although in self-driving applications the goal of segmentation is to remove ground points, some segmentation algorithms have been proposed for clustering 3D objects. In Ref. 12, a label-equivalence-based labeling algorithm for 3D binary medical images was proposed. The algorithm checked the connectivity of the neighbors of the current voxel for labeling individual objects and was efficient for images with complicated connected components. Despite the fact that the existing approaches work well in segmentation, most of them are complex and have a long computation time. In the case of autonomous cars, the goal of segmentation is to identify whether points are obstacles. Therefore, in this paper, a fast segmentation approach that removes the ground points and the points over the vehicle is proposed for the obstacle detection of autonomous cars.

3. Proposed Obstacle Detection Algorithm

The small autonomous vehicle system proposed in this paper is a four-wheel electric scooter with four independent motors and a joint between the front part and the rear part. The location and orientation of the scooter are provided by the combination of GPS and a gyroscope (Gyro), so that the system can record the moving trajectory and follow the planned path to the goal. A LiDAR scans the point cloud of the surrounding obstacles, and the camera allows remote monitoring. The collision-free path planning is divided into five steps:
(1) Use GPS to find the straight path between the current position and the end point.
(2) Use the Gyro information to turn the front of the car toward the straight line.
(3) Use the sensor to find the nearest obstacle.
(4) Move in the direction of the tangent line of the edge of the obstacle.
(5) Return to the path after avoiding the obstacle, then repeat the above steps until reaching the target end point.

In this paper, various sensors are combined with the above-mentioned electric scooter to enhance the ability to monitor the surrounding environment. The goal of local path planning is obstacle avoidance. Before detouring to avoid an obstacle, the robot first has to detect obstacles by sensing the environment. In this paper, a fast point cloud segmentation is proposed for obstacle detection. The proposed algorithm uses the point cloud data captured by the LiDAR to detect obstacles. The point cloud is processed in the following steps: (1) ground object detection, (2) projection and clustering, (3) obstacle detection, and (4) moving direction decision. Details are given in the following subsections.

3.1 Ground object detection

Since, in practice, the scan rate of the LiDAR is 10 frames/s, the total processing time of a point cloud should be within 0.1 s. In this paper, not only the ground points but also the points above the vehicle are removed. The proposed algorithm uses a vertical threshold; thus, the object shapes are ignored and only object distances are considered. The height of the vehicle, $H_O$, is considered as the appropriate threshold, and a point $P_i$ located within this range is considered as a region of interest (ROI) point. As shown in Fig. 1, the ROI criteria can be expressed as

$P_i(x_i, y_i, z_i) \in P_{ROI} \ \text{if} \ (H_T \le z_i \le H_T + H_O) \cup (z_i \le H_{Tl})$,   (1)

and

$z_i = \gamma_i \sin \omega_i$,   (2)

where $z_i$ is the height of a scan point $P_i$, $H_T$ represents the vertical height obstacle threshold, and $H_{Tl}$ represents the vertical pit threshold. A short illustrative sketch of this ROI filter is given below.
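As a rough illustration of the ROI test in Eqs. (1) and (2), the following Python sketch filters a point cloud by height. The threshold values and the array layout are our assumptions for demonstration; the paper does not publish an implementation or its parameter values.

```python
import numpy as np

# Hypothetical thresholds in meters; the paper does not report its values.
H_T = 0.15    # vertical height obstacle threshold (above the ground plane)
H_O = 1.60    # vehicle height
H_TL = -0.10  # vertical pit threshold (points this low suggest a pit)

def roi_filter(points):
    """Keep candidate obstacle points according to Eq. (1).

    points: (N, 3) array of (x, y, z) LiDAR returns in the sensor frame,
    with z already computed from Eq. (2), z_i = gamma_i * sin(omega_i).
    """
    z = points[:, 2]
    in_band = (z >= H_T) & (z <= H_T + H_O)  # between ground and vehicle height
    in_pit = z <= H_TL                       # below the pit threshold
    return points[in_band | in_pit]

# Toy usage: a synthetic frame of roughly the size reported in Table 1.
cloud = np.random.uniform([-20.0, -20.0, -0.5], [20.0, 20.0, 3.0], (27605, 3))
print(f"{len(roi_filter(cloud))} of {len(cloud)} points kept as ROI candidates")
```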
3.2 Grid clustering

To reduce the number of points, the points out of range are discarded and the candidate points are projected onto a horizontal plane. The projected points are grouped into several grids to form a grid map. The grid-based grouping aggregates the points of the grids within a fixed distance; thus, the massive point cloud is compressed and represented as a grid map. Figures 2(a) and 2(b) show the point cloud of ground objects and the corresponding aggregate grid map, respectively. The projection can be expressed as

$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \gamma \cos\omega \sin\alpha \\ \gamma \cos\omega \cos\alpha \end{bmatrix}$,   (3)

where $\gamma$ is the measured distance, $\omega$ is the elevation angle of the scanning line, and $\alpha$ is the azimuth angle of the scanning line.

Fig. 1. (Color online) Threshold of obstacles. $H_O$ and $H_T$ are the vehicle height and the obstacle threshold, respectively.
Fig. 2. (Color) (a) Point cloud of ground objects and (b) aggregate grid map. (c) Grouping result.

3.3 Obstacle detection based on grid feature

After reducing the data, obstacles can be found easily by the connected-component labeling (CCL) method or a region-growing approach. In this paper, the region-growing approach is adopted. The seed points are selected from nonempty grids, from near to far. The obstacles are then grown from these seed points to adjacent grids according to the 8-connected neighborhood. The region growing stops when no nonempty grids remain as neighbors. If an unclassified object grid is found, it is regarded as the seed point of the next cluster. This process continues until all the nonempty grids are grouped. The grouping result is shown in Fig. 2(c). Consider a grid $g_{i,j}$ belonging to an object $O_k$ and its 8-connected neighbors $g_{m,n}$, where $i - 1 \le m \le i + 1$ and $j - 1 \le n \le j + 1$. Then, the 8-connected neighbors are checked for connectivity by using

$\begin{cases} g_{m,n} \in O_k & \text{if } g_{i,j} \in O_k \text{ and } \mathrm{num}(g_{m,n}) > 0 \\ g_{m,n} \notin O_k & \text{if } g_{i,j} \in O_k \text{ and } \mathrm{num}(g_{m,n}) = 0 \end{cases}$.   (4)

3.4 Moving direction decision

Robot navigation has been a very popular research topic for the last few decades. The navigation function usually depends on various sensors, and many path planning algorithms have been proposed.(13–19) In this paper, the Bug algorithm is adopted for local path planning to avoid obstacles. The term "Bug algorithms" first came into existence in the late 1980s. Bug algorithms are classical and widely used for sensor-based path finders. There are different types of algorithms, Bug 1,(13) Bug 2,(13) Tangent Bug,(14) Dist Bug,(15) and Wedge Bug,(16) all of which are usually called Bug algorithms. The classic Bug algorithm consists of two main modes: (1) moving toward the target and (2) walking along the obstacle edge. In the first mode, the algorithm determines the movement and termination of the path; in the second, the robot follows the obstacle edge to avoid it and then returns to the path. Since the vehicle system proposed in this paper uses a LiDAR to improve the sensing range, obstacles can be detected within a certain distance, and a detour can be taken in advance to shorten the path. The detour method first finds the tangent point $O_e(x_e, y_e)$ on the nearest obstacle and changes the moving direction toward that tangent point. Illustrative sketches of the grid projection (Sect. 3.2), the region growing (Sect. 3.3), and the direction decision (Sect. 3.4) follow below.
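To illustrate Sect. 3.2, the sketch below applies the projection of Eq. (3) and bins the projected points into cells of a 2D grid map. The cell size is a hypothetical choice, as the paper does not report its grid resolution; the per-cell counts play the role of num(g) in Eq. (4).

```python
import numpy as np

CELL = 0.25  # hypothetical grid cell size in meters

def project_to_grid(gamma, omega, alpha, cell=CELL):
    """Project spherical LiDAR returns onto the horizontal plane, Eq. (3),
    and aggregate the projected points into a grid map.

    gamma: measured distances; omega: elevation angles; alpha: azimuth
    angles (equal-length 1D arrays, angles in radians).
    Returns a dict mapping (col, row) cell indices to point counts.
    """
    x = gamma * np.cos(omega) * np.sin(alpha)
    y = gamma * np.cos(omega) * np.cos(alpha)
    grid = {}
    cols = np.floor(x / cell).astype(int)
    rows = np.floor(y / cell).astype(int)
    for key in zip(cols.tolist(), rows.tolist()):
        grid[key] = grid.get(key, 0) + 1  # num(g) for this cell
    return grid
```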
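For Sect. 3.3, a minimal region-growing pass over the nonempty grid cells might look as follows: seeds are taken near to far, and each object expands through 8-connected neighbors per Eq. (4). The breadth-first traversal is our implementation choice, not the authors' stated one.

```python
from collections import deque

def grow_objects(grid):
    """Group nonempty grid cells into objects by 8-connected region growing.

    grid: dict mapping (col, row) -> point count, i.e., num(g) in Eq. (4).
    Seeds are visited from near to far (sorted by squared distance from the
    sensor at the origin), as in Sect. 3.3. Returns a dict cell -> label.
    """
    labels, next_label = {}, 0
    for seed in sorted(grid, key=lambda c: c[0] ** 2 + c[1] ** 2):
        if seed in labels:          # already absorbed by an earlier object
            continue
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            i, j = queue.popleft()
            for m in (i - 1, i, i + 1):      # 8-connected neighborhood,
                for n in (j - 1, j, j + 1):  # i-1 <= m <= i+1, j-1 <= n <= j+1
                    g = (m, n)
                    # Eq. (4): a nonempty, unlabeled neighbor joins object O_k.
                    if g in grid and g not in labels:
                        labels[g] = next_label
                        queue.append(g)
        next_label += 1
    return labels
```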
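Finally, the direction decision of Sect. 3.4 can be summarized as: head for the goal, and when the nearest obstacle comes within some trigger distance, steer toward its tangent point. The sketch below is only a schematic reading of the five-step loop; the trigger distance is hypothetical, and the paper does not publish its controller or tangent-point computation.

```python
import math

def bearing(src, dst):
    """Heading angle from src to dst, both (x, y) tuples."""
    return math.atan2(dst[1] - src[1], dst[0] - src[0])

def choose_direction(pos, goal, tangent_points, trigger_dist=3.0):
    """Pick the next moving direction in the spirit of the Bug algorithm.

    tangent_points: candidate tangent points O_e(x_e, y_e) on detected
    obstacles, e.g., extracted from the clustered grid map.
    trigger_dist is a hypothetical distance at which a detour starts.
    """
    if tangent_points:
        nearest = min(tangent_points, key=lambda o: math.dist(pos, o))
        if math.dist(pos, nearest) < trigger_dist:
            # Detour: turn toward the tangent point of the nearest obstacle.
            return bearing(pos, nearest)
    # Otherwise follow the straight line toward the goal (Steps 1-2 and 5).
    return bearing(pos, goal)
```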
4. Experimental Results

The proposed integrated multisensor system is mounted on a four-wheel electric scooter driven by four independent motors. As shown in Fig. 3, the scooter has a joint between the front part and the rear part, which makes turning more flexible. The advantages of this mobile platform are a small gyration radius, strong grip, and high transmission efficiency.

Fig. 3. (Color online) Four-wheel scooter used in this research. The proposed autonomous car consists of a LiDAR, GPS, Gyro, and color camera.

The LiDAR used in this study is a 16-channel multilayered VLP-16 LiDAR. The scanning rate is 10 frames/s, and a frame has more than 30000 points. With the proposed method, the ROI of a frame can be reduced to 30% of its original size, and there are about 1000 clustered grids per frame. The results obtained in three test environments are shown in Table 1. The obstacles can be detected rapidly because the amount of data is reduced by 95%. In Figs. 4(a) and 4(b), the original point cloud data and the grid map are shown. In Fig. 4(a), there are 12797 ROI points, which are selected from the 27605 points of a point cloud. These points are then grouped into 806 grids. Only 4% of the data are used for obstacle detection, significantly increasing the detection speed. Figure 5 shows the object labeling results for the three test environments.

Table 1
Results for three test environments. The reduction is computed from the original points to the clustered grids, i.e., (1 − Grids/Original) × 100 (a short numerical check is given at the end of this section).
Scene   Original   ROI     Grids   Reduction (%)
A       27605      12797   806     97.08
B       26943      13832   1304    95.16
C       27302      12867   772     97.17

The software simulation of the proposed algorithm showed good results since the planned path was very similar to the shortest path. However, in the field test, the accuracy of GPS and the control strategy decreased the performance of the proposed algorithm. The first test environment, shown in Fig. 6(a), is a parking area with a smooth road. The distance from the start to the goal is about 30 m, and there are five fixed obstacles in the area. The shortest path is 32 m, whereas the planned path is 85.8 m. The second test environment, shown in Fig. 6(b), is a trail with a rough surface. The distance from the start to the goal is about 30 m, and there are five fixed obstacles in the area. The shortest path is 32 m, whereas the planned path is 94.1 m. Again, because of the accuracy of GPS and the control strategy, in practical experiments the proposed system not only plans a longer path but also takes more time to find a path than the software simulation. A comparison of the planned paths in the different test environments is given in Table 2.

Fig. 4. (Color) Grid maps for three test environments. (a) Scene A. (b) Scene B. (c) Scene C.
Fig. 5. (Color) Object labeling results for three test environments. (a) Scene A. (b) Scene B. (c) Scene C.
Fig. 6. (Color) Two test environments. The obstacles are marked in red. The shortest path is the yellow one and the planned path is the red one. (a) Parking area with smooth road. (b) Trail.

Table 2
Comparison of the planned path in different test environments.
              Speed (m/s)   Time (s)   Moving distance (m)   Shortest path (m)
Simulation    0.8           38         30.4                  27.5
Parking area  0.6           143        85.8                  32
Trail         0.5           182        94.1                  32
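As a quick consistency check of Table 1, the reduction column follows directly from the published point and grid counts:

```python
# Reproduce the reduction column of Table 1:
# reduction (%) = (1 - grids / original) * 100.
scenes = {"A": (27605, 806), "B": (26943, 1304), "C": (27302, 772)}
for name, (original, grids) in scenes.items():
    print(f"Scene {name}: {(1 - grids / original) * 100:.2f}%")
# -> Scene A: 97.08%, Scene B: 95.16%, Scene C: 97.17%
```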
5. Conclusions

In the past decade, multitype sensors have usually been used in robot navigation to gather information on objects of different materials in environments that are sometimes dark or contain shadows. While the performance of passive sensors, e.g., cameras and thermal sensors, is usually affected by environmental conditions, active sensors, e.g., LiDAR and radar, are robust to the weather and illumination. Recently, 3D depth images acquired by LiDAR have been widely used in self-driving systems. However, the massive amount of data in a point cloud results in a long computation time, and processing such massive data in real-time applications is a big challenge. Therefore, in this paper, an autonomous vehicle system with the ability to detect obstacles and plan a collision-free path simultaneously by using various sensors was proposed. The key technology is rapid obstacle detection using 3D-to-2D projection and clustering by grids. Obstacles can be detected rapidly because the amount of data is reduced by 95%. After applying the point cloud preprocessing method proposed in this paper, the real-time obstacle detection performance was greatly improved, and each data analysis was completed within 10 ms. Our software simulation of the proposed algorithm showed good results since the planned path was very similar to the shortest path. However, in the field test, the accuracy of GPS and the control strategy decreased the performance of the proposed algorithm. In future work, improving the accuracy of GPS and the control strategy is expected to reduce the length of the path planned by the proposed system and increase the processing speed.

Acknowledgments

The authors wish to thank Dr. Shangde Wu, a professor with the Department of Mechanical Engineering, Yunlin University of Science and Technology, Taiwan, R.O.C., for kindly providing the mobile platform. Part of this work was financially supported by the "Intelligent Recognition Industry Service Research Center" from The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan.

References

1 Y. Yu, J. Li, J. Yu, H. Guan, and C. Wang: IEEE Geosci. Remote Sens. Lett. 11 (2014) 1019. https://doi.org/10.1109/LGRS.2013.2285237
2 Y. Yu, J. Li, H. Guan, F. Jia, and C. Wang: IEEE Geosci. Remote Sens. Lett. 12 (2015) 492. https://doi.org/10.1109/LGRS.2014.2347347
3 H. Cheng, N. Zheng, X. Zhang, J. Qin, and H. van de Wetering: IEEE Trans. Intell. Transp. Syst. 8 (2007) 157. https://doi.org/10.1109/TITS.2006.890073
4 D. O. Rubio, A. Lenskiy, and J. Ryu: Proc. 2013 7th Asia Modelling Symp. 160–165. https://doi.org/10.1109/AMS.2013.31
5 Y. Yu, J. Li, H. Guan, and C. Wang: IEEE Trans. Intell. Transp. Syst. 16 (2015) 2167. https://doi.org/10.1109/TITS.2015.2399492
6 G. Zhao and J. Yuan: Proc. 2012 19th IEEE Int. Conf. Image Processing (IEEE 2012) 437–440. https://doi.org/10.1109/ICIP.2012.6466890
7 M. Himmelsbach, F. v. Hundelshausen, and H. J. Wuensche: 2010 IEEE Intell. Veh. Symp. (IV) (USA) 560–565. https://doi.org/10.1109/IVS.2010.5548059
8 V. Nguyen, A. Martinelli, N. Tomatis, and R. Siegwart: IEEE Int. Conf. Intelligent Robots and Systems (IROS 2005) 1929–1934. https://doi.org/10.1109/IROS.2005.1545234
9 J. Choi, J. Lee, D. Kim, G. Soprani, P. Cerri, A. Broggi, and K. Yi: IEEE Trans. Intell. Transp. Syst. 13 (2015) 974. https://doi.org/10.1109/TITS.2011.2179802
10 J. Weber and L. Matthies: Proc. Intell. Veh. Symp. (1996) 345–350. https://doi.org/10.1109/IVS.1996.566404
11 F. Maurelli, D. Droeschel, T. Wisspeintner, S. May, and H. Surmann: Proc. IEEE Adv. Robot. (IEEE 2009) 1–6.
12 L. He, Y. Chao, and K. Suzuki: IEEE Trans. Image Processing 20 (2011) 2122. https://doi.org/10.1109/TIP.2011.2114352
13 H. Choset, K. Lynch, S. Hutchinson, G. Kantor, W. Burgard, L. Kavraki, and S. Thrun: Principles of Robot Motion: Theory, Algorithms, and Implementation (MIT Press, Massachusetts, 2005) Chap. 2.
14 I. Kamon, E. Rimon, and E. Rivlin: Int. J. Rob. Res. 17 (1998) 934. https://doi.org/10.1177/027836499801700903
15 I. Kamon, E. Rivlin, and E. Rimon: Proc. IEEE Int. Conf. Robotics and Automation (ICRA 1996) 429–435. https://doi.org/10.1109/ROBOT.1996.503814
16 I. Kamon and E. Rivlin: IEEE Trans. Robotics and Automation 13 (1997) 814. https://doi.org/10.1109/70.650160
17 F. Shahzad and R. Q. Shahzad: Proc. IEEE Int. Conf. Emerging Technologies (ICET 2006) 575. https://doi.org/10.1109/ICET.2006.335934
18 A. D. K. Lam, S. D. Prior, S. Shen, S. Young, and L. Ji: Engineering Innovation and Design (CRC Press, Florida, 2019). https://doi.org/10.1201/9780429019777
19 A. D. K. Lam, S. D. Prior, S. Shen, S. Young, and L. Ji: Smart Science, Design & Technology (CRC Press, Florida, 2020). https://doi.org/10.1201/9780429058127

About the Authors

Chien-Chou Lin received his M.S. and Ph.D. degrees from National Chiao Tung University, Taiwan, in 1994 and 2004, respectively. From 2010 to 2013, he was an assistant professor, and since 2013, he has been an associate professor at National Yunlin University of Science and Technology, Taiwan. His research interests are robotics, point cloud processing, surface matching, and object recognition. (linchien@yuntech.edu.tw)

Wei-Lung Mao was born in Taiwan, R.O.C., in 1972. He received his B.S. degree in electrical engineering from National Taiwan University of Science and Technology in 1994 and his M.S. and Ph.D. degrees in electrical engineering from National Taiwan University in 1996 and 2004, respectively. He is now a professor in the Department of Electrical Engineering and Graduate School of Engineering Science and Technology, National Yunlin University of Science and Technology. His research interests are precision motion control, intelligent and adaptive control systems, satellite navigation systems, adaptive signal processing, neural networks, and communication electronics. (wlmao@yuntech.edu.tw)

Teng-Wen Chang received his Ph.D. degree from the University of Adelaide, Australia, in 1999, his M.Arch. degree from the University of Pennsylvania, USA, in 1993, and his M.S. degree in computational design from Carnegie Mellon University, USA, in 1995. He is currently a professor in the Department of Digital Media Design and director of SOFTLab and Idea Factory at National Yunlin University of Science and Technology, Taiwan. His current research interests are boundaryless design environments, human–machine interaction, sensible intelligent machines, and design space exploration. (tengwen@yuntech.edu.tw)

Chuan-Yu Chang received his B.S. degree in nautical technology and his M.S. degree in electrical engineering from National Taiwan Ocean University, Keelung, Taiwan, in 1993 and 1995, respectively, and his Ph.D. degree in electrical engineering from National Cheng Kung University, Tainan, Taiwan, in 2000.
From 2007 to 2010, he was an associate professor, and since 2010, he has been a full professor at National Yunlin University of Science and Technology, Taiwan. His research interests include neural networks and their application to medical image processing, wafer defect inspection, digital watermarking, and pattern recognition. (chuanyu@yuntech.edu.tw)

Salah Sohaib Saleh Abdullah received his B.S. and M.S. degrees in computer science and information engineering from National Yunlin University of Science and Technology, Yunlin, Taiwan, in 2016 and 2018, respectively. Since 2018, he has been a research assistant at National Yunlin University of Science and Technology. His research interests include robotics, path planning, and object recognition. (salah88868@gmail.com)
