In this paper, an improved two-stage path planning algorithm is proposed to help indoor mobile robots navigate through dynamic, crowded areas. First, an environmental model is built with a restricted tangent graph, and the shortest path is computed before the robot starts to move. Second, the obtained path is used to derive virtual corridor distances. These features, together with local range-finder readings, form the input used to train a deep Q-learning agent that plans the robot’s motion. Simulation results show a success rate above 70%, with significant improvements over a counterpart algorithm in path length, learning cost, and generalization capability.
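The two-stage structure described above can be sketched in code. Once the restricted tangent graph is built, the first stage reduces to a shortest-path search over a weighted graph; the second stage assembles an observation vector from virtual-corridor distances and range-finder readings for the deep Q-learning agent. The sketch below is illustrative only: the function names, graph encoding, and state layout are assumptions, not the authors' implementation.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra search over a tangent-graph-like structure given as
    {node: [(neighbor, edge_cost), ...]}. Returns (path, total_cost)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk predecessors back from the goal to reconstruct the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

def make_state(corridor_dists, lidar_readings):
    """Concatenate virtual-corridor distances with local range-finder
    readings into one observation vector for the DQN agent
    (hypothetical layout; the paper's exact feature order is not given)."""
    return list(corridor_dists) + list(lidar_readings)

# Example: four tangent-graph nodes; the planner picks the cheaper route.
graph = {"S": [("A", 1.0), ("B", 2.0)],
         "A": [("G", 5.0)],
         "B": [("G", 2.0)]}
path, cost = shortest_path(graph, "S", "G")
state = make_state([0.5, 1.2], [3.0, 2.5, 4.0])
```

In this toy graph the route S-B-G (cost 4.0) is chosen over S-A-G (cost 6.0); the resulting corridor features would then be refreshed as the robot moves and fed to the learned controller at each step.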
Automatic Control and Computer Sciences – Springer Journals
Published: Feb 1, 2023
Keywords: path planning; obstacle avoidance; tangent graph; virtual corridor; deep Q-learning