To develop driving automation technologies for humans, a human-centered methodology should be adopted to ensure safety and a satisfactory user experience. Automated lane-change decision making in dense highway traffic is challenging, especially when different driver preferences must be considered. This paper proposes a personalized lane-change decision algorithm based on deep reinforcement learning (RL). First, driving experiments are carried out on a moving-base simulator. Based on an analysis of the experiment data, three personalization indicators are selected to describe driver preferences in lane-change decisions. Then, a deep RL approach is applied to design human-like agents for automated lane-change decisions, with rewards refined by the three personalization indicators to capture driver preferences. Finally, the trained RL agents and benchmark agents are tested in a two-lane highway driving scenario. The results show that the proposed algorithm achieves higher consistency with drivers' lane-change decision preferences than the benchmark algorithm.
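The core idea of refining an RL reward with personalization indicators can be sketched as follows. The abstract does not name the three indicators, so the ones below (preferred time gap to the lead vehicle, lane-change frequency, and preferred speed ratio) are illustrative assumptions, not the paper's actual choices; the weighting scheme is likewise a minimal placeholder.

```python
# Minimal sketch of a personalized reward refinement for a lane-change
# decision agent (e.g. a Deep Q-Network). The three indicators and their
# weights are HYPOTHETICAL stand-ins for the paper's personalization
# indicators, which the abstract does not specify.

def personalized_reward(base_reward, time_gap, lc_rate, speed_ratio,
                        prefs, weights=(1.0, 1.0, 1.0)):
    """Refine a base driving reward by penalizing deviation from a
    driver's preferred values of three indicators.

    base_reward  -- task reward (safety, efficiency, etc.)
    time_gap     -- current time gap to the lead vehicle [s]
    lc_rate      -- observed lane-change frequency [changes/min]
    speed_ratio  -- ego speed divided by desired speed
    prefs        -- dict of the driver's preferred indicator values
    weights      -- relative importance of each indicator
    """
    w1, w2, w3 = weights
    penalty = (w1 * abs(time_gap - prefs["time_gap"])
               + w2 * abs(lc_rate - prefs["lc_rate"])
               + w3 * abs(speed_ratio - prefs["speed_ratio"]))
    return base_reward - penalty


# Example: an agent matching the driver's preferences keeps the full
# base reward; deviating from the preferred time gap is penalized.
prefs = {"time_gap": 2.0, "lc_rate": 0.1, "speed_ratio": 1.0}
r_match = personalized_reward(1.0, 2.0, 0.1, 1.0, prefs)   # no penalty
r_dev = personalized_reward(1.0, 3.0, 0.1, 1.0, prefs)     # penalized
```

Plugged into a DQN training loop, this refined reward biases the learned lane-change policy toward the individual driver's style rather than a single population-average behavior.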
Applied Intelligence – Springer Journals
Published: Jun 1, 2023
Keywords: Reinforcement learning; Deep Q-Network; Automated driving; Lane change decision; Driver-in-the-loop experiment; Driving style