Exploring DQN-Based Reinforcement Learning in Autonomous Highway Navigation Performance Under High-Traffic Conditions

Authors

  • Sandy Nugroho, Dian Nuswantoro University
  • De Rosal Ignatius Moses Setiadi, Dian Nuswantoro University, https://orcid.org/0000-0001-6615-4457
  • Hussain Md Mehedul Islam, The MathWorks, Inc.

DOI:

https://doi.org/10.62411/jcta.9929

Keywords:

Autonomous Highway Navigation, Autonomous Vehicle Navigation, Autonomous Navigation in Crowded Traffic, Deep Q-Network, Reinforcement Learning

Abstract

Driving in a straight line is one of the fundamental tasks for autonomous vehicles, yet it becomes complex and challenging on high-speed highways and in dense traffic. This research explores the Deep Q-Network (DQN) model, a reinforcement learning (RL) method, in a highway environment. DQN was chosen for its proficiency in handling complex, high-dimensional inputs through neural-network function approximation, making it well suited to high-complexity environments. DQN simulations were conducted across four scenarios in the Highway-Env simulator, with the agent operating at speeds ranging from 60 to nearly 100 km/h, the number of surrounding vehicles/obstacles varying from 20 to 80, and each simulation lasting 40 seconds. Based on the test results, the DQN method exhibited excellent performance, achieving its highest reward in the first scenario, 35.6117 out of a maximum of 40, and a success rate of 90.075%.
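To make the experimental setup concrete, the sketch below shows how a DQN agent could be trained in the Highway-Env simulator with roughly the configuration the abstract describes (40-second episodes, a variable vehicle count). The choice of the stable-baselines3 DQN implementation and all hyperparameter values here are illustrative assumptions, not the authors' actual code or configuration.

```python
# Minimal sketch of the abstract's setup: a DQN agent in Highway-Env.
# Assumes stable-baselines3; hyperparameters are illustrative only.
import gymnasium as gym
import highway_env  # noqa: F401 -- importing registers "highway-v0"
from stable_baselines3 import DQN

# Configure traffic density and episode length along the lines of the
# scenarios in the abstract; the configuration takes effect at reset().
env = gym.make("highway-v0")
env.unwrapped.configure({
    "vehicles_count": 50,  # assumed mid-range value; scenarios vary 20-80
    "duration": 40,        # 40-second episodes, as in the abstract
})
env.reset()

# DQN approximates the action-value function Q(s, a) with a neural
# network, which is what lets it cope with this high-dimensional setting.
model = DQN("MlpPolicy", env, learning_rate=5e-4, gamma=0.8, verbose=1)
model.learn(total_timesteps=20_000)

# Roll out one greedy episode with the trained policy.
obs, info = env.reset()
done = truncated = False
total_reward = 0.0
while not (done or truncated):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, truncated, info = env.step(action)
    total_reward += reward
print(f"Episode reward: {total_reward:.4f}")
```

The accumulated episode reward printed at the end is directly comparable to the per-scenario reward values reported in the abstract (e.g., 35.6117 out of a maximum of 40).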

Author Biographies

De Rosal Ignatius Moses Setiadi, Dian Nuswantoro University

Sinta ID: 6007744; Scopus ID: 57200208474

Hussain Md Mehedul Islam, The MathWorks, Inc.

Software Engineer, The MathWorks, Inc., United States

References

World Health Organization, “Road traffic injuries,” 2023. https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries (accessed Jan. 30, 2024).

M. Bertoncello and D. Wee, “Ten ways autonomous driving could redefine the automotive world,” McKinsey, 2015. https://mckinsey.com/industries/automotive-and-assembly/our-insights/ten-ways-autonomous-driving-could-redefine-the-automotive-world (accessed Jan. 30, 2024).

H. Detjen, S. Geisler, and S. Schneegass, “Maneuver-based Driving for Intervention in Autonomous Cars,” in CHI’19 Workshop on “Looking into the Future: Weaving the Threads of Vehicle Automation,” 2020.

S. Zhang, L. Wen, H. Peng, and H. E. Tseng, “Quick Learner Automated Vehicle Adapting its Roadmanship to Varying Traffic Cultures with Meta Reinforcement Learning,” in 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Sep. 2021, pp. 1745–1752. doi: 10.1109/ITSC48978.2021.9564972.

A. Nandy and M. Biswas, Reinforcement Learning. Berkeley, CA: Apress, 2018. doi: 10.1007/978-1-4842-3285-9.

J. Xie, Z. Shao, Y. Li, Y. Guan, and J. Tan, “Deep Reinforcement Learning With Optimized Reward Functions for Robotic Trajectory Planning,” IEEE Access, vol. 7, pp. 105669–105679, 2019, doi: 10.1109/ACCESS.2019.2932257.

B. Paden, M. Cap, S. Z. Yong, D. Yershov, and E. Frazzoli, “A Survey of Motion Planning and Control Techniques for Self-Driving Urban Vehicles,” IEEE Trans. Intell. Veh., vol. 1, no. 1, pp. 33–55, Mar. 2016, doi: 10.1109/TIV.2016.2578706.

E. Leurent and J. Mercat, “Social Attention for Autonomous Decision-Making in Dense Traffic,” Nov. 2019, doi: 10.48550/arXiv.1911.12250.

N. Carrara, E. Leurent, R. Laroche, T. Urvoy, O.-A. Maillard, and O. Pietquin, “Budgeted Reinforcement Learning in Continuous State Space,” in Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019, pp. 9299–9309. [Online]. Available: http://arxiv.org/abs/1903.01004

B. Brito, A. Agarwal, and J. Alonso-Mora, “Learning Interaction-Aware Guidance for Trajectory Optimization in Dense Traffic Scenarios,” IEEE Trans. Intell. Transp. Syst., vol. 23, no. 10, pp. 18808–18821, Oct. 2022, doi: 10.1109/TITS.2022.3160936.

S. Glaser, B. Vanholme, S. Mammar, D. Gruyer, and L. Nouveliere, “Maneuver-Based Trajectory Planning for Highly Autonomous Vehicles on Real Road With Traffic and Driver Interaction,” IEEE Trans. Intell. Transp. Syst., vol. 11, no. 3, pp. 589–606, Sep. 2010, doi: 10.1109/TITS.2010.2046037.

A. Kusari, “Assessing and Accelerating Coverage in Deep Reinforcement Learning,” Dec. 2020, doi: 10.48550/arXiv.2012.00724.

Y. Pan et al., “Understanding and Mitigating the Limitations of Prioritized Experience Replay,” Jul. 2020, doi: 10.48550/arXiv.2007.09569.

E. Leurent, D. Efimov, and O.-A. Maillard, “Robust-Adaptive Interval Predictive Control for Linear Uncertain Systems,” in 2020 59th IEEE Conference on Decision and Control (CDC), Dec. 2020, pp. 1429–1434. doi: 10.1109/CDC42340.2020.9304308.

J. Gläscher, N. Daw, P. Dayan, and J. P. O’Doherty, “States versus Rewards: Dissociable Neural Prediction Error Signals Underlying Model-Based and Model-Free Reinforcement Learning,” Neuron, vol. 66, no. 4, pp. 585–595, May 2010, doi: 10.1016/j.neuron.2010.04.016.

N. D. Daw, Y. Niv, and P. Dayan, “Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control,” Nat. Neurosci., vol. 8, no. 12, pp. 1704–1711, Dec. 2005, doi: 10.1038/nn1560.

V. Mnih et al., “Playing Atari with Deep Reinforcement Learning,” Dec. 2013, [Online]. Available: http://arxiv.org/abs/1312.5602

A. Amballa, A. P., P. Sasmal, and S. Channappayya, “Discrete Control in Real-World Driving Environments using Deep Reinforcement Learning,” Nov. 2022, [Online]. Available: http://arxiv.org/abs/2211.15920

J. Liao, T. Liu, X. Tang, X. Mu, B. Huang, and D. Cao, “Decision-Making Strategy on Highway for Autonomous Vehicles Using Deep Reinforcement Learning,” IEEE Access, vol. 8, pp. 177804–177814, 2020, doi: 10.1109/ACCESS.2020.3022755.

S. Kuutti, R. Bowden, and S. Fallah, “Weakly Supervised Reinforcement Learning for Autonomous Highway Driving via Virtual Safety Cages,” Sensors, vol. 21, no. 6, p. 2032, Mar. 2021, doi: 10.3390/s21062032.

D. M. Saxena, S. Bae, A. Nakhaei, K. Fujimura, and M. Likhachev, “Driving in Dense Traffic with Model-Free Reinforcement Learning,” in 2020 IEEE International Conference on Robotics and Automation (ICRA), May 2020, pp. 5385–5392. doi: 10.1109/ICRA40945.2020.9197132.

V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, Feb. 2015, doi: 10.1038/nature14236.

Y. Li, “Deep Reinforcement Learning,” Oct. 2018, [Online]. Available: http://arxiv.org/abs/1810.06339

E. Leurent, “An Environment for Autonomous Driving Decision-Making,” GitHub repository, 2018. [Online]. Available: https://github.com/eleurent/highway-env

G. Dulac-Arnold et al., “Deep Reinforcement Learning in Large Discrete Action Spaces,” Dec. 2015, [Online]. Available: http://arxiv.org/abs/1512.07679

Published

2024-02-13

How to Cite

Nugroho, S., Setiadi, D. R. I. M., & Islam, H. M. M. (2024). Exploring DQN-Based Reinforcement Learning in Autonomous Highway Navigation Performance Under High-Traffic Conditions. Journal of Computing Theories and Applications, 1(3), 274–286. https://doi.org/10.62411/jcta.9929