
Published: 26.04.2014
License: Copyright (c) 2024 Rok Marsetič, Darja Šemrov, Marijan Žura

Road Artery Traffic Light Optimization with Use of the Reinforcement Learning

Authors:

Rok Marsetič
University of Ljubljana, Faculty of Civil and Geodetic Engineering

Darja Šemrov
University of Ljubljana, Faculty of Civil and Geodetic Engineering

Marijan Žura
University of Ljubljana, Faculty of Civil and Geodetic Engineering

Keywords: reinforcement learning, Q learning, road artery, traffic control, traffic lights

Abstract

The basic principle of optimal traffic control is an appropriate real-time response to dynamic changes in traffic flow. Signal plan efficiency depends on a large number of input parameters. An actuated signal system can adapt well to traffic conditions, but it cannot fully adjust to stochastic oscillations in traffic volume. Because of the complexity of the problem, analytical methods are not applicable in real time; the purpose of this paper is therefore to introduce a heuristic method suitable for real-time traffic light optimization. The evolution of artificial intelligence has opened new possibilities for solving complex problems. The goal of this paper is to demonstrate that the Q learning algorithm is suitable for traffic light optimization. The algorithm was verified on a road artery with three intersections, and its effectiveness and efficiency were estimated by comparison with an actuated signal plan. The results (average delay per vehicle and the number of vehicles that left the road network) show that the Q learning algorithm outperforms the actuated signal controllers. The proposed algorithm converges to the minimal delay per vehicle regardless of the stochastic nature of traffic. The impact of the model parameters (learning rate, exploration rate, influence of communication between agents and reward type) on the effectiveness of the algorithm was also analysed.
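The paper applies Q learning to a three-intersection artery with communication between agents; the sketch below illustrates only the core Q learning update (epsilon-greedy action selection and the temporal-difference update) for a single hypothetical intersection. The state discretization, the queue-based reward used as a proxy for delay, and the toy arrival model are illustrative assumptions, not the authors' simulation setup.

```python
# Minimal Q-learning sketch for a single signalized intersection.
# Hypothetical illustration only: the state discretization, the reward
# (negative total queue length as a proxy for delay) and the toy arrival
# model below are assumptions, not the setup used in the paper.
import random
from collections import defaultdict

ALPHA = 0.1       # learning rate
GAMMA = 0.9       # discount factor
EPSILON = 0.1     # exploration rate (epsilon-greedy)
ACTIONS = (0, 1)  # 0 = keep current green phase, 1 = switch phase

def discretize(queue_length):
    """Bin a queue length into {0, 1, 2} (empty / short / long)."""
    return 0 if queue_length == 0 else (1 if queue_length <= 5 else 2)

class ToyIntersection:
    """Two conflicting approaches; the approach with green discharges vehicles."""
    def __init__(self):
        self.queues = [0, 0]
        self.phase = 0  # index of the approach that currently has green

    def step(self, action):
        if action == 1:
            self.phase = 1 - self.phase
        # Random arrivals on both approaches, departures on the green one.
        for i in range(2):
            self.queues[i] += random.randint(0, 2)
        self.queues[self.phase] = max(0, self.queues[self.phase] - 3)
        state = (discretize(self.queues[0]), discretize(self.queues[1]), self.phase)
        reward = -sum(self.queues)  # fewer queued vehicles means less delay
        return state, reward

def choose_action(Q, state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def train(episodes=200, steps=300):
    Q = defaultdict(float)
    for _ in range(episodes):
        env = ToyIntersection()
        state, _ = env.step(0)
        for _ in range(steps):
            action = choose_action(Q, state)
            next_state, reward = env.step(action)
            # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = next_state
    return Q

if __name__ == "__main__":
    Q = train()
    print("Learned Q-values for", len(Q), "state-action pairs")
```

The learning rate, discount factor and exploration rate above correspond to the kind of model parameters whose influence the paper analyses; tuning them changes how quickly the Q-values converge and how much the agent keeps exploring alternative phase switches.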

How to Cite
Marsetič, R., Šemrov, D. and Žura, M. 2014. Road Artery Traffic Light Optimization with Use of the Reinforcement Learning. Promet - Traffic&Transportation. 26, 2 (Apr. 2014), 101-108. DOI: https://doi.org/10.7307/ptt.v26i2.1318.
