journal.prometfpz.unizg.hr
Promet - Traffic&Transportation journal

Accelerating Discoveries in Traffic Science

Published: 02.12.2022
License: Copyright (c) 2024 Pavle Bugarčić, Nenad Jevtić, Marija Malnar

Reinforcement Learning-Based Routing Protocols in Vehicular and Flying Ad Hoc Networks – A Literature Survey

Authors:

Pavle Bugarčić
University of Belgrade, Faculty of Transport and Traffic Engineering

Nenad Jevtić
University of Belgrade, Faculty of Transport and Traffic Engineering

Marija Malnar
University of Belgrade, Faculty of Transport and Traffic Engineering

Keywords: reinforcement learning, Q-learning, routing protocols, VANET, FANET, ITS

Abstract

Vehicular and flying ad hoc networks (VANETs and FANETs) are becoming increasingly important with the development of smart cities and intelligent transportation systems (ITSs). The high mobility of nodes in these networks leads to frequent link breaks, which complicates the discovery of an optimal route from source to destination and degrades network performance. One way to overcome this problem is to use machine learning (ML) in the routing process, and the most promising among the different ML types is reinforcement learning (RL). Although there are several surveys on RL-based routing protocols for VANETs and FANETs, an important issue, the integration of RL with well-established modern technologies such as software-defined networking (SDN) or blockchain, has not been adequately addressed, especially when these networks are used in complex ITSs. In this paper, we focus on a comprehensive categorisation of RL-based routing protocols for both network types, bearing in mind their simultaneous use and their integration with other technologies. A detailed comparative analysis of the protocols is carried out based on the different factors that influence the reward function in RL and the consequences they have for network performance. The key advantages and limitations of RL-based routing are also discussed in detail.
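As a concrete illustration of the reward-driven learning the abstract refers to, the sketch below applies tabular Q-learning to next-hop selection on a toy network graph. The topology, reward values and hyperparameters are illustrative assumptions for this sketch only, not taken from any surveyed protocol.

```python
import random

# Toy topology as adjacency lists: each node's candidate next hops.
# Node 3 is the destination; the path 0 -> 1 -> 3 is shorter than 0 -> 2 -> 4 -> 3.
neighbours = {0: [1, 2], 1: [3], 2: [4], 4: [3], 3: []}
DEST = 3
ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.9, 0.1, 500

def reward(next_hop):
    # Illustrative reward shaping: bonus for delivery, small per-hop penalty.
    return 10.0 if next_hop == DEST else -1.0

# Q-table over (node, next_hop) pairs, initialised to zero.
Q = {(s, a): 0.0 for s in neighbours for a in neighbours[s]}

random.seed(1)
for _ in range(EPISODES):
    node = 0
    while node != DEST:
        acts = neighbours[node]
        # Epsilon-greedy choice of next hop.
        if random.random() < EPSILON:
            nxt = random.choice(acts)
        else:
            nxt = max(acts, key=lambda a: Q[(node, a)])
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        future = max((Q[(nxt, a)] for a in neighbours[nxt]), default=0.0)
        Q[(node, nxt)] += ALPHA * (reward(nxt) + GAMMA * future - Q[(node, nxt)])
        node = nxt

# After training, the learned policy at node 0 prefers the shorter route via node 1.
best_next_hop = max(neighbours[0], key=lambda a: Q[(0, a)])
```

In the protocols surveyed, the same update rule is typically applied per destination at each node, with the reward function additionally encoding link quality, node mobility or delay rather than a fixed hop penalty.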

How to Cite
Bugarčić, P., Jevtić, N. and Malnar, M. 2022. Reinforcement Learning-Based Routing Protocols in Vehicular and Flying Ad Hoc Networks – A Literature Survey. Traffic&Transportation Journal. 34, 6 (Dec. 2022), 893-906. DOI: https://doi.org/10.7307/ptt.v34i6.4159.


