Approximate reinforcement learning to control beaconing congestion in distributed networks
Knowledge area
Ingeniería Telemática
Sponsors
This research has been supported by the projects AIM, ref. TEC2016-76465-C2-1-R, ARISE2 “Future IoT Networks and Nano-networks (FINe)”, ref. PID2020-116329GB-C22, and ONOFRE-3, ref. PID2020-112675RB-C41 [Agencia Estatal de Investigación (AEI), European Regional Development Fund (FEDER), European Union (EU)], ATENTO, ref. 20889/PI/18 (Fundación Séneca, Región de Murcia), and LIFE [Fondo SUPERA Covid-19, funded by Agencia Estatal Consejo Superior de Investigaciones Científicas (CSIC), Universidades Españolas and Banco Santander]. J.A.P. thanks the Spanish MECD for an FPI grant, ref. BES-2017-081061. Finally, the authors acknowledge Laura Wettersten for reviewing the grammar and spelling of the manuscript.
Publication date
2022
Publisher
Springer
Bibliographic citation
Aznar-Poveda J, García-Sánchez AJ, Egea-López E, García-Haro J. Approximate reinforcement learning to control beaconing congestion in distributed networks. Sci Rep. 2022 Jan 7;12(1):142. doi: 10.1038/s41598-021-04123-9. PMID: 34997101; PMCID: PMC8741791.
Keywords
Reinforcement Learning
Distributed networks
Vehicular communications
Driver assistance
Driver assistance systems
Packet losses
Markov decision
Assistance systems
Safety applications
Abstract
In vehicular communications, the increase in channel load caused by excessive periodic messages (beacons) is an important aspect that must be controlled to ensure the appropriate operation of safety applications and driver-assistance systems. To date, the majority of congestion control solutions involve including additional information in the payload of the transmitted messages, which may jeopardize the operation of these control solutions when channel conditions are unfavorable, causing packet losses. This study exploits the advantages of non-cooperative, distributed beaconing allocation, in which vehicles operate independently without requiring any costly road infrastructure. In particular, we formulate the beaconing rate control problem as a Markov Decision Process and solve it using approximate reinforcement learning to carry out optimal actions. Results obtained were compared with other traditional solutions, revealing that our approach, called SSFA, is able to ...
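The abstract describes formulating beaconing rate control as a Markov Decision Process solved with approximate reinforcement learning. As a purely illustrative sketch, and not the authors' SSFA algorithm, the general technique can be shown with semi-gradient Q-learning over a linear feature map of the locally measured channel busy ratio (CBR). The candidate beacon rates, feature map, toy channel model, and reward below are all assumptions made for the example:

```python
import random

# Hypothetical setup: each vehicle picks a beacon rate from a discrete set,
# observing only its local channel busy ratio (CBR). None of these constants
# come from the paper; they are illustrative.
BEACON_RATES = [1, 2, 5, 10]          # candidate beacon rates (Hz)
ALPHA, GAMMA, EPS = 0.05, 0.9, 0.1    # step size, discount, exploration

def features(cbr, a_idx):
    """Per-action feature vector: an action bias plus the current CBR."""
    phi = [0.0] * (2 * len(BEACON_RATES))
    phi[2 * a_idx] = 1.0
    phi[2 * a_idx + 1] = cbr
    return phi

def q_value(w, cbr, a_idx):
    """Linear approximation: Q(s, a) = w . phi(s, a)."""
    return sum(wi * xi for wi, xi in zip(w, features(cbr, a_idx)))

def greedy(w, cbr):
    return max(range(len(BEACON_RATES)), key=lambda i: q_value(w, cbr, i))

def step(cbr, rate):
    """Toy channel model: higher beacon rates push the busy ratio up."""
    cbr = min(1.0, max(0.0, 0.8 * cbr + 0.02 * rate
                       + random.uniform(-0.02, 0.02)))
    # Reward trades awareness (higher rate) against congestion (CBR > 0.6)
    reward = 0.1 * rate - 4.0 * max(0.0, cbr - 0.6)
    return cbr, reward

def train(episodes=200, horizon=50, seed=0):
    """Semi-gradient Q-learning with epsilon-greedy exploration."""
    random.seed(seed)
    w = [0.0] * (2 * len(BEACON_RATES))
    for _ in range(episodes):
        cbr = random.random()
        for _ in range(horizon):
            if random.random() < EPS:
                a = random.randrange(len(BEACON_RATES))
            else:
                a = greedy(w, cbr)
            nxt, r = step(cbr, BEACON_RATES[a])
            # TD target bootstraps on the greedy action in the next state
            td = r + GAMMA * q_value(w, nxt, greedy(w, nxt)) - q_value(w, cbr, a)
            w = [wi + ALPHA * td * xi for wi, xi in zip(w, features(cbr, a))]
            cbr = nxt
    return w
```

After training, each vehicle would select `BEACON_RATES[greedy(w, cbr)]` from its own CBR measurement, with no payload exchange with other vehicles, which mirrors the non-cooperative, distributed setting the abstract describes.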
Collections
- Articles [1734]