Fog Computing is a widely adopted paradigm that allows computation to be distributed across a geographic area. It enables time-critical applications and motivates the study of solutions for smartly distributing traffic among a set of fog nodes, which constitute the core of the Fog Computing paradigm. Since a typical smart city setting is subject to continuously changing traffic conditions, algorithms are needed that manage all the computing resources by adaptively distributing traffic among the nodes. In this paper, we propose a cooperative and decentralized algorithm based on Reinforcement Learning that makes online scheduling decisions among fog nodes, improving on the power-of-two random choices paradigm used as a baseline. With results from our delay-based simulator and from our framework "P2PFaaS" deployed on 12 Raspberry Pis, we show that our approach maximizes the rate of tasks executed within their deadline, outperforming power-of-two random choices both under a fixed load and with traffic extracted from a real smart city scenario.
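To make the two scheduling strategies mentioned in the abstract concrete, the sketch below contrasts the power-of-two random choices baseline with a tabular Q-learning scheduler. This is an illustrative toy, not the paper's implementation: the state (local queue length), action (target node), reward (deadline met or not), and all parameter values are assumptions chosen for clarity.

```python
import random

def power_of_two_choices(queue_lengths):
    """Baseline: probe two nodes picked uniformly at random and
    dispatch the task to the one with the shorter queue."""
    a, b = random.sample(range(len(queue_lengths)), 2)
    return a if queue_lengths[a] <= queue_lengths[b] else b

class QLearningScheduler:
    """Hypothetical tabular Q-learning scheduler: the state is the
    local queue length, the action is the node to forward the task to,
    and the reward is 1 if the task met its deadline, else 0."""

    def __init__(self, n_nodes, max_queue, alpha=0.1, gamma=0.9, eps=0.1):
        # One row of Q-values per observable queue length.
        self.q = [[0.0] * n_nodes for _ in range(max_queue + 1)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.n_nodes = n_nodes

    def choose(self, state):
        """Epsilon-greedy action selection."""
        if random.random() < self.eps:
            return random.randrange(self.n_nodes)  # explore
        row = self.q[state]
        return row.index(max(row))                 # exploit

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-learning update."""
        td_target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])
```

Unlike the oblivious baseline, the Q-learning scheduler can adapt its forwarding decisions as traffic conditions change, which is the behavior the paper's decentralized algorithm exploits.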


Proietti Mattia, G., & Beraldi, R. (2024). Online Decentralized Scheduling in Fog Computing for Smart Cities Based On Reinforcement Learning. IEEE Transactions on Cognitive Communications and Networking, 1–1. https://doi.org/10.1109/TCCN.2024.3378219

@article{proiettimattia2024online,
  title = {Online Decentralized Scheduling in Fog Computing for Smart Cities Based On Reinforcement Learning},
  author = {Proietti Mattia, Gabriele and Beraldi, Roberto},
  year = {2024},
  journal = {IEEE Transactions on Cognitive Communications and Networking},
  pages = {1--1},
  doi = {10.1109/TCCN.2024.3378219},
  keywords = {Task analysis;Scheduling;Reinforcement learning;Edge computing;Q-learning;Computational modeling;Smart cities;Fog computing;Real-time}
}