Abstract

Fog and Edge Computing are two paradigms particularly suited to real-time and time-critical applications, which are typically distributed among a set of cooperating nodes; this distribution constitutes the core idea of both Fog and Edge Computing. Since nodes are heterogeneous and subject to different traffic patterns, distributed scheduling algorithms are responsible for making each request meet its specified deadline. In this paper, we exploit Reinforcement Learning (RL) based decision-making to design a cooperative and decentralized online task scheduling approach composed of two RL-based decisions: one for selecting the node to which the traffic should be offloaded, and one for accepting or rejecting an incoming offloading request. The experiments we conducted on a cluster of Raspberry Pi 4 boards show that introducing a second RL decision increases the rate of tasks executed within their deadline by 4%, as it adds flexibility to the decision-making process and consequently enables better scheduling decisions.
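For illustration only, the two decisions described in the abstract can be sketched as a pair of epsilon-greedy Q-learners on each node: one table ranks neighbors as offload targets, the other scores accept versus reject for incoming requests. All names, the stateless reward model, and the update rule below are assumptions for this sketch, not the paper's actual algorithm.

```python
import random


class DoubleDecisionScheduler:
    """Illustrative sketch of a two-stage RL scheduler per node:
    decision 1 picks an offload target, decision 2 accepts or rejects
    an incoming offload request. Both use epsilon-greedy Q-learning
    over a stateless (bandit-style) reward: 1.0 if the task met its
    deadline, 0.0 otherwise."""

    def __init__(self, neighbors, epsilon=0.1, alpha=0.5, seed=0):
        self.neighbors = list(neighbors)
        self.epsilon = epsilon  # exploration probability
        self.alpha = alpha      # learning rate
        self.rng = random.Random(seed)
        # Q-values for decision 1: which neighbor to offload to.
        self.q_offload = {n: 0.0 for n in self.neighbors}
        # Q-values for decision 2: accept (True) / reject (False).
        self.q_accept = {True: 0.0, False: 0.0}

    def choose_target(self):
        """Decision 1: epsilon-greedy choice of the offload target."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.neighbors)
        return max(self.q_offload, key=self.q_offload.get)

    def decide_accept(self):
        """Decision 2: epsilon-greedy accept/reject of an incoming request."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice([True, False])
        return max(self.q_accept, key=self.q_accept.get)

    def update_offload(self, target, reward):
        """Incremental update toward the observed deadline reward."""
        self.q_offload[target] += self.alpha * (reward - self.q_offload[target])

    def update_accept(self, action, reward):
        self.q_accept[action] += self.alpha * (reward - self.q_accept[action])
```

A node would call `choose_target` when its queue risks a deadline miss, while the receiving node calls `decide_accept`; both feed back a reward after the task completes (or misses its deadline).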

Citation

Tayel, A. F. M., Proietti Mattia, G., & Beraldi, R. (2024). A Double-Decision Reinforcement Learning Based Algorithm for Online Scheduling in Edge and Fog Computing. In I. Chatzigiannakis & I. Karydis (Eds.), Algorithmic Aspects of Cloud Computing (pp. 197–210). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-49361-4_11

@inproceedings{2023TayelADoubleDecision,
  title = {A Double-Decision Reinforcement Learning Based Algorithm for Online Scheduling in Edge and Fog Computing},
  author = {Tayel, Ahmed Fayez Moustafa and Proietti Mattia, Gabriele and Beraldi, Roberto},
  year = {2024},
  booktitle = {Algorithmic Aspects of Cloud Computing},
  publisher = {Springer Nature Switzerland},
  address = {Cham},
  pages = {197--210},
  isbn = {978-3-031-49361-4},
  doi = {10.1007/978-3-031-49361-4_11},
  editor = {Chatzigiannakis, Ioannis and Karydis, Ioannis}
}