Comparing control theory and deep reinforcement learning techniques for decentralized task offloading in the edge-cloud continuum
Authors
Nieto, Gorka; Villegas Saiz, Neco; Díez Fernández, Luis Francisco

Date
2025-11
Rights
Attribution-NonCommercial 4.0 International
Published in
Simulation Modelling Practice and Theory, 2025, 144, 103170
Publisher
Elsevier
Link to the publication
Keywords
Internet-of-Things (IoT)
Edge-Cloud Continuum
Task offloading
Deep Reinforcement Learning (DRL)
Lyapunov
Energy consumption
Optimization
Abstract
With the increasingly demanding requirements of Internet-of-Things (IoT) applications in terms of latency, energy efficiency, and computational resources, among others, task offloading has become crucial to optimize performance across edge and cloud infrastructures. Optimizing offloading decisions to reduce both latency and energy consumption, and ultimately to guarantee appropriate service levels and enhance performance, has therefore become an important area of research. There are many approaches to guide the offloading of tasks in a distributed environment, and in this work we present a comprehensive comparison of three of them: a Control Theory (CT) Lyapunov optimization method, three Deep Reinforcement Learning (DRL)-based strategies, and traditional solutions such as Round-Robin or static schedulers. This comparison has been conducted using ITSASO, an in-house simulation platform for evaluating decentralized task offloading strategies in a three-layer computing hierarchy comprising IoT, fog, and cloud nodes. The platform models service generation in the IoT layer using a configurable distribution, enabling each IoT node to autonomously decide whether to execute tasks locally, offload them to the fog layer, or send them to the cloud server. Our approach aims to minimize the energy consumption of devices while meeting tasks' latency requirements. Our simulation results reveal that Lyapunov optimization excels in static environments, while DRL approaches prove more effective in dynamic settings, as they adapt better to changing requirements and workloads. This study offers an analysis of the trade-offs between these solutions, highlighting the scenarios in which each scheduling approach is most suitable, thereby contributing valuable theoretical insights into the effectiveness of various offloading strategies in different environments. The source code of ITSASO is publicly available.
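As a concrete illustration of the Lyapunov-based approach mentioned in the abstract, the sketch below shows a drift-plus-penalty offloading decision for a single IoT node choosing between local, fog, and cloud execution. This is a minimal, self-contained sketch, not the ITSASO implementation: the per-action energy and latency figures, the trade-off parameter V, and the deadline range are hypothetical placeholders.

import random

# Hypothetical per-action costs (energy in J, latency in ms); in a real
# simulator these would come from the current node and network state.
ACTIONS = {
    "local": {"energy": 2.0, "latency": 40.0},
    "fog":   {"energy": 0.8, "latency": 25.0},
    "cloud": {"energy": 0.5, "latency": 90.0},
}

def lyapunov_offload(queue, deadline, V=10.0):
    """Drift-plus-penalty rule: pick the action minimizing
    V * energy + Q(t) * latency, then update the virtual latency queue."""
    action = min(ACTIONS, key=lambda a: V * ACTIONS[a]["energy"]
                                        + queue * ACTIONS[a]["latency"])
    # The virtual queue grows when the chosen latency exceeds the deadline,
    # steering future decisions toward faster (but costlier) options.
    queue = max(queue + ACTIONS[action]["latency"] - deadline, 0.0)
    return action, queue

queue = 0.0
for t in range(5):
    deadline = random.uniform(30.0, 60.0)  # per-task latency requirement
    action, queue = lyapunov_offload(queue, deadline)
    print(f"t={t}: offload to {action}, Q={queue:.1f}")

The virtual queue Q(t) accumulates deadline violations, so the rule trades current energy savings against accumulated latency debt; the DRL strategies compared in the paper instead learn this trade-off from interaction with the simulated environment.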
Collections
- D12 Articles
- D12 Research Projects