TY - JOUR
T1 - A comprehensive survey of deep reinforcement learning in UAV-assisted IoT data collection
AU - Amodu, Oluwatosin Ahmed
AU - Althumali, Huda
AU - Mohd Hanapi, Zurina
AU - Jarray, Chedia
AU - Raja Mahmood, Raja Azlina
AU - Adam, Mohammed Sani
AU - Bukar, Umar Ali
AU - Abdullah, Nor Fadzilah
AU - Luong, Nguyen Cong
N1 - Publisher Copyright:
© 2025 Elsevier Inc.
PY - 2025/10
Y1 - 2025/10
N2 - Unmanned Aerial Vehicles (UAVs) play a critical role in data collection for a wide range of Internet of Things (IoT) applications across remote, urban, and marine environments. In large-scale deployments, UAVs often face complex decision-making challenges, for which Deep Reinforcement Learning (DRL) has emerged as a promising solution. This paper presents a comprehensive review of research on DRL in UAV-assisted IoT, covering key research questions on DRL algorithm variants, deployment objectives, architectural features, integrated technologies, UAV roles, optimization constraints, energy management strategies, and performance metrics. Findings indicate that value-based and actor-critic algorithms are the most commonly employed, targeting objectives such as path planning, transmit power control, scheduling, velocity and altitude control, and charging optimization. Other architectural considerations include clustering, security, obstacle avoidance, buffered sensors, and multi-UAV coordination. Beyond data collection, UAVs also perform tasks such as device selection, data aggregation, and sensor charging, with energy management achieved primarily through charging and energy-harvesting techniques. Performance is typically assessed using metrics such as energy efficiency, throughput, latency, packet loss, and Age of Information (AoI). The paper concludes by outlining open challenges and promising research directions critical to the successful deployment of UAVs as aerial communication platforms, especially for IoT data collection. By organizing existing work across these themes, this review offers a valuable reference for researchers and technology professionals alike.
KW - Deep reinforcement learning
KW - Drones
KW - Internet of Things
KW - UAV-assisted IoT applications
KW - Unmanned aerial vehicles
KW - Wireless sensor networks
UR - https://www.scopus.com/pages/publications/105010689272
U2 - 10.1016/j.vehcom.2025.100949
DO - 10.1016/j.vehcom.2025.100949
M3 - Review article
AN - SCOPUS:105010689272
SN - 2214-2096
VL - 55
JO - Vehicular Communications
JF - Vehicular Communications
M1 - 100949
ER -