A Novel Deep Reinforcement Learning-based Approach for Task-offloading in Vehicular Networks

Document Type

Conference Proceeding

Source of Publication

2021 IEEE Global Communications Conference (GLOBECOM)

Publication Date



Abstract

Next-generation vehicular networks will impose unprecedented computation demands due to the wide adoption of compute-intensive services with stringent latency requirements. The computational capacity of vehicular networks can be enhanced by integrating vehicular edge or fog computing; however, the growing popularity and massive adoption of novel services render edge resources insufficient. This challenge can be addressed by utilizing, alongside the edge computing resources, the onboard computation resources of neighboring vehicles that are not resource-constrained. To fill these gaps, in this paper we propose to solve the task-offloading problem by jointly considering the communication and computation resources in a mobile vehicular network. We formulate a non-linear optimization problem to minimize energy consumption subject to the network's resource constraints. Furthermore, we consider a practical vehicular environment by taking into account the dynamics of mobile vehicular networks. The formulated problem is solved via a deep reinforcement learning (DRL) based approach. Finally, numerical evaluations are performed that demonstrate the effectiveness of our proposed scheme.
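The abstract frames task offloading as choosing where each task runs (locally, at the edge, or on a neighboring vehicle) so as to minimize energy, and solving that decision problem with reinforcement learning. The paper's actual DRL formulation, state space, and energy model are not given in this record, so the following is only an illustrative toy: a tabular Q-learning sketch of a one-step offloading decision, with made-up task-size states and per-choice energy costs (all names and numbers are assumptions, not the authors' model).

```python
import random

# Hypothetical offloading choices and discretized task-size states.
ACTIONS = ["local", "edge", "neighbor_vehicle"]
STATES = range(3)  # 0: small, 1: medium, 2: large task

# Assumed (illustrative) energy cost of each choice per state.
ENERGY = {
    0: [1.0, 0.6, 0.8],   # small task: edge cheapest
    1: [3.0, 1.5, 1.2],   # medium task: neighbor vehicle cheapest
    2: [6.0, 2.5, 2.8],   # large task: edge cheapest
}

def train(episodes=2000, alpha=0.5, eps=0.2, seed=0):
    """Tabular Q-learning on a one-step offloading MDP (reward = -energy)."""
    rng = random.Random(seed)
    q = {s: [0.0] * len(ACTIONS) for s in STATES}
    for _ in range(episodes):
        s = rng.choice(list(STATES))
        # Epsilon-greedy exploration over offloading targets.
        if rng.random() < eps:
            a = rng.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
        r = -ENERGY[s][a]                 # minimizing energy = maximizing -energy
        q[s][a] += alpha * (r - q[s][a])  # one-step task, no bootstrapping term
    return q

def policy(q, s):
    """Greedy offloading decision for state s."""
    return ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[s][i])]
```

A deep RL variant would replace the Q-table with a neural network so that continuous state features (channel quality, queue lengths, vehicle mobility) can be handled, which is the setting the paper targets.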


Publisher

Institute of Electrical and Electronics Engineers (IEEE)


Disciplines

Computer Sciences


Keywords

Energy consumption, Conferences, Reinforcement learning, Resource management, Global communication, Task analysis, Vehicle dynamics

Indexed in Scopus


Open Access