Peer-Reviewed Article

Deep Reinforcement Learning-Based Joint User Association and CU–DU Placement in O-RAN

2022; Institute of Electrical and Electronics Engineers; Volume: 19; Issue: 4; Language: English

10.1109/tnsm.2022.3221670

ISSN

2373-7379

Authors

Roghayeh Joda, Turgay Pamuklu, Pedro Enrique Iturria-Rivera, Melike Erol-Kantarci

Topic(s)

Energy Harvesting in Wireless Networks

Abstract

The Open Radio Access Network (O-RAN) architecture is based on disaggregation, virtualization, openness, and intelligence. These features allow the RAN network functions (NFs) to be split into a Central Unit (CU), a Distributed Unit (DU), and a Radio Unit (RU), and deployed on open hardware and cloud nodes as Virtualized Network Functions (VNFs) or Containerized Network Functions (CNFs). In this paper, we propose strategies for placing the CU and DU network functions on regional and edge O-Cloud nodes while jointly associating users with RUs. The aim is to minimize both the end-to-end delay of users and the cost of the O-RAN deployment. We first formulate the end-to-end delay, the cost, and the constraints, and then model the problem as a multi-objective optimization problem. The resulting formulation involves a very large number of constraints and variables. To solve the problem, we develop the corresponding Markov Decision Process (MDP) and propose a Deep Q-Network (DQN)-based algorithm. Simulation results demonstrate that our proposed scheme reduces the average user delay by up to 40% and the deployment cost by up to 20% compared with our baselines.
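
The abstract describes casting the joint user-association and CU/DU-placement decisions as an MDP solved with a DQN. The paper itself defines the actual state, action, and reward design; purely as an illustrative sketch, the PyTorch code below shows one generic way such an agent could be wired up. The state encoding, the flat action space, the QNetwork and DQNAgent classes, and the scalarized reward with weights w_delay and w_cost are all assumptions of this sketch, not details taken from the paper.

```python
# Illustrative DQN skeleton for a joint association/placement MDP.
# All names, dimensions, and the reward scalarization are hypothetical.
import random
from collections import deque

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Maps a state vector (e.g., current placements + user demands) to Q-values."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNAgent:
    def __init__(self, state_dim, num_actions, gamma=0.99, lr=1e-3, eps=0.1):
        self.q = QNetwork(state_dim, num_actions)
        self.target_q = QNetwork(state_dim, num_actions)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.replay = deque(maxlen=50_000)
        self.gamma, self.eps, self.num_actions = gamma, eps, num_actions

    def act(self, state):
        # Epsilon-greedy choice over the joint (association, placement) action space.
        if random.random() < self.eps:
            return random.randrange(self.num_actions)
        with torch.no_grad():
            return int(self.q(torch.as_tensor(state, dtype=torch.float32)).argmax())

    def store(self, s, a, r, s_next, done):
        self.replay.append((s, a, r, s_next, done))

    def sync_target(self):
        # Periodically copy online weights into the target network.
        self.target_q.load_state_dict(self.q.state_dict())

    def learn(self, batch_size=64):
        if len(self.replay) < batch_size:
            return
        batch = random.sample(self.replay, batch_size)
        s, a, r, s2, d = map(
            lambda x: torch.as_tensor(x, dtype=torch.float32), zip(*batch)
        )
        a = a.long()
        q_sa = self.q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * (1 - d) * self.target_q(s2).max(dim=1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()


def reward(avg_delay, deployment_cost, w_delay=0.5, w_cost=0.5):
    # Scalarized multi-objective reward: lower delay and lower cost give a higher reward.
    return -(w_delay * avg_delay + w_cost * deployment_cost)
```

A weighted-sum reward is one common way to fold a multi-objective formulation into the scalar signal a DQN expects; other scalarizations or constrained-RL formulations are equally possible.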
