
A Deep Q-Learning Bisection Approach for Power Allocation in Downlink NOMA Systems

Abstract: In this work, we study the weighted sum-rate maximization problem for a downlink non-orthogonal multiple access (NOMA) system. Under power and data-rate constraints, this problem is generally non-convex. Therefore, a novel solution based on the deep reinforcement learning (DRL) framework is proposed for the power allocation problem. While previous work based on DRL restricts the solution to a limited set of discrete power levels, the proposed DRL framework is specifically designed to find a solution with much finer granularity, emulating continuous power allocation. Simulation results show that the proposed power allocation method outperforms two baseline algorithms. Moreover, it achieves almost 85% of the weighted sum-rate obtained by a far more complex genetic algorithm whose performance approaches that of an exhaustive search.
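The bisection idea behind the abstract can be sketched as follows. In the paper, a trained deep Q-network scores the two halves of the current power interval at each step; in this toy sketch, the Q-values are replaced by direct evaluation of the weighted sum-rate at the half-interval midpoints, and all channel gains, weights, and noise values are assumed purely for illustration. After `depth` halvings the achievable granularity is 2^-depth of the power budget, which is what "emulating continuous power allocation" refers to.

```python
import math

# Toy two-user downlink NOMA setup (all values assumed for illustration).
P_TOTAL = 1.0                  # total transmit power budget
NOISE = 0.05                   # receiver noise power
G_WEAK, G_STRONG = 0.3, 1.0    # channel gains (weak user decoded first)
W_WEAK, W_STRONG = 0.6, 0.4    # per-user rate weights

def weighted_sum_rate(p_weak):
    """Weighted sum-rate when p_weak of P_TOTAL goes to the weak user.
    The strong user applies successive interference cancellation (SIC),
    so it sees no intra-cell interference."""
    p_strong = P_TOTAL - p_weak
    sinr_weak = G_WEAK * p_weak / (G_WEAK * p_strong + NOISE)
    snr_strong = G_STRONG * p_strong / NOISE
    return (W_WEAK * math.log2(1 + sinr_weak)
            + W_STRONG * math.log2(1 + snr_strong))

def bisection_allocation(score, depth=20):
    """Halve the power interval `depth` times; at each step the agent
    keeps the half whose midpoint scores higher under `score` (a stand-in
    for the Q-network's action values in the actual DRL method)."""
    lo, hi = 0.0, P_TOTAL
    for _ in range(depth):
        mid = (lo + hi) / 2
        # Action 0: keep lower half; action 1: keep upper half.
        if score((lo + mid) / 2) >= score((mid + hi) / 2):
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

if __name__ == "__main__":
    p_weak = bisection_allocation(weighted_sum_rate)
    print(f"power to weak user: {p_weak:.4f}, "
          f"weighted sum-rate: {weighted_sum_rate(p_weak):.4f} bit/s/Hz")
```

The greedy half-interval choice mirrors the binary action space of the Q-learning agent: 20 halvings give a resolution of about 10^-6 of the power budget, far finer than the fixed power-level grids used in earlier DRL approaches.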
Contributor: Charbel Abdel Nour
Submitted on: Thursday, November 25, 2021 - 9:49:35 AM
Last modification on: Saturday, November 27, 2021 - 3:47:05 AM



Marie-Josepha Youssef, Charbel Abdel Nour, Xavier Lagrange, Catherine Douillard. A Deep Q-Learning Bisection Approach for Power Allocation in Downlink NOMA Systems. IEEE Communications Letters, Institute of Electrical and Electronics Engineers, in press, pp. 1-1. ⟨10.1109/LCOMM.2021.3130102⟩. ⟨hal-03448296⟩


