Developing Real-Time Scheduling Policy by Deep Reinforcement Learning

Published in 27th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), 2021

Abstract: Designing scheduling policies for multiprocessor real-time systems is challenging because the multiprocessor scheduling problem is NP-complete. Existing heuristics are customized policies that may perform poorly under certain task loads, so a new design approach is needed to make multiprocessor scheduling policies perform well across a wide range of task loads. In this paper, we investigate a new real-time scheduling policy based on reinforcement learning. For any given real-time task set, our approach can automatically derive a high-performance policy through online learning. Specifically, we model the real-time scheduling process as a multi-agent cooperative game and propose multi-agent self-cooperative learning, which overcomes the curse of dimensionality and the credit-assignment problem. Simulation results show that our approach learns high-performance policies for various task and system models.
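The abstract frames scheduling as a sequential decision problem learned online. The Python sketch below is purely illustrative and is not the paper's multi-agent self-cooperative learning method: it only shows how a periodic multiprocessor workload might be wrapped as a reinforcement-learning environment, with per-job remaining execution and deadline slack as the state, a selection of jobs to run as the action, and deadline misses as a negative reward. The class and method names (`SchedulingEnv`, `step`, `reset`) and the reward definition are assumptions made for illustration.

```python
# Illustrative sketch only: real-time scheduling as an RL environment.
# Not the paper's implementation; names and reward shaping are assumptions.
import numpy as np

class SchedulingEnv:
    """Toy discrete-time environment: each tick, the policy picks which ready
    jobs run on the m identical processors; reward penalizes deadline misses."""

    def __init__(self, tasks, num_procs, horizon, seed=0):
        # tasks: list of (period, wcet, relative_deadline) tuples
        self.tasks = tasks
        self.m = num_procs
        self.horizon = horizon
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        # One active job per task: [remaining_exec, absolute_deadline]
        self.jobs = [[wcet, deadline] for (_, wcet, deadline) in self.tasks]
        return self._observe()

    def _observe(self):
        # State: per-task (remaining execution time, slack to deadline)
        return np.array([[rem, dl - self.t] for rem, dl in self.jobs],
                        dtype=np.float32)

    def step(self, action):
        # action: indices of up to m distinct jobs chosen to run this tick
        for idx in action[: self.m]:
            if self.jobs[idx][0] > 0:
                self.jobs[idx][0] -= 1  # execute for one time unit

        self.t += 1
        missed = 0
        for k, (period, wcet, deadline) in enumerate(self.tasks):
            rem, dl = self.jobs[k]
            if rem > 0 and self.t > dl:
                missed += 1  # current job of task k missed its deadline
            if self.t % period == 0:
                # Release the next job of this task
                self.jobs[k] = [wcet, self.t + deadline]

        reward = -float(missed)          # fewer misses -> higher reward
        done = self.t >= self.horizon
        return self._observe(), reward, done


# Example: 3 periodic tasks on 2 processors, driven by a random stand-in policy.
env = SchedulingEnv(tasks=[(4, 2, 4), (5, 3, 5), (10, 4, 10)],
                    num_procs=2, horizon=20)
obs = env.reset()
done = False
while not done:
    ready = [i for i, (rem, _) in enumerate(obs) if rem > 0]
    action = env.rng.permutation(ready)[:2]  # a learned policy would act here
    obs, reward, done = env.step(action)
```

In the paper's setting, the stand-in random policy above would be replaced by learned agents, with one agent per processor cooperating toward the shared scheduling objective.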

Recommended citation: Z. Bo et al., "Developing Real-Time Scheduling Policy by Deep Reinforcement Learning," in 2021 IEEE 27th Real-Time and Embedded Technology and Applications Symposium (RTAS), IEEE, 2021, pp. 131-142.