Peer-Reviewed Article

Multi-agent deep reinforcement learning for adaptive coordinated metro service operations with flexible train composition

2022; Elsevier BV; Volume: 161; Language: English

DOI

10.1016/j.trb.2022.05.001

ISSN

1879-2367

Authors

Cheng-shuo Ying, Andy H.F. Chow, Hoa T.M. Nguyen, Kwai‐Sang Chin

Topic(s)

Traffic control and management

Abstract

This paper presents an adaptive control system for coordinated metro operations with flexible train composition using a multi-agent deep reinforcement learning (MADRL) approach. The control problem is formulated as a Markov decision process (MDP) in which multiple agents regulate different service lines in a metro network with passenger transfers. To ensure the overall computational effectiveness and stability of the control system, we adopt an actor–critic reinforcement learning framework in which each control agent is associated with a critic function that estimates future system states and an actor function that derives local operational decisions. The critics and actors in the MADRL framework are represented by multi-layer artificial neural networks (ANNs). A multi-agent deep deterministic policy gradient (MADDPG) algorithm is developed to train the actor and critic ANNs through successive simulated transitions over the entire metro network. The developed framework is tested on a real-world scenario covering the Bakerloo and Victoria Lines of the London Underground, UK. Experimental results demonstrate that the proposed method outperforms previous centralized optimization and distributed control approaches in solution quality and computational performance. Further analysis shows the merits of MADRL for coordinated service regulation with flexible train composition. This study contributes to real-time coordinated metro network services with flexible train composition and advanced optimization techniques.
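The abstract's actor–critic layout can be illustrated with a minimal structural sketch. This is not the authors' implementation: the toy linear networks, dimensions, and all names below are illustrative assumptions. It shows only the MADDPG-style division of labor the abstract describes, where each agent's actor uses local observations (decentralized execution) while each agent's critic is evaluated on the joint observations and actions of all agents (centralized training).

```python
# Structural sketch of the multi-agent actor-critic layout (assumed, not
# the paper's code): each agent i has a decentralized actor mu_i(o_i) and
# a centralized critic Q_i over the joint observation-action vector.
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 2   # e.g. one control agent per service line (illustrative)
OBS_DIM = 4    # local state features such as headways and loads (assumed)
ACT_DIM = 1    # local operational decision, e.g. a holding adjustment

class Agent:
    def __init__(self):
        # Actor: linear map from the local observation to a bounded action.
        self.W_actor = rng.normal(scale=0.1, size=(ACT_DIM, OBS_DIM))
        # Critic: linear map from the joint observation-action vector of
        # all agents to a scalar value estimate (used only in training).
        joint_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.w_critic = rng.normal(scale=0.1, size=joint_dim)

    def act(self, obs):
        # Decentralized execution: only the local observation is needed.
        return np.tanh(self.W_actor @ obs)

    def q_value(self, joint_obs, joint_act):
        # Centralized critic sees every agent's observation and action.
        x = np.concatenate([joint_obs, joint_act])
        return float(self.w_critic @ x)

agents = [Agent() for _ in range(N_AGENTS)]

# One simulated transition over the network: each agent observes and acts
# locally; the critics are then evaluated on the joint information.
obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
acts = [ag.act(o) for ag, o in zip(agents, obs)]
joint_obs = np.concatenate(obs)
joint_act = np.concatenate(acts)

q_values = [ag.q_value(joint_obs, joint_act) for ag in agents]
print(len(q_values), "critic estimates computed")
```

In a full MADDPG training loop, each critic would be regressed toward a bootstrapped target over such simulated transitions, and each actor updated along its critic's gradient; the sketch above only fixes the information structure.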
