dc.contributor.author | Wigmore, Jerrod | |
dc.contributor.author | Shrader, Brooke | |
dc.contributor.author | Modiano, Eytan | |
dc.date.accessioned | 2024-11-15T21:50:36Z | |
dc.date.available | 2024-11-15T21:50:36Z | |
dc.date.issued | 2024-10-14 | |
dc.identifier.isbn | 979-8-4007-0521-2 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/157553 | |
dc.description | MOBIHOC '24, October 14-17, 2024, Athens, Greece | en_US |
dc.description.abstract | Deep Reinforcement Learning (DRL) offers a powerful approach to training neural network control policies for stochastic queuing networks (SQNs). However, traditional DRL methods rely on offline simulations or static datasets, limiting their real-world application in SQN control. This work proposes Online Deep Reinforcement Learning-based Controls (ODRLC) as an alternative, where an intelligent agent interacts directly with a real environment and learns an optimal control policy from these online interactions. SQNs present a challenge for ODRLC due to the unbounded nature of the queues within the network, which results in an unbounded state space. An unbounded state space is particularly challenging for neural network policies, as neural networks are notoriously poor at extrapolating to unseen states. To address this challenge, we propose an intervention-assisted framework that leverages strategic interventions from known stable policies to ensure the queue sizes remain bounded. This framework combines the learning power of neural networks with the guaranteed stability of classical control policies for SQNs. We introduce a method to design these intervention-assisted policies to ensure strong stability of the network. Furthermore, we extend foundational DRL theorems for intervention-assisted policies and develop two practical algorithms specifically for ODRLC of SQNs. Finally, we demonstrate through experiments that our proposed algorithms outperform both classical control approaches and prior ODRLC algorithms. | en_US |
dc.publisher | ACM|The Twenty-fifth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing | en_US |
dc.relation.isversionof | https://doi.org/10.1145/3641512.3686383 | en_US |
dc.rights | Creative Commons Attribution | en_US |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_US |
dc.source | Association for Computing Machinery | en_US |
dc.title | Intervention-Assisted Online Deep Reinforcement Learning for Stochastic Queuing Network Optimization | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Wigmore, Jerrod, Brooke Shrader, and Eytan Modiano. 2024. "Intervention-Assisted Online Deep Reinforcement Learning for Stochastic Queuing Network Optimization." | |
dc.contributor.department | Lincoln Laboratory | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Laboratory for Information and Decision Systems | en_US |
dc.identifier.mitlicense | PUBLISHER_CC | |
dc.eprint.version | Final published version | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
dc.date.updated | 2024-11-01T07:47:09Z | |
dc.language.rfc3066 | en | |
dc.rights.holder | The author(s) | |
dspace.date.submission | 2024-11-01T07:47:09Z | |
mit.license | PUBLISHER_CC | |
mit.metadata.status | Authority Work and Publication Information Needed | en_US |