
dc.contributor.author: Wigmore, Jerrod
dc.contributor.author: Shrader, Brooke
dc.contributor.author: Modiano, Eytan
dc.date.accessioned: 2024-11-15T21:50:36Z
dc.date.available: 2024-11-15T21:50:36Z
dc.date.issued: 2024-10-14
dc.identifier.isbn: 979-8-4007-0521-2
dc.identifier.uri: https://hdl.handle.net/1721.1/157553
dc.description: MOBIHOC '24, October 14-17, 2024, Athens, Greece
dc.description.abstract: Deep Reinforcement Learning (DRL) offers a powerful approach to training neural network control policies for stochastic queuing networks (SQNs). However, traditional DRL methods rely on offline simulations or static datasets, limiting their real-world application in SQN control. This work proposes Online Deep Reinforcement Learning-based Controls (ODRLC) as an alternative, in which an intelligent agent interacts directly with a real environment and learns an optimal control policy from these online interactions. SQNs present a challenge for ODRLC because the queues within the network are unbounded, resulting in an unbounded state space. An unbounded state space is particularly challenging for neural network policies, as neural networks are notoriously poor at extrapolating to unseen states. To address this challenge, we propose an intervention-assisted framework that leverages strategic interventions from known stable policies to keep the queue sizes bounded. This framework combines the learning power of neural networks with the guaranteed stability of classical control policies for SQNs. We introduce a method for designing these intervention-assisted policies so as to ensure strong stability of the network. Furthermore, we extend foundational DRL theorems to intervention-assisted policies and develop two practical algorithms specifically for ODRLC of SQNs. Finally, we demonstrate through experiments that our proposed algorithms outperform both classical control approaches and prior ODRLC algorithms.
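The intervention mechanism the abstract describes can be sketched as follows. This is a minimal illustrative simulation, not the paper's actual construction: the parallel-queue model, the threshold value, the longest-queue-first fallback rule, and the random stand-in for the learned policy are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def stable_policy(queues):
    # Serve the longest queue: a longest-queue-first rule that is
    # stabilizing for this simple parallel-queue model.
    return int(np.argmax(queues))

def learned_policy(queues):
    # Stand-in for the neural policy being trained online; a uniformly
    # random action here (purely illustrative).
    return int(rng.integers(len(queues)))

def intervention_policy(queues, threshold=50):
    # Intervention-assisted control: defer to the known stable policy
    # whenever any queue exceeds the threshold, so the visited states
    # remain in a bounded region that the learner can actually cover.
    if queues.max() > threshold:
        return stable_policy(queues)
    return learned_policy(queues)

def step(queues, action, arrival_prob=0.3):
    # One time slot: Bernoulli arrivals to every queue, then one packet
    # served from the chosen queue.
    arrivals = (rng.random(len(queues)) < arrival_prob).astype(int)
    queues = queues + arrivals
    queues[action] = max(0, queues[action] - 1)
    return queues

queues = np.zeros(3, dtype=int)
peak = 0
for _ in range(5000):
    queues = step(queues, intervention_policy(queues))
    peak = max(peak, int(queues.max()))
```

Even though the learned policy here is arbitrary, the interventions keep the peak queue length near the threshold, which is the bounded-state-space property the framework relies on.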
dc.publisher: ACM | The Twenty-fifth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing
dc.relation.isversionof: https://doi.org/10.1145/3641512.3686383
dc.rights: Creative Commons Attribution
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Association for Computing Machinery
dc.title: Intervention-Assisted Online Deep Reinforcement Learning for Stochastic Queuing Network Optimization
dc.type: Article
dc.identifier.citation: Wigmore, Jerrod, Shrader, Brooke and Modiano, Eytan. 2024. "Intervention-Assisted Online Deep Reinforcement Learning for Stochastic Queuing Network Optimization."
dc.contributor.department: Lincoln Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
dc.identifier.mitlicense: PUBLISHER_CC
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2024-11-01T07:47:09Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2024-11-01T07:47:09Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed

