
dc.contributor.author: Kondratyev, Dmitry
dc.contributor.author: Riedel, Benedikt
dc.contributor.author: Chou, Yuan-Tang
dc.contributor.author: Cochran-Branson, Miles
dc.contributor.author: Paladino, Noah
dc.contributor.author: Schultz, David
dc.contributor.author: Liu, Mia
dc.contributor.author: Duarte, Javier
dc.contributor.author: Harris, Philip
dc.contributor.author: Hsu, Shih-Chieh
dc.date.accessioned: 2025-12-18T21:48:02Z
dc.date.available: 2025-12-18T21:48:02Z
dc.date.issued: 2025-07-18
dc.identifier.isbn: 979-8-4007-1398-9
dc.identifier.uri: https://hdl.handle.net/1721.1/164411
dc.description: PEARC ’25, Columbus, OH, USA
dc.description.abstract: The increasing computational demand from growing data rates and complex machine learning (ML) algorithms in large-scale scientific experiments has driven the adoption of the Services for Optimized Network Inference on Coprocessors (SONIC) approach. SONIC accelerates ML inference by offloading it to local or remote coprocessors to optimize resource utilization. Leveraging its portability to different types of coprocessors, SONIC enhances data processing and model deployment efficiency for cutting-edge research in high energy physics (HEP) and multi-messenger astrophysics (MMA). We developed the SuperSONIC project, a scalable server infrastructure for SONIC, enabling the deployment of computationally intensive tasks to Kubernetes clusters equipped with graphics processing units (GPUs). Using NVIDIA Triton Inference Server, SuperSONIC decouples client workflows from server infrastructure, standardizing communication, optimizing throughput, load balancing, and monitoring. SuperSONIC has been successfully deployed for the CMS and ATLAS experiments at the CERN Large Hadron Collider (LHC), the IceCube Neutrino Observatory (IceCube), and the Laser Interferometer Gravitational-Wave Observatory (LIGO) and tested on Kubernetes clusters at Purdue University, the National Research Platform (NRP), and the University of Chicago. SuperSONIC addresses the challenges of the Cloud-native era by providing a reusable, configurable framework that enhances the efficiency of accelerator-based inference deployment across diverse scientific domains and industries.
dc.publisher: ACM | Practice and Experience in Advanced Research Computing
dc.relation.isversionof: https://doi.org/10.1145/3708035.3736049
dc.rights: Creative Commons Attribution
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Association for Computing Machinery
dc.title: SuperSONIC: Cloud-Native Infrastructure for ML Inferencing
dc.type: Article
dc.identifier.citation: Dmitry Kondratyev, Benedikt Riedel, Yuan-Tang Chou, Miles Cochran-Branson, Noah Paladino, David Schultz, Mia Liu, Javier Duarte, Philip Harris, and Shih-Chieh Hsu. 2025. SuperSONIC: Cloud-Native Infrastructure for ML Inferencing. In Practice and Experience in Advanced Research Computing 2025: The Power of Collaboration (PEARC '25). Association for Computing Machinery, New York, NY, USA, Article 29, 1–5.
dc.contributor.department: Massachusetts Institute of Technology. Department of Physics
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2025-08-01T08:29:42Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2025-08-01T08:29:42Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed

