
dc.contributor.author: Li, Yuke
dc.contributor.author: Chen, Lixiong
dc.contributor.author: Chen, Guangyi
dc.contributor.author: Chan, Ching-Yao
dc.contributor.author: Zhang, Kun
dc.contributor.author: Anzellotti, Stefano
dc.contributor.author: Wei, Donglai
dc.date.accessioned: 2024-11-21T17:02:16Z
dc.date.available: 2024-11-21T17:02:16Z
dc.date.issued: 2024-10-28
dc.identifier.isbn: 979-8-4007-1192-3
dc.identifier.uri: https://hdl.handle.net/1721.1/157626
dc.description: MM '24, October 28 - November 1, 2024, Melbourne, Australia [en_US]
dc.description.abstract: To predict a pedestrian's trajectory in a crowd accurately, one must consistently account for his or her underlying socio-temporal interactions with other pedestrians. Unlike existing work, which represents this information separately, partially, or implicitly, we propose a complete representation that captures it fully and explicitly. In particular, we introduce a Directed Acyclic Graph-based structure, termed the Socio-Temporal Graph (STG), to explicitly capture pairwise socio-temporal interactions among a group of people across both space and time. Our model is built on a time-varying generative process whose latent variables determine the structure of the STGs. We design an attention-based model named STGformer that affords an end-to-end pipeline for learning the structure of the STGs for trajectory prediction. Our solution achieves overall state-of-the-art prediction accuracy on two large-scale benchmark datasets. Our analysis shows that a person's past trajectory is critical for predicting another person's future path, and our model learns this relationship with a strong notion of socio-temporal locality. Using this information explicitly for prediction yields a noticeable performance gain over trajectory-only approaches. [en_US]
dc.publisher: ACM | Proceedings of the 5th International Workshop on Human-centric Multimedia Analysis [en_US]
dc.relation.isversionof: https://doi.org/10.1145/3688865.3689481 [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: Association for Computing Machinery [en_US]
dc.title: Learning Socio-Temporal Graphs for Multi-Agent Trajectory Prediction [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Li, Yuke, Chen, Lixiong, Chen, Guangyi, Chan, Ching-Yao, Zhang, Kun, Anzellotti, Stefano, and Wei, Donglai. 2024. "Learning Socio-Temporal Graphs for Multi-Agent Trajectory Prediction." In Proceedings of the 5th International Workshop on Human-centric Multimedia Analysis (MM '24), October 28 - November 1, 2024, Melbourne, Australia. ACM. https://doi.org/10.1145/3688865.3689481
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory [en_US]
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2024-11-01T07:53:57Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2024-11-01T07:53:58Z
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed [en_US]
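
The abstract above describes the Socio-Temporal Graph as a directed acyclic graph of pairwise interactions among agents across time, whose active edges a latent generative process (and, in practice, the attention-based STGformer) determines. The following is a minimal Python sketch of that data structure only, assuming a fully connected edge layout as the starting point; the names STGraph and build_dense_stg are invented for this illustration and are not the paper's implementation.

```python
# Toy Socio-Temporal Graph (STG) over (agent_id, timestep) nodes.
# Illustrative sketch only; not the authors' STGformer code.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Node = Tuple[int, int]  # (agent_id, timestep)

@dataclass
class STGraph:
    """DAG whose edges only point forward in time, so temporal edges
    (same agent) and social edges (across agents) can never form a cycle."""
    edges: Dict[Node, List[Node]] = field(default_factory=dict)

    def add_edge(self, src: Node, dst: Node) -> None:
        # Acyclicity by construction: every edge must move forward in time.
        if dst[1] <= src[1]:
            raise ValueError("STG edges must point strictly forward in time")
        self.edges.setdefault(src, []).append(dst)

    def parents(self, node: Node) -> List[Node]:
        # All (agent, time) states that directly influence this node.
        return [src for src, dsts in self.edges.items() if node in dsts]

def build_dense_stg(num_agents: int, num_steps: int) -> STGraph:
    """Fully connected variant: every agent at time t influences every
    agent (including itself) at time t + 1. A learned model would infer
    which of these pairwise socio-temporal edges are actually active."""
    g = STGraph()
    for t in range(num_steps - 1):
        for i in range(num_agents):
            for j in range(num_agents):
                g.add_edge((i, t), (j, t + 1))
    return g

if __name__ == "__main__":
    g = build_dense_stg(num_agents=3, num_steps=4)
    # Agent 0 at t=2 is influenced by all agents' states at t=1, i.e.,
    # another person's past position can shape this person's future path.
    print(g.parents((0, 2)))  # [(0, 1), (1, 1), (2, 1)]
```

Constraining every edge to point strictly forward in time makes acyclicity hold by construction, which suggests how a DAG can encode the abstract's observation that one person's past trajectory helps predict another person's future path.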

