
dc.contributor.author: Ancha, Siddharth
dc.contributor.author: Pathak, Gaurav
dc.contributor.author: Zhang, Ji
dc.contributor.author: Narasimhan, Srinivasa
dc.contributor.author: Held, David
dc.date.accessioned: 2025-06-17T20:41:36Z
dc.date.available: 2025-06-17T20:41:36Z
dc.date.issued: 2024-08-10
dc.identifier.uri: https://hdl.handle.net/1721.1/159430
dc.description.abstract: To navigate in an environment safely and autonomously, robots must accurately estimate where obstacles are and how they move. Instead of using expensive traditional 3D sensors, we explore the use of a much cheaper, faster, and higher-resolution alternative: programmable light curtains. Light curtains are controllable depth sensors that sense only along a surface that the user selects. We adapt a probabilistic method based on particle filters and occupancy grids to explicitly estimate the position and velocity of 3D points in the scene using partial measurements made by light curtains. The central challenge is to decide where to place the light curtain to accurately perform this task. We propose multiple curtain placement strategies guided by maximizing information gain and verifying predicted object locations. Then, we combine these strategies using an online learning framework. We propose a novel self-supervised reward function that evaluates the accuracy of current velocity estimates using future light curtain placements. We use a multi-armed bandit framework to intelligently switch between placement policies in real time, outperforming fixed policies. We develop a full-stack navigation system that uses position and velocity estimates from light curtains for downstream tasks such as localization, mapping, path-planning, and obstacle avoidance. This work paves the way for controllable light curtains to accurately, efficiently, and purposefully perceive and navigate complex and dynamic environments. [en_US]
dc.publisher: Springer US [en_US]
dc.relation.isversionof: https://doi.org/10.1007/s10514-024-10168-2 [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: Springer US [en_US]
dc.title: Active velocity estimation using light curtains via self-supervised multi-armed bandits [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Ancha, S., Pathak, G., Zhang, J. et al. Active velocity estimation using light curtains via self-supervised multi-armed bandits. Auton Robot 48, 15 (2024). [en_US]
dc.contributor.department: MIT Quest for Intelligence [en_US]
dc.relation.journal: Autonomous Robots [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dc.date.updated: 2025-03-27T13:48:13Z
dc.language.rfc3066: en
dc.rights.holder: The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature
dspace.embargo.terms: Y
dspace.date.submission: 2025-03-27T13:48:13Z
mit.journal.volume: 48 [en_US]
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed [en_US]
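
The abstract above describes switching among curtain-placement policies with a multi-armed bandit scored by a self-supervised reward computed from later curtain measurements. The following is a rough, hypothetical sketch of that bandit component only, not the authors' implementation: the UCB1 rule, the `self_supervised_reward` helper, the set-based agreement measure, and all policy names and APIs in the usage comments are assumptions for illustration. The paper's actual arm set, reward function, and update schedule are specified in the article linked above.

```python
import math


class PlacementBandit:
    """UCB1 bandit over curtain-placement policies (illustrative sketch).

    Arms stand for placement strategies such as "maximize information
    gain" or "verify predicted object locations"; the reward is a
    self-supervised score computed after the next curtain is placed.
    """

    def __init__(self, num_arms: int):
        self.counts = [0] * num_arms    # pulls per arm
        self.values = [0.0] * num_arms  # running mean reward per arm
        self.total = 0                  # total pulls

    def select_arm(self) -> int:
        # Try every arm once before applying the UCB1 rule.
        for arm, count in enumerate(self.counts):
            if count == 0:
                return arm
        # UCB1: mean reward plus an exploration bonus that shrinks
        # as an arm accumulates pulls.
        scores = [
            value + math.sqrt(2.0 * math.log(self.total) / count)
            for value, count in zip(self.values, self.counts)
        ]
        return max(range(len(scores)), key=scores.__getitem__)

    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        self.total += 1
        # Incremental update of this arm's mean reward.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


def self_supervised_reward(predicted_hits: set, observed_hits: set) -> float:
    """Hypothetical reward: fraction of scene points that the current
    velocity estimates predicted the next curtain would intersect and
    that the curtain actually detected."""
    if not predicted_hits:
        return 0.0
    return len(predicted_hits & observed_hits) / len(predicted_hits)


# Usage sketch, one decision per sensing cycle (assumed API):
# policies = [info_gain_policy, verification_policy]       # hypothetical
# bandit = PlacementBandit(num_arms=len(policies))
# arm = bandit.select_arm()
# curtain = policies[arm].place(current_estimates)         # hypothetical
# reward = self_supervised_reward(curtain.predicted, curtain.observed)
# bandit.update(arm, reward)
```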

