Show simple item record

dc.contributor.author  Röddiger, Tobias
dc.contributor.author  Zitz, Valeria
dc.contributor.author  Hummel, Jonas
dc.contributor.author  Küttner, Michael
dc.contributor.author  Lepold, Philipp
dc.contributor.author  King, Tobias
dc.contributor.author  Paradiso, Joseph
dc.contributor.author  Clarke, Christopher
dc.contributor.author  Beigl, Michael
dc.date.accessioned  2025-12-17T16:51:12Z
dc.date.available  2025-12-17T16:51:12Z
dc.date.issued  2025-04-25
dc.identifier.isbn  979-8-4007-1395-8
dc.identifier.uri  https://hdl.handle.net/1721.1/164383
dc.description  CHI EA ’25, Yokohama, Japan  en_US
dc.description.abstract  In this demo, we present OpenEarable 2.0, an open-source earphone platform designed to provide an interactive exploration of physiological ear sensing and the development of AI applications. Attendees will have the opportunity to explore real-time sensor data and understand the capabilities of OpenEarable 2.0’s sensing components. OpenEarable 2.0 integrates a rich set of sensors, including two ultrasound-capable microphones (inward/outward), a 3-axis ear canal accelerometer/bone conduction microphone, a 9-axis head inertial measurement unit, a pulse oximeter, an optical temperature sensor, an ear canal pressure sensor, a microSD slot, and a microcontroller. Participants will be able to try out the web-based dashboard and mobile app for real-time control and data visualization. Furthermore, the demo will show different applications and real-time data based on OpenEarable 2.0 across physiological sensing and health monitoring, movement and activity tracking, and human-computer interaction.  en_US
dc.publisher  ACM|Extended Abstracts of the CHI Conference on Human Factors in Computing Systems  en_US
dc.relation.isversionof  https://doi.org/10.1145/3706599.3721161  en_US
dc.rights  Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.  en_US
dc.source  Association for Computing Machinery  en_US
dc.title  Demonstrating OpenEarable 2.0: An AI-Powered Ear Sensing Platform  en_US
dc.type  Article  en_US
dc.identifier.citation  Tobias Röddiger, Valeria Zitz, Jonas Hummel, Michael Küttner, Philipp Lepold, Tobias King, Joseph A. Paradiso, Christopher Clarke, and Michael Beigl. 2025. Demonstrating OpenEarable 2.0: An AI-Powered Ear Sensing Platform. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA '25). Association for Computing Machinery, New York, NY, USA, Article 713, 1–4.  en_US
dc.contributor.department  Massachusetts Institute of Technology. Media Laboratory  en_US
dc.identifier.mitlicense  PUBLISHER_POLICY
dc.eprint.version  Final published version  en_US
dc.type.uri  http://purl.org/eprint/type/ConferencePaper  en_US
eprint.status  http://purl.org/eprint/status/NonPeerReviewed  en_US
dc.date.updated  2025-08-01T08:28:19Z
dc.language.rfc3066  en
dc.rights.holder  The author(s)
dspace.date.submission  2025-08-01T08:28:19Z
mit.license  PUBLISHER_POLICY
mit.metadata.status  Authority Work and Publication Information Needed  en_US

