| dc.contributor.author | Röddiger, Tobias | |
| dc.contributor.author | Zitz, Valeria | |
| dc.contributor.author | Hummel, Jonas | |
| dc.contributor.author | Küttner, Michael | |
| dc.contributor.author | Lepold, Philipp | |
| dc.contributor.author | King, Tobias | |
| dc.contributor.author | Paradiso, Joseph A. | |
| dc.contributor.author | Clarke, Christopher | |
| dc.contributor.author | Beigl, Michael | |
| dc.date.accessioned | 2025-12-17T16:51:12Z | |
| dc.date.available | 2025-12-17T16:51:12Z | |
| dc.date.issued | 2025-04-25 | |
| dc.identifier.isbn | 979-8-4007-1395-8 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/164383 | |
| dc.description | CHI EA ’25, Yokohama, Japan | en_US |
| dc.description.abstract | In this demo, we present OpenEarable 2.0, an open-source earphone platform designed to provide an interactive exploration of physiological ear sensing and the development of AI applications. Attendees will have the opportunity to explore real-time sensor data and understand the capabilities of OpenEarable 2.0’s sensing components. OpenEarable 2.0 integrates a rich set of sensors, including two ultrasound-capable microphones (inward/outward), a 3-axis ear canal accelerometer/bone conduction microphone, a 9-axis head inertial measurement unit, a pulse oximeter, an optical temperature sensor, an ear canal pressure sensor, a microSD slot, and a microcontroller. Participants will be able to try out the web-based dashboard and mobile app for real-time control and data visualization. Furthermore, the demo will show different applications and real-time data based on OpenEarable 2.0 across physiological sensing and health monitoring, movement and activity tracking, and human-computer interaction. | en_US |
| dc.publisher | ACM|Extended Abstracts of the CHI Conference on Human Factors in Computing Systems | en_US |
| dc.relation.isversionof | https://doi.org/10.1145/3706599.3721161 | en_US |
| dc.rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. | en_US |
| dc.source | Association for Computing Machinery | en_US |
| dc.title | Demonstrating OpenEarable 2.0: An AI-Powered Ear Sensing Platform | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Tobias Röddiger, Valeria Zitz, Jonas Hummel, Michael Küttner, Philipp Lepold, Tobias King, Joseph A. Paradiso, Christopher Clarke, and Michael Beigl. 2025. Demonstrating OpenEarable 2.0: An AI-Powered Ear Sensing Platform. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA '25). Association for Computing Machinery, New York, NY, USA, Article 713, 1–4. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Media Laboratory | en_US |
| dc.identifier.mitlicense | PUBLISHER_POLICY | |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dc.date.updated | 2025-08-01T08:28:19Z | |
| dc.language.rfc3066 | en | |
| dc.rights.holder | The author(s) | |
| dspace.date.submission | 2025-08-01T08:28:19Z | |
| mit.license | PUBLISHER_POLICY | |
| mit.metadata.status | Authority Work and Publication Information Needed | en_US |