<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel rdf:about="https://hdl.handle.net/1721.1/100167">
<title>Fluid Interfaces - Conference Proceedings</title>
<link>https://hdl.handle.net/1721.1/100167</link>
<description/>
<items>
<rdf:Seq>
<rdf:li rdf:resource="https://hdl.handle.net/1721.1/108441"/>
<rdf:li rdf:resource="https://hdl.handle.net/1721.1/108440"/>
<rdf:li rdf:resource="https://hdl.handle.net/1721.1/100242"/>
</rdf:Seq>
</items>
<dc:date>2026-04-06T23:33:04Z</dc:date>
</channel>
<item rdf:about="https://hdl.handle.net/1721.1/108441">
<title>Investigating Social Presence and Communication with Embodied Avatars in Room-Scale Virtual Reality</title>
<link>https://hdl.handle.net/1721.1/108441</link>
<description>Investigating Social Presence and Communication with Embodied Avatars in Room-Scale Virtual Reality
Greenwald, Scott W.; Wang, Zhangyuan; Funk, Markus; Maes, Pattie
Room-scale virtual reality (VR) holds great potential as a medium for communication and collaboration in both remote and same-time, same-place settings. Related work has established that movement realism can create a strong sense of social presence, even in the absence of photorealism. Here, we explore the noteworthy attributes of communicative interaction using embodied minimal avatars in room-scale VR in the same-time, same-place setting. To the best of our knowledge, our system is the first in the research community to enable this kind of interaction. We carried out an experiment in which pairs of users performed two activities in contrasting variants: VR vs. face-to-face (F2F), and 2D vs. 3D. Objective and subjective measures were used to compare these, including motion analysis, electrodermal activity, questionnaires, a retrospective think-aloud protocol, and interviews. On the whole, participants communicated effectively in VR to complete their tasks, and reported a strong sense of social presence. The system's high-fidelity capture and display of movement appears to have been a key factor in supporting this. Our results confirm some expected shortcomings of VR compared to F2F, but also some non-obvious advantages. The limited anthropomorphic properties of the avatars presented some difficulties, but their impact varied widely between the activities. In the 2D vs. 3D comparison, the basic affordance of freehand drawing in 3D was new to most participants, resulting in novel observations and open questions. We also present methodological observations across all conditions concerning the measures that did and did not reveal differences between conditions, including unanticipated properties of the think-aloud protocol applied to VR.
Submission includes video.
</description>
<dc:date>2017-06-26T00:00:00Z</dc:date>
</item>
<item rdf:about="https://hdl.handle.net/1721.1/108440">
<title>Multi-User Framework for Collaboration and Co-Creation in Virtual Reality</title>
<link>https://hdl.handle.net/1721.1/108440</link>
<description>Multi-User Framework for Collaboration and Co-Creation in Virtual Reality
Greenwald, Scott W.; Corning, Wiley; Maes, Pattie
We present CocoVerse, a shared immersive virtual reality environment in which users interact with one another and create and manipulate virtual objects using a set of hand-based tools. Simple, intuitive interfaces make the application easy to use, and its flexible toolset facilitates constructivist and exploratory learning. The modular design of the system allows it to be easily customized for new room-scale applications.
Presented as a poster.
</description>
<dc:date>2017-06-18T00:00:00Z</dc:date>
</item>
<item rdf:about="https://hdl.handle.net/1721.1/100242">
<title>TagAlong: Informal Learning from a Remote Companion with Mobile Perspective Sharing</title>
<link>https://hdl.handle.net/1721.1/100242</link>
<description>TagAlong: Informal Learning from a Remote Companion with Mobile Perspective Sharing
Greenwald, Scott W.; Khan, Mina; Vazquez, Christian D.; Maes, Pattie
Questions often arise spontaneously in a curious mind, prompted by an observation about a new or unknown environment. When an expert is right there, prepared to engage in dialog, this curiosity can be harnessed and converted into highly effective, intrinsically motivated learning. This paper investigates how this kind of situated informal learning can be realized in real-world settings with wearable technologies and the support of a remote learning companion. In particular, we seek to understand how the use of different multimedia communication media impacts the quality of the interaction with a remote teacher, and how these remote interactions compare with face-to-face, co-present learning. A prototype system called TagAlong was developed with attention to features that facilitate dialog based on the visual environment. It was built to work robustly in the wild, depending only on widely available components and infrastructure. A pilot study was performed to learn which characteristics are most important for successful interactions, as a basis for further system development and a future full-scale study. We conclude that it is critical for system design to be informed by (i) an analysis of the attentional burdens imposed by the system on both wearer and companion and (ii) knowledge of the strengths and weaknesses of co-present learning.
</description>
<dc:date>2015-10-01T00:00:00Z</dc:date>
</item>
</rdf:RDF>
