Nonverbal and unconscious behaviour is an important component of daily human-human interaction. This is especially true in situations such as public speaking, job interviews or information-sensitive conversations, where researchers have shown that an increased awareness of one's behaviour can improve the outcome of the interaction. With wearable technology, such as Google Glass, we now have the opportunity to augment social interactions and provide realtime feedback on one's behaviour in an unobtrusive way. In this paper we present Logue, a system that provides realtime feedback on the presenter's openness, body energy and speech rate during public speaking. The system analyses the user's nonverbal behaviour using social signal processing techniques and gives visual feedback on a head-mounted display. We conducted two user studies, with a staged and a real presentation scenario, which showed that Logue's feedback was perceived as helpful and had a positive impact on the speaker's performance.
The role of physiological cues during remote collaboration
Empathic communication allows individuals to perceive and understand the feelings and emotions of the person with whom they are interacting. This could be particularly important during remote collaboration (such as remote assistance or distance learning) to enhance the social and emotional understanding of geographically distributed partners. However, supporting such awareness in remote collaboration is challenging, especially because interaction with remote parties conveys less information than a physical interaction. We explore the effect of a visualization of physiological cues that allows users to interpret the emotional behaviors of the remote parties with whom they are interacting in real time. The proposed visual representation allows users to infer emotional patterns from physiological cues, which can potentially steer their communication toward a more aggressive style or maintain a passive and peaceful interaction. We conducted a study in which participants were paired up for a collaborative assessment task, interacting via voice only, videoconference, or a visual representation of the physiological measurements. Participants perceived higher group cohesiveness when using our visual representation than when using voice-only interaction. Further analysis shows that the visual representation significantly increases the positive affect score (i.e., participants are perceived to be more alert and to show less distress) during remote collaboration. We discuss how the proposed visual representation can support empathic communication during remote collaboration, and the benefits to remote partners of positive affect and group cohesiveness.
The EICS 2014 doctoral consortium
The design of slow-motion feedback
The misalignment between the timeframe of systems and that of their users can cause problems, especially when the system relies on implicit interaction. This misalignment makes it hard for users to understand what is happening and leaves them little chance to intervene. This paper introduces the design concept of slow-motion feedback, which can help to address this issue. A definition is provided, together with an overview of existing applications of this technique.
Suit up!: Enabling eyes-free interactions on jacket buttons
We present a new interaction space for wearables by integrating interactive elements, in the form of buttons, into outdoor clothing, specifically jackets and coats. Interactive buttons, or "iButtons", allow users to perform specific tasks using subtle, inconspicuous gestures. They are intended for outdoor settings, where reaching for a mobile phone or another device may not be convenient or appropriate. Different types of buttons serve dedicated functions, and appropriate placement of these buttons makes them easily accessible without requiring visual contact. By adding context sensitivity, these buttons can also be repurposed to fit other functions. By linking multiple buttons, it is possible to create workflows for specific tasks. We provide a description of an initial iButton design space and highlight some scenarios to illustrate the envisioned usage of interactive buttons.
ReHoblet - A home-based rehabilitation game on the tablet
PhysiCube: Providing tangible interaction in a pervasive upper-limb rehabilitation system
Paddle: Highly deformable mobile devices with physical controls
Paddle is a highly deformable mobile device that leverages engineering principles from the design of the Rubik's Magic, a folding plate puzzle. The various transformations supported by Paddle bridge the gap between the differently sized mobile devices available nowadays, such as phones, armbands, tablets and game controllers. Besides this, Paddle can be transformed into different physical controls in only a few steps, such as peeking options, a ring to scroll through lists and a book-like form factor to leaf through pages. These special-purpose physical controls have the advantage of providing clear physical affordances and exploiting people's innate abilities for manipulating objects in the real world. We investigated the benefits of these interaction techniques in detail in [1]. In contrast to traditional touch screens, physical controls are usually less flexible and therefore less suitable for mobile settings. Paddle shows how physical controls can be brought to mobile devices, combining the flexibility of touch screens with the physical qualities that real-world controls provide. Our current prototype is tracked with an optical tracking system and uses a projector to provide visual output. In the future, we envision devices similar to Paddle that are entirely self-contained, using tiny integrated displays.
Paddle: Highly deformable mobile devices with physical controls
Touch screens have been widely adopted in mobile devices. Although touch input is very flexible in that it can be used for a wide variety of applications on mobile devices, it does not provide physical affordances, encourage eyes-free use or utilize the full dexterity of our hands, due to the lack of physical controls. On the other hand, physical controls are often tailored to the task at hand, making them less flexible and therefore less suitable for general-purpose use in mobile settings. In this paper, we show how to combine the flexibility of touch screens with the physical qualities that real-world controls provide in a mobile context. We do so using a deformable device that can be transformed into various special-purpose physical controls. We present Paddle, a highly deformable device that can be transformed into different shapes. Paddle bridges the gap between the differently sized mobile devices available nowadays, such as phones and tablets. Additionally, Paddle demonstrates a novel opportunity for deformable devices to transform into differently shaped physical controls that provide clear physical affordances for the task at hand. Physical controls have the advantage of exploiting people's innate abilities for manipulating physical objects in the real world. We designed and implemented a prototype system whose engineering principles are based on the design of the Rubik's Magic, a folding plate puzzle. Additionally, we explore the interaction techniques enabled by this concept and conduct an in-depth study to evaluate our transformable physical controls. Our findings show that these physical controls provide several benefits over the traditional touch interaction techniques commonly used on mobile devices.
Multi-viewer gesture-based interaction for omni-directional video
Omni-directional video (ODV) is a novel medium that offers viewers a 360° panoramic recording. This type of content will become more common within our living rooms in the near future, given that immersive display technologies such as 3D television are on the rise. However, little attention has been given to how to interact with ODV content. We present a gesture elicitation study in which we asked users to perform mid-air gestures that they consider appropriate for ODV interaction, for both individual and collocated settings. We are interested in the gesture variations and adaptations that arise from individual and collocated usage. To this end, we gathered quantitative and qualitative data by means of observations, motion capture, questionnaires and interviews. This data resulted in a user-defined gesture set for ODV, alongside an in-depth analysis of the variation in gestures we observed during the study.