During remote video-mediated assistance, instructors often guide workers through problems and instruct them to perform unfamiliar or complex operations. However, workers' performance can deteriorate under stress. We argue that providing the worker's biofeedback to the instructor can improve communication and lower stress. This paper presents a thorough investigation of the mental workload and stress perceived by twenty participants, paired in an instructor-worker scenario, while performing remote video-mediated tasks. The interface conditions differ in task, facial, and biofeedback communication. Two self-report measures are used to assess mental workload and stress. Results show that pairs reported lower mental workload and stress when instructors used biofeedback than when they used interfaces with a facial view. Task performance correlated significantly with reduced stress (i.e., increased task engagement and decreased worry) for instructors and with reduced mental workload (i.e., increased performance) for workers. Our findings provide insights for advancing video-mediated interfaces for remote collaborative work.
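As a minimal sketch of the underlying idea of sharing a worker's biofeedback with an instructor, consider the following. The class name, sampling values, smoothing window, and stress threshold are illustrative assumptions, not the interface studied in the paper.

```python
import random
import statistics
from collections import deque

class BiofeedbackChannel:
    """Hypothetical sketch: smooth a worker's heart-rate samples and
    flag elevated stress so an instructor's interface can display it."""

    def __init__(self, baseline_bpm=70, window=5):
        self.baseline = baseline_bpm
        self.samples = deque(maxlen=window)  # sliding window of recent samples

    def push(self, bpm):
        self.samples.append(bpm)

    def status(self):
        if not self.samples:
            return "no data"
        avg = statistics.mean(self.samples)
        # Naive rule: flag stress when the smoothed rate is well above baseline.
        return "elevated" if avg > self.baseline * 1.2 else "normal"

# Simulated worker samples pushed to the instructor's view.
channel = BiofeedbackChannel()
for _ in range(10):
    channel.push(random.randint(65, 95))
print(channel.status())
```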
Interdisciplinary design of a pervasive fall handling system: A case study
Game of tones: Learning to play songs on a piano using projected instructions and games
Learning to play a musical instrument such as the piano requires substantial practice and perseverance, including learning to read and play from sheet music. Our interactivity demo lets people learn to play songs without sheet-music reading skills. We project a graphical notation onto a piano that indicates which key(s) to press, and we create a feedback loop that monitors the player's performance. We implemented The Augmented Piano (TAP), a straightforward combination of a physical piano with our alternative notation projected on top. Piano Attack (PAT) extends TAP with a shooting game that continuously provides game-based incentives for learning to play the piano.
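A sketch of the kind of feedback loop such a system needs, comparing the notes a player hits against the notes the projected notation asked for. The MIDI note numbers and scoring rule are illustrative assumptions, not TAP/PAT's actual implementation.

```python
# Compare played notes against the expected sequence, one step at a time.
expected_sequence = [60, 62, 64, 65]  # MIDI notes (C4 D4 E4 F4), assumed song fragment

def check_step(expected_note, played_note):
    """Return feedback for one step of the song."""
    if played_note == expected_note:
        return "correct"
    return f"expected {expected_note}, got {played_note}"

def play_session(played_notes):
    score = 0
    for expected, played in zip(expected_sequence, played_notes):
        result = check_step(expected, played)
        if result == "correct":
            score += 1
        print(result)
    print(f"score: {score}/{len(expected_sequence)}")

play_session([60, 62, 63, 65])  # one wrong note
```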
Exploring social augmentation concepts for public speaking using peripheral feedback and real-time behavior analysis
A domain-specific textual language for rapid prototyping of multimodal interactive systems
Several toolkits currently allow the specification of executable multimodal human-machine interaction models. Some provide domain-specific visual languages with which a broad range of interactions can be modeled, but at the expense of bulky diagrams. Others interpret concise specifications written in existing textual languages, but their non-specialized notations prevent the productivity gains achievable with domain-specific ones. We propose a domain-specific textual language and a supporting toolkit that together overcome the shortcomings of existing approaches while retaining their strengths. The language provides notations and constructs specially tailored to compactly declare the event patterns raised during the execution of multimodal commands. The toolkit detects occurrences of these patterns and invokes the functionality of a back-end system in response.
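To make the idea concrete, here is a toy sketch of declaring an event pattern for a multimodal command and invoking a back-end action when the pattern completes. It is written in plain Python, not in the paper's language, whose actual syntax we do not reproduce; the event names and reset behavior are assumptions.

```python
# A "put-that-there" style multimodal command declared as an ordered
# event pattern; the matcher invokes a back-end callback on completion.
pattern = ["speech:put", "point:object", "speech:there", "point:location"]

def make_matcher(pattern, on_match):
    state = {"pos": 0}
    def feed(event):
        if event == pattern[state["pos"]]:
            state["pos"] += 1
        elif event == pattern[0]:
            state["pos"] = 1  # restart the pattern at this event
        else:
            state["pos"] = 0  # reset on any out-of-order event
        if state["pos"] == len(pattern):
            state["pos"] = 0
            on_match()
    return feed

feed = make_matcher(pattern, lambda: print("back-end: move object to location"))
for event in ["speech:put", "point:object", "speech:there", "point:location"]:
    feed(event)
```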
Timisto: A technique to extract usage sequences from storyboards
Storyboarding is a technique often used in the conception of new interactive systems. A storyboard illustrates graphically how a system is used by its users and what a typical context of use looks like. Although the informal notation of a storyboard stimulates creativity and makes storyboards easy for everyone to understand, it is difficult to integrate into later steps of the engineering process. We present an approach, "Time In Storyboards" (Timisto), to extract valuable information on how the various interactions with the system are positioned in time with respect to each other. Timisto does not interfere with the creative process of storyboarding, but maximizes the structured information about time that can be deduced from a storyboard.
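A rough sketch of the kind of temporal information such a technique could extract from a storyboard. The panel names, the concurrency annotation, and the happens-before representation are our own illustrative assumptions, not Timisto's notation.

```python
# Derive "happens-before" constraints from the panel order of a storyboard,
# minus pairs explicitly annotated as concurrent (overlapping in time).
panels = ["open_app", "scan_badge", "door_unlocks", "log_entry"]
concurrent = {("door_unlocks", "log_entry")}  # annotated as overlapping

def before_relations(panels, concurrent):
    relations = []
    for i, earlier in enumerate(panels):
        for later in panels[i + 1:]:
            if (earlier, later) in concurrent or (later, earlier) in concurrent:
                continue  # no ordering between concurrent interactions
            relations.append((earlier, later))
    return relations

for a, b in before_relations(panels, concurrent):
    print(f"{a} happens before {b}")
```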
SmartObjects: Second workshop on interacting with smart objects
Liftacube: A prototype for pervasive rehabilitation in a residential setting
Introduction to the special issue on interaction with smart objects
Informing intelligent user interfaces by inferring affective states from body postures in ubiquitous computing environments
Intelligent User Interfaces can benefit from knowledge of the user's emotion. However, current approaches to detecting affective states often constrain the user's freedom of movement by instrumenting her with sensors, which prevents affective computing from being deployed in naturalistic and ubiquitous computing contexts. In this paper, we present a novel system called mASqUE, which uses a set of association rules to infer someone's affective state from their body postures. This is done without any user instrumentation, using inexpensive off-the-shelf commodity hardware: a depth camera tracks the body postures of the users, which also serve as an indicator of their openness. By combining the posture information with physiological sensor measurements, we were able to mine a set of association rules relating postures to affective states. We demonstrate the possibility of inferring affective states from body postures in ubiquitous computing environments, and our study provides insights into how this opens up new possibilities for IUIs to access users' affective states from body postures in a nonintrusive way.
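As a minimal sketch of applying posture-to-affect association rules, consider the following. The posture features, affective-state labels, and confidence values are invented for illustration; mASqUE's actual mined rules differ.

```python
# Match a posture observation against simple association rules of the form
# (antecedent posture features) -> (affective state, confidence).
rules = [
    ({"lean": "forward", "arms": "open"},  ("engaged", 0.8)),
    ({"lean": "back", "arms": "crossed"}, ("disengaged", 0.7)),
    ({"head": "down"},                    ("low_valence", 0.6)),
]

def infer_affect(observation):
    """Return the best-matching affective state for an observed posture."""
    best = ("unknown", 0.0)
    for antecedent, (state, confidence) in rules:
        if all(observation.get(k) == v for k, v in antecedent.items()):
            if confidence > best[1]:
                best = (state, confidence)
    return best

print(infer_affect({"lean": "forward", "arms": "open", "head": "up"}))
```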