Posts tagged: XR

Context-aware support of dexterity skills in cross-reality environments

Within our work, we apply context awareness to determine how AR/VR technology should adapt instructions to suit user needs. We focus on situations where the user carries out a complex manual activity that requires additional information to be present during the activity to achieve the desired result. The emphasis is on activities that demand fine motor skills and in-depth expertise and training, for which XR is a powerful tool to support and guide users performing these tasks. The contexts we detect include user intentions, environmental conditions, and activity progression. Our work builds on these contexts, with the main focus on determining how XR should adapt for the end user from a usability perspective. The feedback we request from ISMAR consists of input on the detection, usability, and simulation categories, together with how to balance these categories to create real-time, user-friendly systems. The next steps of our work will consider how the content should adjust based on cognitive load, activity space, and environmental conditions.
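
As a minimal sketch of this adaptation idea, consider mapping the detected contexts to presentation choices. The Context record and adapt_instructions function below are hypothetical names for illustration; the work itself does not prescribe a concrete API:

    from dataclasses import dataclass

    @dataclass
    class Context:
        user_intention: str    # e.g. "inspect" or "assemble"
        ambient_light: float   # lux estimate from headset sensors
        task_progress: float   # 0.0 .. 1.0 through the activity

    def adapt_instructions(ctx: Context) -> dict:
        """Choose how XR instructions are rendered for the current context."""
        return {
            "detail": "full" if ctx.task_progress < 0.5 else "summary",
            "contrast": "high" if ctx.ambient_light > 500 else "normal",
            "anchor": "tool" if ctx.user_intention == "assemble" else "workpiece",
        }

    print(adapt_instructions(Context("assemble", 650.0, 0.2)))
    # {'detail': 'full', 'contrast': 'high', 'anchor': 'tool'}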

Read more →

HapticPanel: An open system to render haptic interfaces in virtual reality for manufacturing industry

Virtual Reality (VR) allows simulation of machine control panels without physical access to the machine, enabling easier and faster initial exploration, testing, and validation of machine panel designs. However, haptic feedback is indispensable if we want to interact with these simulated panels in a realistic manner. We present HapticPanel, an encountered-type haptic system that provides realistic haptic feedback for machine control panels in VR. To ensure a realistic manipulation of input elements, the user's hand is continuously tracked during interaction with the virtual interface. Based on which virtual element the user intends to manipulate, a motorized panel with stepper motors moves a corresponding physical input element in front of the user's hand, enabling realistic physical interaction.
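
To picture the encountered-type loop, here is a rough sketch under assumed names (ELEMENTS, MockPanel, and control_step are illustrative, not the HapticPanel API): predict which virtual element the tracked hand is heading for, then drive the motorized stage so its physical counterpart meets the hand.

    import math

    ELEMENTS = [  # virtual input elements and their panel positions (metres)
        {"type": "knob",   "pos": (0.10, 0.30)},
        {"type": "switch", "pos": (0.40, 0.25)},
    ]

    class MockPanel:  # stands in for the stepper-motor stage
        def move_to(self, x, y): print(f"stage -> ({x:.2f}, {y:.2f})")
        def present(self, kind): print(f"physical {kind} in place")

    def nearest_element(hand_pos):
        """Predict the element the hand will reach (here: simply the nearest)."""
        return min(ELEMENTS, key=lambda e: math.dist(hand_pos, e["pos"]))

    def control_step(hand_pos, panel):
        target = nearest_element(hand_pos)
        panel.move_to(*target["pos"])  # align physical and virtual element
        panel.present(target["type"])

    control_step((0.35, 0.22), MockPanel())  # hand approaching the switch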

Read more →

Smart makerspace: A web platform implementation

Makerspaces are creative and learning environments, home to activities such as fabrication processes and Do-It-Yourself (DIY) tasks. How- ever, containing equipment that are not commonly seen or handled, these spaces can look rather challenging to novice users. This paper is based on the Smart Makerspace research from Autodesk, which uses a smart workbench for an immersive instructional space for DIY tasks. Having its functionalities in mind and trying to overcome some of its limitations, we approach the concept build- ing an immersive instructional space as a web platform. The platform, intro- duced to users in a makerspace, had a feedback that reflects its potential be- tween novice and intermediate users, for creating facilitators and encouraging these users.

Read more →

Whom-i-approach: A system that provides cues on approachability of bystanders for blind users

Body posture is one of many visual cues sighted persons use to determine whether someone would be open to initiating a conversation. These cues are inaccessible to individuals with blindness, leading to difficulties when deciding whom to approach for assistance. Current camera technologies, such as depth cameras, make it possible to automatically scan the environment and assess the approachability of nearby persons. We present Whom-I-Approach, a system that translates the postures of bystanders into a measure of approachability and communicates this information using auditory and tactile cues. The system scans the environment and determines approachability based on body posture for the persons in the vicinity of the user. Efficiency, perceived system usability, and psychosocial attitudes were measured in a user study, showing the system's potential to improve competence for users with blindness prior to engaging in social interactions.
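
As a rough illustration of the posture-to-approachability mapping, here is a toy scoring function. The features, weights, and cue thresholds are assumptions for the sketch; the actual system derives richer posture data from the depth camera:

    def approachability(facing_user: bool, arms_crossed: bool, head_down: bool) -> float:
        """Return a 0..1 approachability estimate from coarse posture cues."""
        score = 0.5
        score += 0.3 if facing_user else -0.2
        score -= 0.2 if arms_crossed else 0.0
        score -= 0.1 if head_down else 0.0
        return max(0.0, min(1.0, score))

    def to_cue(score: float) -> str:
        """Map the score to an auditory/tactile cue level."""
        return "strong pulse" if score > 0.7 else "soft pulse" if score > 0.4 else "none"

    print(to_cue(approachability(facing_user=True, arms_crossed=False, head_down=False)))
    # strong pulse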

Read more →

Purpose-centric appropriation of everyday objects as game controllers

Generic multi-button controllers are the most common input devices for video games. In contrast, dedicated game controllers and gestural interactions increase immersion and playability. Room-sized gaming has opened up possibilities to further enhance the immersive experience and provides players with opportunities to use full-body movements as input. We present a purpose-centric approach to appropriating everyday objects as physical game controllers for immersive room-sized gaming. Virtual manipulations supported by such physical controllers mimic real-world function and usage, opening up new possibilities for interactions that flow seamlessly from the physical into the virtual world. As a proof of concept, we present a 'Tower Defence' styled game that uses four everyday household objects as game controllers, each serving as a weapon to defend the players' base from enemy bots. Players can use 1) a mop (or a broom) to sweep enemy bots away directionally; 2) a fan to scatter them; 3) a vacuum cleaner to suck them up; and 4) a mousetrap to destroy them. Each controller is tracked using a motion capture system. A physics engine integrated into the game ensures virtual objects behave as though they are manipulated by the actual physical controller, providing players with a highly immersive gaming experience.
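
The sketch below illustrates the purpose-centric mapping for one controller: the tracked fan's pose becomes a radial force that scatters nearby bots, mirroring its real-world function. The vector math and constants are illustrative; the game itself delegates this to the integrated physics engine:

    import math

    def fan_force(fan_pos, fan_dir, bot_pos, strength=5.0, cone=math.pi / 4):
        """Push a bot away if it lies inside the fan's airflow cone.
        fan_dir is assumed to be a unit vector in the 2D ground plane."""
        dx, dy = bot_pos[0] - fan_pos[0], bot_pos[1] - fan_pos[1]
        dist = math.hypot(dx, dy) or 1e-9
        angle = math.acos(max(-1.0, min(1.0,
            (dx * fan_dir[0] + dy * fan_dir[1]) / dist)))
        if angle > cone:
            return (0.0, 0.0)                   # outside the airflow
        falloff = strength / (1.0 + dist ** 2)  # weaker with distance
        return (dx / dist * falloff, dy / dist * falloff)

    print(fan_force((0, 0), (1, 0), (2, 0.5)))  # bot ahead of the fan is pushed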

Read more →

Augmenting social interactions: Realtime behavioural feedback using social signal processing techniques

Nonverbal and unconscious behaviour is an important component of daily human-human interaction. This is especially true in situations such as public speaking, job interviews, or information-sensitive conversations, where researchers have shown that increased awareness of one's behaviour can improve the outcome of the interaction. With wearable technology such as Google Glass, we now have the opportunity to augment social interactions and provide real-time feedback on one's behaviour in an unobtrusive way. In this paper we present Logue, a system that provides real-time feedback on a presenter's openness, body energy, and speech rate during public speaking. The system analyses the user's nonverbal behaviour using social signal processing techniques and gives visual feedback on a head-mounted display. We conducted two user studies, with a staged and a real presentation scenario, which showed that Logue's feedback was perceived as helpful and had a positive impact on the speakers' performance.
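
Two of Logue's cues reduce, in spirit, to simple rates. The reductions below are back-of-the-envelope assumptions for illustration; the real system extracts its features with social signal processing from audio and skeletal tracking:

    def speech_rate(syllable_count: int, window_seconds: float) -> float:
        """Syllables per second over a sliding window."""
        return syllable_count / window_seconds

    def body_energy(joint_speeds: list[float]) -> float:
        """Mean joint speed (m/s) as a crude movement-energy measure."""
        return sum(joint_speeds) / len(joint_speeds)

    # Feedback shown on the head-mounted display when a cue drifts out of range.
    rate = speech_rate(syllable_count=48, window_seconds=10.0)
    if rate > 4.0:
        print("HMD cue: slow down")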

Read more →

Multi-viewer gesture-based interaction for omni-directional video

Omni-directional video (ODV) is a novel medium that offers viewers a 360° panoramic recording. This type of content will become more common in our living rooms in the near future, given that immersive display technologies such as 3D television are on the rise. However, little attention has been given to how to interact with ODV content. We present a gesture elicitation study in which we asked users to perform mid-air gestures that they consider appropriate for ODV interaction, both in individual and in collocated settings. We are interested in the gesture variations and adaptations that emerge from individual and collocated usage. To this end, we gathered quantitative and qualitative data by means of observations, motion capture, questionnaires, and interviews. This data resulted in a user-defined gesture set for ODV, alongside an in-depth analysis of the variation in gestures we observed during the study.

Read more →

Informing intelligent user interfaces by inferring affective states from body postures in ubiquitous computing environments

Intelligent User Interfaces can benefit from knowledge of the user's emotion. However, current implementations that detect affective states often constrain the user's freedom of movement by instrumenting them with sensors. This prevents affective computing from being deployed in naturalistic and ubiquitous computing contexts. In this paper, we present a novel system called mASqUE, which uses a set of association rules to infer someone's affective state from their body postures. This is done without any user instrumentation, using off-the-shelf and inexpensive commodity hardware: a depth camera tracks the body postures of the users, which also serve as an indicator of their openness. By combining the posture information with physiological sensor measurements, we were able to mine a set of association rules relating postures to affective states. We demonstrate the possibility of inferring affective states from body postures in ubiquitous computing environments, and our study provides insights into how this opens up new possibilities for IUIs to access users' affective states from body postures in a non-intrusive way.
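
The inference step can be pictured as rule matching, sketched below with invented rules and confidences (the actual rules were mined from the combined posture and physiological data):

    RULES = [  # (antecedent posture features, affective state, rule confidence)
        ({"leaning": "forward", "arms": "open"},     "engaged", 0.81),
        ({"leaning": "backward", "arms": "crossed"}, "bored",   0.74),
    ]

    def infer_affect(posture: dict) -> str:
        """Apply the highest-confidence rule whose antecedent matches."""
        matching = [(state, conf) for ante, state, conf in RULES
                    if all(posture.get(k) == v for k, v in ante.items())]
        return max(matching, key=lambda m: m[1])[0] if matching else "unknown"

    print(infer_affect({"leaning": "forward", "arms": "open", "head": "up"}))
    # engaged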

Read more →

Bro-cam: Improving game experience with empathic feedback using posture tracking

In today's video games, user feedback is often provided through raw statistics and scoreboards. We envision that incorporating empathic feedback matching the player's current mood will improve the overall gaming experience. In this paper we present Bro-cam, a novel system that provides empathic feedback to players based on their body postures. Different body postures of the players are used as an indicator of their openness. From their level of openness, Bro-cam profiles the players into personality types ranging from introvert to extrovert. Empathic feedback is then automatically generated and matched to their preferences for certain humoristic feedback statements. We use a depth camera to track the players' body postures and movements during the game and analyse these to provide customized feedback. We conducted a user study involving 32 players to investigate their subjective assessment of the empathic game feedback. Semi-structured interviews revealed that participants were positive about the empathic feedback and that Bro-cam significantly improved their game experience.
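
A compressed sketch of that pipeline, with placeholder thresholds and feedback lines rather than the study's actual materials:

    FEEDBACK = {
        "introvert": "Nice and steady — the bots never saw you coming.",
        "extrovert": "Big moves! Save some energy for round two.",
    }

    def openness(posture_scores: list[float]) -> float:
        """Average the per-frame openness scores from the depth camera."""
        return sum(posture_scores) / len(posture_scores)

    def profile(openness_score: float) -> str:
        """Place the player on the introvert-extrovert range."""
        return "extrovert" if openness_score > 0.6 else "introvert"

    print(FEEDBACK[profile(openness([0.7, 0.8, 0.5]))])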

Read more →