Posts tagged: HCI

The art of timing: Effects of AR guidance timing on speed control

Augmented Reality (AR) holds significant potential to support users in executing manual tasks. For effective support, however, we need to understand how showing movement instructions in AR affects how well people can follow those movements in real life. In this paper, we examine the degree to which users can synchronize the speed of their movements with speed cues presented through an AR environment. Specifically, we investigate the effects of timing in AR visual guidance. We assess performance using a highly realistic Mixed Reality (MR) welding simulation; welding is a task that requires very precise timing and control over hand and arm motion. Our results show that upfront visual guidance (shown before manual task execution) alone often fails to convey the intended speeds, especially at higher target speeds. Live guidance (shown during manual task execution) yields more accurate speeds but typically causes a larger overshoot at the start. Optimal outcomes occur when visual guidance appears upfront and then continues during the activity for users to follow.
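
As a rough illustration of the three timing conditions, the sketch below (in Python, with invented names and parameters; not the paper's implementation) computes where a speed cue would sit along a weld seam under upfront, live, or combined guidance:

```python
# Minimal sketch of the three guidance timings compared in the study:
# UPFRONT previews the motion before the user starts, LIVE moves the cue
# alongside the user during execution, COMBINED does both.
from enum import Enum, auto

class GuidanceTiming(Enum):
    UPFRONT = auto()    # preview the motion, then hide the cue
    LIVE = auto()       # cue moves at the target speed during execution
    COMBINED = auto()   # preview first, then keep the cue visible

def guide_position(mode: GuidanceTiming, t: float, preview_duration: float,
                   target_speed: float, seam_length: float) -> float | None:
    """Position of the visual cue along the seam (cm) at time t (s).

    Returns None when no cue should be shown. Assumes the preview replays
    the full seam once at the target speed before execution starts.
    """
    if mode is GuidanceTiming.LIVE:
        return min(t * target_speed, seam_length)      # cue tracks target speed
    if t < preview_duration:                           # UPFRONT / COMBINED preview
        return min(t * target_speed, seam_length)
    if mode is GuidanceTiming.COMBINED:                # keep the cue after preview
        return min((t - preview_duration) * target_speed, seam_length)
    return None                                        # UPFRONT: weld from memory
```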

Substitute buttons: Exploring tactile perception of physical buttons for use as haptic proxies

Buttons are everywhere and are among the most common interaction elements in both physical and digital interfaces. While virtual buttons offer versatility, enhancing them with realistic haptic feedback is challenging. Achieving this requires a comprehensive understanding of the tactile perception of physical buttons and its transferability to virtual counterparts. This research investigates tactile perception of button attributes such as shape, size, and roundness, and its potential generalization across diverse button types. In our study, participants interacted with each of the 36 buttons in our search space and indicated which one they thought they were touching. The findings were used to establish six substitute buttons capable of effectively emulating the tactile experience of a wide range of buttons. In a second study, these substitute buttons were validated against virtual buttons in VR, highlighting their potential as haptic proxies for applications such as encountered-type haptics.
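
One plausible way to derive such substitute buttons from perception data (a hedged sketch; the paper's actual analysis may well differ) is to cluster buttons by how often participants confused them and pick one representative per cluster:

```python
# Hedged sketch: derive substitute buttons from a perceptual confusion
# matrix by hierarchical clustering; illustrative only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def substitute_buttons(confusion: np.ndarray, n_substitutes: int = 6) -> list[int]:
    """confusion[i, j] = fraction of trials where button i was judged to be j.

    Buttons that are often confused are perceptually close; we cluster on
    that closeness and return one representative (medoid) per cluster.
    """
    sim = (confusion + confusion.T) / 2            # symmetrize judgments
    dist = 1.0 - sim / sim.max()                   # similarity -> distance
    np.fill_diagonal(dist, 0.0)
    tree = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(tree, t=n_substitutes, criterion="maxclust")
    reps = []
    for c in range(1, n_substitutes + 1):
        members = np.where(labels == c)[0]
        within = dist[np.ix_(members, members)].sum(axis=1)
        reps.append(int(members[np.argmin(within)]))  # medoid of cluster
    return reps
```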

Opportunities and challenges of model multiplicity in interactive software systems

The proliferation of artificial intelligence (AI) in interactive systems has led to significant challenges, not only in model integration but also in end-user-related aspects such as over- and under-trust. This paper explores how multiple AI models with the same performance and behavior but different internal workings (a phenomenon called model multiplicity) affect system integration and user interaction. We discuss the implications of model multiplicity for transparency, trust, and operational effectiveness in interactive software systems.
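
A toy example of what model multiplicity means in practice (invented models and data, purely illustrative): two models that make identical decisions on every input while arriving at them through different internal mechanisms, which is exactly what complicates transparency and trust:

```python
# Toy illustration of model multiplicity: identical input/output behavior,
# different internal workings (invented for illustration).
def model_tree(x: float) -> int:
    # shallow decision "tree": an explicit threshold rule
    return 1 if x >= 0.5 else 0

def model_linear(x: float) -> int:
    # linear score plus sign test: a different mechanism, same decisions
    return int(2.0 * x - 1.0 >= 0.0)

inputs = [0.0, 0.25, 0.49, 0.5, 0.51, 0.75, 1.0]
assert all(model_tree(x) == model_linear(x) for x in inputs)
# Identical behavior, yet the explanations each model affords (rules vs.
# coefficients) differ, affecting how users calibrate their trust.
```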

Exploring alternative text input modalities in virtual reality: A comparative study

Text input in Virtual Reality (VR) is crucial for various applications, including communication, search, and productivity. We compare different keyboard designs for text entry in VR, taking advantage of the flexibility and tracking options available in a 3D environment. To assess the differences between the input modalities and the spatial keyboard layouts independently of users' prior experience with specific layouts, the Dvorak keyboard layout was used. Four settings were included in the comparison: (a) a floating keyboard with finger pointing input, (b) a keyboard attached to the back of the hand with finger pointing input, (c) a floating keyboard with eye tracking and finger pinch input, and (d) a keyboard laid out over a rolling shape with finger pointing input. Keyboards (b), (c), and (d) can move in 3D space, while keyboard (a) is fixed. Designs (a) and (d) showed similar typing efficiency; however, users reported higher perceived usability and lower physical demand for design (d). Users reported higher physical demand, effort, and annoyance for design (b). Design (c) had lower physical demand but higher mental demand and effort, as well as the highest error rate.
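
Comparisons like this are typically scored with standard text-entry metrics; the sketch below computes words per minute and a minimum-string-distance error rate (assumed standard metrics, not taken from the paper):

```python
# Standard text-entry metrics: WPM and MSD (Levenshtein) error rate.
def wpm(transcribed: str, seconds: float) -> float:
    """Words per minute, using the convention that 5 characters = 1 word."""
    return ((len(transcribed) - 1) / seconds) * 60 / 5

def msd_error_rate(presented: str, transcribed: str) -> float:
    """Edit distance between the stimulus and what was actually typed."""
    m, n = len(presented), len(transcribed)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = presented[i - 1] != transcribed[j - 1]
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n] / max(m, n)

print(wpm("the quick brown fox", 12.0))             # 18.0 WPM
print(msd_error_rate("hello world", "helo world"))  # ~0.09
```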

Evaluation of AR pattern guidance methods for a surface cleaning task

We investigate the efficacy of augmented reality (AR) in enhancing a cleanroom cleaning task by implementing various pattern guidance designs. Cleanroom cleaning is an example of a surface coverage task that is hard to execute well: the pattern must be followed correctly and the entire surface must be covered. We developed an AR guidance system for the cleaning procedure and evaluated four distinct pattern guidance methods: (1) breadcrumbs, (2) examples, (3) middle lines, and (4) outlines. We also varied the scale of the instructions: guidance was shown either across the entire surface at once or one step at a time, as needed. To measure the performance, accuracy, and user satisfaction associated with each guidance method, we conducted a large-scale (n=864) between-subjects study. Our findings indicate that single-step instructions were more intuitive and efficient than full instructions, especially for the breadcrumbs. We also discuss the implications of our results for the development of AR applications for surface coverage and pattern optimization, providing a multitude of observations related to instruction behaviors.
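
To make the breadcrumbs and the full versus single-step distinction concrete, here is a hedged sketch (illustrative names and parameters, not the study's implementation) that generates breadcrumb waypoints along a boustrophedon coverage path:

```python
# Illustrative breadcrumb generation for a surface coverage task:
# waypoints along a boustrophedon ("lawnmower") path over the surface.
def lawnmower_breadcrumbs(width: float, height: float,
                          lane_spacing: float, crumb_spacing: float):
    """Yield (x, y) breadcrumb positions covering a width x height surface."""
    y, left_to_right = 0.0, True
    while y <= height:
        xs = [i * crumb_spacing for i in range(round(width / crumb_spacing) + 1)]
        for x in (xs if left_to_right else reversed(xs)):
            yield (x, y)
        y += lane_spacing                  # move to the next cleaning lane
        left_to_right = not left_to_right  # alternate sweep direction

crumbs = list(lawnmower_breadcrumbs(width=2.0, height=1.0,
                                    lane_spacing=0.25, crumb_spacing=0.1))
# Full-surface guidance would render every crumb at once; single-step
# guidance shows only the next unvisited crumb and advances as the
# cleaning tool passes over it.
```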

Direct feedforward techniques for the ViRgilites system

In this poster we propose an implementation of direct feedforward for the ViRgilites system. The project defines two alternatives to the current implementation, which only shows indirectly (through icons, target object images, and text) how to perform an interaction in the simulated environment. The first representation is a single-avatar mode, where the user sees a virtual avatar performing an action in the same environment as the user; the second is a multiple-avatar mode, where the user can choose to compare two interactions and see the avatar representations side by side in dedicated panels. We report on the initial ideas and proofs of concept, and we envision further modifications and a future evaluation of the final outcome.

Anthropomorphic user interfaces: Past, present and future of anthropomorphic aspects for sustainable digital interface design

Interactions with computing systems and conversational services such as ChatGPT have become an inherent part of our daily lives. It is surprising that user interfaces, the gateways through which we communicate with interactive intelligent systems, are still predominantly devoid of hedonic aspects. There is little attempt to make communication through user interfaces intentionally more like communication with humans. Anthropomorphic user interfaces can transform interactions with intelligent software into more pleasant experiences by integrating human-like attributes. They expose human-like attributes that enable people to perceive, connect with, and interact with the interfaces as social actors. This integration of human-like aspects not only enhances user experience but also holds the potential to make interfaces more sustainable: because they rely on familiar human interaction patterns, they can reduce the learning curve and increase user adoption rates. However, there is little consensus on how to build these anthropomorphic user interfaces. We conducted an extensive literature review of existing anthropomorphic user interfaces for software systems (past) in order to map and connect existing definitions and interpretations in an overarching taxonomy (present). The taxonomy is used to organize and structure examples of anthropomorphic user interfaces into an accessible collection. The taxonomy and an accompanying web tool provide designers with a reference framework for analyzing and dissecting existing anthropomorphic user interfaces, and for designing new ones (future).

AntHand: Interaction techniques for precise telerobotic control using scaled objects in virtual environments

This paper introduces AntHand, a set of interaction techniques for enhancing precision and adaptability in telerobotics through the use of scaled objects in virtual environments. AntHand operates in three phases: up-scaling interaction, for detailed control through a magnified virtual model; constraining interaction, which locks movement dimensions for accuracy; and post-editing, which allows manipulation-trace optimization and noise reduction. We showcase AntHand in a surgery-related use case that demands high accuracy and precise manipulation. AntHand demonstrates how collaboration between humans and robots can improve precise control of robot actions in telerobotic operations while maintaining the familiar use of traditional tools rather than relying on specialized controllers.
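
A minimal sketch of how the three phases could fit together (assumed structure with invented names and parameters, not AntHand's actual code): scale hand displacements down from the magnified model, zero out locked axes, and smooth the recorded trace afterwards:

```python
# Hypothetical pipeline for the three AntHand phases, for illustration only.
import numpy as np

SCALE = 10.0                            # magnified virtual model: 10x (illustrative)
AXIS_LOCK = np.array([1.0, 1.0, 0.0])   # constraining: freeze motion along z

def to_robot_frame(virtual_displacements: np.ndarray) -> np.ndarray:
    """Map hand displacements in the magnified model to robot-space moves."""
    scaled = virtual_displacements / SCALE   # up-scaling: 10 cm of hand -> 1 cm
    return scaled * AXIS_LOCK                # constraining: zero the locked axis

def post_edit(trace: np.ndarray, window: int = 5) -> np.ndarray:
    """Post-editing: moving-average filter to reduce tremor noise in the trace."""
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(trace[:, k], kernel, mode="same")
                            for k in range(trace.shape[1])])

# Usage: record virtual hand motion, map it down, then denoise before replay.
rng = np.random.default_rng(0)
virtual = np.cumsum(rng.normal(scale=0.5, size=(100, 3)), axis=0)  # fake hand path
robot_trace = post_edit(to_robot_frame(virtual))
```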