Posts tagged: XR

Will astronauts fumble? Preparing for unpredictable floating tools with encountered haptics and virtual reality

Astronauts routinely train for spacewalks while on Earth. These spacewalks, known as extravehicular activities (EVAs), are typically rehearsed in neutral-buoyancy pools or VR environments. However, neither environment captures the chaotic micro-dynamics of a tethered tool in microgravity. We designed and developed ZeroTraining: an encountered-type haptic training rig (ZeroArm) paired with a VR simulation (ZeroPGT) that recreates the physical behavior of a tethered floating object in space. Integrating virtual and physical interactions supports dexterity training and improves transferability to real situations. We demonstrate feasibility using low-cost components and validate the design in a formative study with ten participants.
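To make the tether dynamics concrete, here is a minimal sketch, in no way the actual ZeroPGT implementation, of the behavior such a simulation has to reproduce: a tool drifts freely in microgravity until its tether goes taut, at which point a spring-damper force pulls it back. All parameter values and names are illustrative assumptions.

```python
# Minimal sketch of a tethered floating tool in microgravity: free drift
# until the tether reaches its slack length, then a spring-damper pull.
# All parameters are assumed for illustration.
import numpy as np

ANCHOR = np.zeros(3)      # tether attachment point on the suit (m)
TETHER_LENGTH = 0.6       # slack length of the tether (m), assumed
STIFFNESS = 80.0          # tether spring constant (N/m), assumed
DAMPING = 2.0             # tether damping (N*s/m), assumed
MASS = 1.2                # tool mass (kg), assumed
DT = 1.0 / 500.0          # physics step (s)

pos = np.array([0.3, 0.0, 0.0])   # tool position (m)
vel = np.array([0.0, 0.5, 0.0])   # tool shoved sideways (m/s)

for step in range(5000):
    offset = pos - ANCHOR
    dist = np.linalg.norm(offset)
    force = np.zeros(3)
    if dist > TETHER_LENGTH:             # tether taut: pull back along it
        direction = offset / dist
        stretch = dist - TETHER_LENGTH
        radial_speed = np.dot(vel, direction)
        force = -(STIFFNESS * stretch + DAMPING * radial_speed) * direction
    vel += force / MASS * DT             # no gravity term: microgravity
    pos += vel * DT
```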

Read more →

ViRgilites: Multilevel feedforward for multimodal interaction in VR

Navigating the interaction landscape of Virtual Reality (VR) and Augmented Reality (AR) is complex due to the plethora of available input hardware and interaction modalities, compounded by spatially diverse visual interfaces. This complexity increases the likelihood of user errors and necessitates frequent backtracking. To address this, we introduce ViRgilites, a virtual guidance framework that delivers multi-level feedforward information covering the available interaction techniques and the future possibilities for interacting with virtual objects, anticipating the effects of each interaction and how they fit the user's overall goal. ViRgilites is engineered to facilitate task execution, empowering users to make informed decisions about how to act and which alternative courses of action are available. This paper presents the architecture and functionality of ViRgilites and demonstrates its efficacy through a formative user study.
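As an illustration of what per-object, multi-level feedforward data could look like, the sketch below models cues covering available techniques, their effects, and goal fit. The field names and structure are our assumptions, not the ViRgilites API.

```python
# Illustrative sketch of a multi-level feedforward record a framework
# like ViRgilites could attach to a virtual object. All names are
# assumptions for illustration, not the paper's data model.
from dataclasses import dataclass, field

@dataclass
class FeedforwardCue:
    technique: str            # e.g. "ray-cast select", "hand grab"
    effect: str               # what the interaction will do to the object
    supports_goal: bool       # whether the effect advances the user's goal

@dataclass
class ObjectFeedforward:
    object_id: str
    cues: list[FeedforwardCue] = field(default_factory=list)

    def ranked_for_goal(self):
        # Surface goal-relevant techniques first, so guidance can be
        # rendered from coarse (what is possible) to fine (what helps).
        return sorted(self.cues, key=lambda c: not c.supports_goal)

door = ObjectFeedforward("door_01", [
    FeedforwardCue("ray-cast select", "highlights the door", False),
    FeedforwardCue("hand grab", "opens the door", True),
])
for cue in door.ranked_for_goal():
    print(cue.technique, "->", cue.effect)
```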

Read more →

The art of timing: Effects of AR guidance timing on speed control

Augmented Reality (AR) holds significant potential to support users in executing manual tasks. For effective support, however, we need to understand how showing movement instructions in AR affects how well people can follow those movements in real life. In this paper, we examine the degree to which users can synchronize the speed of their movements with speed cues presented in an AR environment. Specifically, we investigate the effects of timing in AR visual guidance. We assess performance using a highly realistic Mixed Reality (MR) welding simulation; welding is a task that requires very precise timing and control over hand and arm motion. Our results show that upfront visual guidance (shown before manual task execution) alone often fails to convey the intended speeds, especially at higher target speeds. Live guidance (shown during manual task execution) yields more accurate speeds but typically induces a larger overshoot at the start. Optimal outcomes occur when visual guidance appears upfront and continues during the activity for users to follow.
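One plausible way to quantify how well users match a target speed, an assumption on our part rather than the paper's exact metric, is to estimate instantaneous speed from tracker samples and report the mean deviation together with the initial overshoot:

```python
# Hedged sketch of a speed-matching metric: instantaneous speed from
# successive tracker samples, mean absolute error against the target,
# and peak overshoot. Not the study's actual analysis code.
import numpy as np

def speed_profile(positions, timestamps):
    """Instantaneous speed (m/s) from tracked 3D positions."""
    deltas = np.diff(positions, axis=0)
    dts = np.diff(timestamps)
    return np.linalg.norm(deltas, axis=1) / dts

def speed_error(positions, timestamps, target_speed):
    speeds = speed_profile(positions, timestamps)
    return {
        "mean_abs_error": float(np.mean(np.abs(speeds - target_speed))),
        "peak_overshoot": float(max(speeds.max() - target_speed, 0.0)),
    }

# Toy trace: a hand accelerating past, then settling at, 0.05 m/s.
t = np.linspace(0, 4, 200)
x = np.cumsum(np.where(t < 1, 0.08, 0.05) * np.gradient(t))
pos = np.stack([x, np.zeros_like(x), np.zeros_like(x)], axis=1)
print(speed_error(pos, t, target_speed=0.05))
```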

Read more →

Substitute buttons: Exploring tactile perception of physical buttons for use as haptic proxies

Buttons are everywhere and are among the most common interaction elements in both physical and digital interfaces. While virtual buttons offer versatility, enhancing them with realistic haptic feedback is challenging. Achieving this requires a comprehensive understanding of the tactile perception of physical buttons and its transferability to virtual counterparts. This research investigates tactile perception of button attributes such as shape, size, and roundness, and their potential generalization across diverse button types. In our study, participants interacted with each of the 36 buttons in our search space and indicated which one they thought they were touching. The findings were used to establish six substitute buttons capable of effectively emulating the tactile experience of a wide range of buttons. In a second study, these substitute buttons were validated against virtual buttons in VR, highlighting their potential as haptic proxies for applications such as encountered-type haptics.
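As a concrete illustration, one hypothetical factorial design that yields a 36-button search space over shape, size, and roundness (the actual factor levels are not given in the abstract) is 3 shapes x 4 sizes x 3 edge radii:

```python
# Hypothetical 36-button search space; the specific levels below are
# our assumption, chosen only to show how 36 = 3 x 4 x 3 could arise.
from itertools import product

shapes = ["circle", "square", "rectangle"]
sizes_mm = [8, 12, 16, 20]
edge_radii_mm = [0, 1, 3]

search_space = [
    {"shape": s, "size_mm": d, "edge_radius_mm": r}
    for s, d, r in product(shapes, sizes_mm, edge_radii_mm)
]
assert len(search_space) == 36
```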

Read more →

Exploring alternative text input modalities in virtual reality: A comparative study

Text input in Virtual Reality (VR) is crucial for various applications, including communication, search, and productivity. We compare different keyboard designs for text entry in VR, taking advantage of the flexibility and tracking options available in a 3D environment. To assess the differences between the input modalities and the spatial keyboard layouts independently of users' prior experience with specific layouts, we used the Dvorak keyboard layout. Four settings were compared: (a) a floating keyboard with finger-pointing input, (b) a keyboard attached to the back of the hand with finger-pointing input, (c) a floating keyboard with eye tracking and finger-pinch input, and (d) a keyboard laid out over a rolling shape with finger-pointing input. Keyboards (b), (c), and (d) can move in 3D space, while keyboard (a) is fixed. Designs (a) and (d) showed similar typing efficiency, but users reported higher perceived usability and lower physical demand for design (d). Users reported higher physical demand, effort, and annoyance for design (b). Design (c) had lower physical demand but higher mental demand and effort, and the highest error rate.
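For context, text-entry comparisons like this one are typically scored with words per minute and an error rate; the sketch below shows generic versions of these standard metrics, not the paper's analysis code.

```python
# Generic text-entry metrics: words per minute (5 chars = 1 word) and a
# character error rate based on Levenshtein distance. A standard-metric
# sketch, not the study's evaluation pipeline.
def wpm(transcribed: str, seconds: float) -> float:
    # Convention: one "word" equals five characters.
    return (len(transcribed) / 5.0) / (seconds / 60.0)

def char_error_rate(presented: str, transcribed: str) -> float:
    # Levenshtein distance normalized by the longer string.
    m, n = len(presented), len(transcribed)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = presented[i - 1] != transcribed[j - 1]
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[m][n] / max(m, n)

print(wpm("the quick brown fox", 12.0))            # ~19 WPM
print(char_error_rate("hello world", "helo world"))
```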

Read more →

Evaluation of AR pattern guidance methods for a surface cleaning task

We investigate the efficacy of augmented reality (AR) in enhancing a cleanroom cleaning task by implementing various pattern guidance designs. Cleanroom cleaning is a surface-coverage task that is hard to execute correctly: the prescribed pattern must be followed and the entire surface must be covered. We developed an AR guidance system for the cleaning procedure and evaluated four distinct pattern guidance methods: (1) breadcrumbs, (2) examples, (3) middle lines, and (4) outlines. We also varied the scale of the instructions: information is either shown for the entire surface at once or revealed one step at a time as needed. To measure the performance, accuracy, and user satisfaction associated with each guidance method, we conducted a large-scale (n=864) between-subjects study. Our findings indicate that single-step instructions were more intuitive and efficient than full-surface instructions, especially for the breadcrumbs. We also discuss the implications of our results for the development of AR applications for surface coverage and pattern optimization, providing a range of observations on instruction behavior.
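To illustrate the breadcrumbs idea, here is one plausible way, our assumption rather than the study's implementation, to generate waypoints for an S-shaped coverage pattern over a rectangular surface:

```python
# Sketch of breadcrumb waypoint generation for a boustrophedon (S-shaped)
# coverage pattern; lane and crumb spacing are illustrative assumptions.
def breadcrumbs(width, height, lane_spacing, crumb_spacing):
    points, eps = [], 1e-9
    y, direction = 0.0, 1
    while y <= height + eps:
        x, end = (0.0, width) if direction > 0 else (width, 0.0)
        while (x <= end + eps) if direction > 0 else (x >= end - eps):
            points.append((round(x, 3), round(y, 3)))
            x += direction * crumb_spacing
        y += lane_spacing
        direction *= -1            # reverse travel on the next lane
    return points

# 1.0 m x 0.5 m surface, 10 cm lanes, a crumb every 5 cm.
path = breadcrumbs(1.0, 0.5, lane_spacing=0.10, crumb_spacing=0.05)
print(len(path), path[:4])
```

A single-step variant would reveal these crumbs one lane (or one crumb) at a time rather than rendering the whole list at once, which is the distinction the study's scale manipulation examines.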

Read more →

Direct feedforward techniques for the ViRgilites system

In this poster we propose an implementation of direct feedforward for the ViRgilites system. The project defines two alternatives to the current implementation, which conveys only indirectly (through icons, target-object images, and text) how to perform an interaction in the simulated environment. The first representation is a single-avatar mode, in which the user sees a virtual avatar performing an action in the same environment as the user; the second is a multiple-avatar mode, in which the user can compare two interactions by viewing the avatar representations side by side in dedicated panels. We report on the initial ideas and proofs of concept, and we envision further refinements and a future evaluation of the final outcome.
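A minimal sketch of the two modes as they might appear in code; all names are hypothetical, since the poster describes design ideas rather than an API:

```python
# Hypothetical encoding of the two direct feedforward modes: one avatar
# demonstrating in situ, or two candidate interactions side by side.
from enum import Enum

class FeedforwardMode(Enum):
    SINGLE_AVATAR = "one avatar acts in the user's environment"
    MULTI_AVATAR = "two avatars compared in side-by-side panels"

def panels_for(mode: FeedforwardMode, candidates: list[str]) -> list[str]:
    if mode is FeedforwardMode.SINGLE_AVATAR:
        return candidates[:1]      # demonstrate only the chosen action
    return candidates[:2]          # show two interactions to compare

print(panels_for(FeedforwardMode.MULTI_AVATAR, ["grab", "ray select"]))
```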

Read more →

AntHand: Interaction techniques for precise telerobotic control using scaled objects in virtual environments

This paper introduces AntHand, a set of interaction techniques for enhancing precision and adaptability in telerobotics through the use of scaled objects in virtual environments. AntHand operates in three phases: up-scaling interaction, for detailed control through a magnified virtual model; constraining interaction, which locks movement dimensions for accuracy; and post-editing, which allows manipulation-trace optimization and noise reduction. We showcase the application of AntHand in a surgery-related use case demanding high accuracy and precise manipulation. AntHand demonstrates how collaboration between humans and robots can improve precise control of robot actions in telerobotic operations while maintaining the familiar use of traditional tools, rather than relying on specialized controllers.
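Read as simple transforms, the three phases could look like the sketch below. This is our minimal interpretation of the abstract, not the authors' implementation, and every parameter is assumed.

```python
# Sketch of the three AntHand phases as plain transforms (assumptions):
# (1) up-scaling: hand motion on a magnified model is scaled down to the
#     robot workspace, (2) constraining: locked axes are zeroed out,
# (3) post-editing: the recorded trace is smoothed to reduce noise.
import numpy as np

SCALE = 10.0                      # virtual model magnification, assumed
AXIS_MASK = np.array([1, 1, 0])   # e.g. lock the z axis during this step

def to_robot(hand_delta):
    """Phases 1 + 2: down-scale hand motion and drop locked axes."""
    return (np.asarray(hand_delta) / SCALE) * AXIS_MASK

def smooth_trace(trace, window=5):
    """Phase 3: moving-average noise reduction over a recorded trace."""
    kernel = np.ones(window) / window
    return np.stack(
        [np.convolve(trace[:, i], kernel, mode="same") for i in range(3)],
        axis=1,
    )

hand_moves = np.random.normal(0, 0.01, size=(100, 3))  # noisy 1 cm jitters
robot_moves = np.array([to_robot(d) for d in hand_moves])
print(smooth_trace(np.cumsum(robot_moves, axis=0))[:3])
```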

Read more →

A VR prototype for one-dimensional movement visualizations for robotic arms

To enable effective communication between users and autonomous robots, it is crucial to have a shared understanding of goals and actions. This is made possible through an intelligible interface that communicates relevant information. Such intelligibility enhances user comprehension, enabling users to anticipate the robot's actions and respond appropriately. However, because robots can perform a wide variety of actions and communication resources, such as the number of available "pixels", are limited, visualizations must be carefully designed. To tackle this challenge, we have developed a visual design framework, along with a Virtual Reality implementation in Unity to prototype and evaluate it. Within this framework, we introduce two techniques for visualizing the movement of a robotic arm, laying a foundation for subsequent development and user testing.
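As one speculative reading of a one-dimensional visualization under a limited "pixel" budget, entirely our assumption rather than the paper's design, a planned movement could be compressed into a strip where position encodes direction and brightness encodes urgency:

```python
# Hypothetical 1D strip visualization: the lit pixel's position encodes
# the signed displacement of the planned motion, its brightness encodes
# how soon the motion starts. All ranges are assumed for illustration.
import numpy as np

def strip_visualization(displacement, time_to_start, n_pixels=16,
                        max_disp=0.5, max_wait=5.0):
    pixels = np.zeros(n_pixels)
    # Map signed displacement (-max_disp..+max_disp metres) to an index.
    frac = np.clip((displacement + max_disp) / (2 * max_disp), 0, 1)
    idx = int(round(frac * (n_pixels - 1)))
    # Imminent motion is bright; distant motion is dim.
    pixels[idx] = 1.0 - np.clip(time_to_start / max_wait, 0, 1)
    return pixels

print(strip_visualization(displacement=0.2, time_to_start=1.0))
```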

Read more →

AR guidance design for line tracing speed control

In many jobs, workers execute precise line tracing tasks: welding, spray painting, or chiseling, for example. Training and support for such tasks can be provided using VR and AR. However, to enable workers to achieve the required precision in movement and timing, the effect of visual guidance on continuous movement needs to be explored. In VR environments, we want to ensure people are trained so that the obtained skill transfers to a real-world context, whereas in AR we want to ensure an ongoing task can be completed successfully when visual guidance is added. To simulate these various contexts, we employ a VR environment to investigate the effectiveness of different visualizations for motion-based guidance in a line tracing task. We tested five visualizations: faster and slower arrows on the pen, the same arrows on the line, a dynamic graph on the pen or on the line, and a ghost object to follow. Each visualization was tested with the same set of five lines at different target speeds (2 cm/s to 10 cm/s in steps of 2 cm/s), with a training line at 5 cm/s. Our results show that the ghost object on the line is the most efficient visualization for helping users achieve a specific speed; users also perceived it as the most engaging and easiest to use. These findings have significant implications for the development of AR-based guidance systems, specifically for speed control, across diverse domains such as industrial applications, training, and entertainment.
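To illustrate the best-performing visualization, the sketch below advances a ghost object along a polyline at the target speed so a user can match it. Names and structure are illustrative, not the study's code.

```python
# Sketch of a "ghost object" guide: advance a marker along a polyline at
# a constant target speed (e.g. 0.05 m/s = 5 cm/s) for the user to follow.
import numpy as np

class GhostGuide:
    def __init__(self, line_points, target_speed):
        self.points = np.asarray(line_points, dtype=float)
        segs = np.diff(self.points, axis=0)
        self.cum_len = np.concatenate(
            [[0.0], np.cumsum(np.linalg.norm(segs, axis=1))])
        self.speed = target_speed
        self.travelled = 0.0

    def update(self, dt):
        """Advance the ghost along the polyline; return its position."""
        self.travelled = min(self.travelled + self.speed * dt,
                             self.cum_len[-1])
        i = int(np.searchsorted(self.cum_len, self.travelled,
                                side="right") - 1)
        i = min(i, len(self.points) - 2)
        seg_len = self.cum_len[i + 1] - self.cum_len[i]
        t = (self.travelled - self.cum_len[i]) / seg_len
        return (1 - t) * self.points[i] + t * self.points[i + 1]

ghost = GhostGuide([(0, 0, 0), (0.3, 0, 0), (0.3, 0.2, 0)], 0.05)
for _ in range(3):
    print(ghost.update(dt=1.0))          # ghost position each second
```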

Read more →
