Posts tagged: UI Engineering

Will astronauts fumble? Preparing for unpredictable floating tools with encountered haptics and virtual reality

Astronauts routinely train for spacewalks on Earth. These spacewalks, known as extravehicular activities (EVAs), are typically trained in neutral‑buoyancy pools or VR environments. However, neither environment captures the chaotic micro‑dynamics of a tethered tool in microgravity. We designed and developed ZeroTraining: an encountered‑type haptic training rig (ZeroArm) paired with a VR simulation (ZeroPGT) that recreates the physical behavior of a tethered floating object in space. The integration of virtual and physical interactions supports dexterity training and improves transferability to real situations. We demonstrate feasibility using low‑cost components and validate the design in a formative study with ten participants.

Read more →

3rd workshop on engineering interactive systems embedding AI technologies

EICS 2025 foreword

AI-spectra: A visual dashboard for model multiplicity to enhance informed and transparent decision-making

We present AI-Spectra, an approach that leverages model multiplicity for interactive systems. Model multiplicity means using slightly different AI models that yield equally valid outcomes or predictions for the same task, thus relying on many simultaneous "expert advisors" that can hold different opinions. Multiple AI models that generate potentially divergent results for the same task are challenging for users to deal with: seeing several results helps users understand that AI models are not always correct and may differ, but it can also cause information overload when users are confronted with multiple results instead of one. AI-Spectra leverages model multiplicity through a visual dashboard designed to convey which AI models generate which results, while minimizing the cognitive effort needed to detect consensus among models and to see which types of models hold different opinions. For AI-Spectra we use a custom adaptation of Chernoff faces: Chernoff Bots. This visualization technique lets users quickly interpret complex, multivariate model configurations and compare predictions across multiple models. Our design builds on established Human-AI Interaction guidelines and well-known practices in information visualization. We validated our approach through a series of experiments training a wide variety of models on the MNIST dataset to perform digit recognition. Our work contributes to the growing discourse on making AI systems more transparent, trustworthy, and effective through the strategic use of multiple models.
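The core idea behind model multiplicity can be illustrated in a few lines. The sketch below is purely illustrative (the data, thresholds, and the `consensus` helper are our own inventions, not part of AI-Spectra): several equally accurate "models" can still disagree on individual inputs, and a dashboard cell would summarize the per-input vote.

```python
# Illustrative sketch of model multiplicity: equally accurate models
# that disagree on some inputs, summarized by a per-input consensus.
from collections import Counter

# Toy 1D data: feature value -> label (1 if "high", else 0).
data = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]

# Three threshold classifiers; all fit the data perfectly, yet they
# behave differently on unseen inputs.
thresholds = [0.45, 0.50, 0.55]
models = [lambda x, t=t: int(x >= t) for t in thresholds]

def accuracy(model):
    return sum(model(x) == y for x, y in data) / len(data)

assert all(accuracy(m) == 1.0 for m in models)  # equally valid models

def consensus(x):
    """Majority label and agreement fraction across models."""
    votes = Counter(m(x) for m in models)
    label, count = votes.most_common(1)[0]
    return label, count / len(models)

print(consensus(0.7))   # -> (1, 1.0): all models agree
print(consensus(0.48))  # models disagree; agreement fraction below 1.0
```

A dashboard like AI-Spectra would render such agreement data visually (e.g., as Chernoff Bots) rather than as raw tuples, but the underlying question per input is the same: which models say what, and how strong is the consensus.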

Read more →

Opportunities and challenges of model multiplicity in interactive software systems

The proliferation of artificial intelligence (AI) in interactive systems has led to significant challenges not only in model integration, but also in end-user-related aspects such as over- and under-trust. This paper explores how multiple AI models with the same performance and behavior but different internal workings (a phenomenon called model multiplicity) affect system integration and user interaction. We discuss the implications of model multiplicity for transparency, trust, and operational effectiveness in interactive software systems.

Read more →

Direct feedforward techniques for the ViRgilites system

In this poster we propose an implementation of direct feedforward for the ViRgilites system. The project defines two alternatives to the current implementation, which only shows in an indirect way (icons, target-object images, text) how to perform an interaction in the simulated environment. The first representation is a single-avatar mode, in which the user sees a virtual avatar performing an action in the same environment as the user; the second is a multiple-avatar mode, in which the user can choose to compare two interactions and see the avatar representations side by side in dedicated panels. We report on the initial ideas and proofs of concept, and we envision further modifications and a future evaluation of the final outcome.

Read more →

PACMHCI - Engineering Interactive Computing Systems, June 2023: Editorial introduction

Welcome to this issue of the Proceedings of the ACM on Human-Computer Interaction, bringing together contributions from the community on Engineering Interactive Computing Systems (EICS). The EICS track of PACM-HCI is the primary venue for research contributions at the intersection of Human-Computer Interaction (HCI) and Software Engineering. This year, over three rounds of submissions, we received 68 valid submissions for this issue (out of 90 submissions in total), from which we carefully selected 19 papers, for an acceptance rate of 27.9%. The result of this selection process is presented in this issue of the Proceedings of the ACM.

Read more →

HCI and worker well-being in the manufacturing industry

Operators' well-being is a key factor in the success of industrial production processes. Even though research has studied well-being aspects of industry, such as supporting and improving ergonomics, there is still a long way to go to achieve a sustainable and healthy work context for the manufacturing industry. We believe the Human-Computer Interaction community can contribute by conducting research on worker well-being in real-life settings. This workshop intends to offer a venue for HCI researchers who focus on worker well-being in the manufacturing industry and other industrial domains.

Read more →

TaskHerder: A wearable minimal interaction interface for mobile and long-lived task execution

Notifications have become a core component of the smartphone as our ubiquitous companion. Many of them require only minimal interaction, for which the smartwatch is a helpful companion device. However, its design and placement are influenced by its traditional ancestors. For applications where the user is constrained by a specific usage situation, or performs tasks with both hands simultaneously, interaction with the smartwatch can be cumbersome. In this paper, we propose a wearable arm strap for minimal interaction in long-lived tasks. Placed around the elbow, it sits outside the hands' proximal working space, which reduces interference. Its flexible e-ink display provides screen space for overview information at minimal energy consumption, allowing longer uptime. We designed the wearable for a professional use case, meaning that it can easily be placed over protective clothing, as its flexible round shape adjusts to various diameters. Capacitive touch sensing allows gesture input even under rough conditions, e.g., with gloves.

Read more →

Fortunettes: Feedforward about the future state of GUI widgets

Feedback is commonly used to explain what happened in an interface. "What if" questions, on the other hand, remain mostly unanswered. In this paper, we present the concept of enhanced widgets capable of visualizing their future state, which helps users understand what will happen without committing to an action. We describe two approaches to extending GUI toolkits to support widget-level feedforward, and illustrate its usefulness in a standardized interface for controlling the weather radar in commercial aircraft. In our evaluation, we found that users require fewer clicks to accomplish tasks and are more confident about their actions when feedforward information is available. These findings suggest that widget-level feedforward is highly suitable for applications the user is unfamiliar with, or when high confidence is desirable.
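The essence of widget-level feedforward can be sketched abstractly. The toy class below is our own framework-agnostic illustration (the names `ToggleWidget`, `future_state`, and `render` are invented here and are not the Fortunettes API): a widget computes what its state would become if the user acted, and can render that future state as a preview without committing.

```python
# Framework-agnostic sketch of widget-level feedforward: a widget that
# previews its future state on hover, and only commits on click.
from dataclasses import dataclass

@dataclass
class ToggleWidget:
    """A toggle that can visualize its future state before committing."""
    label: str
    on: bool = False

    def future_state(self) -> bool:
        # Feedforward: what WOULD the state be if the user clicked now?
        return not self.on

    def render(self, preview: bool = False) -> str:
        # With preview=True (e.g., on hover), show the future state
        # alongside the current one, without changing anything.
        state = "ON" if self.on else "OFF"
        if preview:
            future = "ON" if self.future_state() else "OFF"
            return f"[{self.label}: {state} -> {future}?]"
        return f"[{self.label}: {state}]"

    def click(self) -> None:
        self.on = self.future_state()  # commit the previewed transition

radar = ToggleWidget("Weather radar")
print(radar.render())               # [Weather radar: OFF]
print(radar.render(preview=True))   # [Weather radar: OFF -> ON?]
radar.click()
print(radar.render())               # [Weather radar: ON]
```

A real toolkit integration would hook `future_state` into the widget's paint cycle (one of the paper's two toolkit-extension approaches would decide where that hook lives), but the separation between previewing and committing a state transition is the key idea.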

Read more →
