Posts tagged: Intelligible UI

Every move you make: Visualizing near-future motion under delay for telerobotics

Delays in direct teleoperation decouple operator input from robot feedback. We frame this not as a unitary problem but as three facets of operator uncertainty: (1) communication (when commands take effect), (2) trajectory (how inputs map to motion), and (3) environmental (how external factors alter outcomes). We externalize each facet through a predictive visualization: Network, Path, and Envelope. In a controlled study with 24 participants (novices in telerobotics) navigating a simulated robot under a fixed 2.56s round-trip delay, we compared these visualizations against a delayed-video baseline. Path significantly shortened task time, lowered perceived cognitive load, and reduced reliance on reactive "move-and-wait" behavior. Envelope lowered cognitive load but did not significantly reduce reactive behavior or improve performance, while Network had no measurable effect. These results indicate that predictive support is effective only when trajectory uncertainty is externalized, enabling operators to move from reactive to more proactive control.
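The core idea behind a trajectory prediction such as Path can be illustrated with a short sketch: dead-reckon the robot's near-future pose from commands that were already sent but whose effect is not yet visible in the delayed video feed. The data model and 1-D kinematics below are illustrative assumptions, not the study's implementation.

```python
from dataclasses import dataclass

@dataclass
class Command:
    vx: float        # commanded velocity (m/s); hypothetical field name
    duration: float  # how long the command is applied (s)

def predict_position(last_known_x: float, pending: list[Command],
                     rtt: float = 2.56) -> float:
    """Integrate the commands still 'in flight' within the round-trip
    delay window to estimate where the robot actually is (1-D for brevity)."""
    x = last_known_x
    budget = rtt  # only commands inside the delay window are invisible to the operator
    for cmd in pending:
        dt = min(cmd.duration, budget)
        x += cmd.vx * dt
        budget -= dt
        if budget <= 0:
            break
    return x
```

Rendering this predicted pose (or the whole predicted path) on top of the delayed video is what lets the operator act proactively instead of falling back to move-and-wait.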

A visual design space for one-dimensional intelligible human-robot interaction visualizations

To enable effective communication between users and autonomous robots, it is crucial to have a shared understanding of goals and actions. This is made possible through an intelligible interface that communicates relevant information. This intelligibility enhances user comprehension, enabling users to anticipate the robot's actions and respond appropriately. However, because robots can perform a wide variety of actions and communication resources, such as the number of available "pixels", are limited, visualizations must be carefully designed. To tackle this challenge, we have developed a visual design framework and design space that can be used to create intelligible visualizations for human-robot interaction. Our framework focuses on three key components: information type, pixel layout, and robot type. We demonstrate how intelligibility can be integrated into interactions through prototype visualizations featuring a one-dimensional pixel layout, laying the groundwork for developing more detailed and understandable visualizations.

Choreobot: A reference framework and online visual dashboard for supporting the design of intelligible robotic systems

As robots are equipped with software that makes them increasingly autonomous, it becomes harder for humans to understand and control these robots. Human users should be able to understand and, to a certain extent, predict what the robot will do. The software that drives a robotic system is often very complex and hard for human users to understand, and there is only limited support for ensuring robotic systems are also intelligible. Adding intelligibility to the behavior of a robotic system improves the predictability, trust, safety, usability, and acceptance of such autonomous robotic systems. Applying intelligibility to the interface design can be challenging for developers and designers of robotic systems, as they are experts in robot programming but not necessarily experts in interaction design. We propose Choreobot, an interactive, online, and visual dashboard to use with our reference framework to help identify where and when adding intelligibility to the interface design is required, desired, or optional. The reference framework and accompanying input cards allow developers and designers of robotic systems to specify a usage scenario as a set of actions and, for each action, capture the context data that is indispensable for revealing when feedforward is required. The Choreobot interactive dashboard generates a visualization that presents this data on a timeline for the sequence of actions that make up the usage scenario. A set of heuristics and rules is included that highlights where and when feedforward is desired. Based on these insights, the developers and designers can adjust the design to improve the interaction for the human users working with the robotic system.
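The kind of heuristic classification such a dashboard applies can be sketched as follows. The context fields and the rule set here are illustrative assumptions for the sake of the example, not Choreobot's actual heuristics.

```python
def classify_feedforward(action: dict) -> str:
    """Label one action in a usage scenario, based on its context data.
    Field names ('irreversible', 'safety_critical', 'autonomous') are hypothetical."""
    if action.get("irreversible") or action.get("safety_critical"):
        return "required"   # the user must see what will happen before it happens
    if action.get("autonomous"):  # robot decides without operator input
        return "desired"
    return "optional"

# A toy usage scenario as a sequence of actions with context data:
scenario = [
    {"name": "approach part", "autonomous": True},
    {"name": "grip part", "irreversible": False},
    {"name": "weld seam", "irreversible": True, "safety_critical": True},
]
labels = [classify_feedforward(a) for a in scenario]
```

Plotting these labels along the scenario's timeline is essentially what the dashboard visualization does for the designer.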

FORTNIoT: Intelligible predictions to improve user understanding of smart home behavior

Ubiquitous environments, such as smart homes, are becoming more intelligent and autonomous. As a result, their behavior becomes harder to grasp and unintended behavior becomes more likely. Researchers have contributed tools to better understand and validate an environment's past behavior (e.g. logs, end-user debugging), and to prevent unintended behavior. There is, however, a lack of tools that help users understand the future behavior of such an environment. Information about the actions it will perform, and why it will perform them, remains concealed. In this paper, we contribute FORTNIoT, a well-defined approach that combines self-sustaining predictions (e.g. weather forecasts) and simulations of trigger-condition-action rules to deduce when these rules will trigger in the future and what state changes they will cause to connected smart home entities. We implemented a proof-of-concept of this approach, as well as a visual demonstrator that shows such predictions, including causes and effects, in an overview of a smart home's behavior. A between-subjects evaluation with 42 participants indicates that FORTNIoT predictions lead to a more accurate understanding of the future behavior, more confidence in that understanding, and more appropriate trust in what the system will (not) do. We envision a wide variety of situations where predictions about the future are beneficial to inhabitants of smart homes, such as debugging unintended behavior and managing conflicts by exception, and hope to spark a new generation of intelligible tools for ubiquitous environments.
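The combination of a self-sustaining prediction with trigger-condition-action rules can be sketched in a few lines. The data model below (tuples for forecast entries and rules) is an assumption made for this example, not FORTNIoT's implementation.

```python
def simulate(rules, forecast, state):
    """Replay a forecast through trigger-condition-action rules.

    forecast: list of (hour, sensor, value) predictions, in time order.
    rules: list of (sensor, predicate, entity, new_value) tuples.
    Returns predicted (hour, entity, new_value, cause) state changes.
    """
    changes = []
    for hour, sensor, value in forecast:
        state[sensor] = value
        for rule_sensor, predicate, entity, new_value in rules:
            if rule_sensor == sensor and predicate(value) and state.get(entity) != new_value:
                state[entity] = new_value
                changes.append((hour, entity, new_value, f"{sensor}={value}"))
    return changes

# e.g. "if the temperature drops below 18, turn on the heating"
rules = [("temperature", lambda t: t < 18, "heating", "on")]
forecast = [(14, "temperature", 21), (18, "temperature", 16)]
changes = simulate(rules, forecast, {"heating": "off"})
# → the heating is predicted to switch on at hour 18, caused by the forecast
```

Each predicted change carries its cause, which is exactly the information the visual demonstrator surfaces to inhabitants.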

Fortunettes: Feedforward about the future state of GUI widgets

Feedback is commonly used to explain what happened in an interface. "What if" questions, on the other hand, remain mostly unanswered. In this paper, we present the concept of enhanced widgets capable of visualizing their future state, which helps users to understand what will happen without committing to an action. We describe two approaches to extend GUI toolkits to support widget-level feedforward, and illustrate the usefulness of widget-level feedforward in a standardized interface to control the weather radar in commercial aircraft. In our evaluation, we found that users required fewer clicks to achieve tasks and were more confident about their actions when feedforward information was available. These findings suggest that widget-level feedforward is highly suitable for applications the user is unfamiliar with, or when high confidence is desirable.
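The widget-level idea can be reduced to a toy illustration: a widget that can answer "what will happen?" without mutating its state. This is a hypothetical sketch of the concept, not the Fortunettes toolkit extension itself.

```python
class Checkbox:
    """A toy widget that exposes its future state as feedforward."""

    def __init__(self, checked: bool = False):
        self.checked = checked

    def preview(self, action: str) -> bool:
        """Answer 'what if?' without committing to the action."""
        return (not self.checked) if action == "toggle" else self.checked

    def perform(self, action: str) -> None:
        """Commit: the future state becomes the current state."""
        self.checked = self.preview(action)

cb = Checkbox()
future = cb.preview("toggle")  # rendered as feedforward, e.g. a ghosted check mark
# cb.checked is still False here: nothing has been committed yet
```

Separating `preview` from `perform` is the design choice that lets a toolkit render the future state (e.g. as a ghosted overlay) before the user acts.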

Fortune nets for fortunettes: Formal, Petri nets-based engineering of feedforward for GUI widgets

Feedback and feedforward are two fundamental mechanisms that support users' activities while interacting with computing devices. While feedback can easily be provided by presenting information to users after an action is triggered, feedforward is much more complex, as it must provide information before an action is performed. Fortunettes is a generic mechanism providing a systematic way of designing feedforward that addresses both action and presentation problems. Including a feedforward mechanism significantly increases the complexity of the interactive application, making it harder for developers to detect and correct defects. This paper proposes the use of an existing formal notation for describing the behavior of interactive applications, and shows how to exploit that formal model to extend the behavior to offer feedforward. We use a small login example to demonstrate the process and the results.
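A compact way to see why Petri nets fit this problem: the net's marking drives the interface, and the set of currently enabled transitions is precisely the feedforward, i.e. what the user can do next. The login model below is an illustrative sketch, not the paper's formal notation.

```python
class PetriNet:
    """Minimal place/transition net with unit arc weights."""

    def __init__(self, marking, transitions):
        self.marking = dict(marking)    # place -> token count
        self.transitions = transitions  # name -> (input places, output places)

    def enabled(self):
        """Transitions whose input places all hold a token: the feedforward set."""
        return [t for t, (ins, _) in self.transitions.items()
                if all(self.marking.get(p, 0) >= 1 for p in ins)]

    def fire(self, t):
        ins, outs = self.transitions[t]
        assert t in self.enabled(), f"{t} is not enabled"
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy login example: "submit" only becomes enabled (and can be shown as
# feedforward on the button) once both fields have been filled in.
net = PetriNet(
    marking={"empty_user": 1, "empty_pass": 1},
    transitions={
        "type_user": (["empty_user"], ["user_filled"]),
        "type_pass": (["empty_pass"], ["pass_filled"]),
        "submit": (["user_filled", "pass_filled"], ["logged_in"]),
    },
)
```

Because enabledness is computed from the formal model rather than hand-coded per widget, the feedforward shown in the interface cannot drift out of sync with the application's actual behavior.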

Enhancing patient motivation through intelligibility in cardiac tele-rehabilitation

Physical exercise training and medication compliance are primary components of cardiac rehabilitation. When rehabilitating independently at home, patients often fail to comply with their prescribed medication and find it challenging to interpret exercise targets or be aware of the expected efforts. Our work aims to assist cardiac patients in understanding their condition better, promoting medication adherence, and motivating them to achieve their exercise targets in a tele-rehabilitation setting. We introduce a patient-centric intelligible visualization approach to present prescribed medication and exercise targets to patients. We assessed the efficacy of the intelligible visualizations on patients' comprehension in two lab studies, and evaluated the impact on patient motivation and health outcomes in field studies. Patients were able to adhere to medication prescriptions, manage their physical exercises, monitor their progress, and gain better self-awareness of how they achieved their rehabilitation targets. Patients confirmed that the intelligible visualizations motivated them to better achieve their targets. We observed an improvement in overall physical activity levels and health outcomes of patients.

Smart computer-aided translation environment (SCATE): highlights

We present key results of SCATE (Smart Computer-Aided Translation Environment). The project investigated algorithms, user interfaces and methods that can contribute to the development of more efficient tools for translation work.

Intellingo: An intelligible translation environment

Translation environments offer various translation aids to support professional translators. However, translation aids typically provide only limited justification for the translation suggestions they propose. In this paper we present Intellingo, a translation environment that explores intelligibility for translation aids, to enable more sensible usage of translation suggestions. We performed a comparative study between an intelligible version and a non-intelligible version of Intellingo. The results show that although adding intelligibility does not necessarily result in significant changes to the user experience, translators can better assess translation suggestions without a negative impact on their performance. Intelligibility is preferred by translators when the additional information it conveys benefits the translation process and when this information is not part of the translator's readily available knowledge.

Calculating and visualising energy expenditure to monitor physical activity in tele-rehabilitation

We have developed an approach that presents patients with an intelligible, user-friendly yet correct visualisation to check progress and verify adherence to the prescribed physical exercise program. Integrated in a comprehensive, mobile self-monitoring app, this patient-centric approach facilitates keeping patients motivated and engaged while rehabilitating remotely.
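One common way to compute such an estimate, used here purely as an assumed illustration since the abstract does not specify the paper's exact method, is the standard MET-based formula: kcal = MET × body weight (kg) × duration (h).

```python
def energy_kcal(met: float, weight_kg: float, minutes: float) -> float:
    """Estimate energy expenditure from a MET value, body weight, and duration.
    Standard MET-based formula; not necessarily the paper's exact calculation."""
    return met * weight_kg * minutes / 60.0

# e.g. 30 minutes of moderate cycling (~6 MET) for a 70 kg patient
energy_kcal(6.0, 70.0, 30.0)  # 210 kcal
```

Presenting the result against the prescribed target (e.g. kcal achieved vs. kcal prescribed per week) is what makes the visualisation both intelligible and actionable for the patient.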
