Posts tagged: HCI

HCI and worker well-being in manufacturing industry

Operators' well-being is a key factor in the success of industrial production processes. Even though research has studied well-being aspects of industry, such as the support and improvement of ergonomics, there is still a long way to go to achieve a sustainable and healthy work context for the manufacturing industry. We believe the Human-Computer Interaction community can contribute by conducting research on worker well-being in real-life settings. This workshop intends to offer a venue for HCI researchers who focus on worker well-being in the manufacturing industry and other industrial domains.

Read more →

Engineering interactive computing systems 2022: Editorial introduction

The Engineering Interactive Computing Systems (EICS) track of the Proceedings of the ACM on Human-Computer Interaction (PACM-HCI) is the primary venue for research contributions at the intersection of Human-Computer Interaction (HCI) and Software Engineering. EICS 2022 is the fourteenth edition of the EICS conference; however, our community was the first to organize a scientific gathering to foster and exchange research ideas and contributions on how to engineer the interactive aspects of a computing system effectively. In the 1970s, the Conference on Command Languages explored the era's primary technology for interacting with computing systems, namely command languages. Since then, this conference has evolved into the Engineering HCI conference, and the same community organized sibling conferences such as CADUI (Computer-Aided Design of User Interfaces), Tamodia (Tasks, Models and Diagrams) and DSV-IS (Design Specification and Verification of Interactive Systems). In 2010, these separate venues merged into a single ACM SIGCHI-sponsored conference, EICS (see Fig. 1). This conference became the primary venue for rigorous contributions, and for the dissemination of research results, on the interconnection between user interface design, software engineering and computational interaction.

Read more →

Context-aware support of dexterity skills in cross-reality environments

Within our work, we apply context-awareness to determine how AR/VR technology should adapt instructions based on the context to suit user needs. We focus on situations where the user must carry out a complex manual activity that requires additional information to be present during the activity to achieve the desired result. To this end, the emphasis is on activities that require fine-motor skills and in-depth expertise and training, for which XR is a powerful tool to support and guide users performing these tasks. The contexts we detect include user intentions, environmental conditions, and activity progressions. Our work builds on these contexts with the main focus on determining how XR should adapt for the end-user from a usability perspective. The feedback we request from ISMAR consists of input in the detection, usability, and simulation categories, together with how to balance these categories to create real-time and user-friendly systems. The next steps in our work will consider how content should adjust based on cognitive load, activity space, and environmental conditions.
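
The mapping from detected context to instruction adaptation can be pictured as a small policy function. The sketch below is purely illustrative: the context keys (`expertise`, `task_progress`, `error_detected`, `lighting`) and the adaptation outputs are assumptions for the example, not the actual signals or parameters used in our system.

```python
# Hypothetical adaptation policy; all context keys and levels here are
# illustrative assumptions, not the system's real detection outputs.

def adapt_instructions(context):
    """Map detected context to how XR instructions should be presented."""
    detail = "minimal"
    if context.get("expertise") == "novice":
        detail = "step-by-step"          # novices get fine-grained guidance

    placement = "peripheral"
    if context.get("task_progress", 0.0) < 0.5 and context.get("error_detected"):
        placement = "in-situ"            # anchor corrective guidance on the workpiece
    if context.get("lighting") == "low":
        placement = "peripheral"         # avoid occluding a poorly lit workspace

    return {"detail": detail, "placement": placement}

print(adapt_instructions({"expertise": "novice", "task_progress": 0.2,
                          "error_detected": True, "lighting": "normal"}))
# {'detail': 'step-by-step', 'placement': 'in-situ'}
```

A real system would replace these hand-written rules with the detected intentions, conditions, and progressions described above, but the shape of the decision stays the same: context in, presentation parameters out.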

Read more →

Choreobot: A reference framework and online visual dashboard for supporting the design of intelligible robotic systems

As robots are equipped with software that makes them increasingly autonomous, it becomes harder for humans to understand and control these robots. Human users should be able to understand and, to a certain extent, predict what the robot will do. The software that drives a robotic system is often very complex and hard for human users to understand, and there is only limited support for ensuring robotic systems are also intelligible. Adding intelligibility to the behavior of a robotic system improves the predictability, trust, safety, usability, and acceptance of such autonomous robotic systems. Applying intelligibility to the interface design can be challenging for developers and designers of robotic systems, as they are experts in robot programming but not necessarily in interaction design. We propose Choreobot, an interactive, online, and visual dashboard to use with our reference framework to help identify where and when adding intelligibility to the interface design is required, desired, or optional. The reference framework and accompanying input cards allow developers and designers of robotic systems to specify a usage scenario as a set of actions and, for each action, capture the context data that is indispensable for revealing when feedforward is required. The Choreobot interactive dashboard generates a visualization that presents this data on a timeline for the sequence of actions that make up the usage scenario. A set of heuristics and rules is included that highlights where and when feedforward is desired. Based on these insights, the developers and designers can adjust the design to improve the interaction for the human users working with the robotic system.
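
To make the framework's idea concrete, here is a minimal sketch of a usage scenario as a sequence of actions with context data, plus one simple heuristic that grades the need for feedforward. The field names and the heuristic itself are invented for illustration; they are not Choreobot's actual input-card schema or rule set.

```python
# Illustrative sketch only; action fields and the heuristic are assumptions,
# not Choreobot's real schema or heuristics.

scenario = [
    {"action": "pick part",  "autonomy": "manual",     "nearby_human": True},
    {"action": "move arm",   "autonomy": "autonomous", "nearby_human": True},
    {"action": "weld joint", "autonomy": "autonomous", "nearby_human": False},
]

def feedforward_need(step):
    """Toy heuristic: autonomous actions near a human require feedforward,
    autonomous actions alone make it desired, manual actions make it optional."""
    if step["autonomy"] == "autonomous" and step["nearby_human"]:
        return "required"
    if step["autonomy"] == "autonomous":
        return "desired"
    return "optional"

# A timeline like the one the dashboard visualizes: one entry per action.
for step in scenario:
    print(f"{step['action']}: feedforward {feedforward_need(step)}")
```

The dashboard's contribution is exactly this kind of per-action annotation, rendered visually on a timeline so designers can spot where extra feedforward is needed before users meet the robot.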

Read more →

Model-based engineering of feedforward usability function for GUI widgets

Feedback and feedforward are two fundamental mechanisms that support users' activities while interacting with computing devices. While feedback is relatively easy to provide, as it presents information to the user after an action is triggered, feedforward is much more complex: it must provide information before an action is performed. For interactive applications where making a mistake has more impact than just reduced user comfort, correct feedforward is an essential step toward correctly informed, and thus safe, usage. Our approach, Fortunettes, is a generic mechanism providing a systematic way of designing feedforward that addresses both action and presentation problems. Including a feedforward mechanism significantly increases the complexity of the interactive application, making it harder for developers to detect and correct defects. We build upon an existing formal notation based on Petri nets for describing the behavior of interactive applications and present an approach that allows for adding correct and consistent feedforward.
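
The feedback/feedforward distinction can be illustrated with a toy state machine: feedback reports the state after an action, while feedforward answers "what would happen if I pressed this?" before the action is taken. This sketch uses a plain transition table as a stand-in; it is not the Petri-net formalization Fortunettes actually builds on, and the states and actions are invented.

```python
# Toy state machine standing in for a widget's behavior; the real approach
# uses a Petri-net-based notation, and these states/actions are invented.

transitions = {
    ("logged_out", "login"):  "logged_in",
    ("logged_in",  "logout"): "logged_out",
    ("logged_in",  "delete"): "account_removed",
}

def feedforward(state, action):
    """Preview the state an action WOULD lead to, without performing it."""
    return transitions.get((state, action), state)  # unavailable actions change nothing

def perform(state, action):
    """Actually take the action; the resulting state is what feedback reports."""
    return transitions.get((state, action), state)

# The GUI can annotate each button with its consequence up front:
print(feedforward("logged_in", "delete"))   # account_removed
print(feedforward("logged_out", "delete"))  # logged_out (action has no effect here)
```

Consistency between the two functions is the crux: a feedforward preview that disagrees with what `perform` later does is exactly the kind of defect a formal model helps rule out.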

Read more →

An interactive design space for wearable displays

The promise of on-body interactions has led to widespread development of wearable displays. They manifest themselves in highly variable shapes and forms, and are realized using technologies with fundamentally different properties. Through an extensive survey of the field of wearable displays, we characterize existing systems based on key qualities of displays and wearables, such as location on the body, intended viewers or audience, and the information density of rendered content. We present the results of this analysis in an open, web-based interactive design space that supports exploration and refinement along various parameters. The design space, which currently encapsulates 129 cases of wearable displays, aims to inform researchers and practitioners about existing solutions and designs, and to enable the identification of gaps and opportunities for novel research and applications. Further, it seeks to provide them with a thinking tool to deliberate on how the displayed content should be adapted based on key design parameters. Through this work, we aim to facilitate progress in wearable displays, informed by existing solutions, by providing researchers with an interactive platform for discovery and reflection.
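
The kind of parameter-based exploration the dashboard offers amounts to filtering a tagged corpus. The sketch below mirrors three of the survey's dimensions (body location, intended viewers, information density), but the entries are invented examples, not records from the 129 surveyed systems.

```python
# Toy design-space filter; the dimensions mirror the survey's, but these
# three entries are invented and are NOT part of the 129 surveyed cases.

design_space = [
    {"name": "wrist display",  "location": "wrist", "viewers": "self",   "density": "high"},
    {"name": "LED jacket",     "location": "torso", "viewers": "public", "density": "low"},
    {"name": "shoe indicator", "location": "foot",  "viewers": "self",   "density": "low"},
]

def explore(space, **criteria):
    """Return the systems matching every given parameter value."""
    return [s for s in space if all(s.get(k) == v for k, v in criteria.items())]

print([s["name"] for s in explore(design_space, viewers="self")])
# ['wrist display', 'shoe indicator']
print([s["name"] for s in explore(design_space, viewers="self", density="low")])
# ['shoe indicator']
```

Gaps become visible the same way: a combination of parameters that returns an empty list marks unexplored territory in the design space.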

Read more →

Rataplan: Resilient automation of user interface actions with multi-modal proxies

We present Rataplan, a robust and resilient pixel-based approach for linking multi-modal proxies to automated sequences of actions in graphical user interfaces (GUIs). With Rataplan, users demonstrate a sequence of actions and answer human-readable follow-up questions to clarify their desire for automation. After demonstrating a sequence, the user can link a proxy input control to the action, which can then be used as a shortcut for automating the sequence. Alternatively, output proxies use a notification model in which content is pushed when it becomes available. As an example use case, Rataplan uses keyboard shortcuts and tangible user interfaces (TUIs) as input proxies, and TUIs as output proxies. Instead of relying on available APIs, Rataplan automates GUIs using pixel-based reverse engineering. This ensures our approach can be used with all applications that offer a GUI, including web applications. We implemented a set of important strategies to support robust automation of modern interfaces that have a flat and minimal style, frequent data and state changes, and dynamic viewports.
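
The core linking idea, demonstrate a sequence once, bind a proxy to it, then replay on trigger, can be sketched in a few lines. The class and method names below are invented for illustration, and the pixel-based replay (the hard part of the real system) is stubbed out as a simple log.

```python
# Minimal sketch of proxy-to-sequence binding; names are illustrative and the
# pixel-based replay of the real system is stubbed out as an action log.

class Automator:
    def __init__(self):
        self.sequences = {}   # name -> demonstrated action sequence
        self.bindings = {}    # proxy event -> sequence name
        self.executed = []    # stand-in for pixel-level replay on screen

    def demonstrate(self, name, actions):
        """Record a sequence of GUI actions demonstrated by the user."""
        self.sequences[name] = list(actions)

    def link_proxy(self, proxy, name):
        """Bind an input proxy (shortcut, tangible control) to a sequence."""
        self.bindings[proxy] = name

    def trigger(self, proxy):
        """Replay the bound sequence; Rataplan would do this via pixels."""
        for action in self.sequences[self.bindings[proxy]]:
            self.executed.append(action)

bot = Automator()
bot.demonstrate("export", ["click File", "click Export", "click OK"])
bot.link_proxy("Ctrl+E", "export")   # keyboard shortcut as input proxy
bot.trigger("Ctrl+E")
print(bot.executed)  # ['click File', 'click Export', 'click OK']
```

Everything interesting in the real system lives inside `trigger`: locating targets by pixels rather than APIs is what keeps the approach application-agnostic.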

Read more →

Impact of situational impairment on interaction with wearable displays

The number of wearable devices we carry keeps increasing, with smaller companion devices such as smartwatches providing quick access for simple tasks. These devices are, however, not necessarily in the user's direct sight, and during everyday activities it is unlikely, even undesirable, that the user constantly focuses on or interacts with these screens. Furthermore, interaction is often limited because our hands are occupied carrying or holding items such as bags, papers, boxes, or tools. In this paper, we evaluate how encumbrance affects, among other factors, the time it takes to perceive and react to a notification depending on the placement of the companion device. Our experimental results can assist designers in choosing the right device for the task.

Read more →

FORTNIoT: Intelligible predictions to improve user understanding of smart home behavior

Ubiquitous environments, such as smart homes, are becoming more intelligent and autonomous. As a result, their behavior becomes harder to grasp and unintended behavior becomes more likely. Researchers have contributed tools to better understand and validate an environment's past behavior (e.g. logs, end-user debugging), and to prevent unintended behavior. There is, however, a lack of tools that help users understand the future behavior of such an environment. Information about the actions it will perform, and why it will perform them, remains concealed. In this paper, we contribute FORTNIoT, a well-defined approach that combines self-sustaining predictions (e.g. weather forecasts) with simulations of trigger-condition-action rules to deduce when these rules will trigger in the future and what state changes they will cause to connected smart home entities. We implemented a proof-of-concept of this approach, as well as a visual demonstrator that shows such predictions, including causes and effects, in an overview of a smart home's behavior. A between-subjects evaluation with 42 participants indicates that FORTNIoT predictions lead to a more accurate understanding of the future behavior, more confidence in that understanding, and more appropriate trust in what the system will (not) do. We envision a wide variety of situations where predictions about the future are beneficial to inhabitants of smart homes, such as debugging unintended behavior and managing conflicts by exception, and hope to spark a new generation of intelligible tools for ubiquitous environments.
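
The prediction mechanism, driving trigger-condition-action rules with a forecast timeline instead of live events, can be sketched as a simulation loop. The rule, the weather timeline, and the data shapes below are invented for illustration; they are not the FORTNIoT implementation or its rule format.

```python
# Sketch of forecast-driven rule simulation; the rule, timeline, and data
# shapes are invented examples, not FORTNIoT's actual rule format.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    trigger: Callable[[dict], bool]   # does the predicted state satisfy the rule?
    action: Callable[[dict], dict]    # state changes the rule would cause

def simulate(rules, timeline):
    """Walk predicted future states and record which rules will fire, and when."""
    predictions = []
    state = {}
    for t, forecast in timeline:      # self-sustaining predictions (e.g. weather)
        state.update(forecast)        # fold the forecast into the simulated state
        for rule in rules:
            if rule.trigger(state):
                changes = rule.action(state)
                state.update(changes)                 # effects feed later rules
                predictions.append((t, rule.name, changes))
    return predictions

rules = [Rule("close blinds",
              trigger=lambda s: s.get("temp_out", 0) > 25,
              action=lambda s: {"blinds": "closed"})]
timeline = [(9, {"temp_out": 18}), (14, {"temp_out": 27})]
print(simulate(rules, timeline))
# [(14, 'close blinds', {'blinds': 'closed'})]
```

Each recorded tuple is a predicted future event with its cause (the rule) and effect (the state change), which is precisely the material the visual demonstrator lays out on a timeline for the inhabitant.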

Read more →

Attracktion: Field evaluation of multi-track audio as unobtrusive cues for pedestrian navigation

Listening to music while on the move is common in our headphone society. However, if we want navigation assistance from our smartphone, existing approaches either demand exclusive playback through the headphones or degrade the music listening experience. We present a field evaluation of Attracktion, a spatial audio navigation system that leverages access to the individual stems in a multi-track recording to minimize the impact on the listening experience. We compared Attracktion against current turn-by-turn navigation instructions in a field study with 22 users and found that users perceived acoustic overlays with additional navigation information to have no impact on the listening experience. In terms of path efficiency, errors, and mental workload, Attracktion is on par with spoken turn-by-turn navigation instructions, and users liked it for its serendipity.

Read more →