Posts tagged: UI Engineering

HCI and worker well-being in manufacturing industry

Operators' well-being is a key factor in the success of industrial production processes. Even though research has studied well-being aspects of industry, such as supporting and improving ergonomics, there is still a long way to go toward a sustainable and healthy work context for the manufacturing industry. We believe the Human-Computer Interaction community can contribute by conducting research on worker well-being in real-life settings. This workshop intends to offer a venue for HCI researchers who focus on worker well-being in the manufacturing industry and other industrial domains.

FortClash: Predicting and mediating unintended behavior in home automation

Smart home inhabitants can specify trigger-condition-action rules to control the home's behavior. As the number of rules and their complexity grow, however, so does the probability of issues such as inconsistencies and redundancies. These can lead to unintended behavior, including security vulnerabilities and wasted resources, which harms the inhabitants' trust in the system. Existing approaches to handling unintended behavior typically require inhabitants to define all-encompassing, permanent solutions by modifying the rules. Although this is fitting in certain situations, unforeseen situations will still occur. We argue that the user must always have the last word to avoid unwanted behaviors, without having to alter the overall behavior. With FortClash, we present an approach to predict many different types of unintended behavior, and contribute four novel mechanisms to mediate them that rely on making one-time exceptions. FortClash gives inhabitants a new tool to deal with unintended behavior in the short term that is compatible with existing long-term approaches such as editing rules.
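The core idea of a one-time exception can be sketched as follows. This is an illustrative guess at the mechanism, not FortClash's actual implementation; the names (`Rule`, `skip_once`, `fire`) are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A trigger-condition-action rule with a one-time exception flag
    (hypothetical sketch; not FortClash's actual API)."""
    name: str
    trigger: str                       # event that fires the rule
    condition: Callable[[dict], bool]  # guard over the home's state
    action: Callable[[dict], None]     # effect on the home's state
    skip_once: bool = False            # one-time exception: suppress next firing

    def fire(self, event: str, state: dict) -> bool:
        """Run the rule for an incoming event; returns True if it acted."""
        if event != self.trigger or not self.condition(state):
            return False
        if self.skip_once:             # consume the exception instead of acting
            self.skip_once = False
            return False
        self.action(state)
        return True
```

An inhabitant who wants a rule not to fire just this once would set `skip_once` instead of editing or disabling the rule, leaving its long-term behavior untouched.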

Engineering interactive computing systems 2022: Editorial introduction

The Engineering Interactive Computing Systems (EICS) track of the Proceedings of the ACM on Human-Computer Interaction (PACM-HCI) is the primary venue for research contributions at the intersection of Human-Computer Interaction (HCI) and Software Engineering. EICS 2022 is the fourteenth edition of the EICS conference; however, our community was the first to organize a scientific gathering to foster and exchange research ideas and contributions on how to engineer the interactive aspects of a computing system effectively. In the 1970s, the Conference on Command Languages explored the then-emerging primary technologies for interacting with computing systems, namely command languages. Since then, this conference has evolved into the Engineering HCI conference, and the same community organized sibling conferences such as CADUI (Computer-Aided Design of User Interfaces), Tamodia (Tasks, Models and Diagrams) and DSV-IS (Design Specification and Verification of Interactive Systems). These separate venues merged in 2010 into a single ACM SIGCHI-sponsored conference, EICS (see Fig. 1). This conference became the primary venue for rigorous contributions and dissemination of research results at the interconnection between user interface design, software engineering and computational interaction.

Choreobot: A reference framework and online visual dashboard for supporting the design of intelligible robotic systems

As robots are equipped with software that makes them increasingly autonomous, it becomes harder for humans to understand and control these robots. Human users should be able to understand and, to a certain extent, predict what the robot will do. The software that drives a robotic system is often very complex and hard for human users to understand, and there is only limited support for ensuring robotic systems are also intelligible. Adding intelligibility to the behavior of a robotic system improves the predictability, trust, safety, usability, and acceptance of such autonomous robotic systems. Applying intelligibility to the interface design can be challenging for developers and designers of robotic systems, as they are expert users in robot programming but not necessarily experts in interaction design. We propose Choreobot, an interactive, online, visual dashboard to be used with our reference framework to help identify where and when adding intelligibility to the interface design is required, desired, or optional. The reference framework and accompanying input cards allow developers and designers of robotic systems to specify a usage scenario as a set of actions and, for each action, capture the context data that is indispensable for revealing when feedforward is required. The Choreobot interactive dashboard generates a visualization that presents this data on a timeline for the sequence of actions that make up the usage scenario. A set of heuristics and rules is included that highlights where and when feedforward is desired. Based on these insights, developers and designers can adjust the design to improve the interaction for the human users working with the robotic system.
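The style of heuristic such a framework could apply can be sketched as follows. This is only an illustration of the idea, not the actual Choreobot heuristics; the action attributes (`safety_critical`, `reversible`, `novel`) are assumed for the example.

```python
def feedforward_level(action: dict) -> str:
    """Classify how strongly an action in a usage scenario calls for
    feedforward, based on assumed context-data flags on its input card."""
    if action.get("safety_critical"):
        return "required"   # the user must know the outcome before acting
    if action.get("novel") or not action.get("reversible", True):
        return "desired"    # unfamiliar or irreversible actions benefit
    return "optional"       # routine, reversible actions need little support
```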

Model-based engineering of feedforward usability function for GUI widgets

Feedback and feedforward are two fundamental mechanisms that support users' activities while interacting with computing devices. While feedback can easily be provided by presenting information to the users after an action is triggered, feedforward is much more complex, as it must provide information before an action is performed. For interactive applications where making a mistake has more impact than just reduced user comfort, correct feedforward is an essential step toward correctly informed, and thus safe, usage. Our approach, Fortunettes, is a generic mechanism providing a systematic way of designing feedforward that addresses both action and presentation problems. Including a feedforward mechanism significantly increases the complexity of the interactive application, making it harder for developers to detect and correct defects. We build upon an existing formal notation based on Petri nets for describing the behavior of interactive applications and present an approach for adding correct and consistent feedforward.
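A minimal Petri-net sketch illustrates the connection to feedforward: a transition is enabled when every input place holds a token, so an interface can report, before the user acts, which actions are currently available and what state they would lead to. This is an illustrative toy, not the paper's formal notation.

```python
def enabled_transitions(marking, transitions):
    """marking: dict place -> token count;
    transitions: dict name -> (input places, output places).
    Returns the transitions that may fire, i.e. the basis for feedforward."""
    return [name for name, (inputs, _) in transitions.items()
            if all(marking.get(p, 0) >= 1 for p in inputs)]

def fire(marking, transitions, name):
    """Return the marking after firing one enabled transition."""
    inputs, outputs = transitions[name]
    after = dict(marking)
    for p in inputs:
        after[p] -= 1                     # consume input tokens
    for p in outputs:
        after[p] = after.get(p, 0) + 1    # produce output tokens
    return after
```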

Rataplan: Resilient automation of user interface actions with multi-modal proxies

We present Rataplan, a robust and resilient pixel-based approach for linking multi-modal proxies to automated sequences of actions in graphical user interfaces (GUIs). With Rataplan, users demonstrate a sequence of actions and answer human-readable follow-up questions to clarify their desire for automation. After demonstrating a sequence, the user can link a proxy input control to it, which can then be used as a shortcut for automating the sequence. Alternatively, output proxies use a notification model in which content is pushed when it becomes available. As an example use case, Rataplan uses keyboard shortcuts and tangible user interfaces (TUIs) as input proxies, and TUIs as output proxies. Instead of relying on available APIs, Rataplan automates GUIs using pixel-based reverse engineering. This ensures our approach can be used with all applications that offer a GUI, including web applications. We implemented a set of important strategies to support robust automation of modern interfaces that have a flat and minimal style, frequent data and state changes, and dynamic viewports.
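The kernel of pixel-based automation is locating a widget by its appearance rather than through an API. The naive sketch below scans for an exact sub-image match; it is only an illustration of the idea, not Rataplan's actual strategies, which must also tolerate scaling, anti-aliasing and theme changes.

```python
def find_widget(screen, pattern):
    """Return (row, col) of the top-left exact match of pattern in screen,
    or None. Both arguments are 2D lists of pixel values."""
    ph, pw = len(pattern), len(pattern[0])
    for r in range(len(screen) - ph + 1):
        for c in range(len(screen[0]) - pw + 1):
            if all(screen[r + i][c + j] == pattern[i][j]
                   for i in range(ph) for j in range(pw)):
                return (r, c)
    return None
```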

Individualising graphical layouts with predictive visual search models

In domains where users are exposed to large variations in visuo-spatial features among designs, they often spend excess time searching for common elements (features) on an interface. This article contributes individualised predictive models of visual search, and a computational approach to restructure graphical layouts for an individual user such that features on a new, unvisited interface can be found more quickly. It explores four technical principles inspired by the human visual system (HVS) to predict expected positions of features and create individualised layout templates: (I) the interface with the highest frequency is chosen as the template; (II) the interface with the highest predicted recall probability (serial position curve) is chosen as the template; (III) the most probable locations for features across interfaces are chosen (visual statistical learning) to generate the template; (IV) based on a generative cognitive model, the most likely visual search locations for features are chosen (visual sampling modelling) to generate the template. Given a history of previously seen interfaces, we restructure the spatial layout of a new (unseen) interface with the goal of making its features more easily findable. The four HVS principles are implemented in Familiariser, a web browser that automatically restructures webpage layouts based on the visual history of the user. An evaluation of Familiariser (using visual statistical learning) with users provides first evidence that our approach reduces visual search time by over 10%.
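Principle (I), the simplest of the four, can be sketched as follows: the most frequently seen layout becomes the template, and features of a new layout are moved to the positions the template predicts. The representation (layouts as tuples of `(feature, position)` pairs) and function names are assumptions for this illustration, not Familiariser's implementation.

```python
from collections import Counter

def most_frequent_template(history):
    """Principle (I): return the layout seen most often in the visual history.
    A layout is a hashable tuple of (feature, position) pairs."""
    layout, _ = Counter(history).most_common(1)[0]
    return layout

def restructure(new_layout, template):
    """Move each feature of a new layout to the position the template
    predicts; features unknown to the template keep their own position."""
    expected = dict(template)   # feature -> expected position
    return tuple(sorted(
        (feature, expected.get(feature, pos))
        for feature, pos in new_layout))
```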

FORTNIoT: Intelligible predictions to improve user understanding of smart home behavior

Ubiquitous environments, such as smart homes, are becoming more intelligent and autonomous. As a result, their behavior becomes harder to grasp and unintended behavior becomes more likely. Researchers have contributed tools to better understand and validate an environment's past behavior (e.g. logs, end-user debugging), and to prevent unintended behavior. There is, however, a lack of tools that help users understand the future behavior of such an environment. Information about the actions it will perform, and why it will perform them, remains concealed. In this paper, we contribute FORTNIoT, a well-defined approach that combines self-sustaining predictions (e.g. weather forecasts) and simulations of trigger-condition-action rules to deduce when these rules will trigger in the future and what state changes they will cause to connected smart home entities. We implemented a proof-of-concept of this approach, as well as a visual demonstrator that shows such predictions, including causes and effects, in an overview of a smart home's behavior. A between-subject evaluation with 42 participants indicates that FORTNIoT predictions lead to a more accurate understanding of the future behavior, more confidence in that understanding, and more appropriate trust in what the system will (not) do. We envision a wide variety of situations where predictions about the future are beneficial to inhabitants of smart homes, such as debugging unintended behavior and managing conflicts by exception, and hope to spark a new generation of intelligible tools for ubiquitous environments.
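The combination of self-sustaining predictions and rule simulation can be sketched as a forward pass over forecast states. This is a minimal illustration of the idea, not the FORTNIoT implementation; the forecast and rule representations are assumptions made for the example.

```python
def predict_rule_firings(forecast, rules):
    """forecast: chronological list of (time, state-dict) pairs, e.g. from a
    weather forecast; rules: list of dicts with 'name' and 'condition'
    (state -> bool). Returns the predicted (time, rule name) firings, firing
    a rule only when its condition becomes true (a rising edge)."""
    firings = []
    previously_true = {r["name"]: False for r in rules}
    for time, state in forecast:
        for rule in rules:
            holds = rule["condition"](state)
            if holds and not previously_true[rule["name"]]:
                firings.append((time, rule["name"]))  # rule predicted to trigger
            previously_true[rule["name"]] = holds
    return firings
```

For instance, a "turn the heating on below 18 degrees" rule combined with a temperature forecast yields the future moment at which the heating will switch on, which is exactly the kind of cause-and-effect information the visual demonstrator presents.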

TaskHerder: A wearable minimal interaction interface for mobile and long-lived task execution

Notifications have become a core component of the smartphone as our ubiquitous companion. Many of them require only minimal interaction, for which the smartwatch is a helpful companion device. However, its design and placement are influenced by its traditional ancestors. For applications where the user is constrained by a specific usage situation, or performs tasks with both hands simultaneously, interacting with the smartwatch can be cumbersome. In this paper, we propose a wearable arm strap for minimal interaction in long-lived tasks. Placed around the elbow, it sits outside the hands' proximal working space, which reduces interference. Its flexible e-ink display provides screen space for overview information at minimal energy consumption, for longer uptime. We designed the wearable for a professional use case, meaning that it can easily be placed over protective clothing, as its flexible round shape adjusts to various diameters. Capacitive touch sensing allows gesture input even under rough conditions, e.g., with gloves.

Improving the translation environment for professional translators

When using computer-aided translation systems in a typical, professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view and from a purely technological one. This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human-computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments we performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project.