Posts tagged: HCI

FORTNIoT: Intelligible predictions to improve user understanding of smart home behavior

Ubiquitous environments, such as smart homes, are becoming more intelligent and autonomous. As a result, their behavior becomes harder to grasp and unintended behavior becomes more likely. Researchers have contributed tools to better understand and validate an environment's past behavior (e.g., logs, end-user debugging) and to prevent unintended behavior. There is, however, a lack of tools that help users understand the future behavior of such an environment. Information about the actions it will perform, and why it will perform them, remains concealed. In this paper, we contribute FORTNIoT, a well-defined approach that combines self-sustaining predictions (e.g., weather forecasts) with simulations of trigger-condition-action rules to deduce when these rules will trigger in the future and what state changes they will cause to connected smart home entities. We implemented a proof-of-concept of this approach, as well as a visual demonstrator that shows such predictions, including causes and effects, in an overview of a smart home's behavior. A between-subjects evaluation with 42 participants indicates that FORTNIoT predictions lead to a more accurate understanding of future behavior, more confidence in that understanding, and more appropriate trust in what the system will (not) do. We envision a wide variety of situations where predictions about the future are beneficial to inhabitants of smart homes, such as debugging unintended behavior and managing conflicts by exception, and hope to spark a new generation of intelligible tools for ubiquitous environments.
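The core idea, simulating trigger-condition-action rules against self-sustaining predictions to derive future state changes, can be illustrated with a minimal sketch. The rule format, names, and forecast structure below are hypothetical illustrations, not the paper's actual implementation:

```python
# Minimal sketch: step through a predicted timeline of external events
# and apply trigger-condition-action rules, recording predicted changes.
# Rule/forecast structures are invented for illustration.

def simulate(rules, forecast, initial_state):
    """Replay a forecast of external events over the current state and
    record which rules would fire, when, and with what effect."""
    state = dict(initial_state)
    predictions = []
    for time, event in forecast:            # e.g. ("18:00", {"sun": "down"})
        state.update(event)                 # apply self-sustaining prediction
        for rule in rules:
            if rule["trigger"](state) and rule["condition"](state):
                change = rule["action"](state)
                state.update(change)
                predictions.append((time, rule["name"], change))
    return predictions

rules = [{
    "name": "evening lights",
    "trigger": lambda s: s.get("sun") == "down",
    "condition": lambda s: s.get("someone_home", False),
    "action": lambda s: {"lights": "on"},
}]
forecast = [("12:00", {"sun": "up"}), ("18:00", {"sun": "down"})]
print(simulate(rules, forecast, {"someone_home": True, "lights": "off"}))
# → [('18:00', 'evening lights', {'lights': 'on'})]
```

The returned (time, rule, effect) triples are exactly the "causes and effects" a visual demonstrator could lay out on a timeline.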

Attracktion: Field evaluation of multi-track audio as unobtrusive cues for pedestrian navigation

Listening to music while on the move is common in our headphone society. However, if we want navigation assistance from our smartphone, existing approaches either demand exclusive playback through the headphones or degrade the listening experience of the music. We present a field evaluation of Attracktion, a spatial audio navigation system that leverages access to individual stems in a multi-track recording to minimize the impact on the listening experience. We compared Attracktion against current turn-by-turn navigation instructions in a field study with 22 users and found that users perceived acoustic overlays with additional navigation information to have no impact on the listening experience. In terms of path efficiency, errors, and mental workload, Attracktion is on par with spoken turn-by-turn navigation instructions, and users liked it for its aspect of serendipity.

TaskHerder: A wearable minimal interaction interface for mobile and long-lived task execution

Notifications have become a core component of the smartphone as our ubiquitous companion. Many of them require only minimal interaction, for which the smartwatch is a helpful companion device. However, its design and placement are influenced by its traditional ancestors. For applications where the user is constrained by a specific usage situation, or performs tasks with both hands simultaneously, interaction with the smartwatch can be cumbersome. In this paper, we propose a wearable arm strap for minimal interaction in long-lived tasks. Placed around the elbow, it sits outside the hands' proximal working space, which reduces interference. Its flexible e-ink display provides screen space for overview information at minimal energy consumption, allowing longer uptime. We designed the wearable for a professional use case, meaning that it can easily be placed over protective clothing, as its flexible round shape adjusts to various diameters. Capacitive touch sensing allows gesture input even under rough conditions, e.g., with gloves.

JigFab: Computational fabrication of constraints to facilitate woodworking with power tools

We present JigFab, an integrated end-to-end system that supports casual makers in designing and fabricating constructions with power tools. Starting from a digital version of the construction, JigFab achieves this by generating various types of constraints that configure and physically aid the movement of a power tool. Constraints are generated for every operation and are custom to the workpiece. Constraints are laser cut and assembled together with predefined parts to reduce waste. JigFab's constraints are used according to an interactive step-by-step manual. JigFab internalizes all the required domain knowledge for designing and building intricate structures, consisting of various types of finger joints, tenon and mortise joints, grooves, and dowels. Building such structures is normally reserved for artisans or automated with advanced CNC machinery.

Improving the translation environment for professional translators

When using computer-aided translation systems in a typical, professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view and from a purely technological one. This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human-computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments we performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project.
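Fuzzy matching, the first topic listed, means retrieving translation memory entries whose source segment is merely similar to the sentence being translated. A baseline can be sketched with plain string similarity; the SCATE project's improved matching goes beyond this, and the memory format here is invented for illustration:

```python
# Baseline fuzzy matching against a translation memory: return the
# (source, target) pair whose source is most similar to the input,
# if it clears a match threshold. Uses simple character similarity,
# NOT the project's improved matching.
from difflib import SequenceMatcher

def fuzzy_match(sentence, memory, threshold=0.7):
    """memory is a list of (source_segment, target_segment) pairs."""
    best, best_score = None, threshold
    for source, target in memory:
        score = SequenceMatcher(None, sentence.lower(), source.lower()).ratio()
        if score >= best_score:
            best, best_score = (source, target), score
    return best

memory = [("The cat sleeps.", "De kat slaapt."),
          ("The dog barks.", "De hond blaft.")]
print(fuzzy_match("The cat sleeps", memory))
```

A translator would see the retrieved target segment as a suggestion to post-edit rather than translating from scratch.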

Fortunettes: Feedforward about the future state of GUI widgets

Feedback is commonly used to explain what happened in an interface. "What if" questions, on the other hand, remain mostly unanswered. In this paper, we present the concept of enhanced widgets capable of visualizing their future state, which helps users understand what will happen without committing to an action. We describe two approaches to extend GUI toolkits to support widget-level feedforward, and illustrate its usefulness in a standardized interface to control the weather radar in commercial aircraft. In our evaluation, we found that users require fewer clicks to complete tasks and are more confident about their actions when feedforward information is available. These findings suggest that widget-level feedforward is highly suitable for applications the user is unfamiliar with, or when high confidence is desirable.
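The essence of widget-level feedforward is that a widget can report the state it would have after an action, without side effects. A toy sketch (class and method names are illustrative, not the Fortunettes API):

```python
# Sketch of a widget with a feedforward path (preview, no side effects)
# alongside the usual feedback path (click, which commits the change).

class FeedforwardToggle:
    def __init__(self, enabled=False):
        self.enabled = enabled

    def preview(self):
        """Feedforward: the state the widget WOULD have after a click.
        Pure computation; nothing changes."""
        return not self.enabled

    def click(self):
        """Feedback path: actually commit the state change."""
        self.enabled = self.preview()
        return self.enabled

radar = FeedforwardToggle()
print("would become:", radar.preview())  # shown to the user before acting
print("after click:", radar.click())     # the committed state
```

A toolkit extension would render `preview()` visually (for example as a ghosted future state) whenever the user hovers over the widget.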

Fortune nets for fortunettes: Formal, petri nets-based engineering of feedforward for GUI widgets

Feedback and feedforward are two fundamental mechanisms that support users' activities while interacting with computing devices. While feedback can easily be provided by presenting information to users after an action is triggered, feedforward is much more complex, as it must provide information before an action is performed. Fortunettes is a generic mechanism providing a systematic way of designing feedforward that addresses both action and presentation problems. Including a feedforward mechanism significantly increases the complexity of an interactive application, making it harder for developers to detect and correct defects. This paper proposes using an existing formal notation to describe the behavior of interactive applications and shows how that formal model can be exploited to extend the behavior with feedforward. We use a small login example to demonstrate the process and the results.
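The connection between Petri nets and feedforward can be sketched concretely: firing a transition on a copy of the current marking predicts the future state without changing the actual one. The toy login net below is an invented illustration, not the paper's model:

```python
# Tiny place/transition net. Feedforward = fire a transition on a COPY
# of the marking, so the predicted future state can be shown to the
# user while the current state stays untouched.

def enabled(marking, transition):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Return the marking after firing, leaving the input marking intact."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

login = {"in": {"logged_out": 1, "valid_credentials": 1},
         "out": {"logged_in": 1}}
marking = {"logged_out": 1, "valid_credentials": 1}

if enabled(marking, login):
    future = fire(marking, login)  # feedforward: predicted future state
    print(future)
```

Because `fire` never mutates the current marking, the same model answers both "what can I do now?" (`enabled`) and "what would happen if I did it?" (`fire` on a copy).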

Enhancing patient motivation through intelligibility in cardiac tele-rehabilitation

Physical exercise training and medication compliance are primary components of cardiac rehabilitation. When rehabilitating independently at home, patients often fail to comply with their prescribed medication and find it challenging to interpret exercise targets or be aware of the expected efforts. Our work aims to help cardiac patients understand their condition better, promoting medication adherence and motivating them to achieve their exercise targets in a tele-rehabilitation setting. We introduce a patient-centric, intelligible visualization approach to present prescribed medication and exercise targets to patients. We assessed the efficacy of intelligible visualizations on patients' comprehension in two lab studies, and evaluated the impact on patient motivation and health outcomes in field studies. Patients adhered to their medication prescriptions, managed their physical exercises, monitored their progress, and gained better self-awareness of how they achieved their rehabilitation targets. Patients confirmed that the intelligible visualizations motivated them to achieve their targets. We observed an improvement in patients' overall physical activity levels and health outcomes.

Towards tool-support for robot-assisted product creation in fab labs

Collaborative robot-assisted production has great potential for high-variety, low-volume production lines. These types of lines are common both in personal fabrication settings and in several kinds of flexible production environments. Moreover, many assembly tasks are hard for a single user or a single robot to complete alone, and benefit greatly from fluent collaboration between the two. However, programming such systems is cumbersome, given the wide variation of tasks and the complexity of instructing a robot how it should move and operate in collaboration with a human user. In this paper, we explore the case of collaborative assembly for personal fabrication. Starting from a CAD model of the envisioned product, our software analyzes how the product can be composed from a set of standardized pieces and suggests a series of collaborative assembly steps to complete it. The proposed tool removes the need for the end user to perform additional programming of the robot. We use a low-cost robot setup that is accessible and usable for typical personal fabrication activities in Fab Labs and Makerspaces. Participants in a first experimental study reported that our approach leads to a fluent collaborative assembly process. Based on this preliminary evaluation, we present next steps and potential implications.

SmartObjects: Sixth workshop on interacting with smart objects