Posts tagged: HCI

Answering why and why not questions in ubiquitous computing

Users often find it hard to understand and control the behavior of Ubicomp systems. This gives rise to usability problems and can lead to a loss of user trust, which may hamper the acceptance of these systems. We are extending an existing Ubicomp framework to allow users to pose why and why not questions about its behavior. Initial experiments suggest that this facility is easy to use and could help users understand how Ubicomp systems work.
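As a rough illustration of the idea (the class and rule names below are hypothetical, not taken from the framework itself), a rule-based controller can record which rules fired for a given context; "why" then points at the rules that produced an action, and "why not" at the rules that could have produced it but did not fire:

```python
# Hypothetical sketch: a rule-based Ubicomp controller that logs rule
# firings so it can answer "why" and "why not" questions about an action.
class Rule:
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition  # predicate over the context dict
        self.action = action        # action label produced when it fires

class ExplainableController:
    def __init__(self, rules):
        self.rules = rules
        self.trace = []  # (rule name, fired?, context snapshot)

    def step(self, context):
        actions = []
        for rule in self.rules:
            fired = rule.condition(context)
            self.trace.append((rule.name, fired, dict(context)))
            if fired:
                actions.append(rule.action)
        return actions

    def _action_of(self, rule_name):
        return next(r.action for r in self.rules if r.name == rule_name)

    def why(self, action):
        """Rules that fired and produced this action."""
        return [name for (name, fired, _) in self.trace
                if fired and self._action_of(name) == action]

    def why_not(self, action):
        """Rules that could have produced the action but did not fire."""
        return [name for (name, fired, _) in self.trace
                if not fired and self._action_of(name) == action]

rules = [
    Rule("lights-on-motion", lambda c: c["motion"], "lights_on"),
    Rule("lights-on-dark", lambda c: c["lux"] < 10, "lights_on"),
]
ctl = ExplainableController(rules)
ctl.step({"motion": False, "lux": 50})
print(ctl.why_not("lights_on"))  # both rules could have turned the lights on
```

The trace is the key design point: explanations are answered from what the system actually evaluated, not from a separate model of its behavior.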

Read more →

Reasoning over spatial relations for context-aware distributed user interfaces

Considering the number of devices a user owns nowadays, distributed user interfaces are becoming increasingly important. This requires reasoning techniques that can predict future values in the spatial model, because these devices can be expected to change their location during use. Our primary attention is devoted to the problem of redistributing user interfaces in a constantly changing environment: a change in spatial topology, i.e. in the way the devices are located relative to one another, should be detected in time and interpreted properly, resulting in a redistribution of the user interface the devices are sharing.
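A minimal sketch of the underlying idea (the interaction range, linear motion model, and all names here are assumptions for illustration, not the paper's model): represent the spatial topology as the set of device pairs within interaction range, predict near-future positions from current velocities, and trigger redistribution when the predicted topology differs from the current one:

```python
# Illustrative sketch: detect a predicted change in spatial topology and
# use it as the trigger for redistributing a shared user interface.
import math

RANGE = 2.0  # assumed interaction range in metres

def topology(positions):
    """Set of unordered device pairs that are within interaction range."""
    ids = sorted(positions)
    return {
        frozenset((a, b))
        for i, a in enumerate(ids) for b in ids[i + 1:]
        if math.dist(positions[a], positions[b]) <= RANGE
    }

def predict(position, velocity, dt):
    """Linear prediction of a device's position dt seconds ahead."""
    return tuple(p + v * dt for p, v in zip(position, velocity))

now = {"phone": (0.0, 0.0), "tablet": (1.0, 0.0), "tv": (5.0, 0.0)}
vel = {"phone": (0.0, 0.0), "tablet": (2.0, 0.0), "tv": (0.0, 0.0)}
future = {d: predict(now[d], vel[d], dt=2.0) for d in now}

if topology(future) != topology(now):
    print("topology change predicted: redistribute the interface")
```

Predicting the topology rather than merely observing it is what lets the system react "on time": redistribution can start before the tablet actually leaves the phone's range.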

Read more →

MuiCSer: A multi-disciplinary user-centered software engineering process to increase the overall user experience

In this paper we present an incremental and user-centered process for creating suitable and usable user interfaces. Validation is done throughout the process by prototyping; the prototypes evolve from low-fidelity versions to the final user interface. Applications developed with this process are more likely to correspond to users' expectations. Furthermore, by combining traditional software engineering with a user-centered approach, the process takes into account the need for sustainable evolution often required by modern software configurations. We think our approach is beneficial in its scope, since it considers evolving software beyond the deployment stage and supports a multi-disciplinary team.

Read more →

Gummy for multi-platform user interface designs: Shape me, multiply me, fix me, use me

Designers still often create a specific user interface for every target platform they wish to support, which is time-consuming and error-prone. The need for a multi-platform user interface design approach that designers feel comfortable with increases as people expect their applications and data to go where they go. We present Gummy, a multi-platform graphical user interface builder that can generate an initial design for a new platform by adapting and combining features of existing user interfaces created for the same application. Our approach makes it easy to target new platforms and keep all user interfaces consistent without requiring designers to considerably change their work practice.

Read more →

Ghosts in the interface: Meta-user interface visualizations as guides for multi-touch interaction

Multi-touch large display interfaces are becoming increasingly popular in public spaces. These spaces impose specific requirements on the accessibility of the user interfaces: most users are not familiar with the interface, and expectations with regard to user experience are very high. Multi-touch interaction beyond the traditional move-rotate-scale interactions is often unknown to the public and can become exceedingly complex. We introduce TouchGhosts: visual guides that are embedded in the multi-touch user interface and that demonstrate the available interactions to the user. TouchGhosts are activated while using an interface, providing guidance on the fly and within the context-of-use. Our approach allows us to define reconfigurable strategies that decide how and when a TouchGhost should be activated and which particular visualization is presented to the user.
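To make the notion of a reconfigurable activation strategy concrete, here is a hedged sketch (all names, thresholds, and the strategy interface are hypothetical, not from the paper): each strategy inspects the current interaction state and either returns a visualization to show or defers, and the set of strategies can be swapped at run time:

```python
# Hypothetical sketch of reconfigurable TouchGhost activation strategies.
def idle_strategy(state):
    # Demonstrate the basic gestures after a period of inactivity.
    if state["idle_seconds"] > 10:
        return "demo-move-rotate-scale"
    return None

def struggle_strategy(state):
    # If repeated touches on an object have no effect, demonstrate the
    # advanced gesture that the object actually supports.
    if state["failed_touches"] >= 3:
        return "demo-" + state["object_gesture"]
    return None

class GhostManager:
    def __init__(self, strategies):
        self.strategies = strategies  # reconfigurable, in priority order

    def activate(self, state):
        """Return the visualization chosen by the first matching strategy."""
        for strategy in self.strategies:
            visualization = strategy(state)
            if visualization is not None:
                return visualization
        return None

manager = GhostManager([struggle_strategy, idle_strategy])
state = {"idle_seconds": 0, "failed_touches": 3,
         "object_gesture": "two-finger-flick"}
print(manager.activate(state))  # -> demo-two-finger-flick
```

Keeping the strategies outside the widgets is what makes the behavior reconfigurable: a deployment can reorder, replace, or add strategies without touching the interface itself.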

Read more →