Posts tagged: UI Engineering

Paddle: Highly deformable mobile devices with physical controls

Touch screens have been widely adopted in mobile devices. Although touch input is flexible in that it can be used for a wide variety of applications, it does not provide physical affordances, encourage eyes-free use, or utilize the full dexterity of our hands, due to the lack of physical controls. Physical controls, on the other hand, are often tailored to the task at hand, making them less flexible and therefore less suitable for general-purpose use in mobile settings. In this paper, we show how to combine the flexibility of touch screens with the physical qualities that real-world controls provide in a mobile context. We do so using a deformable device that can be transformed into various special-purpose physical controls. We present Paddle, a highly deformable device that can be transformed into different shapes. Paddle bridges the gap between the differently sized mobile devices available today, such as phones and tablets. Additionally, Paddle demonstrates a novel opportunity for deformable devices to transform into differently shaped physical controls that provide clear physical affordances for the task at hand. Physical controls have the advantage of exploiting people's innate abilities for manipulating physical objects in the real world. We designed and implemented a prototype system whose engineering principles are based on the design of Rubik's Magic, a folding plate puzzle. Additionally, we explore the interaction techniques enabled by this concept and conduct an in-depth study to evaluate our transformable physical controls. Our findings show that these physical controls provide several benefits over the touch-based interaction techniques commonly used on mobile devices.

A domain-specific textual language for rapid prototyping of multimodal interactive systems

There are currently toolkits that allow the specification of executable multimodal human-machine interaction models. Some provide domain-specific visual languages with which a broad range of interactions can be modeled, but at the expense of bulky diagrams. Others instead interpret concise specifications written in existing textual languages, even though their non-specialized notations prevent the productivity gains achievable with domain-specific ones. We propose a domain-specific textual language and its supporting toolkit; together they overcome the shortcomings of the existing approaches while retaining their strengths. The language provides notations and constructs specially tailored to compactly declare the event patterns raised during the execution of multimodal commands. The toolkit detects the occurrence of these patterns and invokes the functionality of a back-end system in response.

Timisto: A technique to extract usage sequences from storyboards

Storyboarding is a technique often used in the conception of new interactive systems. A storyboard illustrates graphically how a system is used by its users and what a typical context of use looks like. Although the informal notation of a storyboard stimulates creativity and makes it easy for everyone to understand, it is difficult to integrate into later steps of the engineering process. We present an approach, "Time In Storyboards" (Timisto), to extract valuable information on how the various interactions with the system are positioned in time with respect to each other. Timisto does not interfere with the creative process of storyboarding, but maximizes the structured information about time that can be deduced from a storyboard.

Finding a needle in a haystack: An interactive video archive explorer for professional video searchers

Professional video searchers typically have to find particular video fragments in a vast archive containing many hours of video data. Without the right archive exploration tools, this is a difficult and time-consuming task that entails hours of video skimming. We propose the video archive explorer, an exploration tool that provides visual representations of automatically detected concepts to facilitate individual and collaborative video search tasks. The video archive explorer was developed using a user-centred methodology, which makes the tool more likely to fit the needs of its end users. A qualitative evaluation with professional video searchers shows that combining automatic video indexing, interactive visualisations and user-centred design can result in increased usability, user satisfaction and productivity.

Empathic television experiences with second screens

The television remains a central hub in the home environment. We believe that, to maintain this central role, future TVs will need to incorporate empathic features, delivered by interacting with other personal devices in the home and with services in the cloud. This position paper illustrates the common as well as the individual views of several Belgian partners working on a shared scenario in the ITEA2 'Empathic Products' project.

Crossing the bridge over Norman's Gulf of Execution: Revealing feedforward's true identity

Feedback and affordances are two of the most well-known principles in interaction design. Unfortunately, the related and equally important notion of feedforward has not been given as much consideration. Nevertheless, feedforward is a powerful design principle for bridging Norman's Gulf of Execution. We reframe feedforward by disambiguating it from related design principles such as feedback and perceived affordances, and identify new classes of feedforward. In addition, we present a reference framework that provides a means for designers to explore and recognize different opportunities for feedforward.