Posts tagged: UI Engineering

Plug-and-design: Embracing mobile devices as part of the design environment

With the growing number of mobile devices appearing on the consumer market, mobile user interface design is becoming increasingly important. A major issue with many existing mobile user interface design approaches is the time and effort needed to deploy a user interface design to the target device. To address this issue, we propose the plug-and-design tool, which relies on a continuous multi-device mouse pointer to design user interfaces directly on the mobile target device. This shortens iteration time, since designers can continuously test and validate each design action they take. Using our approach, designers empirically learn the particularities of a target device, which helps them create user interfaces for devices they are not familiar with.

Read more →

Edit, inspect and connect your surroundings: A reference framework for meta-UIs

Discovering and unlocking the full potential of complex pervasive environments is still approached in application-centric ways: a set of statically deployed applications often defines the possible interactions within the environment. However, the increasing dynamics of such environments require a more versatile and generic approach that allows the end-user to inspect, configure and control the overall behavior of the environment. A meta-UI addresses these needs by providing the end-user with an interactive view of a physical or virtual environment, which can then be observed and manipulated at runtime. The meta-UI bridges the gap between resource providers and end-users by abstracting a resource's features as executable activities that can be assembled at runtime to reach a common goal. To allow software services to integrate automatically with a pervasive computing environment, the minimal requirements of the environment's meta-UI must be identified and agreed upon. In this paper we present Meta-STUD, a goal- and service-oriented reference framework that supports the creation of meta-UIs for use in pervasive environments. The framework is validated using two independent implementations built with different technologies and with different focuses.

Read more →

ReWiRe: Designing reactive systems for pervasive environments

With traditional software development approaches, designing the interactive software that populates an ambient space is a complex and ad-hoc process. In an ambient space, important building blocks can be both physical objects within the user's reach and software objects accessible from within that space. However, combining many heterogeneous resources into a single system usually requires writing a large amount of glue code before the system is operational. Moreover, users have their own needs and preferences for interacting with various kinds of environments, which often means that the system behavior should be adapted to a specific context of use while the system is running. In this paper we present a methodology to orchestrate resources at an abstract level and thereby configure a pervasive computing environment. We use a semantic layer to model behavior and illustrate its use in an application.

Read more →

Gummy for multi-platform user interface designs: Shape me, multiply me, fix me, use me

Designers still often create a specific user interface for every target platform they wish to support, which is time-consuming and error-prone. The need for a multi-platform user interface design approach that designers feel comfortable with increases as people expect their applications and data to go where they go. We present Gummy, a multi-platform graphical user interface builder that can generate an initial design for a new platform by adapting and combining features of existing user interfaces created for the same application. Our approach makes it easy to target new platforms and keep all user interfaces consistent without requiring designers to considerably change their work practice.

Read more →

Service-interaction descriptions: Augmenting services with user interface models

Semantic service descriptions have paved the way for flexible interaction with services in a mobile computing environment: services can be automatically discovered, invoked and even composed. In contrast, the user interfaces for interacting with these services are often still designed by hand, an approach that threatens the overall flexibility of the system. To make the user interface design process scale, it should be automated as much as possible. We propose to augment service descriptions with high-level user interface models to support automatic user interface adaptation. Our method builds upon OWL-S, an ontology for Semantic Web Services, by connecting a collection of OWL-S services to a hierarchical task structure and selected presentation information. This allows end-users to interact with services on a variety of platforms.
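The abstract does not show the paper's concrete OWL-S vocabulary, but the core idea, a hierarchical task structure whose leaf tasks bind to service operations and carry presentation hints, can be sketched generically. Everything below (the `Task` class, the `FlightService.search` and `PaymentService.charge` operation names, the presentation keys) is a hypothetical illustration, not the actual model from the paper:

```python
# Generic sketch (hypothetical names; not the paper's OWL-S vocabulary):
# a hierarchical task model whose leaf tasks bind to service operations,
# with per-task presentation hints to drive UI generation.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Task:
    name: str
    service_op: Optional[str] = None      # leaf tasks invoke a service operation
    presentation: dict = field(default_factory=dict)
    subtasks: list = field(default_factory=list)

def leaf_bindings(task):
    """Collect (task name, service operation) pairs for all bound leaf tasks."""
    if not task.subtasks:
        return [(task.name, task.service_op)] if task.service_op else []
    pairs = []
    for sub in task.subtasks:
        pairs.extend(leaf_bindings(sub))
    return pairs

# A two-step trip-booking task, each step bound to a (hypothetical) service.
book_trip = Task("BookTrip", subtasks=[
    Task("SearchFlights", service_op="FlightService.search",
         presentation={"widget": "form"}),
    Task("PayBooking", service_op="PaymentService.charge",
         presentation={"widget": "wizard"}),
])

print(leaf_bindings(book_trip))
# [('SearchFlights', 'FlightService.search'), ('PayBooking', 'PaymentService.charge')]
```

A UI generator could walk this tree, rendering each leaf task according to its presentation hint and wiring the resulting widgets to the bound service operations.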

Read more →

Constraint adaptability of multi-device user interfaces

Methods that support the creation of multi-device user interfaces typically use some type of abstraction of the user interface design. To obtain the final user interface, a transformation is applied that specializes this abstraction for a particular target platform. The User Interface Markup Language (UIML) offers a way to create multi-device user interface descriptions while keeping certain aspects of a user interface consistent across platforms. We extended the UIML language with support for layout constraints: designers can create layout templates based on constraints that limit the ways a user interface can rearrange across platforms. This results in a higher degree of consistency and reusability of interface designs.
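The abstract does not give the UIML constraint syntax itself, but the underlying idea, a layout template as a set of constraints that restricts how a design may rearrange on each platform, can be sketched in plain Python. All names here (`max_columns`, `stack_when_narrow`, the platform widths) are hypothetical illustrations, not the paper's extension:

```python
# Illustrative sketch (not the paper's UIML extension): layout constraints
# that limit how a user interface may rearrange across platforms.
from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    width: int  # logical pixels

def max_columns(limit):
    """Constraint: the layout may use at most `limit` columns."""
    return lambda platform, layout: len(layout["columns"]) <= limit

def stack_when_narrow(threshold):
    """Constraint: below `threshold` width, widgets must stack in one column."""
    return lambda platform, layout: (
        platform.width >= threshold or len(layout["columns"]) == 1
    )

def satisfies(platform, layout, constraints):
    """A candidate layout is valid only if every template constraint holds."""
    return all(c(platform, layout) for c in constraints)

# A layout template: at most three columns, and stack on narrow screens.
template = [max_columns(3), stack_when_narrow(480)]

phone = Platform("phone", 320)
desktop = Platform("desktop", 1280)

single = {"columns": [["ok", "cancel", "help"]]}       # one stacked column
triple = {"columns": [["ok"], ["cancel"], ["help"]]}   # three columns

print(satisfies(phone, triple, template))    # False: must stack on narrow screens
print(satisfies(phone, single, template))    # True
print(satisfies(desktop, triple, template))  # True
```

The transformation step described above would then pick, for each target platform, only arrangements that satisfy the template, which is what keeps the designs consistent across devices.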

Read more →