Designing interactive software that populates an ambient space is a complex, ad-hoc process with traditional software development approaches. In an ambient space, the important building blocks can be both physical objects within the user's reach and software objects accessible from within that space. Combining many heterogeneous resources into a single system, however, usually requires writing a large amount of glue code before the system is operational. Moreover, users have their own needs and preferences for interacting with different kinds of environments, which often means that system behavior should be adapted to the specific context of use while the system is running. In this paper we present a methodology for orchestrating resources at an abstract level and thereby configuring a pervasive computing environment. We use a semantic layer to model behavior and illustrate its use in an application.
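The abstract describes wiring heterogeneous resources together at an abstract level instead of writing per-device glue code. A minimal sketch of that idea in Python (all names and the registry/binding API are hypothetical illustrations, not the paper's actual system):

```python
# Hypothetical sketch: orchestrating heterogeneous resources at an abstract level.
# An "environment" registers resources (physical or software) under semantic tags,
# and behavior is bound once per semantic type rather than per concrete device.

class Resource:
    def __init__(self, name, semantic_type):
        self.name = name
        self.semantic_type = semantic_type
        self.handlers = []

    def on_event(self, handler):
        self.handlers.append(handler)

    def emit(self, event):
        # Notify every bound handler and collect their results.
        return [handler(event) for handler in self.handlers]

class Environment:
    def __init__(self):
        self.resources = {}

    def register(self, resource):
        self.resources.setdefault(resource.semantic_type, []).append(resource)

    def bind(self, source_type, target_type, action):
        # Connect all sources of one semantic type to all targets of another,
        # so behavior is described at the level of types, not devices.
        for src in self.resources.get(source_type, []):
            for tgt in self.resources.get(target_type, []):
                src.on_event(lambda event, t=tgt: action(t, event))

env = Environment()
sensor = Resource("door-sensor", "presence")
lamp = Resource("desk-lamp", "light")
env.register(sensor)
env.register(lamp)
env.bind("presence", "light", lambda light, event: f"{light.name}: on ({event})")
print(sensor.emit("entered"))  # every bound light reacts to the presence event
```

Swapping in a new lamp or sensor only requires another `register` call; the binding rule is untouched, which is the glue-code reduction the abstract argues for.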
Posts tagged: UI Engineering
Toward multi-disciplinary model-based (re)design of sustainable user interfaces
ReWiRe: Designing reactive systems for pervasive environments
Meta-gui-builders: Generating domain-specific interface builders for multi-device user interface creation
Gummy for multi-platform user interface designs: Shape me, multiply me, fix me, use me
Designers still often create a specific user interface for every target platform they wish to support, which is time-consuming and error-prone. The need for a multi-platform user interface design approach that designers feel comfortable with increases as people expect their applications and data to go where they go. We present Gummy, a multi-platform graphical user interface builder that can generate an initial design for a new platform by adapting and combining features of existing user interfaces created for the same application. Our approach makes it easy to target new platforms and keep all user interfaces consistent without requiring designers to considerably change their work practice.
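The abstract's core step is seeding a design for a new platform from existing designs of the same application. A toy sketch of one way that could work (the data layout and scaling rule are assumptions for illustration, not Gummy's actual algorithm):

```python
# Hypothetical sketch: derive an initial layout for a new platform by reusing
# widget geometry from an existing per-platform design, rescaled to the target
# screen width. Field names ("x", "w", "screen_w") are illustrative only.

def seed_design(designs, target_width):
    """Combine existing per-platform designs into an initial layout.

    designs: {platform: {widget: {"x": int, "w": int, "screen_w": int}}}
    """
    # Use the richest existing design as the base to adapt from.
    base_platform = max(designs, key=lambda p: len(designs[p]))
    base = designs[base_platform]
    seed = {}
    for widget, props in base.items():
        scale = target_width / props["screen_w"]
        seed[widget] = {"x": round(props["x"] * scale),
                        "w": round(props["w"] * scale)}
    return seed

desktop = {"ok":     {"x": 600, "w": 120, "screen_w": 1200},
           "cancel": {"x": 740, "w": 120, "screen_w": 1200}}
print(seed_design({"desktop": desktop}, target_width=300))
```

The seed is only a starting point; in the workflow the abstract describes, a designer would then refine it in the builder rather than start from a blank canvas.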
Eunomia: Toward a framework for multi-touch information displays in public spaces
Design by example of graphical user interfaces adapting to available screen size
Task models and diagrams for users interface design, 5th international workshop, TAMODIA 2006, Hasselt, Belgium, October 23-24, 2006. Revised papers
Service-interaction descriptions: Augmenting services with user interface models
Semantic service descriptions have paved the way for flexible interaction with services in a mobile computing environment: services can be automatically discovered, invoked, and even composed. In contrast, the user interfaces for interacting with these services are often still designed by hand, which threatens the overall flexibility of the system. To make the user interface design process scale, it should be automated as much as possible. We propose augmenting service descriptions with high-level user interface models to support automatic user interface adaptation. Our method builds upon OWL-S, an ontology for Semantic Web Services, by connecting a collection of OWL-S services to a hierarchical task structure and selected presentation information. This allows end users to interact with services on a variety of platforms.
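The abstract pairs a semantic service description with a hierarchical task structure plus presentation hints. A minimal data-model sketch of that pairing (a plain URI string stands in for a full OWL-S description; class and field names are hypothetical):

```python
# Hypothetical sketch: attach a hierarchical task model and presentation hints
# to a service description, so a UI generator can render tasks per platform.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    subtasks: list = field(default_factory=list)
    presentation: dict = field(default_factory=dict)  # e.g. preferred widget

@dataclass
class ServiceInterface:
    service_uri: str   # points at the semantic service description
    root_task: Task

    def leaf_tasks(self):
        # Leaves of the task tree are the units a UI generator would render.
        def walk(task):
            if not task.subtasks:
                yield task
            for sub in task.subtasks:
                yield from walk(sub)
        return list(walk(self.root_task))

book = Task("book meeting room", subtasks=[
    Task("pick room", presentation={"widget": "list"}),
    Task("pick time", presentation={"widget": "time-picker"}),
])
iface = ServiceInterface("http://example.org/services/booking", book)
print([t.name for t in iface.leaf_tasks()])  # ['pick room', 'pick time']
```

A platform-specific renderer could then map each leaf's presentation hint onto that platform's widget set, which is the adaptation step the abstract targets.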
Seamless interaction between multiple devices and meeting rooms
Meetings often suffer from the inability of all participants to be physically present in one room. Moreover, with current networking technologies, meeting environments can be distributed over multiple rooms. The goal of the iConnect project is to provide collaboration services that interconnect both collocated and remote users. We focus on smooth engagement by allowing participants to share arbitrary data through heterogeneous input devices and displays.