Posts tagged: UI Engineering

Profile-aware multi-device interfaces: An MPEG-21-based approach for accessible user interfaces

The wide diversity of consumer devices has led to new methodologies and techniques for making digital content available across a broad range of devices with minimal effort. In particular, the design of the interactive parts of a system has been the subject of much research, because these parts are the most visible and are critical for the usability (and thus the use) of a system. What many current approaches lack is the ability to combine these methodologies and techniques with a user-centric approach, ensuring that the preferences and requirements of a specific user are taken into account alongside the device adaptations. In this paper we analyse the applicability of MPEG-21 Part 7, Digital Item Adaptation, to adapting a user interface to user characteristics. We show how the high-level XML-based user interface description language UIML, combined with an MPEG-21-based user profile, enables designers to create accessible and personalised multi-device user interfaces. This combination yields user interfaces that can be deployed on a broad range of devices while taking user preferences into account with minimal effort, and it enhances the accessibility of digital items across platforms, since every interaction with a digital item should be supported by an appropriate user interface.
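To make the idea concrete, here is a minimal sketch of overlaying user preferences from a profile onto default UI properties before rendering. The element and attribute names are illustrative only, not the normative MPEG-21 DIA schema, and the property names are hypothetical:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified profile fragment in the spirit of MPEG-21 DIA
# usage-environment descriptions; element names are illustrative only.
PROFILE = """
<UserProfile>
  <Display colorCapable="false"/>
  <Presentation fontSize="large" highContrast="true"/>
</UserProfile>
"""

def adapt_properties(profile_xml, base_props):
    """Overlay user-specific presentation preferences onto default
    UI properties before the interface is rendered."""
    root = ET.fromstring(profile_xml)
    props = dict(base_props)
    pres = root.find("Presentation")
    if pres is not None:
        if pres.get("fontSize") == "large":
            props["font-size"] = 16  # larger point size for this user
        if pres.get("highContrast") == "true":
            props["foreground"] = "#000000"
            props["background"] = "#FFFF00"
    return props

defaults = {"font-size": 11, "foreground": "#333333", "background": "#FFFFFF"}
adapted = adapt_properties(PROFILE, defaults)
```

The same profile can then drive the adaptation of any UIML-described interface, since the presentation properties are resolved independently of the target device.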

Read more →

UIML.NET: An open UIML renderer for the .net framework

As the diversity of available computing devices increases, it becomes more difficult to adapt User Interface development to support the full range of devices. One of the difficulties is the diversity of GUI libraries: to target an alternative library or device, one is forced to redevelop the interface completely. To overcome this problem, the User Interface Markup Language (UIML) specification has been proposed as a way of gluing the interface design to different GUI libraries in different environments without further effort. In contrast with other approaches, UIML has matured and has several implementations proving its usefulness. We introduce the first UIML renderer for the .NET framework, a framework that can be accessed from different programming languages and can use different widget sets. We show that its properties, among them its reflection mechanism, are suitable for developing a reusable and portable UIML renderer. The suitability for multi-device rendering is discussed in comparison with our own multi-device UI framework, Dygimes. The focus is on how layout management can be generalised in the specification so that the GUI can adapt to different screen sizes.
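The reflection idea at the core of the renderer can be sketched in a few lines. This is a Python analogue, not the actual uiml.net code: a toy widget set stands in for a real GUI library, and a name-to-type lookup plays the role that .NET reflection plays when resolving a UIML `<part>` class against the loaded widget assembly:

```python
import xml.etree.ElementTree as ET

# Toy widget set standing in for a real GUI library.
class Button:
    def __init__(self):
        self.text = ""

class Label:
    def __init__(self):
        self.text = ""

# In uiml.net this name -> type resolution is done via .NET reflection
# against the widget assembly; here a dictionary lookup plays that role.
WIDGETS = {"Button": Button, "Label": Label}

UIML = """
<interface>
  <part class="Label" id="greeting"/>
  <part class="Button" id="ok"/>
</interface>
"""

def render(uiml_source):
    """Instantiate a widget for each <part>, keyed by its id."""
    tree = ET.fromstring(uiml_source)
    widgets = {}
    for part in tree.iter("part"):
        cls = WIDGETS[part.get("class")]  # reflection-style lookup
        widgets[part.get("id")] = cls()
    return widgets

ui = render(UIML)
```

Because only the lookup table is widget-set specific, the same renderer loop can target another GUI library by swapping the table, which is what makes the renderer reusable and portable.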

Read more →

The mapping problem back and forth: Customizing dynamic models while preserving consistency

Model-Based User Interface Development uses a multitude of models which are related in one way or another. Usually there is some kind of process that starts with the design of the abstract models and progresses gradually towards the more concrete models, resulting in the final user interface when the design process is complete. Progressing from one model to another involves transforming the model and mapping pieces of information contained in the source model onto the target model. Most existing development environments propose solutions that apply these steps (semi-)automatically in one direction only (from abstract to concrete models). Manual intervention that changes the target model (e.g. the dialog model) to the designer's preferences is not reflected in the source model (e.g. the task model), so this step can introduce inconsistencies between the different models. In this paper, we identify some rules that can be applied manually to the model after a transformation has taken place. The effects on the target and source models are shown, together with how the different models involved in the transformation can be updated accordingly to ensure consistency between them.
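A minimal sketch of the consistency problem: if the task-to-dialog assignment is kept in one shared mapping, a manual customization of the dialog model can be expressed as an update to that mapping, so both models' views stay consistent. The model shapes and the "move task" rule below are illustrative, not the paper's formal rules:

```python
# Shared mapping between a source (task) model and a target (dialog)
# model; task and dialog names are made up for illustration.
task_to_dialog = {"enter_name": "d1", "enter_address": "d1", "confirm": "d2"}

def move_task(task, new_dialog, mapping):
    """Designer edit on the dialog model: relocate a task's
    presentation to another dialog. Updating the shared mapping
    keeps both models' views of the grouping consistent."""
    mapping = dict(mapping)
    mapping[task] = new_dialog
    return mapping

def dialogs(mapping):
    """Derive the dialog-model view (which tasks each dialog
    presents) from the shared mapping."""
    out = {}
    for task, dialog in mapping.items():
        out.setdefault(dialog, set()).add(task)
    return out

# The designer merges the confirmation step into the first dialog.
updated = move_task("confirm", "d1", task_to_dialog)
```

The point of the indirection is that neither model stores the grouping twice: a customization applied through the mapping cannot leave the task and dialog models disagreeing about where a task lives.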

Read more →

Generating context-sensitive multiple device interfaces from design

This paper presents a technique that allows adaptive user interfaces, spanning multiple devices, to be rendered from the task specification at runtime, taking into account the context of use. The designer can specify a task model, including its context-dependent parts, using the ConcurTaskTrees notation, and deploy the user interface immediately from that specification. By defining a set of context rules at design time, the appropriate context-dependent parts of the task specification are selected before the concrete interfaces are rendered. The context is resolved by the runtime environment and does not require any manual intervention. This way the same task specification can be deployed for several different contexts of use. Traditionally, a context-sensitive task specification only took into account a single, variable deployment device; this paper extends that approach to task specifications that can be executed by multiple co-operating devices.
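The rule-driven selection can be sketched as follows. The rule format, context keys, and task names are invented for illustration and are not the paper's notation; the point is only that the runtime, not the designer, picks the matching variant of a context-dependent task:

```python
# Context-dependent variants of a task in the specification;
# task and variant names are hypothetical.
TASK_VARIANTS = {
    "show_map": {
        "small_screen": ["list_view"],
        "large_screen": ["map_view", "list_view"],
    }
}

# Context rules defined at design time: predicate over the
# context of use -> variant to select.
RULES = [
    (lambda ctx: ctx["screen_width"] < 400, "small_screen"),
    (lambda ctx: ctx["screen_width"] >= 400, "large_screen"),
]

def resolve(task, context):
    """Select the variant of a context-dependent task whose rule
    matches the current context, without manual intervention."""
    for predicate, variant in RULES:
        if predicate(context):
            return TASK_VARIANTS[task][variant]
    raise ValueError("no rule matches the context")

subtasks = resolve("show_map", {"screen_width": 320})
```

The same specification deployed on a larger display would resolve to the richer variant, which is exactly what lets one task model serve several contexts of use.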

Read more →

Developing user interfaces with XML: Advances on user interface description languages

This collection highlights advancements in User Interface Description Languages (UIDLs), focusing on XML-based solutions for device-independent and context-sensitive user interface development. Contributions cover a range of topics including dynamic generation of multimodal interfaces, model-driven UIDL integration, and language extensibility for multi-device scenarios. Innovations in UIDL frameworks, such as UIML and UsiXML, showcase methods to enhance reusability, adaptability, and scalability in HCI tools. Practical applications, case studies, and evaluations of UIDL frameworks underscore their potential in improving usability, accessibility, and integration across diverse computing environments.

Read more →

Building user interfaces with tasks, dialogs and XML

We present two ongoing research efforts, both of which aim to support the use of models for designing (multi- and multiple-device) User Interfaces. The first tool, part of the Dygimes framework, shows how context and tasks can be combined: it generates prototype interfaces from context-sensitive task models, building on a Java runtime environment and an XML-based High-Level User Interface Description (HLUID) language. The second tool, uiml.net, experiments with another HLUID language and another runtime environment to generate interfaces. Both tools are work in progress.

Read more →

Derivation of a dialog model from a task model by activity chain extraction

Over the last few years, Model-Based User Interface Design has become an important tool for creating multi-device User Interfaces. By providing information about several aspects of the User Interface, such as the task for which it is being built, different User Interfaces can be generated that fulfil the same needs although they have a different concrete appearance. In a Model-Based Design approach, several models can be used: a task model, a dialog model, a user model, a data model, etc. Intuitively, using more models provides more (detailed) information and will create more appropriate User Interfaces. Nevertheless, the designer must take care to keep the different models consistent with each other. This paper presents an algorithm to extract the dialog model (partially) from the task model. The two models are closely related because the dialog model defines a sequence of user interactions, an activity chain, to reach the goal postulated in the task specification. We formalise the activity chain as a State Transition Network and show that this chain can be partially extracted from the task specification. The designer benefits from this approach since the task and dialog models remain consistent. The approach is useful in automatic User Interface generation where several different dialogs are involved: the transitions between dialogs can be handled smoothly without explicitly implementing them.
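For a purely sequential task decomposition, the extraction can be sketched very compactly. This is a simplified illustration, not the paper's algorithm: the task model is reduced to a list of tasks related by the enabling operator, and the resulting State Transition Network has one state per task plus a final goal state:

```python
# Tasks related by the enabling operator (>>) in a hypothetical
# task model; names are illustrative.
tasks = ["login", "select_product", "pay", "confirm"]

def activity_chain(seq):
    """Build a State Transition Network from a sequential task
    decomposition: each task becomes a state, and completing it
    fires the transition to the task it enables, ending in a
    final 'goal' state."""
    states = seq + ["goal"]
    transitions = {seq[i]: seq[i + 1] for i in range(len(seq) - 1)}
    transitions[seq[-1]] = "goal"
    return states, transitions

states, transitions = activity_chain(tasks)
```

Richer temporal operators (choice, concurrency) would yield branching and interleaved transitions rather than a single chain, which is why the dialog model is only partially extractable from the task model in general.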

Read more →