Posts tagged: HCI

UIML.NET: An open UIML renderer for the .net framework

As the diversity of available computing devices increases, it becomes more difficult to adapt User Interface development to support the full range of available devices. One of the difficulties is the variety of GUI libraries: to target an alternative library or device, one is forced to redevelop the interface completely for the alternative GUI library. To overcome this problem, the User Interface Markup Language (UIML) specification has been proposed as a way of gluing the interface design to different GUI libraries in different environments without further effort. In contrast with other approaches, UIML has matured and has several implementations proving its usefulness. We introduce the first UIML renderer for the .NET framework, a framework that can be accessed from different kinds of programming languages and can use different kinds of widget sets. We show that its properties, among them its reflection mechanism, are suitable for the development of a reusable and portable UIML renderer. The suitability for multi-device rendering is discussed in comparison with our own multi-device UI framework, Dygimes. The focus is on how layout management can be generalised in the specification to allow the GUI to adapt to different screen sizes.
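The reflection idea behind the renderer can be sketched outside .NET as well. The following is a minimal Python analogue, not uiml.net's actual code: the widget classes, the vocabulary table, and the UIML snippet are all invented for illustration. The point is that widget classes are resolved by name at runtime from the UIML `class` attribute, rather than being hard-coded against one GUI library.

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in widget toolkit: in uiml.net these would be
# System.Windows.Forms or Gtk# classes resolved through .NET reflection.
class Button:
    pass

class Label:
    pass

# Maps UIML part classes onto concrete widget classes; swapping this
# table is what retargets the same UIML document to another toolkit.
WIDGET_VOCABULARY = {"Button": Button, "Label": Label}

UIML = """
<interface>
  <part class="Label" id="greeting"/>
  <part class="Button" id="ok"/>
</interface>
"""

def render(uiml_source):
    """Instantiate a widget for every <part>, resolving the widget class
    by its name at runtime instead of hard-coding it."""
    root = ET.fromstring(uiml_source)
    widgets = {}
    for part in root.iter("part"):
        widget_class = WIDGET_VOCABULARY[part.get("class")]  # reflection-style lookup
        widgets[part.get("id")] = widget_class()
    return widgets

print(sorted(render(UIML)))  # ['greeting', 'ok']
```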

Read more →

The mapping problem back and forth: Customizing dynamic models while preserving consistency

Model-Based User Interface Development uses a multitude of models which are related in one way or another. Usually there is some kind of process that starts with the design of the abstract models and progresses gradually towards the more concrete models, resulting in the final user interface when the design process is complete. Progressing from one model to another involves transforming the model and mapping pieces of information contained in the source model onto the target model. Most existing development environments propose solutions that apply these steps (semi-)automatically in one direction only (from abstract to concrete models). Manual intervention that changes the target model (e.g. the dialog model) to the designer's preferences is not reflected in the source model (e.g. the task model); this step can therefore introduce inconsistencies between the different models. In this paper, we identify some rules that can be manually applied to the model after a transformation has taken place. The effect on the target and source models is shown, together with how the different models involved in the transformation can be updated accordingly to ensure consistency between models.
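One way to picture the consistency problem is to keep the task-to-dialog mapping as an explicit table that every customization rule must update. This is a toy sketch under that assumption; the model encoding, the rule, and all names are illustrative, not the paper's actual formalism:

```python
# Toy source (task) and target (dialog) models, linked by an explicit
# mapping table: each dialog records which tasks it realises.
tasks = {"enter_name", "enter_address"}
dialogs = {"d1": {"enter_name"}, "d2": {"enter_address"}}

def merge_dialogs(a, b, merged_id):
    """Customization rule applied after the transformation: merge two
    dialogs into one, updating the mapping in the same step so the task
    model and dialog model stay consistent."""
    dialogs[merged_id] = dialogs.pop(a) | dialogs.pop(b)

merge_dialogs("d1", "d2", "d12")

# Every task is still covered by exactly one dialog: no inconsistency.
assert set().union(*dialogs.values()) == tasks
print(sorted(dialogs["d12"]))  # ['enter_address', 'enter_name']
```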

Read more →

Generating context-sensitive multiple device interfaces from design

This paper shows a technique that allows adaptive user interfaces, spanning multiple devices, to be rendered from the task specification at runtime, taking into account the context of use. The designer can specify a task model using the ConcurTaskTrees notation together with its context-dependent parts, and deploy the user interface immediately from the specification. By defining a set of context rules at design time, the appropriate context-dependent parts of the task specification are selected before the concrete interfaces are rendered. The context is resolved by the runtime environment and does not require any manual intervention. This way the same task specification can be deployed for several different contexts of use. Traditionally, a context-sensitive task specification only took into account a single, variable deployment device. This paper extends that approach by taking into account task specifications that can be executed by multiple co-operating devices.
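The runtime selection step can be sketched in a few lines. The task-tree encoding, the rule form, and the task names below are invented for illustration; the idea is only that context-dependent alternatives in the specification are resolved by rules before rendering, with no manual intervention:

```python
# A toy task specification: a tuple marks a context-dependent choice
# between alternative subtasks (all names are hypothetical).
task_tree = {
    "book_ticket": ["choose_seat", ("show_map", "show_list")],
}

# Context rules, fixed at design time: each maps the runtime context to
# the alternative that should survive in the concrete interface.
context_rules = {
    ("show_map", "show_list"):
        lambda ctx: "show_map" if ctx["screen"] == "large" else "show_list",
}

def resolve(tree, ctx):
    """Select the appropriate alternative for every context-dependent
    node, yielding a context-free task tree ready for rendering."""
    resolved = {}
    for task, children in tree.items():
        resolved[task] = [
            context_rules[c](ctx) if isinstance(c, tuple) else c
            for c in children
        ]
    return resolved

print(resolve(task_tree, {"screen": "small"}))
# {'book_ticket': ['choose_seat', 'show_list']}
```

The same specification deployed with `{"screen": "large"}` keeps `show_map` instead, which is the sense in which one task model serves several contexts of use.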

Read more →

Developing user interfaces with XML: Advances on user interface description languages

This collection highlights advancements in User Interface Description Languages (UIDLs), focusing on XML-based solutions for device-independent and context-sensitive user interface development. Contributions cover a range of topics including dynamic generation of multimodal interfaces, model-driven UIDL integration, and language extensibility for multi-device scenarios. Innovations in UIDL frameworks, such as UIML and USIXML, showcase methods to enhance reusability, adaptability, and scalability in HCI tools. Practical applications, case studies, and evaluations of UIDL frameworks underscore their potential in improving usability, accessibility, and integration across diverse computing environments.

Read more →

Building user interfaces with tasks, dialogs and XML

We present two ongoing research efforts, both of which aim to support the use of models for designing (multi- and multiple-device) User Interfaces. The first tool, part of the Dygimes framework, shows how context and tasks can be combined: it generates prototype interfaces from context-sensitive task models, building on a Java runtime environment and an XML-based High-Level User Interface Description (HLUID) language. The second tool, uiml.net, experiments with another HLUID language and another runtime environment to generate interfaces. Both tools are work in progress.

Read more →

Runtime transformations for modal independent user interface migration

The usage of computing systems has evolved dramatically over the last few years. Starting from a low-level procedural usage, in which a process for executing one or several tasks is carried out, computers now tend to be used in a problem-oriented way. Future computer usage will be centred more around particular services, and less on platforms or applications. These services should be independent of the technology used to interact with them. In this paper an approach is presented which provides a uniform interface to such services, without any dependence on modality, platform or programming language. Through the use of general user interface descriptions, written in XML and converted using XSLT, a uniform framework is presented for runtime migration of user interfaces. As a consequence, future services become easily extensible to all kinds of devices and modalities. Special attention is given to a component-based software development approach. Services represented by and grouped in components can offer a special interface for modal- and device-independent rendering. Components become responsible for describing their own possibilities and constraints for interaction. An implementation serving as a proof of concept, a runtime conversion of a joystick in a 3D virtual environment into a 2D dialog-based user interface, has been developed.
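The kind of structure-to-structure conversion the paper performs with XSLT can be illustrated in plain Python. XSLT is not in the Python standard library, so this sketch uses `xml.etree` to express the same mapping; the element names and the tiny vocabulary are invented, loosely modelled on the paper's joystick-to-dialog example:

```python
import xml.etree.ElementTree as ET

# Modality-neutral interaction description (invented vocabulary): a
# joystick-style three-axis control, as in the 3D-to-2D proof of concept.
SOURCE = """
<interactor name="camera">
  <axis id="x"/><axis id="y"/><axis id="z"/>
</interactor>
"""

def to_dialog(xml_source):
    """Transform each abstract <axis> into a concrete 2D slider widget,
    mimicking what an XSLT stylesheet would do at migration time."""
    src = ET.fromstring(xml_source)
    dialog = ET.Element("dialog", title=src.get("name"))
    for axis in src.iter("axis"):
        ET.SubElement(dialog, "slider", id=axis.get("id"),
                      min="-100", max="100")
    return ET.tostring(dialog, encoding="unicode")

print(to_dialog(SOURCE))
```

Because the source description says nothing about widgets, the same document could equally be converted into a speech grammar or a 3D manipulator by a different stylesheet, which is the modality independence being argued for.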

Read more →

Dygimes: Dynamically generating interfaces for mobile computing devices and embedded systems

Constructing multi-device interfaces still presents major challenges, despite all efforts of industry and several academic initiatives to develop usable solutions. One approach which is finding its way into general use is XML-based User Interface descriptions to generate suitable User Interfaces for embedded systems and mobile computing devices. Another important solution is Model-based User Interface design, which has evolved into a very suitable, though academic, approach for designing multi-device interfaces. We introduce a framework, Dygimes, which uses XML-based User Interface descriptions in combination with selected models to generate User Interfaces for different kinds of devices at runtime. With this framework, task specifications are combined with XML-based User Interface building blocks to generate User Interfaces that can adapt to the context of use. The design of the User Interface and the implementation of the application code can be separated, while smooth integration of the functionality and the User Interface is supported. The resulting interface is location-independent: it can migrate across devices while invoking functionality through standard protocols.
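The combination of task-attached building blocks with device-dependent layout can be pictured with a small sketch. Everything here is illustrative: the building-block strings, the task names, and the one-line layout policy are not Dygimes' actual API, only the shape of the idea that one specification yields different concrete layouts per device:

```python
# Each task in the specification carries an XML UI building block
# (hypothetical snippets, not real Dygimes markup).
blocks = {
    "search": "<text id='query'/>",
    "results": "<list id='hits'/>",
}

def generate_ui(enabled_tasks, screen_width):
    """Group the building blocks of the currently enabled tasks into rows
    that fit the screen: wide screens get two blocks per row, narrow
    screens stack everything vertically."""
    per_row = 2 if screen_width >= 640 else 1
    rows, row = [], []
    for task in enabled_tasks:
        row.append(blocks[task])
        if len(row) == per_row:
            rows.append(row)
            row = []
    if row:
        rows.append(row)
    return rows

print(generate_ui(["search", "results"], 320))   # two stacked rows on a phone
print(generate_ui(["search", "results"], 1024))  # one row on a desktop
```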

Read more →

Derivation of a dialog model from a task model by activity chain extraction

Over the last few years, Model-Based User Interface Design has become an important tool for creating multi-device User Interfaces. By providing information about several aspects of the User Interface, such as the task for which it is being built, different User Interfaces can be generated to fulfil the same needs even though they have a different concrete appearance. In a Model-Based Design approach, several models can be used: a task model, a dialog model, a user model, a data model, etc. Intuitively, using more models provides more (detailed) information and will create more appropriate User Interfaces. Nevertheless, the designer must take care to keep the different models consistent with each other. This paper presents an algorithm to extract the dialog model (partially) from the task model. A task model and a dialog model are closely related, because the dialog model defines a sequence of user interactions, an activity chain, to reach the goal postulated in the task specification. We formalise the activity chain as a State Transition Network; in addition, this chain can be partially extracted from the task specification. The designer benefits from this approach since the task and dialog models remain consistent. The approach is useful in automatic User Interface generation where several different dialogs are involved: the transitions between dialogs can be handled smoothly without explicitly implementing them.
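A State Transition Network for a dialog model can be sketched very compactly: states are dialogs (sets of simultaneously enabled tasks), and completing a task fires a transition. The class below and the login example are invented for illustration, not the paper's extraction algorithm itself:

```python
# Minimal State Transition Network: states are dialogs, and finishing a
# task moves the interface to the next dialog in the activity chain.
class STN:
    def __init__(self, start):
        self.state = start
        self.transitions = {}  # (state, completed_task) -> next state

    def add(self, source, task, target):
        self.transitions[(source, task)] = target

    def finish(self, task):
        """Follow the transition for completing `task` in the current state."""
        self.state = self.transitions[(self.state, task)]
        return self.state

# Activity chain as it might be extracted from a toy task model:
# authenticate, then query, then inspect the results.
stn = STN("login_dialog")
stn.add("login_dialog", "authenticate", "search_dialog")
stn.add("search_dialog", "run_query", "results_dialog")

print(stn.finish("authenticate"))  # search_dialog
print(stn.finish("run_query"))     # results_dialog
```

Since the transitions are generated from the task specification rather than written by hand, the dialog sequencing the abstract describes comes for free and stays consistent with the task model.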

Read more →

Specifying user interfaces for runtime modal independent migration

The usage of computing systems has evolved dramatically over the last few years. Starting from a low-level procedural usage, in which a process for executing one or several tasks is carried out, computers now tend to be used in a problem-oriented way. Future computer usage will be centred more around particular services, and less on platforms or applications. These services should be independent of the technology used to interact with them. In this paper an approach is presented which provides a uniform interface to such services, without any dependence on modality, platform or programming language. Through the use of general user interface descriptions, written in XML and converted using XSLT, a uniform framework is presented for runtime migration of user interfaces. As a consequence, future services become easily extensible to all kinds of devices and modalities. An implementation serving as a proof of concept, a runtime conversion of a joystick in a 3D virtual environment into a 2D dialog-based user interface, has been developed.

Read more →