As the diversity of available computing devices increases, it becomes more difficult to adapt user interface development to support the full range of devices. One of the difficulties is the variety of GUI libraries: to target an alternative library or device, one is forced to redevelop the interface completely for that library. To overcome these problems the User Interface Mark-up Language (UIML) specification has been proposed as a way of gluing the interface design to different GUI libraries in different environments without further effort. In contrast with other approaches, UIML has matured and has several implementations that prove its usefulness. We introduce the first UIML renderer for the .NET framework, a framework that can be accessed from different programming languages and can use different widget sets. We show that its properties, among them its reflection mechanism, are suitable for the development of a reusable and portable UIML renderer. The suitability for multi-device rendering is discussed in comparison with our own multi-device UI framework Dygimes. The focus is on how layout management can be generalised in the specification to allow the GUI to adapt to different screen sizes.
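The core idea of such a renderer, mapping abstract UIML parts onto concrete widget classes reflectively, can be sketched as follows. This is a minimal illustration, not the renderer itself: the `Label`/`Button` classes, the `WIDGETS` registry, and the simplified UIML snippet are all hypothetical stand-ins for a real widget set and the full UIML vocabulary.

```python
import xml.etree.ElementTree as ET

# Hypothetical widget toolkit standing in for a real GUI library.
class Label:
    def __init__(self):
        self.text = ""

class Button:
    def __init__(self):
        self.text = ""

# Registry mapping abstract class names to concrete widget classes;
# swapping this table retargets the same UIML to another widget set.
WIDGETS = {"Label": Label, "Button": Button}

UIML = """
<interface>
  <part class="Label"><property name="text">Hello</property></part>
  <part class="Button"><property name="text">OK</property></part>
</interface>
"""

def render(uiml_source):
    """Instantiate a concrete widget for each abstract <part> and set its
    properties reflectively, analogous to reflection in .NET."""
    widgets = []
    for part in ET.fromstring(uiml_source).findall("part"):
        widget = WIDGETS[part.get("class")]()          # look up the concrete class
        for prop in part.findall("property"):
            setattr(widget, prop.get("name"), prop.text)  # reflective property set
        widgets.append(widget)
    return widgets

ui = render(UIML)
```

Because the widget classes are resolved by name at runtime, the same rendering loop works for any toolkit that registers its classes in the table, which is the property the abstract highlights about reflection.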
Posts tagged: Model-Based Interface Development
A generic approach for multi-device user interface rendering with UIML
Task modeling for ambient intelligent environments: Design support for situated task executions
Distributed user interface elements to support smart interaction spaces
Designing interactive systems in context: From prototype to deployment
Context-sensitive user interfaces for ambient environments: Design, development and deployment
UIML.NET: An open UIML renderer for the .NET framework
The mapping problem back and forth: Customizing dynamic models while preserving consistency
Model-Based User Interface Development uses a multitude of models which are related in one way or another. Usually there is some kind of process that starts with the design of the abstract models and progresses gradually towards the more concrete models, resulting in the final user interface when the design process is complete. Progressing from one model to another involves transforming the model and mapping pieces of information contained in the source model onto the target model. Most existing development environments propose solutions that apply these steps (semi-)automatically in one direction only (from abstract to concrete models). Manual intervention that changes the target model (e.g. the dialog model) to the designer's preferences is not reflected in the source model (e.g. the task model), so this step can introduce inconsistencies between the different models. In this paper, we identify some rules that can be manually applied to the model after a transformation has taken place. The effects on the target and source models are shown, together with how the different models involved in the transformation can be updated accordingly to ensure consistency between models.
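The consistency problem the abstract describes can be made concrete with a toy forward transformation plus a rule that propagates a designer's edit back to the source model. The dictionary-based "models", the `forward` mapping, and the `rename_rule` are illustrative assumptions, not the paper's actual rule set.

```python
# Toy source model (task model) and forward transformation to a
# target model (dialog model): one dialog element per task.
task_model = {"t1": "enter name", "t2": "confirm"}

def forward(tasks):
    """Abstract -> concrete: map each task onto a dialog element."""
    return {tid: {"widget": "input", "label": desc}
            for tid, desc in tasks.items()}

def rename_rule(dialog_model, tasks, tid, new_label):
    """Designer edits a label in the target (dialog) model; the rule
    propagates the change back to the source (task) model so that
    re-running the forward transformation stays consistent."""
    dialog_model[tid]["label"] = new_label
    tasks[tid] = new_label  # back-propagation keeps both models in sync

dialog_model = forward(task_model)
rename_rule(dialog_model, task_model, "t1", "enter full name")
```

Without the back-propagation line, regenerating the dialog model from the task model would silently discard the designer's change, which is exactly the inconsistency the paper's rules are meant to prevent.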
Generating context-sensitive multiple device interfaces from design
This paper shows a technique that allows adaptive user interfaces, spanning multiple devices, to be rendered from the task specification at runtime, taking into account the context of use. The designer can specify a task model using the ConcurTaskTrees notation together with its context-dependent parts, and deploy the user interface immediately from the specification. By defining a set of context rules at design time, the appropriate context-dependent parts of the task specification are selected before the concrete interfaces are rendered. The context is resolved by the runtime environment and does not require any manual intervention. This way the same task specification can be deployed for several different contexts of use. Traditionally, a context-sensitive task specification only took into account a single, variable deployment device. This paper extends that approach by taking into account task specifications that can be executed by multiple co-operating devices.
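The runtime selection step can be sketched as context rules choosing among context-dependent variants of each task before rendering. The flat dictionary stands in for a ConcurTaskTrees model, and the rule predicate on screen width is a hypothetical example; the paper's actual rule language and context attributes may differ.

```python
# Simplified task specification with context-dependent variants
# (a stand-in for the context-dependent parts of a ConcurTaskTrees model).
task_spec = {
    "show map":    {"default": "full map view",
                    "small-screen": "list of nearby points"},
    "select item": {"default": "mouse click"},
}

# Context rules defined at design time: (predicate on the runtime
# context, variant to select when the predicate holds).
context_rules = [
    (lambda ctx: ctx.get("screen_width", 1024) < 320, "small-screen"),
]

def resolve(spec, context):
    """Select, for each task, the variant matching the runtime context;
    fall back to the default variant when no rule applies."""
    resolved = {}
    for task, variants in spec.items():
        choice = "default"
        for predicate, variant in context_rules:
            if predicate(context) and variant in variants:
                choice = variant
                break
        resolved[task] = variants[choice]
    return resolved

phone_ui = resolve(task_spec, {"screen_width": 240})
desktop_ui = resolve(task_spec, {"screen_width": 1280})
```

The same specification thus yields different concrete interfaces per context, with the environment, not the designer, supplying the context values at deployment time.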