Posts tagged: AI

AI-spectra: A visual dashboard for model multiplicity to enhance informed and transparent decision-making

We present an approach, AI-Spectra, to leverage model multiplicity for interactive systems. Model multiplicity means using slightly different AI models that yield equally valid outcomes or predictions for the same task, thus relying on many simultaneous "expert advisors" that can have different opinions. Multiple AI models that generate potentially divergent results for the same task are challenging for users to deal with. Model multiplicity helps users understand that AI models are not always correct and might differ, but it can also result in information overload when users are confronted with multiple results instead of one. AI-Spectra leverages model multiplicity by using a visual dashboard designed to convey which AI models generate which results, while minimizing the cognitive effort needed to detect consensus among models and to see what types of models might have different opinions. We use a custom adaptation of Chernoff faces for AI-Spectra: Chernoff Bots. This visualization technique lets users quickly interpret complex, multivariate model configurations and compare predictions across multiple models. Our design builds on established Human-AI Interaction guidelines and well-known practices in information visualization. We validated our approach through a series of experiments training a wide variety of models with the MNIST dataset to perform number recognition. Our work contributes to the growing discourse on making AI systems more transparent, trustworthy, and effective through the strategic use of multiple models.
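The core consensus idea can be illustrated with a minimal sketch, independent of the paper's actual implementation: given several models' predicted labels for the same inputs, compute the majority label and the fraction of models that agree on it per sample. The model names and the helper function below are hypothetical, chosen only to show the mechanism.

```python
from collections import Counter

def consensus(predictions):
    """Summarize agreement among several models' predictions.

    predictions: dict mapping a model name to a list of predicted
    labels, one per input sample. Returns, per sample, the majority
    label and the fraction of models agreeing with it.
    """
    models = list(predictions)
    n_samples = len(predictions[models[0]])
    summary = []
    for i in range(n_samples):
        # Tally each model's vote for this sample.
        votes = Counter(predictions[m][i] for m in models)
        label, count = votes.most_common(1)[0]
        summary.append((label, count / len(models)))
    return summary

# Three hypothetical MNIST classifiers voting on four samples:
preds = {
    "cnn_small": [7, 2, 1, 0],
    "cnn_large": [7, 2, 1, 6],
    "mlp":       [7, 3, 1, 6],
}
print(consensus(preds))
# Samples with an agreement fraction below 1.0 flag inputs where the
# "expert advisors" disagree and merit the user's attention.
```

A dashboard like AI-Spectra would then map each model's configuration onto visual features (in the paper, Chernoff Bots) so that low-consensus samples and the models responsible for the dissent can be spotted at a glance.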

Read more →

Paper on A Visual Dashboard for Model Multiplicity

In AI research, model multiplicity can help users better understand the diversity of AI predictions. Our new system, AI-Spectra, provides a visual dashboard to harness this concept effectively. Instead of relying on a single AI model, AI-Spectra uses multiple models—each seen as an expert—to produce predictions for the same task. This helps users see not only what different models agree or disagree on, but also why these differences occur. Gilles Eerlings (a FAIR PhD student) and Sebe Vanbrabant were the main contributors to this work, which combines machine learning, model multiplicity, and visualisations that focus on the characteristics of an AI model instead of explaining its behaviour.

Read more →

Paper on Anthropomorphic User Interfaces

Anthropomorphic User Interfaces

Together with Eva Geurts, we explored Anthropomorphic User Interfaces (AUIs) and created a taxonomy that helps us analyze, identify, and design appropriate AUIs. The paper is available here, and our interactive tool, which helps you find related resources for specific aspects of our taxonomy, is available at this URL: https://anthropomorphic-ui.onrender.com.

Citation

@inproceedings{geurtsantropomorphic2024,
  title = {Anthropomorphic User Interfaces: Past, Present and Future of Anthropomorphic Aspects for Sustainable Digital Interface Design},
  author = {Eva Geurts and Kris Luyten},
  booktitle = {Proceedings of the European Conference on Cognitive Ergonomics 2024},
  articleno = {31},
  numpages = {7},
  keywords = {Anthropomorphism, Human-like interfaces, Taxonomy, User interface design},
  location = {Paris, France},
  series = {ECCE '24},
  year = {2024},
  publisher = {Association for Computing Machinery},
  url = {https://anthropomorphic-ui.onrender.com},
  doi = {10.1145/3673805.3673831},
  isbn = {9798400718243}
}

Abstract

Interactions with computing systems and conversational services such as ChatGPT have become an inherent part of our daily lives. It is surprising that user interfaces, the gateways through which we communicate with an interactive intelligent system, are still predominantly devoid of hedonic aspects. There is little attempt to make communication through user interfaces intentionally more like communication with humans. Anthropomorphic user interfaces can transform interactions with intelligent software into more pleasant experiences by integrating human-like attributes. Anthropomorphic user interfaces expose human-like attributes that enable people to perceive, connect, and interact with the interfaces as social actors. This integration of human-like aspects not only enhances user experience but also holds the potential to make interfaces more sustainable, as they rely on familiar human interaction patterns, thus potentially reducing the learning curve and increasing user adoption rates. However, there is little consensus on how to build these anthropomorphic user interfaces. We conducted an extensive literature review on existing anthropomorphic user interfaces for software systems (past), in order to map and connect existing definitions and interpretations in an overarching taxonomy (present). The taxonomy is used to organize and structure examples of anthropomorphic user interfaces into an accessible collection. The taxonomy and an accompanying web tool provide designers with a reference framework for analyzing and dissecting existing anthropomorphic user interfaces, and for designing new anthropomorphic user interfaces (future).

Read more →

Papers accepted on Anthropomorphic UIs and Model Multiplicity

Model Multiplicity in Interactive Software Systems

We got a workshop paper accepted, presenting the initial work of Gilles Eerlings et al. We explore how model multiplicity can be a potential answer to reducing overtrust in AI, as well as avoiding undertrust. A lot of work still lies ahead, but this seems like a promising direction.

Citation

@inproceedings{luyteneerlings-modelmultiplicity2024,
  author = {Kris Luyten and Gilles Eerlings and Jori Liesenborgs and Gustavo {Rovelo Ruiz} and Sebe Vanbrabant and Davy Vanacken},
  title = {Opportunities and Challenges of Model Multiplicity in Interactive Software Systems},
  booktitle = {The Second Workshop on Engineering Interactive Systems Embedding AI Technologies},
  year = {2024}
}

Abstract

The proliferation of artificial intelligence (AI) in interactive systems has led to significant challenges in model integration, but also end-user-related aspects such as over- and undertrust. This paper explores how multiple AI models with the same performance and behavior but different internal workings—a phenomenon called model multiplicity—affect system integration and user interaction. We discuss the implications of model multiplicity for transparency, trust, and operational effectiveness in interactive software systems.

Read more →

Second Workshop on Engineering Interactive Systems Embedding AI Technologies @ EICS'2024

We will be organizing a workshop on Engineering Interactive Systems Embedding AI Technologies at the EICS 2024 conference – Tuesday June 24th or June 25th, 2024 in Cagliari, Italy. Submissions are welcome.

Read more →

Opportunities and challenges of model multiplicity in interactive software systems

The proliferation of artificial intelligence (AI) in interactive systems has led to significant challenges in model integration, but also end-user-related aspects such as over- and undertrust. This paper explores how multiple AI models with the same performance and behavior but different internal workings—a phenomenon called model multiplicity—affect system integration and user interaction. We discuss the implications of model multiplicity for transparency, trust, and operational effectiveness in interactive software systems.

Read more →


Familiarisation: Restructuring layouts with visual learning models

In domains where users are exposed to large variations in visuo-spatial features among designs, they often spend excess time searching for common elements (features) in familiar locations. This paper contributes computational approaches to restructuring layouts such that features on a new, unvisited interface can be found more quickly. We explore four concepts of familiarisation, inspired by the human visual system (HVS), to automatically generate a familiar design for each user.

SCWT: A joint workshop on smart connected and wearable things