Posts tagged: FAIR

Extended Abstract accepted at CHI 2026: Teaching Cobots What to Do by Watching an Expert

DELEGACT: Let the Robot Watch, Then Decide Who Does What

Our extended abstract "Learning to Delegate and Act with DELEGACT: Multimodal Language Models for Task-Level Human–Cobot Planning in Industrial Assembly" has been accepted at CHI 2026 in Barcelona. This is work by Bram Verstappen together with Dries Cardinaels, Danny Leen, and Raf Ramakers at the Digital Future Lab (UHasselt - Flanders Make).

Read more →

Presented at EURECA-PRO Education & Research Days: Teaching as Training

Teaching as Training: Incremental and Iterative AI Skill Development

We presented our contribution “Teaching as Training: Iterative and Incremental AI Skill Development” at the EURECA-PRO Education & Research Days in Hasselt, held under the theme Glocalising Universities: A Shifting Horizon. This is joint work with Jolien Notermans (Department of Educational Development, Policy and Quality Assurance) and Sarah Doumen (Faculty of Sciences) at Hasselt University. More details on the publication page. The visual story was generated using StoryBookly.

Read more →

Paper accepted at ICLR 2026: DIVERSE: Disagreement-Inducing Vector Evolution for Rashomon Set Exploration

DIVERSE: Finding the Many Faces of AI Decision-Making

Our paper “DIVERSE: Disagreement-Inducing Vector Evolution for Rashomon Set Exploration” has been accepted at ICLR 2026, one of the top venues for machine learning research. This is joint work with my PhD student Gilles Eerlings, Brent Zoomers, Jori Liesenborgs, and Gustavo Rovelo Ruiz at the Digital Future Lab. More details on the publication page.

Read more →

Paper accepted at CHI 2026: Helping Humans Control Robots on the Moon

Every Move You Make: Helping Operators See Where Their Robot Will Go

Our paper "Every Move You Make: Visualizing Near-Future Motion Under Delay for Telerobotics" has been accepted at CHI 2026 in Barcelona — the premier conference for human-computer interaction research. This is joint work with my PhD student Dries Cardinaels, Raf Ramakers, Tom Veuskens, Thomas Pietrzak (Univ. Lille, Inria), and Gustavo Rovelo Ruiz at the Digital Future Lab (UHasselt - Flanders Make). More details on the publication page.

Paper page on driescardinaels.be

Read more →

Two student projects from the UHasselt Human-AI Interaction course featured in SAI Update

The SAI Update magazine (Nov 2025, sia.be) selected two projects from our Human–AI Interaction (HAII) course for its Next Technology Generation special. Proud of our students Linsey Helsen and Xander Vervaecke, who turned their Human–AI Interaction project ideas into concrete, useful systems.

1) A Multi-Agent Approach to Fact-Checking — Xander Vervaecke (UHasselt). Xander’s LieSpy.ai coordinates multiple LLMs (e.g., GPT, Gemini, Mistral) to verify claims, compare reasoning, and aggregate evidence into a transparent verdict. The interface exposes sources, trust scores, and model rationales, moving fact-checking beyond a single-model answer. Key ideas: multi-agent collaboration, cross-validation, explainability.
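To make the aggregation idea concrete, here is a minimal, hypothetical sketch (not LieSpy.ai's actual code) of combining per-model verdicts with trust-weighted voting; the `AgentVerdict` fields, `aggregate` function, and trust scores are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class AgentVerdict:
    model: str         # e.g. "GPT", "Gemini", "Mistral"
    label: str         # "true" or "false"
    confidence: float  # self-reported confidence in [0, 1]
    rationale: str     # free-text reasoning, exposed in the UI

def aggregate(verdicts: list[AgentVerdict], trust: dict[str, float]) -> tuple[str, float]:
    """Combine per-model verdicts into one trust-weighted verdict."""
    scores: dict[str, float] = {}
    for v in verdicts:
        # Each vote counts proportionally to trust in the model and its confidence.
        weight = trust.get(v.model, 1.0) * v.confidence
        scores[v.label] = scores.get(v.label, 0.0) + weight
    label = max(scores, key=scores.get)
    return label, scores[label] / sum(scores.values())  # verdict + weight share

verdicts = [
    AgentVerdict("GPT", "false", 0.9, "No credible source found."),
    AgentVerdict("Gemini", "false", 0.8, "Contradicted by official data."),
    AgentVerdict("Mistral", "true", 0.6, "One blog supports the claim."),
]
label, support = aggregate(verdicts, trust={"GPT": 1.0, "Gemini": 0.9, "Mistral": 0.8})
```

In this toy run the two dissenting-free "false" votes outweigh the single "true" vote, and the rationales stay attached so a UI could surface them next to the verdict.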

Read more →

LLMQuery for Slidev: Integration of on-the-fly LLM Queries during your Presentation

I wanted to show my students appropriate ways of using LLMs for and during coding, so I built (with some LLM help) a Slidev component, LLMQuery.vue, that adds on-the-fly LLM interactions right into slides. It feels important to actively show students how these tools can amplify human knowledge and skill building rather than replace it altogether, even if I’m far from an expert. Maybe it’s useful for others too, so I’m sharing it here for download and further tinkering — people who are much better at web dev (there are many!) can probably turn it into something truly polished.

Read more →

Paper on A Visual Dashboard for Model Multiplicity

In AI research, model multiplicity can help users better understand the diversity of AI predictions. Our new system “AI-Spectra” provides a visual dashboard to harness this concept effectively. Instead of relying on a single AI model, AI-Spectra uses multiple models—each seen as an expert—to produce predictions for the same task. This helps users see not only what different models agree or disagree on, but also why these differences occur. Gilles Eerlings (a FAIR PhD student) and Sebe Vanbrabant were the main contributors for this work, combining machine learning, model multiplicity, and visualisations that focus on the characteristics of an AI model rather than explaining its behaviour.
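The core idea — several models predicting the same task, with disagreements surfaced per sample — can be sketched in a few lines. This is a hypothetical illustration, not AI-Spectra's implementation; the `disagreement_report` function and the toy prediction table are assumptions:

```python
from collections import Counter

def disagreement_report(predictions: dict[str, list[int]]) -> list[dict]:
    """For each sample, report the majority label and which models dissent."""
    models = list(predictions)
    n_samples = len(next(iter(predictions.values())))
    report = []
    for i in range(n_samples):
        votes = {m: predictions[m][i] for m in models}
        # Majority label across the model ensemble for this sample.
        majority, _ = Counter(votes.values()).most_common(1)[0]
        dissenters = [m for m, label in votes.items() if label != majority]
        report.append({"sample": i, "majority": majority, "dissenters": dissenters})
    return report

# Three toy "expert" models labelling the same three samples.
preds = {
    "model_a": [1, 0, 1],
    "model_b": [1, 1, 1],
    "model_c": [1, 0, 0],
}
report = disagreement_report(preds)
```

A dashboard in this spirit would render such a report visually: unanimous samples fade into the background, while samples with dissenters invite the user to inspect why the models diverge.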

Read more →
