Posts tagged: HCI

Extended Abstract accepted at CHI 2026: Teaching Cobots What to Do by Watching an Expert

DELEGACT: Let the Robot Watch, Then Decide Who Does What

Our extended abstract "Learning to Delegate and Act with DELEGACT: Multimodal Language Models for Task-Level Human–Cobot Planning in Industrial Assembly" has been accepted at CHI 2026 in Barcelona. This is work by Bram Verstappen together with Dries Cardinaels, Danny Leen, and Raf Ramakers at the Digital Future Lab (UHasselt - Flanders Make).

Read more →

Paper accepted at CHI 2026: Helping Humans Control Robots on the Moon

Every Move You Make: Helping Operators See Where Their Robot Will Go

Our paper "Every Move You Make: Visualizing Near-Future Motion Under Delay for Telerobotics" has been accepted at CHI 2026 in Barcelona — the premier conference for human-computer interaction research. This is joint work with my PhD student Dries Cardinaels, Raf Ramakers, Tom Veuskens, Thomas Pietrzak (Univ. Lille, Inria), and Gustavo Rovelo Ruiz at the Digital Future Lab (UHasselt - Flanders Make). More details on the publication page.

Paper page on driescardinaels.be

Read more →

Learning to delegate and act with DELEGACT: Multimodal language models for task-level human–cobot planning in industrial assembly

Industrial assembly is shifting toward human-robot collaboration (HRC) to leverage the complementary strengths of both agents. However, traditional task allocation, commonly framed as the Robotic Assembly Line Balancing Problem (RALBP), remains labor-intensive and often lacks transparency. We introduce DELEGACT, a framework designed to produce workable, intelligible human-cobot task allocations. The framework uses a Vision-Language Model (VLM) to extract atomic operations from expert demonstration videos, then employs a Large Language Model (LLM) to delegate these tasks based on robot specifications, operator competencies, and material definitions. We provide a proof-of-concept prototype and preliminary testing on illustrative cases. Results demonstrate the system's ability to reason about complex constraints such as precision, weight, and ergonomics. This paper illustrates how off-the-shelf foundation models can automate HRC decision-making via a human-in-the-loop paradigm while preserving operator agency and understanding.
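To make the delegation stage concrete, here is a minimal, hypothetical sketch of the kind of constraint reasoning the abstract describes. All names and thresholds are illustrative assumptions; in DELEGACT itself a VLM extracts the atomic operations from video and an LLM performs the delegation, whereas this sketch hard-codes both as a rule-based stand-in.

```python
from dataclasses import dataclass

@dataclass
class AtomicOperation:
    # One assembly step (in DELEGACT, extracted from an expert
    # demonstration video by a VLM; here it is simply given).
    name: str
    payload_kg: float      # weight of the handled part
    precision_mm: float    # required placement tolerance
    ergonomic_risk: bool   # e.g. repetitive or overhead motion

@dataclass
class CobotSpec:
    max_payload_kg: float
    repeatability_mm: float

def delegate(ops, cobot):
    """Rule-based stand-in for DELEGACT's LLM delegation stage:
    assign each operation to 'cobot' or 'human' with a rationale,
    so the resulting plan stays intelligible to the operator."""
    plan = []
    for op in ops:
        if op.payload_kg > cobot.max_payload_kg:
            plan.append((op.name, "human", "exceeds cobot payload"))
        elif op.precision_mm < cobot.repeatability_mm:
            plan.append((op.name, "human", "tolerance below cobot repeatability"))
        elif op.ergonomic_risk:
            plan.append((op.name, "cobot", "relieves ergonomic strain"))
        else:
            plan.append((op.name, "cobot", "within cobot capabilities"))
    return plan

ops = [
    AtomicOperation("insert bearing", 0.4, 0.02, False),
    AtomicOperation("lift housing", 12.0, 1.0, True),
    AtomicOperation("drive screws", 0.1, 0.5, True),
]
plan = delegate(ops, CobotSpec(max_payload_kg=5.0, repeatability_mm=0.1))
for name, agent, why in plan:
    print(f"{name} -> {agent} ({why})")
```

Attaching a rationale to every assignment mirrors the paper's emphasis on transparency: the operator can inspect why each task landed with the human or the cobot.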

Read more →

Every move you make: Visualizing near-future motion under delay for telerobotics

Delays in direct teleoperation decouple operator input from robot feedback. We frame this not as a unitary problem but as three facets of operator uncertainty: (1) communication, when commands take effect, (2) trajectory, how inputs map to motion, and (3) environmental, how external factors alter outcomes. We externalized each facet through predictive visualizations: Network, Path, and Envelope. In a controlled study with 24 participants (novices in telerobotics) navigating a simulated robot under a fixed 2.56s round-trip delay, we compared these visualizations against a delayed-video baseline. Path significantly shortened task time, lowered perceived cognitive load, and reduced reliance on reactive "move-and-wait" behavior. Envelope lowered cognitive load but did not significantly reduce reactive behavior or improve performance, while Network had no measurable effect. These results indicate that predictive support is effective only when trajectory uncertainty is externalized, enabling operators to move from reactive to more proactive control.

Read more →

Two student projects from the UHasselt Human-AI Interaction course featured in SAI Update

The SAI Update magazine (Nov 2025, sia.be) selected two projects from our Human–AI Interaction (HAII) course for its Next Technology Generation special. Proud of our students Linsey Helsen and Xander Vervaecke, who turned their Human-AI Interaction project ideas into concrete, useful systems.

1) A Multi-Agent Approach to Fact-Checking — Xander Vervaecke (UHasselt). Xander's LieSpy.ai coordinates multiple LLMs (e.g., GPT, Gemini, Mistral) to verify claims, compare reasoning, and aggregate evidence into a transparent verdict. The interface exposes sources, trust scores, and model rationales, moving fact-checking beyond a single-model answer. Key ideas: multi-agent collaboration, cross-validation, explainability.
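The aggregation idea behind such a multi-agent setup can be sketched in a few lines. This is a hypothetical illustration, not LieSpy.ai's actual implementation: the per-model verdicts would come from live LLM APIs, and the `ModelVerdict` structure, confidence weighting, and trust score are all assumptions made here for clarity.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ModelVerdict:
    model: str         # e.g. "GPT", "Gemini", "Mistral"
    label: str         # "true" / "false" / "unverifiable"
    confidence: float  # self-reported confidence in [0, 1]
    rationale: str     # the model's reasoning, surfaced in the UI

def aggregate(verdicts):
    """Confidence-weighted vote over per-model verdicts, keeping every
    rationale so the final verdict remains explainable."""
    weights = Counter()
    for v in verdicts:
        weights[v.label] += v.confidence
    label, score = weights.most_common(1)[0]
    return {
        "verdict": label,
        "trust_score": round(score / sum(weights.values()), 2),
        "rationales": {v.model: v.rationale for v in verdicts},
    }

result = aggregate([
    ModelVerdict("GPT", "false", 0.9, "Contradicted by official statistics."),
    ModelVerdict("Gemini", "false", 0.7, "No credible source supports the claim."),
    ModelVerdict("Mistral", "true", 0.4, "One outlet reports it, uncorroborated."),
])
print(result["verdict"], result["trust_score"])
```

Returning the rationales alongside the verdict is what moves the output "beyond a single-model answer": disagreement between models stays visible rather than being averaged away.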

Read more →

Engineering interactive systems embedding AI technologies (3rd workshop)

EICS 2025 foreword

Challenges and opportunities for delay-invariant telerobotic interactions (short paper)

Effective operation in direct-control telerobotics relies heavily on real-time communication between the operator and the robot, as the operator retains full control over the robot's actions. However, in scenarios involving long distances, communication delays disrupt this feedback loop, creating significant challenges for precise control. To investigate these challenges, we conducted a user study where participants operated a TurtleBot3 Waffle Pi under varying delay conditions. Post-experiment brainstorming and analysis revealed recurring challenges, including over-correction, unpredictable robot behavior, and reduced situational awareness. Potential solutions identified include improving robot behavior predictability, integrating feedforward mechanisms, and enhancing visual feedback. These findings underscore the importance of designing intelligent interfaces to mitigate the impact of delays on telerobotic performance.

Read more →
