Posts tagged: HCI

Making It Work Is the Work: attempts to progress HCIxFabrication research from the lab to the market

Making It Work Is the Work: Engineering Maturity as Epistemic Work

We share some reflections on our attempts to move HCIxFabrication research from the lab to the market in a short paper, "Making It Work Is the Work" (to be discussed at the RealFab'26 workshop at CHI 2026 in Barcelona). This is joint work with my colleagues Danny Leen, Stig Konings, and Raf Ramakers at the Digital Future Lab (UHasselt – Flanders Make).

Read more →

Extended Abstract accepted at CHI 2026: Teaching Cobots What to Do by Watching an Expert

DELEGACT: Let the Robot Watch, Then Decide Who Does What

Our extended abstract "Learning to Delegate and Act with DELEGACT: Multimodal Language Models for Task-Level Human–Cobot Planning in Industrial Assembly" has been accepted at CHI 2026 in Barcelona. This is work by Bram Verstappen together with Dries Cardinaels, Danny Leen, and Raf Ramakers at the Digital Future Lab (UHasselt - Flanders Make).

Read more →

Paper accepted at CHI 2026: Helping Humans Control Robots on the Moon

Every Move You Make: Helping Operators See Where Their Robot Will Go

Our paper "Every Move You Make: Visualizing Near-Future Motion Under Delay for Telerobotics" has been accepted at CHI 2026 in Barcelona — the premier conference for human-computer interaction research. This is joint work with my PhD student Dries Cardinaels, Raf Ramakers, Tom Veuskens, Thomas Pietrzak (Univ. Lille, Inria), and Gustavo Rovelo Ruiz at the Digital Future Lab (UHasselt – Flanders Make). More details on the publication page.

Paper page on driescardinaels.be

Read more →

Making it work is the work: Engineering maturity as epistemic work

Many HCI fabrication systems are compelling as prototypes but remain difficult to reuse, extend, or transfer beyond their original publication. A common explanation is that adoption simply takes time. We argue that the issue is more fundamental. The knowledge needed to make fabrication systems transferable, namely how they behave across different materials, machines, and users, usually does not exist at the time of publication because the work required to generate this knowledge is rarely incentivized or rewarded. Drawing on engineering epistemology and prior debates in systems-oriented HCI, we reframe engineering maturity as epistemic work: sustained engineering effort that produces knowledge which prototyping alone cannot reveal. We propose six dimensions, Fab-ilities, as a vocabulary to describe what aspects of fabrication artifacts have become established and what knowledge remains tacit: (1) buildability, (2) executability, (3) reliability, (4) maintainability, (5) transferability, and (6) scalability. We describe five of our own projects (JigFab, StoryStick++, Silicone Devices, LamiFold, and PaperPulse), where varied attempts at dissemination, such as commercialization, spin-offs, and market exploration, each exposed different gaps between what we published and what transfer actually required.

Learning to delegate and act with DELEGACT: Multimodal language models for task-level human–cobot planning in industrial assembly

Industrial assembly is shifting toward human-robot collaboration (HRC) to leverage the complementary strengths of both agents. However, traditional task allocation, referred to as the Robotic Assembly Line Balancing Problem (RALBP), remains labor-intensive and often lacks transparency. We introduce DELEGACT, a framework designed to produce workable, intelligible human-cobot task allocations. The framework uses a Vision-Language Model (VLM) to extract atomic operations from expert demonstration videos, then employs a Large Language Model (LLM) to delegate these tasks based on robot specifications, operator competencies, and material definitions. We provide a proof-of-concept prototype and preliminary testing on illustrative cases. Results demonstrate the system's ability to reason about complex constraints such as precision, weight, and ergonomics. This paper illustrates how off-the-shelf foundation models can automate HRC decision-making via a human-in-the-loop paradigm while preserving operator agency and understanding.
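The delegation stage can be sketched in miniature. In this hypothetical stand-in, a simple rule-based check replaces the LLM's constraint reasoning; the `Operation` fields, payload and precision thresholds, and task names are all illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class Operation:
    """One atomic operation, as the VLM stage might extract it from video."""
    name: str
    precision_mm: float  # coarsest acceptable placement tolerance
    weight_kg: float     # weight of the handled part

def delegate(ops, robot_payload_kg=5.0, robot_precision_mm=0.5):
    """Assign each operation to the cobot when it fits the robot's payload
    and precision envelope, otherwise to the human operator.
    (Rule-based stand-in for the LLM delegation step; thresholds are made up.)"""
    plan = {}
    for op in ops:
        fits_robot = (op.weight_kg <= robot_payload_kg
                      and op.precision_mm >= robot_precision_mm)
        plan[op.name] = "cobot" if fits_robot else "human"
    return plan

ops = [
    Operation("place housing", precision_mm=2.0, weight_kg=1.2),
    Operation("insert flex cable", precision_mm=0.1, weight_kg=0.05),
    Operation("lift base plate", precision_mm=5.0, weight_kg=8.0),
]
print(delegate(ops))
# → {'place housing': 'cobot', 'insert flex cable': 'human', 'lift base plate': 'human'}
```

The delicate cable insertion and the heavy lift both fall outside the robot's envelope and go to the human, which mirrors the kind of precision/weight/ergonomics reasoning the abstract describes.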

Every move you make: Visualizing near-future motion under delay for telerobotics

Delays in direct teleoperation decouple operator input from robot feedback. We frame this not as a unitary problem but as three facets of operator uncertainty: (1) communication, when commands take effect, (2) trajectory, how inputs map to motion, and (3) environmental, how external factors alter outcomes. We externalized each facet through predictive visualizations: Network, Path, and Envelope. In a controlled study with 24 participants (novices in telerobotics) navigating a simulated robot under a fixed 2.56s round-trip delay, we compared these visualizations against a delayed-video baseline. Path significantly shortened task time, lowered perceived cognitive load, and reduced reliance on reactive "move-and-wait" behavior. Envelope lowered cognitive load but did not significantly reduce reactive behavior or improve performance, while Network had no measurable effect. These results indicate that predictive support is effective only when trajectory uncertainty is externalized, enabling operators to move from reactive to more proactive control.
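The core idea behind a Path-style overlay can be sketched with simple dead reckoning: commands already sent but not yet visible in the delayed video are extrapolated to show where the robot will actually be. This is a minimal, hypothetical illustration under stated assumptions (2D pose, velocity commands at a fixed rate), not the paper's implementation.

```python
def predict_pose(last_seen_pose, pending_commands, dt=0.1):
    """Extrapolate the robot's pose past the delayed video frame.

    last_seen_pose: (x, y) from the most recent (delayed) video frame.
    pending_commands: velocity commands (vx, vy) sent but not yet
    reflected in the feedback, each assumed to last dt seconds.
    """
    x, y = last_seen_pose
    for vx, vy in pending_commands:
        x += vx * dt
        y += vy * dt
    return (x, y)

# Under a ~2.5 s round-trip delay with commands every 0.1 s, roughly 25
# commands are "in flight"; driving forward at 1 m/s puts the robot
# about 2.5 m ahead of where the delayed video shows it.
pending = [(1.0, 0.0)] * 25
print(predict_pose((0.0, 0.0), pending))
```

Rendering this predicted pose (and the poses along the way) as an overlay is what lets operators plan ahead instead of falling back on move-and-wait.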

Two student projects from the UHasselt Human-AI Interaction course featured in SAI Update

The SAI Update magazine (Nov 2025, sia.be) selected two projects from our Human–AI Interaction (HAII) course for its Next Technology Generation special. We are proud of our students Linsey Helsen and Xander Vervaecke, who turned their Human–AI Interaction project ideas into concrete, useful systems.

1) A Multi-Agent Approach to Fact-Checking — Xander Vervaecke (UHasselt). Xander’s LieSpy.ai coordinates multiple LLMs (e.g., GPT, Gemini, Mistral) to verify claims, compare reasoning, and aggregate evidence into a transparent verdict. The interface exposes sources, trust scores, and model rationales, moving fact-checking beyond a single-model answer. Key ideas: multi-agent collaboration, cross-validation, explainability.
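The aggregation step can be sketched as a trust-weighted vote. This is a hypothetical stand-in for LieSpy.ai's verdict stage, not its actual logic: the agent names, trust scores, and confidences below are illustrative assumptions.

```python
def aggregate(verdicts, trust):
    """Combine per-agent verdicts into one label via a trust-weighted vote.

    verdicts: agent name -> (label, confidence in [0, 1])
    trust:    agent name -> trust weight
    Returns the winning label plus the per-label score breakdown,
    which the UI could expose for transparency.
    """
    scores = {}
    for agent, (label, confidence) in verdicts.items():
        scores[label] = scores.get(label, 0.0) + trust[agent] * confidence
    final = max(scores, key=scores.get)
    return final, scores

verdicts = {
    "gpt":     ("false", 0.9),
    "gemini":  ("false", 0.7),
    "mistral": ("true",  0.6),
}
trust = {"gpt": 1.0, "gemini": 0.8, "mistral": 0.9}
print(aggregate(verdicts, trust))
```

Keeping the per-label scores alongside the final verdict is what lets the interface show why the system decided as it did, rather than presenting a single-model answer.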

Read more →

3rd Workshop on Engineering Interactive Systems Embedding AI Technologies
