<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Kris Luyten</title>
    <link>https://www.krisluyten.net/</link>
    <description>Recent content on Kris Luyten</description>
    <generator>Hugo</generator>
    <language>en</language>
    <copyright>&amp;copy; 2025 Kris Luyten</copyright>
    <lastBuildDate>Wed, 11 Mar 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://www.krisluyten.net/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Software Engineering</title>
      <link>https://www.krisluyten.net/teaching/se-2025/</link>
      <pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/teaching/se-2025/</guid>
      <description>&lt;p&gt;This course introduces the processes, tools, and techniques for building complex, correct, and usable software. The different phases of a software engineering process are studied. We start with a foundation in requirements engineering. Various process models for software development are covered, including agile processes. Techniques such as test-driven development and refactoring are also addressed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>PhD Students</title>
      <link>https://www.krisluyten.net/research/students/</link>
      <pubDate>Mon, 04 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/research/students/</guid>
      <description>&lt;h2 id=&#34;current-students&#34;&gt;Current students&lt;/h2&gt;&#xA;&lt;h3 id=&#34;as-advisor&#34;&gt;As advisor:&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://driescardinaels.be/&#34;&gt;Dries Cardinaels&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://gilleseerlings.github.io/website/&#34;&gt;Gilles Eerlings&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;hr&gt;&#xA;&lt;h2 id=&#34;previous-students&#34;&gt;Previous students&lt;/h2&gt;&#xA;&lt;h3 id=&#34;as-advisor-1&#34;&gt;As advisor:&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;dr. &lt;a href=&#34;https://www.linkedin.com/in/bramvandeurzen/&#34;&gt;Bram van Deurzen&lt;/a&gt;: Interface Clarity in Human-Robot Interaction across Physical and Virtual Realities&lt;/li&gt;&#xA;&lt;li&gt;dr. &lt;a href=&#34;https://www.linkedin.com/in/svencoppers&#34;&gt;Sven Coppers&lt;/a&gt;: Intelligibility and control for context-aware Internet of Things applications. (2021)&lt;/li&gt;&#xA;&lt;li&gt;prof. dr. &lt;a href=&#34;http://www.raframakers.net&#34;&gt;Raf Ramakers&lt;/a&gt;: End-User Control over Physical User-Interfaces: From Digital Fabrication to Real-Time Adaptability. (2016)&lt;/li&gt;&#xA;&lt;li&gt;dr. &lt;a href=&#34;https://www.kashyaptodi.com&#34;&gt;Kashyap Todi&lt;/a&gt;: Improving and Facilitating the Placement of Interactive Elements on User Interfaces. (2018)&lt;/li&gt;&#xA;&lt;li&gt;dr. &lt;a href=&#34;https://www.linkedin.com/in/sean-tan-8b146322/&#34;&gt;Sean Chiew Seng Tan&lt;/a&gt;: Enabling Empathic Communication in Ubiquitous Computing Environments to Improve interaction between People. (2014)&lt;/li&gt;&#xA;&lt;li&gt;dr. &lt;a href=&#34;https://www.linkedin.com/in/joel-vogt/&#34;&gt;Joël Vogt&lt;/a&gt;: Requirements Elicitation and System Specification of Assistive Systems for People with Mild Dementia. (2013)&lt;/li&gt;&#xA;&lt;li&gt;dr. &lt;a href=&#34;https://www.linkedin.com/in/gvanderhulst/&#34;&gt;Geert Vanderhulst&lt;/a&gt;: Development and deployment of interactive pervasive applications for ambient intelligent environments. (2010)&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;as-a-co-advisor&#34;&gt;As a co-advisor:&lt;/h3&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;dr. Jeroen Ceyssens: Psychomotor Task Assistance and Training using Extended Reality.&lt;/li&gt;&#xA;&lt;li&gt;dr. Tom Veuskens: Advancing Design Reuse through Intelligible Dialogs in Feature-Based CAD Modeling.&lt;/li&gt;&#xA;&lt;li&gt;dr. Mannu Lambrichts: Democratizing Prototyping of Interactive Devices: A Unified Plug-and-Play Platform for Interconnecting and Driving Heterogeneous Electronic Components.&lt;/li&gt;&#xA;&lt;li&gt;dr. Niels Bylois: Conditional Metrics for Data Ingestion Validation.&lt;/li&gt;&#xA;&lt;li&gt;dr. Eva Geurts: Exploring Mobile Interactive Applications to Increase Patient Motivation in Rehabilitation.&lt;/li&gt;&#xA;&lt;li&gt;dr. Steven Nagels: Electronic devices which stretch like rubber bands: an holistic approach to materials and fabrication methods for stretchable electronics.&lt;/li&gt;&#xA;&lt;li&gt;prof. dr. Supraja Sankaran: HeartHab: From Persuasion to Self-management in Cardiac.&lt;/li&gt;&#xA;&lt;li&gt;dr. Marisela Gutierrez Lopez: Techniques and Artefacts for Documenting Design Rationale Among Multidisciplinary Design Teams.&lt;/li&gt;&#xA;&lt;li&gt;dr. Fredy Enrique Cuenca Lucero: Towards a composite event-based language for describing multimodal interactions.&lt;/li&gt;&#xA;&lt;li&gt;dr. 
Jan Meskens: Tool Support for Designing, Managing and Optimizing Multi-Device User Interfaces.&lt;/li&gt;&#xA;&lt;li&gt;dr. Jo Vermeulen: Designing for Intelligibility and Control in Ubiquitous Computing Environments.&lt;/li&gt;&#xA;&lt;li&gt;dr. Nasim Mahmud: Exploiting context-awareness and social interaction to provide help in large-scale environments.&lt;/li&gt;&#xA;&lt;li&gt;dr. Mieke Haesen: User-centered process framework and techniques to support the realization of interactive systems by multi-disciplinary teams.&lt;/li&gt;&#xA;&lt;li&gt;dr. Petr Aksenov: The variability of location context in pervasive environments: modelling, representation and visualisation.&lt;/li&gt;&#xA;&lt;li&gt;prof. dr. Davy Vanacken: Touch-based interaction and collaboration in walk-up-and-use and multiuser environment.&lt;/li&gt;&#xA;&lt;li&gt;prof. dr. Wouter Gelade: Foundations of XML: Regular expressions revisited&lt;/li&gt;&#xA;&lt;/ul&gt;</description>
    </item>
    <item>
      <title>Software Engineering</title>
      <link>https://www.krisluyten.net/teaching/se/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/teaching/se/</guid>
      <description>&lt;p&gt;This course introduces the processes, tools, and techniques for building complex, correct, and usable software. The different phases of a software engineering process are studied. We start with a foundation in requirements engineering. Various process models for software development are covered, including agile processes. Techniques such as test-driven development and refactoring are also addressed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Geavanceerd Object-georienteerd Programmeren</title>
      <link>https://www.krisluyten.net/teaching/gop-2025/</link>
      <pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/teaching/gop-2025/</guid>
      <description>&lt;p&gt;Students deepen their knowledge of object-oriented programming, with due attention to designing and programming well-structured, robust, extensible, and elegant code. Java is used as the central object-oriented programming language, but the concepts and techniques taught apply to many object-oriented programming languages.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Geavanceerd Object-georienteerd Programmeren</title>
      <link>https://www.krisluyten.net/teaching/gop/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/teaching/gop/</guid>
      <description>&lt;p&gt;Students deepen their knowledge of object-oriented programming, with due attention to designing and programming well-structured, robust, extensible, and elegant code. Java is used as the central object-oriented programming language, but the concepts and techniques taught apply to many object-oriented programming languages.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Committees and Boards</title>
      <link>https://www.krisluyten.net/research/committees/</link>
      <pubDate>Wed, 26 Jul 2023 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/research/committees/</guid>
      <description>&lt;p&gt;My service to the research community:&lt;/p&gt;&#xA;&lt;h2 id=&#34;as-a-chair&#34;&gt;As a chair&lt;/h2&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;ACM EICS 2027 (General Conference co-chair, with &lt;a href=&#34;https://www.irit.fr/recherches/ICS/people/martinie/&#34;&gt;Célia Martinie&lt;/a&gt;)&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://eics.acm.org/2026/&#34;&gt;ACM EICS 2026&lt;/a&gt; (Doctoral Consortium co-chair, with &lt;a href=&#34;http://hiis.isti.cnr.it/Users/Fabio/index.html&#34;&gt;Fabio Paternò&lt;/a&gt; and &lt;a href=&#34;https://hci.ece.upatras.gr/people/avouris/&#34;&gt;Nikos Avouris&lt;/a&gt;)&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://eics.acm.org/2025/&#34;&gt;ACM EICS 2025&lt;/a&gt; (Technical Programme co-chair, with &lt;a href=&#34;https://scholar.google.com/citations?user=y0kz-FgAAAAJ&amp;amp;hl=en&#34;&gt;Luciana Zaina&lt;/a&gt;)&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://eics.acm.org/2023/&#34;&gt;ACM EICS 2023&lt;/a&gt; (Full Papers and Tech Notes co-chair, with &lt;a href=&#34;https://giove.isti.cnr.it/Users/Carmen/index.html&#34;&gt;Carmen Santoro&lt;/a&gt;)&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://eics.acm.org/2022/&#34;&gt;ACM EICS 2022&lt;/a&gt; (Full Papers and Tech Notes co-chair, with &lt;a href=&#34;https://www.irit.fr/recherches/ICS/people/palanque/&#34;&gt;Philippe Palanque&lt;/a&gt;)&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://eics.acm.org/2021/&#34;&gt;ACM EICS 2021&lt;/a&gt; (Tutorial co-chair, with &lt;a href=&#34;https://www.di.uminho.pt/~jfc/index.shtml&#34;&gt;José Campos&lt;/a&gt;)&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://eics.acm.org&#34;&gt;ACM SIGCHI Engineering Interactive Computing Systems conference series 2017-2020&lt;/a&gt; (steering committee chair)&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://eics.acm.org/2018/&#34;&gt;ACM EICS 2018&lt;/a&gt; (General Conference co-chair, with Emmanuel Pietriga)&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://eics.acm.org/2016/&#34;&gt;ACM EICS 2016&lt;/a&gt; (General Conference co-chair, with &lt;a href=&#34;https://www.irit.fr/recherches/ICS/people/palanque/&#34;&gt;Philippe Palanque&lt;/a&gt;)&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://eics.acm.org/2015/&#34;&gt;ACM EICS 2015&lt;/a&gt; (Workshops co-chair, with &lt;a href=&#34;https://profiles.waikato.ac.nz/judy.bowen&#34;&gt;Judy Bowen&lt;/a&gt;)&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://eics.acm.org/2014/&#34;&gt;ACM EICS 2014&lt;/a&gt; (Doctoral Consortium co-chair, with &lt;a href=&#34;https://iihm.imag.fr/nigay/&#34;&gt;Laurence Nigay&lt;/a&gt;)&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://eics.acm.org/2013/&#34;&gt;ACM EICS 2013&lt;/a&gt; (Long papers co-chair, with &lt;a href=&#34;https://www.ncl.ac.uk/computing/people/profile/michaelharrison.html&#34;&gt;Michael Harrison&lt;/a&gt;)&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://link.springer.com/book/10.1007/978-3-642-34898-3&#34;&gt;AMI 2012&lt;/a&gt; (Short Paper co-chair, with &lt;a href=&#34;https://scholar.google.com/citations?user=StGjk88AAAAJ&amp;amp;hl=nl&#34;&gt;Evert van Loenen&lt;/a&gt;)&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;http://www.chi2008.org&#34;&gt;ACM CHI 2008&lt;/a&gt; (Work-in-Progress co-chair,  with &lt;a href=&#34;https://webpages.charlotte.edu/mperez19/&#34;&gt;Manuel Pérez-Quiñones&lt;/a&gt;)&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://dblp.org/db/conf/tamodia/tamodia2006.html&#34;&gt;TAMODIA 2006&lt;/a&gt; (Program Chair)&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 
id=&#34;as-a-workshop-co-organizer&#34;&gt;As a workshop (co-)organizer&lt;/h2&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://doi.org/10.1145/3731406&#34;&gt;Engineering Interactive Computer Systems: EICS 2025 Workshop Companion Proceedings&lt;/a&gt; (2025) — co-editor with &lt;a href=&#34;https://profiles.waikato.ac.nz/judy.bowen&#34;&gt;Judy Bowen&lt;/a&gt;, Benjamin Weyers, and &lt;a href=&#34;https://scholar.google.com/citations?user=y0kz-FgAAAAJ&amp;amp;hl=en&#34;&gt;Luciana Zaina&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://doi.org/10.1007/978-3-031-91760-8&#34;&gt;Engineering Interactive Computer Systems: EICS 2024 International Workshops&lt;/a&gt; (2024) — revised selected papers, Springer LNCS 15518; co-editor with &lt;a href=&#34;https://scholar.google.com/citations?user=y0kz-FgAAAAJ&amp;amp;hl=en&#34;&gt;Luciana Zaina&lt;/a&gt;, José Creissac Campos, Lucio Davide Spano, &lt;a href=&#34;https://www.irit.fr/recherches/ICS/people/palanque/&#34;&gt;Philippe Palanque&lt;/a&gt; and others&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://doi.org/10.1145/3531073.3535257&#34;&gt;HCI and Worker Well-being in Manufacturing Industry&lt;/a&gt; (2022) — workshop at &lt;a href=&#34;https://avi2022.dibris.unige.it/&#34;&gt;AVI 2022&lt;/a&gt;, with Eva Geurts, Gustavo Rovelo Ruiz, Steven Houben, Benjamin Weyers, An Jacobs, and &lt;a href=&#34;https://www.irit.fr/recherches/ICS/people/palanque/&#34;&gt;Philippe Palanque&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;SmartObjects: Interacting with Smart Objects&lt;/strong&gt; — recurring workshop series at IUI and CHI (6 editions, 2011–2018):&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://doi.org/10.1145/3170427.3170606&#34;&gt;6th edition at ACM CHI 2018&lt;/a&gt; — with Florian Müller, Dirk Schnelle-Walka, Tobias Grosse-Puppendahl, Sebastian Günther, Markus Funk, Oliver Brdiczka, Niloofar Dezfuli, Max Mühlhäuser&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://doi.org/10.1145/3030024.3040249&#34;&gt;5th edition at ACM IUI 2017&lt;/a&gt; — with Dirk Schnelle-Walka, Florian Müller, Tobias Grosse-Puppendahl, Max Mühlhäuser, Oliver Brdiczka&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://doi.org/10.1145/2678025.2716269&#34;&gt;4th edition at ACM IUI 2015&lt;/a&gt; — with Dirk Schnelle-Walka, Max Mühlhäuser, Stefan Radomski, Oliver Brdiczka, Jochen Huber, Tobias Grosse-Puppendahl&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://doi.org/10.1145/2559184.2559940&#34;&gt;3rd edition at ACM IUI 2014&lt;/a&gt; — with Dirk Schnelle-Walka, Jochen Huber, Stefan Radomski, Oliver Brdiczka, Max Mühlhäuser&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://doi.org/10.1145/2451176.2451227&#34;&gt;2nd edition at ACM IUI 2013&lt;/a&gt; — with Dirk Schnelle-Walka, Jochen Huber, Roman Lissermann, Oliver Brdiczka, Max Mühlhäuser&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://doi.org/10.1145/1943403.1943511&#34;&gt;1st edition at ACM IUI 2011&lt;/a&gt; — with Melanie Hartmann, Daniel Schreiber, Oliver Brdiczka, Max Mühlhäuser&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://doi.org/10.1145/2876456.2882849&#34;&gt;SCWT: Smart Connected and Wearable Things&lt;/a&gt; (2016) — workshop at ACM IUI 2016, with Dirk Schnelle-Walka, Florian Müller, Massimo Mecella, Joel Lanir, Tsvi Kuflik, Oliver Brdiczka, Max Mühlhäuser, and others&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Engineering Patterns for Multi-Touch Interfaces&lt;/strong&gt; — workshop series at ACM EICS (2 editions, 
2010–2011):&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://doi.org/10.1145/1996461.1996553&#34;&gt;2nd edition at ACM EICS 2011&lt;/a&gt; — with Davy Vanacken, Malte Weiss, Jan O. Borchers, Miguel A. Nacenta&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://doi.org/10.1145/1822018.1822084&#34;&gt;1st edition at ACM EICS 2010&lt;/a&gt; — with Davy Vanacken, Malte Weiss, Jan O. Borchers, Shahram Izadi, Daniel Wigdor&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;./files/pubs/pdfs/luyten-uixml2004.pdf&#34;&gt;Developing User Interfaces with XML: Advances on User Interface Description Language&lt;/a&gt; (2004) — workshop at &lt;a href=&#34;https://ieeexplore.ieee.org/xpl/conhome/9277/proceeding&#34;&gt;AVI 2004&lt;/a&gt;, Gallipoli, Italy; with Marc Abrams, Jean Vanderdonckt, and Quentin Limbourg &lt;a href=&#34;./files/pubs/pdfs/luyten-uixml2004.pdf&#34; target=&#34;_blank&#34; class=&#34;badge badge-small pdf&#34;&gt;&lt;i class=&#34;far fa-file-pdf&#34;&gt;&lt;/i&gt; PDF&lt;/a&gt;.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;editorial-work&#34;&gt;Editorial work&lt;/h2&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://dl.acm.org/toc/pacmhci/2023/7/EICS&#34;&gt;PACMHCI - Engineering Interactive Computing Systems, June 2023 issue&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://dl.acm.org/toc/pacmhci/2022/6/EICS&#34;&gt;PACMHCI - Engineering Interactive Computing Systems, June 2022 issue&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://dl.acm.org/toc/tochi/2009/16/4&#34;&gt;ACM Transactions on Computer-Human Interaction, Vol. 16, Issue 4, November 2009&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;standardization-committees&#34;&gt;Standardization Committees&lt;/h2&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;The &lt;a href=&#34;https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml&#34;&gt;OASIS UIML Specification Technical Committee&lt;/a&gt; (closed)&lt;/li&gt;&#xA;&lt;/ul&gt;</description>
    </item>
    <item>
      <title>Human-AI Interaction</title>
      <link>https://www.krisluyten.net/teaching/haii-2025/</link>
      <pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/teaching/haii-2025/</guid>
      <description>&lt;p&gt;Artificial Intelligence (AI) attempts to simulate human intelligence, operates on all kinds of data that relate to or are useful for people, and is only truly useful when it has a positive effect on people&amp;rsquo;s lives. In this course we study how AI can be deployed appropriately for the human user. This means that the human user gains a better understanding of an AI system and increased control over how such a system operates.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Human-AI Interaction</title>
      <link>https://www.krisluyten.net/teaching/haii/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/teaching/haii/</guid>
      <description>&lt;p&gt;Artificial Intelligence (AI) attempts to simulate human intelligence, operates on all kinds of data that relate to or are useful for people, and is only truly useful when it has a positive effect on people&amp;rsquo;s lives. In this course we study how AI can be deployed appropriately for the human user. This means that the human user gains a better understanding of an AI system and increased control over how such a system operates.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Programming Technologies</title>
      <link>https://www.krisluyten.net/teaching/programming-technologies-2025/</link>
      <pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/teaching/programming-technologies-2025/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve taught a programming technologies course in the past, covering multiple programming languages across various programming paradigms, and I am currently co-teaching a similar course. For archival purposes you can find some of the older and more recent teaching artifacts (in &lt;em&gt;Dutch&lt;/em&gt;) here.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Programming Technologies</title>
      <link>https://www.krisluyten.net/teaching/programming-technologies/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/teaching/programming-technologies/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve taught a programming technologies course in the past, covering multiple programming languages across various programming paradigms, and I am currently co-teaching a similar course. For archival purposes you can find some of the older and more recent teaching artifacts (in &lt;em&gt;Dutch&lt;/em&gt;) here.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Geïntegreerd Groepsproject</title>
      <link>https://www.krisluyten.net/teaching/integrated-project-2025/</link>
      <pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/teaching/integrated-project-2025/</guid>
      <description>&lt;p&gt;Students experience the full software development life cycle, from concept to deployment. The programme simulates a realistic work environment with the typical roles and activities of a professional software development process.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Geïntegreerd Groepsproject</title>
      <link>https://www.krisluyten.net/teaching/integrated-project/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/teaching/integrated-project/</guid>
      <description>&lt;p&gt;Students experience the full software development life cycle, from concept to deployment. The programme simulates a realistic work environment with the typical roles and activities of a professional software development process.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Information Visualisation</title>
      <link>https://www.krisluyten.net/teaching/information-visualisation-2025/</link>
      <pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/teaching/information-visualisation-2025/</guid>
      <description>&lt;p&gt;This course examines visualization principles and techniques that facilitate improved understanding and analysis of data. Students develop critical perspectives on existing visualizations while learning to design and implement visualizations for complex datasets.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Information Visualisation</title>
      <link>https://www.krisluyten.net/teaching/information-visualisation/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/teaching/information-visualisation/</guid>
      <description>&lt;p&gt;This course examines visualization principles and techniques that facilitate improved understanding and analysis of data. Students develop critical perspectives on existing visualizations while learning to design and implement visualizations for complex datasets.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Linux Operating Systems (Archived 2006)</title>
      <link>https://www.krisluyten.net/teaching/linux/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/teaching/linux/</guid>
      <description>&lt;p&gt;&lt;strong&gt;[ARCHIVAL 2006]&lt;/strong&gt; A historical introductory course on Linux operating systems from 2006, covering the fundamental concepts of Unix/Linux systems. This course gave students a thorough grounding in working with Linux commands, system administration, and the philosophy behind Unix-based systems.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Making It Work Is the Work: attempts to progress HCIxFabrication research from the lab to the market</title>
      <link>https://www.krisluyten.net/news/2026/03/11/realfab2026_makingitwork/</link>
      <pubDate>Wed, 11 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2026/03/11/realfab2026_makingitwork/</guid>
      <description>&lt;h2 id=&#34;making-it-work-is-the-work-engineering-maturity-as-epistemic-work&#34;&gt;&lt;a href=&#34;./files/pubs/html/leen-realfab2026/&#34;&gt;Making It Work Is the Work: Engineering Maturity as Epistemic Work&lt;/a&gt;&lt;/h2&gt;&#xA;&lt;p&gt;We contribute some reflections on our attempts to progress HCIxFabrication research from the lab to the market in a short paper &lt;strong&gt;&amp;quot;&lt;a href=&#34;./publications/leen_realfab26/&#34;&gt;Making It Work Is the Work&lt;/a&gt;&amp;quot;&lt;/strong&gt; (to be discussed at the &lt;strong&gt;&lt;a href=&#34;https://fabricationresearch.wordpress.com&#34;&gt;RealFab&#39;26&lt;/a&gt;&lt;/strong&gt; workshop at CHI 2026 in Barcelona). This is work with my colleagues Danny Leen, Stig Konings, and Raf Ramakers at the &lt;a href=&#34;https://www.uhasselt.be/en/instituten-en/digitalfuturelab&#34;&gt;Digital Future Lab&lt;/a&gt; (UHasselt &amp;ndash; Flanders Make).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Extended Abstract accepted at CHI 2026: Teaching Cobots What to Do by Watching an Expert</title>
      <link>https://www.krisluyten.net/news/2026/02/25/chi2026_delegact/</link>
      <pubDate>Wed, 25 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2026/02/25/chi2026_delegact/</guid>
      <description>&lt;h2 id=&#34;delegact-let-the-robot-watch-then-decide-who-does-what&#34;&gt;&lt;a href=&#34;https://driescardinaels.be/papers/delegact/index.html&#34;&gt;DELEGACT: Let the Robot Watch, Then Decide Who Does What&lt;/a&gt;&lt;/h2&gt;&#xA;&lt;p&gt;Our extended abstract &lt;strong&gt;&amp;quot;&lt;a href=&#34;https://driescardinaels.be/papers/delegact/index.html&#34;&gt;Learning to Delegate and Act with DELEGACT: Multimodal Language Models for Task-Level Human–Cobot Planning in Industrial Assembly&lt;/a&gt;&amp;quot;&lt;/strong&gt; has been accepted at &lt;strong&gt;CHI 2026&lt;/strong&gt; in Barcelona. This is work by Bram Verstappen together with Dries Cardinaels, Danny Leen, and Raf Ramakers at the &lt;a href=&#34;https://digitalfuturelab.be/&#34;&gt;Digital Future Lab&lt;/a&gt; (UHasselt - Flanders Make).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Presented at EURECA-PRO Education &amp; Research Days: Teaching as Training</title>
      <link>https://www.krisluyten.net/news/2026/02/03/glocalising2026_teachingastraining/</link>
      <pubDate>Tue, 03 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2026/02/03/glocalising2026_teachingastraining/</guid>
      <description>&lt;h2 id=&#34;teaching-as-training-incremental-and-iterative-ai-skill-development&#34;&gt;Teaching as Training: Incremental and Iterative AI Skill Development&lt;/h2&gt;&#xA;&lt;p&gt;We presented our contribution &lt;strong&gt;&amp;ldquo;Teaching as Training: Iterative and Incremental AI Skill Development&amp;rdquo;&lt;/strong&gt; (&lt;a href=&#34;./files/pubs/pdfs/luyten-notermans-glocalising2026.pdf&#34;&gt;&lt;i class=&#34;&#xA;far&#xA;fa-file-pdf&#xA;&#34;&gt;&lt;/i&gt;&lt;/a&gt;) at the &lt;strong&gt;EURECA-PRO Education &amp;amp; Research Days&lt;/strong&gt; in Hasselt, held under the theme &lt;em&gt;&lt;a href=&#34;https://www.uhasselt.be/en/events-en/2025-2026/glocalising-universities-a-shifting-horizon&#34;&gt;Glocalising Universities: A Shifting Horizon&lt;/a&gt;&lt;/em&gt;. This is joint work with Jolien Notermans (Department of Educational Development, Policy and Quality Assurance) and Sarah Doumen (Faculty of Sciences) at &lt;a href=&#34;https://www.uhasselt.be&#34;&gt;Hasselt University&lt;/a&gt;. More details on the &lt;a href=&#34;./publications/luytennotermans2026&#34;&gt;publication page&lt;/a&gt;. The visual story is generated using &lt;a href=&#34;https://storybookly.app&#34;&gt;StoryBookly&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Paper accepted at ICLR 2026: DIVERSE: Disagreement-Inducing Vector Evolution for Rashomon Set Exploration</title>
      <link>https://www.krisluyten.net/news/2026/01/26/iclr2026_diverse/</link>
      <pubDate>Mon, 26 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2026/01/26/iclr2026_diverse/</guid>
      <description>&lt;h2 id=&#34;diverse-finding-the-many-faces-of-ai-decision-making&#34;&gt;DIVERSE: Finding the Many Faces of AI Decision-Making&lt;/h2&gt;&#xA;&lt;p&gt;Our paper &lt;strong&gt;&amp;ldquo;DIVERSE: Disagreement-Inducing Vector Evolution for Rashomon Set Exploration&amp;rdquo;&lt;/strong&gt; (&lt;a href=&#34;./files/pubs/pdfs/eerlings-iclr2026.pdf&#34;&gt;&lt;i class=&#34;&#xA;far&#xA;fa-file-pdf&#xA;&#34;&gt;&lt;/i&gt;&lt;/a&gt;) has been accepted at &lt;strong&gt;ICLR 2026&lt;/strong&gt;, one of the top venues for machine learning research. This is joint work with my PhD student Gilles Eerlings, Brent Zoomers, Jori Liesenborgs, and Gustavo Rovelo Ruiz at the &lt;a href=&#34;https://digitalfuturelab.be/&#34;&gt;Digital Future Lab&lt;/a&gt;. More details on the &lt;a href=&#34;./publications/eerlings2026diverse&#34;&gt;publication page&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Paper accepted at CHI 2026: Helping Humans Control Robots on the Moon</title>
      <link>https://www.krisluyten.net/news/2026/01/20/chi2026_telerobotics/</link>
      <pubDate>Tue, 20 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2026/01/20/chi2026_telerobotics/</guid>
      <description>&lt;h2 id=&#34;every-move-you-make-helping-operators-see-where-their-robot-will-go&#34;&gt;&lt;a href=&#34;https://driescardinaels.be/papers/every-move-you-make/index.html&#34;&gt;Every Move You Make: Helping Operators See Where Their Robot Will Go&lt;/a&gt;&lt;/h2&gt;&#xA;&lt;p&gt;Our paper &lt;strong&gt;&amp;quot;&lt;a href=&#34;https://driescardinaels.be/papers/every-move-you-make/index.html&#34;&gt;Every Move You Make: Visualizing Near-Future Motion Under Delay for Telerobotics&lt;/a&gt;&amp;quot;&lt;/strong&gt; (&lt;a href=&#34;./files/pubs/pdfs/cardinaels-chi2026.pdf&#34;&gt;&lt;i class=&#34;&#xA;far&#xA;fa-file-pdf&#xA;&#34;&gt;&lt;/i&gt;&lt;/a&gt;) has been accepted at &lt;strong&gt;&lt;a href=&#34;https://chi2026.acm.org&#34;&gt;CHI 2026&lt;/a&gt;&lt;/strong&gt; in Barcelona — the premier conference for human-computer interaction research. This is joint work with my PhD student Dries Cardinaels, Raf Ramakers, Tom Veuskens, Thomas Pietrzak (Univ. Lille, Inria), and Gustavo Rovelo Ruiz at the &lt;a href=&#34;https://digitalfuturelab.be/&#34;&gt;Digital Future Lab&lt;/a&gt; (UHasselt - Flanders Make). More details on the &lt;a href=&#34;./publications/cardinaels2026everymove&#34;&gt;publication page&lt;/a&gt;.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;→ &lt;a href=&#34;https://driescardinaels.be/papers/every-move-you-make/index.html&#34;&gt;Paper page on driescardinaels.be&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>DIVERSE: Disagreement-inducing vector evolution for Rashomon set exploration</title>
      <link>https://www.krisluyten.net/publications/eerlings2026diverse/</link>
      <pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/eerlings2026diverse/</guid>
      <description>&lt;p&gt;We propose DIVERSE, a framework for systematically exploring the Rashomon set of deep neural networks, the collection of models that match a reference model&#39;s accuracy while differing in their predictive behavior. DIVERSE augments a pretrained model with Feature-wise Linear Modulation (FiLM) layers and uses Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to search a latent modulation space, generating diverse model variants without retraining or gradient access. Across MNIST, PneumoniaMNIST, and CIFAR-10, DIVERSE uncovers multiple high-performing yet functionally distinct models. Our experiments show that DIVERSE offers a competitive and efficient exploration of the Rashomon set, making it feasible to construct diverse sets that maintain robustness and performance while supporting well-balanced model multiplicity. While retraining remains the baseline for generating Rashomon sets, DIVERSE achieves comparable diversity at reduced computational cost.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Every move you make: Visualizing near-future motion under delay for telerobotics</title>
      <link>https://www.krisluyten.net/publications/cardinaels2026everymove/</link>
      <pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/cardinaels2026everymove/</guid>
      <description>&lt;p&gt;Delays in direct teleoperation decouple operator input from robot feedback. We frame this not as a unitary problem but as three facets of operator uncertainty: (1) communication, when commands take effect, (2) trajectory, how inputs map to motion, and (3) environmental, how external factors alter outcomes. We externalized each facet through predictive visualizations: Network, Path, and Envelope. In a controlled study with 24 participants (novices in telerobotics) navigating a simulated robot under a fixed 2.56s round-trip delay, we compared these visualizations against a delayed-video baseline. Path significantly shortened task time, lowered perceived cognitive load, and reduced reliance on reactive &amp;quot;move-and-wait&amp;quot; behavior. Envelope lowered cognitive load but did not significantly reduce reactive behavior or improve performance, while Network had no measurable effect. These results indicate that predictive support is effective only when trajectory uncertainty is externalized, enabling operators to move from reactive to more proactive control.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Learning to delegate and act with DELEGACT: Multimodal language models for task-level human--cobot planning in industrial assembly</title>
      <link>https://www.krisluyten.net/publications/verstappen2026delegact/</link>
      <pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/verstappen2026delegact/</guid>
      <description>&lt;p&gt;Industrial assembly is shifting toward human-robot collaboration (HRC) to leverage the complementary strengths of both agents. However, traditional task allocation, referred to as the Robotic Assembly Line Balancing Problem (RALBP), remains labor-intensive and often lacks transparency. We introduce DELEGACT, a framework designed to produce workable, intelligible human-cobot task allocations. The framework uses a Vision-Language Model (VLM) to extract atomic operations from expert demonstration videos, then employs a Large Language Model (LLM) to delegate these tasks based on robot specifications, operator competencies, and material definitions. We provide a proof-of-concept prototype and preliminary testing on illustrative cases. Results demonstrate the system&#39;s ability to reason about complex constraints such as precision, weight, and ergonomics. This paper illustrates how off-the-shelf foundation models can automate HRC decision-making via a human-in-the-loop paradigm while preserving operator agency and understanding.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Making it work is the work: Engineering maturity as epistemic work</title>
      <link>https://www.krisluyten.net/publications/leen_realfab26/</link>
      <pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/leen_realfab26/</guid>
      <description>&lt;p&gt;Many HCI fabrication systems are compelling as prototypes but remain difficult to reuse, extend, or transfer beyond their original publication. A common explanation is that adoption simply takes time. We argue that the issue is more fundamental. The knowledge needed to make fabrication systems transferable, namely how they behave across different materials, machines, and users, usually does not exist at the time of publication because the work required to generate this knowledge is rarely incentivized or rewarded. Drawing on engineering epistemology and prior debates in systems-oriented HCI, we reframe engineering maturity as epistemic work: sustained engineering effort that produces knowledge which prototyping alone cannot reveal. We propose six dimensions, Fab-ilities, as a vocabulary to describe what aspects of fabrication artifacts have become established and what knowledge remains tacit: (1) buildability, (2) executability, (3) reliability, (4) maintainability, (5) transferability, and (6) scalability. We describe five of our own projects (JigFab, StoryStick++, Silicone Devices, LamiFold, and PaperPulse), where varied attempts at dissemination, such as commercialization, spin-offs, and market exploration, each exposed different gaps between what we published and what transfer actually required.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Teaching as training: Iterative and incremental AI skill development</title>
      <link>https://www.krisluyten.net/publications/luytennotermans2026/</link>
      <pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/luytennotermans2026/</guid>
      <description>&lt;p&gt;Higher education must equip students with skills for complex, multidisciplinary challenges. Traditional approaches relying on fixed deadlines and traditional exams often limit opportunities for growth and continuous skill development. This contribution presents an iterative and incremental teaching method, applied for five years in a row in a master-level Computer Science course on Human&amp;ndash;AI Interaction. Our approach emphasizes formative feedback, collaborative learning, and individual progression. Students work on group assignments and an individual project, with no strict deadlines and unlimited opportunities during the semester to resubmit until a &amp;quot;pass&amp;quot; is achieved. Compact feedback sessions after each iteration serve both as assessment moments and teaching opportunities, clarifying expectations and guiding improvement. The method is grounded in mastery learning, formative assessment, and the High Impact Learning that Lasts model, fostering motivation and self-determination. Survey data and performance analysis of a study conducted two years ago show positive effects on learning outcomes and student motivation: students valued the clarity of assessment, the removal of &amp;quot;one chance&amp;quot; exams, and the freedom to iteratively improve. Over five years of teaching, this approach has proven effective in balancing diverse prior knowledge, building applicable skills, and sustaining motivation during the semester. We conclude that incremental and iterative teaching constitutes a viable model for skill-oriented higher education, adaptable across contexts where collaboration, feedback, and progression are central.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Open-Source Slidev Plugins for Interactive CS Teaching</title>
      <link>https://www.krisluyten.net/news/2025/11/13/slidev_plugins_jdoodle/</link>
      <pubDate>Thu, 13 Nov 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2025/11/13/slidev_plugins_jdoodle/</guid>
      <description>Two new Slidev components—LLMQuery and JavaPlayGround—bring AI-powered explanations and live code execution directly into lecture slides.</description>
    </item>
    <item>
      <title>Two student projects from the UHasselt Human-AI Interaction course featured in SAI Update</title>
      <link>https://www.krisluyten.net/news/2025/11/13/haii_course_results/</link>
      <pubDate>Thu, 13 Nov 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2025/11/13/haii_course_results/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;https://www.sai.be/pagina/magazine/&#34;&gt;SAI Update magazine&lt;/a&gt; (&lt;a href=&#34;https://www.sai.be/bestand/file/magazine/25/&#34;&gt;Nov 2025&lt;/a&gt;, &lt;a href=&#34;https://www.sai.be/&#34;&gt;sai.be&lt;/a&gt;) selected two projects from our Human–AI Interaction (HAII) course for its &lt;strong&gt;Next Technology Generation&lt;/strong&gt; special. Proud of our students &lt;em&gt;Linsey Helsen&lt;/em&gt; and &lt;em&gt;Xander Vervaecke&lt;/em&gt;, who turned their Human-AI Interaction project ideas into concrete, useful systems.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;1) A Multi-Agent Approach to Fact-Checking (&lt;a href=&#34;./files/pubs/SAI%20Update%2024%20UHasselt%20Xander%20Vervaecke.pdf&#34;&gt;&lt;i class=&#34;&#xA;far&#xA;fa-file-pdf&#xA;&#34;&gt;&lt;/i&gt;&lt;/a&gt;, &lt;a href=&#34;https://www.youtube.com/watch?v=cnLO6rG6ep4&amp;amp;list=PLCzqhoCko1gOvMB04bPwNhXwKBlTTy-ju&amp;amp;index=8&#34;&gt;&lt;i class=&#34;&#xA;fab&#xA;fa-youtube&#xA;&#34;&gt;&lt;/i&gt;&lt;/a&gt;) — Xander Vervaecke (UHasselt)&lt;/strong&gt;&#xA;&lt;em&gt;Xander&amp;rsquo;s LieSpy.ai&lt;/em&gt; coordinates multiple LLMs (e.g., GPT, Gemini, Mistral) to verify claims, compare reasoning, and aggregate evidence into a transparent verdict. The interface exposes sources, trust scores, and model rationales, moving fact-checking beyond a single-model answer. Key ideas: multi-agent collaboration, cross-validation, explainability.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Co-chairing EICS 2026 Doctoral Consortium and EICS 2027 Conference</title>
      <link>https://www.krisluyten.net/news/2025/08/08/co-chairing-eics-2026-doctoral-consortium-and-eics-2027-conference/</link>
      <pubDate>Fri, 08 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2025/08/08/co-chairing-eics-2026-doctoral-consortium-and-eics-2027-conference/</guid>
      <description>&lt;p&gt;I&amp;rsquo;m excited to announce my involvement in two upcoming EICS conferences: I&amp;rsquo;m co-chairing the &lt;a href=&#34;https://eics.acm.org/2026/dc.html&#34;&gt;EICS 2026 Doctoral Consortium&lt;/a&gt; together with &lt;strong&gt;&lt;a href=&#34;https://giove.isti.cnr.it/Users/Fabio/index.html&#34;&gt;Fabio Paternò&lt;/a&gt;&lt;/strong&gt; from CNR-ISTI, Italy, and &lt;strong&gt;&lt;a href=&#34;https://www.ece.upatras.gr/index.php/en/ece-faculty/avouris-nikolaos.html&#34;&gt;Nikos Avouris&lt;/a&gt;&lt;/strong&gt;, and I will be a general co-chair together with &lt;strong&gt;&lt;a href=&#34;https://www.irit.fr/recherches/ICS/people/martinie/&#34;&gt;Célia Martinie&lt;/a&gt;&lt;/strong&gt; for EICS 2027.&lt;/p&gt;</description>
    </item>
    <item>
      <title>LLMQuery for Slidev: Integration of on-the-fly LLM Queries during your Presentation</title>
      <link>https://www.krisluyten.net/news/2025/07/29/slidev_plugins_llm/</link>
      <pubDate>Tue, 29 Jul 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2025/07/29/slidev_plugins_llm/</guid>
      <description>&lt;p&gt;I wanted to show my students appropriate ways of using LLMs for and during coding, so I built (with some LLM help) a &lt;a href=&#34;https://sli.dev&#34;&gt;Slidev&lt;/a&gt; component in Vue, &lt;a href=&#34;https://www.krisluyten.net/files/software/LLMQuery.vue&#34;&gt;LLMQuery.vue&lt;/a&gt;, that integrates on-the-fly LLM interactions right into my slides. It feels important to actively show students how these tools can amplify human knowledge and skill building rather than replace it altogether, even if I&amp;rsquo;m far from an expert. Maybe it&amp;rsquo;s useful for others too, so I&amp;rsquo;m sharing it here for download and further tinkering—people who are much better at web dev (there are many!) can probably turn it into something truly polished.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Student Project Results from Human-AI Interaction Course</title>
      <link>https://www.krisluyten.net/news/2025/07/29/haii_course_results/</link>
      <pubDate>Tue, 29 Jul 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2025/07/29/haii_course_results/</guid>
      <description>&lt;p&gt;I am pleased to share the results of individual student projects from our Human-AI Interaction (HAII) course at Hasselt University. The &lt;a href=&#34;https://www.youtube.com/playlist?list=PLCzqhoCko1gOvMB04bPwNhXwKBlTTy-ju&#34;&gt;YouTube playlist&lt;/a&gt; showcases some of the work our students have done throughout the course.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Launch of the Digital Future Lab: Toward Intelligible, Trustworthy and Human-Centered Digital Systems</title>
      <link>https://www.krisluyten.net/news/2025/05/22/digital_future_lab_launch/</link>
      <pubDate>Thu, 22 May 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2025/05/22/digital_future_lab_launch/</guid>
      <description>&lt;p&gt;The official launch of the &lt;a href=&#34;https://www.uhasselt.be/en/instituten-en/digitalfuturelab&#34;&gt;Digital Future Lab (DFL)&lt;/a&gt; marks an exciting step forward for Hasselt University and for the ecosystem of digital innovation in Flanders. With over &lt;strong&gt;80 researchers across various interdisciplinary groups&lt;/strong&gt;, DFL focuses on creating &lt;strong&gt;well-designed, human-centered, trustworthy, and useful digital systems&lt;/strong&gt; that address both industrial and societal challenges. We did an interview (in Dutch, with Ann T&amp;rsquo;Syen) that can be found &lt;a href=&#34;https://view.publitas.com/universiteit-hasselt/magazine-juli-2025-online/page/14-15&#34;&gt;here&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Short paper on Delay-Invariant Telerobotic Interaction accepted for Intelligent User Interfaces 2025</title>
      <link>https://www.krisluyten.net/news/2025/02/05/iui_2025_paper/</link>
      <pubDate>Wed, 05 Feb 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2025/02/05/iui_2025_paper/</guid>
      <description>&lt;p&gt;Our short paper, &lt;em&gt;Challenges and Opportunities for Delay-Invariant Telerobotic Interactions&lt;/em&gt;, has been accepted for the &lt;a href=&#34;https://iui.acm.org/2025/&#34;&gt;29th ACM International Conference on Intelligent User Interfaces (IUI 2025)&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AI-Spectra: A visual dashboard for model multiplicity to enhance informed and transparent decision-making</title>
      <link>https://www.krisluyten.net/publications/10_1007_978_3_031_91760_8_5/</link>
      <pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/10_1007_978_3_031_91760_8_5/</guid>
      <description>&lt;p&gt;We present an approach, AI-Spectra, to leverage model multiplicity for interactive systems. Model multiplicity means using slightly different AI models that yield equally valid outcomes or predictions for the same task, thus relying on many simultaneous &amp;quot;expert advisors&amp;quot; that can have different opinions. Multiple AI models that generate potentially divergent results for the same task are challenging for users to deal with. Exposing this multiplicity helps users understand that AI models are not always correct and might differ, but it can also result in information overload when users are confronted with multiple results instead of one. AI-Spectra leverages model multiplicity through a visual dashboard designed to convey which AI models generate which results, while minimizing the cognitive effort needed to detect consensus among models and to see what types of models might have different opinions. We use a custom adaptation of Chernoff faces for AI-Spectra: Chernoff Bots. This visualization technique lets users quickly interpret complex, multivariate model configurations and compare predictions across multiple models. Our design builds on established Human-AI Interaction guidelines and well-known practices in information visualization. We validated our approach through a series of experiments training a wide variety of models on the MNIST dataset to perform number recognition. Our work contributes to the growing discourse on making AI systems more transparent, trustworthy, and effective through the strategic use of multiple models.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Challenges and opportunities for delay-invariant telerobotic interactions (short paper)</title>
      <link>https://www.krisluyten.net/publications/cardinaels2025delayinvarianttelerobotics/</link>
      <pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/cardinaels2025delayinvarianttelerobotics/</guid>
      <description>&lt;p&gt;Effective operation in direct-control telerobotics relies heavily on real-time communication between the operator and the robot, as the operator retains full control over the robot&#39;s actions. However, in scenarios involving long distances, communication delays disrupt this feedback loop, creating significant challenges for precise control. To investigate these challenges, we conducted a user study where participants operated a TurtleBot3 Waffle Pi under varying delay conditions. Post-experiment brainstorming and analysis revealed recurring challenges, including over-correction, unpredictable robot behavior, and reduced situational awareness. Potential solutions identified include improving robot behavior predictability, integrating feedforward mechanisms, and enhancing visual feedback. These findings underscore the importance of designing intelligent interfaces to mitigate the impact of delays on telerobotic performance.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Companion proceedings of the 17th ACM SIGCHI symposium on engineering interactive computing systems, EICS 2025, Trier, Germany, June 23-27, 2025</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_2025c/</link>
      <pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_2025c/</guid>
      <description></description>
    </item>
    <item>
      <title>EICS 2025 foreword</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_bowenwdmlz25/</link>
      <pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_bowenwdmlz25/</guid>
      <description></description>
    </item>
    <item>
      <title>Engineering interactive computer systems. EICS 2024 international workshops - Cagliari, Sardinia, Italy, June 24-26, 2024, revised selected papers</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_2024w/</link>
      <pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_2024w/</guid>
      <description></description>
    </item>
    <item>
      <title>Engineering interactive systems embedding AI technologies (3rd workshop on)</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_barricelliclmpp25/</link>
      <pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_barricelliclmpp25/</guid>
      <description></description>
    </item>
    <item>
      <title>Will astronauts fumble? Preparing for unpredictable floating tools with encountered haptics and virtual reality</title>
      <link>https://www.krisluyten.net/publications/claesen2025astronauts/</link>
      <pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/claesen2025astronauts/</guid>
      <description>&lt;p&gt;Astronauts routinely train spacewalks when on Earth. These spacewalks&amp;mdash;extravehicular activities (EVAs)&amp;mdash;are typically trained in neutral‑buoyancy pools or VR environments. However, neither environment captures the chaotic micro‑dynamics of a tethered tool in micro‑gravity. We designed and developed ZeroTraining: an encountered‑type haptic training rig (ZeroArm) paired with a VR simulation (ZeroPGT) that recreates the physical behavior of a tethered floating object in space. The integration of virtual and physical interactions supports dexterity training and improves transferability to real situations. We demonstrate feasibility using low‑cost components and validate the design in a formative study with ten participants.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Master thesis of Maties Claesen nominated for the EOS thesis award!</title>
      <link>https://www.krisluyten.net/news/2024/11/26/master_thesis_eos_nomination/</link>
      <pubDate>Wed, 27 Nov 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/11/26/master_thesis_eos_nomination/</guid>
      <description>&lt;p&gt;Very proud of my thesis student Maties Claesen, who has been nominated for the EOS thesis award! His work, &amp;ldquo;ZeroTraining: Extending Zero-Gravity Objects Simulation in Virtual Reality Using Robotics,&amp;rdquo; combines virtual reality and robotics to simulate weightless objects more realistically – crucial for astronaut training and space exploration. Maties demonstrated impressive creative problem-solving skills, especially in combining diverse fields to tackle complex challenges with limited hardware resources.&lt;/p&gt;&#xA;&lt;p&gt;Special thanks to Andreas Treuer, Martial Costantini, and Lionel Ferra at ESA for their support, valuable insights and feedback on this work. Andreas was particularly instrumental, sharing his experiences and providing feedback throughout the project, which was crucial in refining both the scope and the implementation of this work.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Throwback 20 years to 2004: XML-based User Interface Description Languages</title>
      <link>https://www.krisluyten.net/news/2024/11/26/xmluidl/</link>
      <pubDate>Tue, 26 Nov 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/11/26/xmluidl/</guid>
      <description>&lt;p&gt;This year marks 20 years since I co-organized the &lt;strong&gt;Workshop on User Interface Description Languages (UIDLs)&lt;/strong&gt; during the &lt;a href=&#34;https://avi-conference.org/2004/&#34;&gt;Working Conference on Advanced Visual Interfaces&lt;/a&gt;, held in Gallipoli, Italy (May 25–28, 2004). Together with &lt;strong&gt;Marc Abrams&lt;/strong&gt;, &lt;strong&gt;Jean Vanderdonckt&lt;/strong&gt;, and &lt;strong&gt;Quentin Limbourg&lt;/strong&gt;, we created an event that surpassed all expectations in terms of attendance, engagement, and the quality of contributions.&lt;/p&gt;&#xA;&lt;p&gt;The early 2000s were a transformative period for the field of Human-Computer Interaction (HCI). Researchers and practitioners alike were grappling with the challenge of building &lt;strong&gt;flexible, reusable, and context-aware user interfaces&lt;/strong&gt; (UIs) that could adapt to the growing variety of devices and use cases. XML, with its ability to structure and abstract information, became the language of choice for UIDLs.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Paper on A Visual Dashboard for Model Multiplicity</title>
      <link>https://www.krisluyten.net/news/2024/11/19/haii_papers/</link>
      <pubDate>Tue, 19 Nov 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/11/19/haii_papers/</guid>
      <description>&lt;p&gt;In AI research, model multiplicity can help users better understand the diversity of AI predictions.&#xA;Our new system &amp;ldquo;AI-Spectra&amp;rdquo; provides a visual dashboard to harness this concept effectively. Instead of relying on a single AI model, AI-Spectra uses multiple models—each seen as an expert—to produce predictions for the same task. This helps users see not only what different models agree or disagree on, but also why these differences occur. &lt;a href=&#34;https://scholar.google.com/citations?user=4tD_uWkAAAAJ&amp;amp;hl=en&#34;&gt;Gilles Eerlings&lt;/a&gt; (a &lt;a href=&#34;https://www.flandersairesearch.be/nl&#34;&gt;FAIR&lt;/a&gt; PhD student) and &lt;a href=&#34;https://scholar.google.com/citations?hl=en&amp;amp;authuser=1&amp;amp;user=CHuVofAAAAAJ&#34;&gt;Sebe Vanbrabant&lt;/a&gt; were the main contributors to this work and combined machine learning, model multiplicity and visualisations that focus on the characteristics of an AI model instead of explaining its behaviour.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Paper on Anthropomorphic User Interfaces</title>
      <link>https://www.krisluyten.net/news/2024/10/17/haii_papers/</link>
      <pubDate>Thu, 17 Oct 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/10/17/haii_papers/</guid>
      <description>&lt;h1 id=&#34;anthropomorphic-user-interfaces&#34;&gt;Anthropomorphic User Interfaces&lt;/h1&gt;&#xA;&lt;p&gt;Together with Eva Geurts, we explored &lt;em&gt;Anthropomorphic User Interfaces&lt;/em&gt; (AUIs) and created a taxonomy that helps us to analyze, identify, and design appropriate AUIs. The paper is available &lt;a href=&#34;https://krisluyten.net/files/pubs/pdfs/geurts-ecce2024.pdf&#34;&gt;here&lt;/a&gt;, and&#xA;our interactive tool that helps you to find related resources for specific aspects from our technology is available at this URL: &lt;a href=&#34;https://anthropomorphic-ui.onrender.com&#34;&gt;https://anthropomorphic-ui.onrender.com&lt;/a&gt;.&lt;/p&gt;&#xA;&lt;h2 id=&#34;citation&#34;&gt;Citation&lt;/h2&gt;&#xA;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bibtex&#34; data-lang=&#34;bibtex&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nc&#34;&gt;@inproceedings&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;&lt;span class=&#34;nl&#34;&gt;geurtsantropomorphic2024&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;na&#34;&gt;title&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{Anthropomorphic User Interfaces: Past, Present and Future of&#xA;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;s&#34;&gt;Anthropomorphic Aspects for Sustainable Digital Interface Design}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;na&#34;&gt;author&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{Eva Geurts and Kris Luyten}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;na&#34;&gt;booktitle&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{Proceedings of the European Conference on Cognitive Ergonomics 2024}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;na&#34;&gt;articleno&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{31}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;na&#34;&gt;numpages&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{7}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;na&#34;&gt;keywords&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{Anthropomorphism, Human-like interfaces, Taxonomy, User interface design}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;na&#34;&gt;location&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{Paris, 
France}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;na&#34;&gt;series&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{ECCE &amp;#39;24}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;na&#34;&gt;year&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{2024}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;na&#34;&gt;publisher&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{Association for Computing Machinery}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;na&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;{https://anthropomorphic-ui.onrender.com}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;na&#34;&gt;doi&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{10.1145/3673805.3673831}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;na&#34;&gt;isbn&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{9798400718243}&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;abstract&#34;&gt;Abstract&lt;/h2&gt;&#xA;&lt;p&gt;Interactions with computing systems and conversational services such as ChatGPT have become an inherent part of our daily lives. It is surprising that user interfaces, the gateways through which we communicate with an interactive intelligent system, are still predominantly devoid of hedonic aspects. There is little attempt to make communication through user interfaces intentionally more like communication with humans. Anthropomorphic user interfaces can transform interactions with intelligent software into more pleasant experiences by integrating human-like attributes. Anthropomorphic user interfaces expose human-like attributes that enable people to perceive, connect, and interact with the interfaces as social actors. This integration of human-like aspects not only enhances user experience but also holds the potential to make interfaces more sustainable, as they rely on familiar human interaction patterns, thus potentially reducing the learning curve and increasing user adoption rates. However, there is little consensus on how to build these anthropomorphic user interfaces. We conducted an extensive literature review on existing anthropomorphic user interfaces for software systems (past), in order to map and connect existing definitions and interpretations in an overarching taxonomy (present). The taxonomy is used to organize and structure examples of anthropomorphic user interfaces into an accessible collection. 
The taxonomy and an accompanying web tool provide designers with a reference framework for analyzing and dissecting existing anthropomorphic user interfaces, and for designing new anthropomorphic user interfaces (future).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Two Contributions Accepted for ACM VRST 2024 - AR Pattern Guidance and VR Text Input Modalities</title>
      <link>https://www.krisluyten.net/news/2024/08/13/vrst_papers/</link>
      <pubDate>Tue, 13 Aug 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/08/13/vrst_papers/</guid>
      <description>&lt;h2 id=&#34;paper-and-poster-accepted-for-acm-vrst-2024-ar-pattern-guidance-and-vr-text-input-modalities&#34;&gt;Paper and Poster Accepted for ACM VRST 2024: AR Pattern Guidance and VR Text Input Modalities&lt;/h2&gt;&#xA;&lt;p&gt;We are excited to announce that both our paper and poster have been conditionally accepted for presentation at the ACM Symposium on Virtual Reality Software and Technology (VRST) 2024, which will take place in Trier, Germany.&lt;/p&gt;&#xA;&lt;h3 id=&#34;paper-evaluation-of-ar-pattern-guidance-methods-for-a-surface-cleaning-task&#34;&gt;Paper: Evaluation of AR Pattern Guidance Methods for a Surface Cleaning Task&lt;/h3&gt;&#xA;&lt;p&gt;Our full paper titled &amp;ldquo;Evaluation of AR Pattern Guidance Methods for a Surface Cleaning Task&amp;rdquo; has been conditionally accepted.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Paper accepted for ISMAR 2024 - The Art of Timing in AR Guidance</title>
      <link>https://www.krisluyten.net/news/2024/07/16/ismar_papers/</link>
      <pubDate>Tue, 16 Jul 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/07/16/ismar_papers/</guid>
      <description>&lt;h2 id=&#34;paper-accepted-for-ismar-2024-the-art-of-timing-in-ar-guidance&#34;&gt;Paper accepted for ISMAR 2024: The Art of Timing in AR Guidance&lt;/h2&gt;&#xA;&lt;p&gt;We are excited to announce that our paper titled &amp;ldquo;The Art of Timing: Effects of AR Guidance Timing on Speed Control&amp;rdquo; (with Jeroen Ceyssens, Bram van Deurzen, Gustavo Rovelo Ruiz and Fabian Di Fiore) has been accepted for presentation at the 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR).&lt;/p&gt;&#xA;&lt;p&gt;&lt;img src=&#34;./img/ismar2024.png&#34; alt=&#34;graphical abstract&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;abstract&#34;&gt;Abstract&lt;/h3&gt;&#xA;&lt;p&gt;Augmented Reality (AR) holds significant potential to facilitate users in executing manual tasks. For effective support, however, we need to understand how showing movement instructions in AR affects how well people can follow those movements in real life. In this paper, we examine the degree to which users can synchronize the speed of their movements with speed cues presented through an AR environment. Specifically, we investigate the effects of timing in AR visual guidance. We assess performance using a highly realistic Mixed Reality (MR) welding simulation. Welding is a task that requires very precise timing and control over hand and arm motion. Our results show that upfront visual guidance (before manual task execution) alone often fails to transfer the knowledge of intended speeds, especially at higher target speeds. Live guidance during manual task execution provides more accurate speed results but typically requires a higher overshoot at the start. Optimal outcomes occur when visual guidance appears upfront and continues during the activity for users to follow through.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Special Issue Published: HCI and Worker Well-being in Industry 5.0</title>
      <link>https://www.krisluyten.net/news/2024/07/06/hci-worker-wellbeing-industry-5-0/</link>
      <pubDate>Sat, 06 Jul 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/07/06/hci-worker-wellbeing-industry-5-0/</guid>
      <description>&lt;p&gt;I&amp;rsquo;m excited to announce the publication of a special issue I co-edited titled &amp;ldquo;Human-Centered Approaches to Worker Well-being in the Age of Industry 5.0&amp;rdquo; in Frontiers. This collection of papers explores diverse aspects of worker well-being within the Industry 5.0 framework, focusing on both physical and cognitive well-being while respecting workers&amp;rsquo; privacy.&lt;/p&gt;&#xA;&lt;h2 id=&#34;key-highlights&#34;&gt;Key Highlights:&lt;/h2&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Industry 5.0 and Worker Well-being&lt;/strong&gt;: The special issue examines how the Industry 5.0 paradigm complements technological advancements with an enhanced focus on human workers, addressing challenges in creating sustainable and healthy work environments.&lt;/p&gt;</description>
    </item>
    <item>
      <title>A Comparison between Threads, Fibers and Coroutines for Developing Concurrent Software by Senne Bergmans</title>
      <link>https://www.krisluyten.net/news/2024/06/30/bach_thesis/</link>
      <pubDate>Sun, 30 Jun 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/06/30/bach_thesis/</guid>
      <description>&lt;p&gt;Senne Bergmans made an extensive comparison of Threads, Fibers and Coroutines for developing concurrent software as part of his Bachelor thesis, and made his comparison and code available for everyone to use. If you start creating concurrent software and aren&amp;rsquo;t sure which solution is best for your specific context, these resources can help:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://sennebergmans.github.io/AComparisonBetweenThreadsFibersAndCoroutines/&#34;&gt;A Comparison between Threads, Fibers and Coroutines for Developing Concurrent Software in Blog Posts&lt;/a&gt;;&lt;/li&gt;&#xA;&lt;li&gt;Senne also released both his sample code and code used for the comparisons through his &lt;a href=&#34;https://github.com/SenneBergmans/AComparisonBetweenThreadsFibersAndCoroutines/tree/main/code/ThreadsCoroutinesFibersExamples&#34;&gt;GitHub page&lt;/a&gt;.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;Contact Senne for more information.&lt;/p&gt;</description>
    </item>
    <item>
      <title>ViRgilites: Multilevel feedforward for multimodal interaction in VR</title>
      <link>https://www.krisluyten.net/publications/artizzuvirgilites2024/</link>
      <pubDate>Sun, 16 Jun 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/artizzuvirgilites2024/</guid>
      <description>&lt;p&gt;Navigating the interaction landscape of Virtual Reality (VR) and Augmented Reality (AR) presents significant complexities due to the plethora of available input hardware and interaction modalities, compounded by spatially diverse visual interfaces. Such complexities elevate the likelihood of user errors, necessitating frequent backtracking. To address this, we introduce ViRgilites, a virtual guidance framework that delivers multi-level feedforward information covering the available interaction techniques as well as the future possibilities to interact with virtual objects, anticipating the interaction effects and how they fit with the overall user&#39;s goal. ViRgilites is engineered to facilitate task execution, empowering users to make informed decisions about action methodologies and alternative courses of action. This paper presents the architecture and functionality of ViRgilites and demonstrates its efficacy through evaluation with a formative user study&lt;/p&gt;</description>
    </item>
    <item>
      <title>Papers accepted on Anthropomorphic UIs and Model Multiplicity</title>
      <link>https://www.krisluyten.net/news/2024/06/06/haii_papers/</link>
      <pubDate>Thu, 06 Jun 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/06/06/haii_papers/</guid>
      <description>&lt;h3 id=&#34;model-multiplicity-in-interactive-software-systems&#34;&gt;Model Multiplicity in Interactive Software Systems&lt;/h3&gt;&#xA;&lt;p&gt;We got a workshop paper accepted, presenting the initial work of Gilles Eerlings et al. We explore how model multiplicity can be a potential answer to reduce overtrust in AI, as well as avoid undertrust. Still a lot of work that lies ahead, but this seems like a promising direction.&lt;/p&gt;&#xA;&lt;h4 id=&#34;citation&#34;&gt;Citation&lt;/h4&gt;&#xA;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-bibtex&#34; data-lang=&#34;bibtex&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nc&#34;&gt;@inproceedings&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;{&lt;/span&gt;&lt;span class=&#34;nl&#34;&gt;luyteneerlings-modelmultiplicity2024&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;na&#34;&gt;author&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{Kris Luyten and Gilles Eerlings and Jori Liesenborgs and Gustavo {Rovelo Ruiz} and Sebe Vanbrabant and Davy Vanacken}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;na&#34;&gt;title&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{Opportunities and Challenges of Model Multiplicity in Interactive Software Systems}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;na&#34;&gt;booktitle&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{The Second Workshop on Engineering Interactive Systems Embedding AI Technologies}&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;,&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;  &lt;span class=&#34;na&#34;&gt;year&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;{2024}&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;p&#34;&gt;}&lt;/span&gt;&#xA;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h4 id=&#34;abstract&#34;&gt;Abstract&lt;/h4&gt;&#xA;&lt;p&gt;The proliferation of artificial intelligence (AI) in interactive systems has led to significant challenges in model integration, but also end-user-related aspects such as over- and undertrust. This paper explores how multiple AI models with the same performance and behavior but different internal workings—a phenomenon called model multiplicity—affect system integration and user interaction. We discuss the implications of model multiplicity for transparency, trust, and operational effectiveness in interactive software systems.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Paper accepted: Conception, Approval and First Evaluation of a New Master Program Engineering Technology: Software Systems (Informatics) in Belgium</title>
      <link>https://www.krisluyten.net/news/2024/03/16/edu_papers/</link>
      <pubDate>Sat, 16 Mar 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/03/16/edu_papers/</guid>
      <description>&lt;p&gt;Our paper on the &lt;em&gt;Conception, Approval and First Evaluation of a New Master Program Engineering Technology: Software Systems (Informatics) in Belgium&lt;/em&gt;, has been accepted for &lt;a href=&#34;http://www.mipro.hr&#34;&gt;MIPRO 2024, the 47th ICT and Electronics Convention&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Second Workshop on Engineering Interactive Systems Embedding AI Technologies @ EICS&#39;2024</title>
      <link>https://www.krisluyten.net/news/2024/02/28/workshop-on-engineering-interactive-systems-embedding-ai-technologies/</link>
      <pubDate>Wed, 28 Feb 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/02/28/workshop-on-engineering-interactive-systems-embedding-ai-technologies/</guid>
      <description>&lt;p&gt;We will be organizing a workshop on &lt;a href=&#34;https://sites.google.com/view/engineering-interactive-system/&#34;&gt;Engineering Interactive Systems Embedding AI Technologies&lt;/a&gt; at the &lt;a href=&#34;https://eics.acm.org/2024/&#34;&gt;EICS 2024 conference&lt;/a&gt; &amp;ndash; Tuesday June 24th or June 25th 2024 in Cagliari, Italy. &lt;a href=&#34;https://easychair.org/my/conference?conf=eiseait2024&#34;&gt;Submissions welcome&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Paper accepted: A VR Prototype for One-Dimensional Movement Visualizations for Robotic Arms</title>
      <link>https://www.krisluyten.net/news/2024/02/27/vr_papers/</link>
      <pubDate>Tue, 27 Feb 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/02/27/vr_papers/</guid>
      <description>&lt;p&gt;Our paper introducing &lt;em&gt;A VR Prototype for One-Dimensional Movement Visualizations for Robotic Arms&lt;/em&gt;, has been accepted for &lt;a href=&#34;https://vam-hri.github.io&#34;&gt;The 7th International Workshop on Virtual, Augmented, and Mixed-Reality for Human-Robot Interactions&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Paper accepted: ViRgilites: Multilevel Feedforward for Multimodal Interaction in VR</title>
      <link>https://www.krisluyten.net/news/2024/02/22/eics_papers/</link>
      <pubDate>Thu, 22 Feb 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/02/22/eics_papers/</guid>
      <description>&lt;p&gt;Our paper introducing &lt;em&gt;ViRgilites&lt;/em&gt;, feedforward as a meta-user interface component in VR, has been accepted for the &lt;a href=&#34;https://dl.acm.org/toc/pacmhci/2017/1/EICS&#34;&gt;PACM HCI journal, in the Engineering Interactive Computing Systems issue&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Paper accepted &amp; published: Substitute Buttons - Exploring Tactile Perception of Physical Buttons for Use as Haptic Proxies</title>
      <link>https://www.krisluyten.net/news/2024/02/21/mdpi_mti_papers/</link>
      <pubDate>Wed, 21 Feb 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/02/21/mdpi_mti_papers/</guid>
      <description>&lt;p&gt;Our &lt;a href=&#34;https://doi.org/10.3390/mti8030015&#34;&gt;paper on &lt;em&gt;substitute buttons&lt;/em&gt;&lt;/a&gt; is now available in &lt;a href=&#34;https://www.mdpi.com/journal/mti&#34;&gt;the MDPI journal on Multimodal Technologies and Interaction&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Late Breaking Reports accepted for Human-Robot Interaction 2024</title>
      <link>https://www.krisluyten.net/news/2024/01/11/hri_lbr_papers/</link>
      <pubDate>Thu, 11 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2024/01/11/hri_lbr_papers/</guid>
      <description>&lt;p&gt;Two Late-Breaking Results were accepted for the &lt;a href=&#34;https://humanrobotinteraction.org/2024/#&#34;&gt;19th Annual ACM/IEEE International&#xA;Conference on Human Robot Interaction (HRI)&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>A visual design space for one-dimensional intelligible human-robot interaction visualizations</title>
      <link>https://www.krisluyten.net/publications/vandeurzen2024a/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/vandeurzen2024a/</guid>
      <description>&lt;p&gt;To enable effective communication between users and autonomous robots, it is crucial to have a shared understanding of goals and actions. This is made possible through an intelligible interface that communicates relevant information. This intelligibility enhances user comprehension, enabling them to anticipate the robot&#39;s actions and respond appropriately. However, because robots can perform a wide variety of actions and communication resources are limited, such as the number of available &amp;quot;pixels&amp;quot;, visualizations must be carefully designed. To tackle this challenge, we have developed a visual design framework and design space that can be used to create intelligible visualizations for human-robot interaction. Our framework focuses on three key components: information type, pixel layout, and robot type. We demonstrate how intelligibility can be integrated into interactions through prototype visualizations featuring a one-dimensional pixel layout, laying the groundwork for developing more detailed and understandable visualizations.&lt;/p&gt;</description>
    </item>
    <item>
      <title>A VR prototype for one-dimensional movement visualizations for robotic arms</title>
      <link>https://www.krisluyten.net/publications/vandeurzenvr1dim2024/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/vandeurzenvr1dim2024/</guid>
      <description>&lt;p&gt;To enable effective communication between users and autonomous robots, it is crucial to have a shared understanding of goals and actions. This is made possible through an intelligible interface that communicates relevant information. This intelligibility enhances user comprehension, enabling them to anticipate the robot&#39;s actions and respond appropriately. However, because robots can perform a wide variety of actions and communication resources are limited, such as the number of available &amp;quot;pixels&amp;quot;, visualizations must be carefully designed. To tackle this challenge, we have developed a visual design framework. Leveraging Unity, we developed a Virtual Reality implementation to prototype and evaluate our framework. Within this framework, we introduce two visualization techniques for visualizing the movement of a robotic arm, laying a foundation for subsequent development and user testing.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AI-spectra: A visual dashboard for model multiplicity to enhance informed and transparent decision-making</title>
      <link>https://www.krisluyten.net/publications/eerlings2024aispectravisualdashboardmodel/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/eerlings2024aispectravisualdashboardmodel/</guid>
      <description>&lt;p&gt;We present an approach, AI-Spectra, to leverage model multiplicity for interactive systems. Model multiplicity means using slightly different AI models yielding equally valid outcomes or predictions for the same task, thus relying on many simultaneous &amp;quot;expert advisors&amp;quot; that can have different opinions. Dealing with multiple AI models that generate potentially divergent results for the same task is challenging for users. It helps users understand and identify that AI models are not always correct and might differ, but it can also result in an information overload when being confronted with multiple results instead of one. AI-Spectra leverages model multiplicity by using a visual dashboard designed for conveying which AI models generate which results while minimizing the cognitive effort to detect consensus among models and what type of models might have different opinions. We use a custom adaptation of Chernoff faces for AI-Spectra: Chernoff Bots. This visualization technique lets users quickly interpret complex, multivariate model configurations and compare predictions across multiple models. Our design is informed by building on established Human-AI Interaction guidelines and well-known practices in information visualization. We validated our approach through a series of experiments training a wide variety of models with the MNIST dataset to perform number recognition. Our work contributes to the growing discourse on making AI systems more transparent, trustworthy, and effective through the strategic use of multiple models.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AntHand: Interaction techniques for precise telerobotic control using scaled objects in virtual environments</title>
      <link>https://www.krisluyten.net/publications/cardinaels2024a/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/cardinaels2024a/</guid>
      <description>&lt;p&gt;This paper introduces AntHand, a set of interaction techniques for enhancing precision and adaptability in telerobotics through the use of scaled objects in virtual environments. AntHand operates in three phases: up-scaling interaction, for detailed control through a magnified virtual model; constraining interaction, which locks movement dimensions for accuracy; and post-editing, allowing manipulation trace optimization and noise reduction. Leveraging a use-case related to surgery, the application of AntHand is showcased in a scenario demanding high accuracy and precise manipulation. AntHand demonstrates how collaboration between humans and robots can improve precise control of robot actions in telerobotic operations, while maintaining the familiar use of traditional tools, rather than relying on specialized controllers.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Anthropomorphic user interfaces: Past, present and future of anthropomorphic aspects for sustainable digital interface design</title>
      <link>https://www.krisluyten.net/publications/geurtsantropomorphic2024/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/geurtsantropomorphic2024/</guid>
      <description>&lt;p&gt;Interactions with computing systems and conversational services such as ChatGPT have become an inherent part of our daily lives. It is surprising that user interfaces, the gateways through which we communicate with an interactive intelligent system, are still predominantly devoid of hedonic aspects. There is little attempt to make communication through user interfaces intentionally more like communication with humans. Anthropomorphic user interfaces can transform interactions with intelligent software into more pleasant experiences by integrating human-like attributes. Anthropomorphic user interfaces expose human-like attributes that enable people to perceive, connect and interact with the interfaces as social actors. This integration of human-like aspects not only enhances user experience but also holds the potential to make interfaces more sustainable, as they rely on familiar human interaction patterns, thus potentially reducing the learning curve and increasing user adoption rates. However, there is little consensus on how to build these anthropomorphic user interfaces. We conducted an extensive literature review on existing anthropomorphic user interfaces for software systems (past), in order to map and connect existing definitions and interpretations in an overarching taxonomy (present). The taxonomy is used to organize and structure examples of anthropomorphic user interfaces into an accessible collection. The taxonomy and an accompanying web tool provide designers with a reference framework for analyzing and dissecting existing anthropomorphic user interfaces, and for designing new anthropomorphic user interfaces (future).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Conception, approval and first evaluation of a new master&#39;s program engineering technology: Software systems (informatics) in Belgium</title>
      <link>https://www.krisluyten.net/publications/aertsiiwinf2024/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/aertsiiwinf2024/</guid>
      <description>&lt;p&gt;The demand for skilled software engineers continues to outweigh the number of new graduates by far. Although trends such as AI-based code generation and low-code software development might seem to lessen the need for software engineers, the digital transformation of our society is expected to speed up because of these trends, requiring engineers with fitting proficiencies. This paper highlights the crucial steps in the development and governmental accreditation process of a new curriculum in software systems, and describes the lessons learned after a first generation of graduates. Based on interviews with and studies from diverse actors (e.g., trade unions, local government, EU, and professional organizations such as ACM and IEEE) and in response to top-of-mind concerns from regional industry leaders, we designed and deployed an engineering program that meets the identified needs and aims to educate a new generation of software engineers for the forthcoming digital society. The program educates systems thinkers who engineer this digital society by designing and implementing resilient, intelligent, user-centered solutions that integrate with existing processes and enable new, innovative processes. Our master&#39;s program is a unique joint effort of two Flemish universities, Hasselt University and KU Leuven, and resides in the faculty of Engineering Technology.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Direct feedforward techniques for the ViRgilites system</title>
      <link>https://www.krisluyten.net/publications/artizzudemo2024/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/artizzudemo2024/</guid>
      <description>&lt;p&gt;In this poster we propose an implementation of direct feedforward for the ViRgilites system. The project defines two alternative uses with respect to the current implementation, which only shows in an indirect way (icons, target object images, text) how to perform an interaction in the simulated environment. The first representation is a single avatar mode where the user sees a virtual avatar performing an action in the same environment as the user, while the second representation is a multiple avatar mode, where the user can choose to compare two interactions and see the avatar representations side by side in dedicated panels. We report on the initial ideas and proofs of concept, while we envision further modifications and a future evaluation of the final outcome.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Evaluation of AR pattern guidance methods for a surface cleaning task</title>
      <link>https://www.krisluyten.net/publications/ceyssensvrst2024/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/ceyssensvrst2024/</guid>
      <description>&lt;p&gt;We investigate the efficacy of augmented reality (AR) in enhancing a cleanroom cleaning task by implementing various pattern guidance designs. Cleanroom cleaning is an example of a surface coverage task that is hard to execute: the pattern should be followed correctly and the entire surface should be covered. We developed an AR guidance system for the cleaning procedure and evaluated four distinct pattern guidance methods: (1) breadcrumbs, (2) examples, (3) middle lines, and (4) outlines. We also varied the scale of the instructions, showing information either over the entire surface at once or as a single step when it is needed. To measure the performance, accuracy, and user satisfaction associated with each guidance method, we conducted a large-scale (n=864) between-subjects study. Our findings indicate that the single-step instructions proved to be more intuitive and efficient than the full instructions, especially for the breadcrumbs. We also discuss the implications of our results for the development of AR applications for surface coverage and pattern optimization, providing a multitude of observations related to instruction behaviors.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Exploring alternative text input modalities in virtual reality: A comparative study</title>
      <link>https://www.krisluyten.net/publications/jansvrst2024/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/jansvrst2024/</guid>
      <description>&lt;p&gt;Text input in Virtual Reality (VR) is crucial for various applications, including communication, search, and productivity. We compare different keyboard designs for text entry in VR, taking advantage of the flexibility and the tracking options that are available for a 3D environment. To assess the differences between the input modalities and the spatial keyboard layouts independently of prior user experience with specific keyboard layouts, the Dvorak keyboard layout was used. Four different settings were included in the comparison: (a) a floating keyboard with finger pointing input, (b) a keyboard attached on the back of the hand with finger pointing input, (c) a floating keyboard with eye tracking and finger pinch input, and (d) a keyboard laid out over a rolling shape with finger pointing input. Keyboards (b), (c), and (d) can move in 3D space, while keyboard design (a) is fixed. Keyboards (a) and (d) showed similar typing efficiencies; however, users reported an increase in perceived usability and lower physical demand for keyboard design (d). Users also reported a higher physical demand, effort, and annoyance for keyboard design (b), and a lower physical demand for keyboard design (c), with higher mental demand, effort, and the highest error rate.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Object-georiënteerd Programmeren 2</title>
      <link>https://www.krisluyten.net/teaching/oo2/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/teaching/oo2/</guid>
      <description>&lt;p&gt;Students deepen their knowledge of object-oriented programming, with the necessary attention to designing and programming well-structured, robust, extensible and elegant code. Java is used as the central object-oriented programming language, but the concepts and techniques taught apply to many object-oriented programming languages.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Opportunities and challenges of model multiplicity in interactive software systems</title>
      <link>https://www.krisluyten.net/publications/luyteneerlingsmodelmultiplicity2024/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/luyteneerlingsmodelmultiplicity2024/</guid>
      <description>&lt;p&gt;The proliferation of artificial intelligence (AI) in interactive systems has led to significant challenges in model integration, but also end-user-related aspects such as over- and undertrust. This paper explores how multiple AI models with the same performance and behavior but different internal workings &amp;ndash;a phenomenon called model multiplicity&amp;ndash; affect system integration and user interaction. We discuss the implications of model multiplicity for transparency, trust, and operational effectiveness in interactive software systems.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Substitute buttons: Exploring tactile perception of physical buttons for use as haptic proxies</title>
      <link>https://www.krisluyten.net/publications/vandeurzensubstitutebutton2024/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/vandeurzensubstitutebutton2024/</guid>
      <description>&lt;p&gt;Buttons are everywhere and are one of the most common interaction elements in both physical and digital interfaces. While virtual buttons offer versatility, enhancing them with realistic haptic feedback is challenging. Achieving this requires a comprehensive understanding of the tactile perception of physical buttons and their transferability to virtual counterparts. This research investigates tactile perception concerning button attributes such as shape, size, and roundness and their potential generalization across diverse button types. In our study, participants interacted with each of the 36 buttons in our search space and indicated which one they thought they were touching. The findings were used to establish six substitute buttons capable of effectively emulating tactile experiences across various buttons. In a second study, these substitute buttons were validated against virtual buttons in VR, highlighting the potential use of the substitute buttons as haptic proxies for applications such as encountered-type haptics.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The art of timing: Effects of AR guidance timing on speed control</title>
      <link>https://www.krisluyten.net/publications/ceyssens2024/</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/ceyssens2024/</guid>
      <description>&lt;p&gt;Augmented Reality (AR) holds significant potential to facilitate users in executing manual tasks. For effective support, however, we need to understand how showing movement instructions in AR affects how well people can follow those movements in real life. In this paper, we examine the degree to which users can synchronize the speed of their movements with speed cues presented through an AR environment. Specifically, we investigate the effects of timing in AR visual guidance. We assess performance using a highly realistic Mixed Reality (MR) welding simulation. Welding is a task that requires very precise timing and control over hand and arm motion. Our results show that upfront visual guidance (before manual task execution) alone often fails to transfer the knowledge of intended speeds, especially at higher target speeds. Live guidance (during manual task execution) provides more accurate speed results but typically requires a higher overshoot at the start. Optimal outcomes occur when visual guidance appears upfront and continues during the activity for users to follow through.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AR guidance design for line tracing speed control</title>
      <link>https://www.krisluyten.net/publications/ceyssens2023/</link>
      <pubDate>Mon, 16 Oct 2023 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/ceyssens2023/</guid>
      <description>&lt;p&gt;In many jobs, workers execute precise line tracing tasks; welding, spray painting, or chiseling, for example. Training and support for such tasks can be done using VR and AR. However, to enable workers to achieve the required precision in movement and timing, the effect of visual guidance on continuous movement needs to be explored. In VR environments, we want to ensure people are trained so that the obtained skill is transferable to a real-world context, whereas, in AR, we want to ensure an ongoing task can be completed successfully when adding visual guidance. To simulate these various contexts, we employ a VR environment to investigate the effectiveness of different visualizations for motion-based guidance in a line tracing task. We tested five different visualizations, including faster and slower arrows on the pen, the same arrows on the line, a dynamic graph on the pen or line, and a ghost object to follow. Each visualization was tested with the same set of five lines of different target speeds (2cm/s to 10 cm/s in steps of 2 cm/s) with a training line of 5 cm/s. Our results show that the example ghost on the line turns out to be the most efficient visualization for allowing users to achieve a specific speed. Users also perceived this visualization as the most engaging and easy to use. These findings have significant implications for the development of AR-based guidance systems, specifically in the realm of speed control, across diverse domains such as industrial applications, training, and entertainment.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Papers accepted for ISMAR 2023 and SCF 2023</title>
      <link>https://www.krisluyten.net/news/2023/09/05/ismar_and_scf_papers/</link>
      <pubDate>Tue, 05 Sep 2023 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2023/09/05/ismar_and_scf_papers/</guid>
      <description>&lt;p&gt;Two papers accepted: one for &lt;a href=&#34;https://ismar23.org&#34;&gt;ISMAR 2023&lt;/a&gt; on AR guidance for Line Tracing, and one for &lt;a href=&#34;https://scf.acm.org/2023/&#34;&gt;SCF 2023&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Research positions in Human-Computer Interaction / Computer Science / Human-AI Interaction</title>
      <link>https://www.krisluyten.net/news/2023/08/01/vacancies/</link>
      <pubDate>Tue, 01 Aug 2023 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2023/08/01/vacancies/</guid>
      <description>&lt;p&gt;HCI researcher positions (one PhD position and one researcher position for which a PhD is optional). One is a PhD position working with me and our team on Human-AI Interaction and related domains, the other is on worker wellbeing in the manufacturing industry.&lt;/p&gt;&#xA;&lt;p&gt;&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;&#xA;&lt;p&gt;Apply &lt;a href=&#34;https://www.uhasselt.be/nl/over-uhasselt/werken-bij-uhasselt/vacatures/detail/2828-doctoraatsbursaal-mens-machine-interactie&#34;&gt;here for the PhD position&lt;/a&gt; and &lt;a href=&#34;https://www.uhasselt.be/nl/over-uhasselt/werken-bij-uhasselt/vacatures/detail/2830-navorser-mens-machine-interactie&#34;&gt;here for the HCI researcher in worker wellbeing position&lt;/a&gt;.&lt;/p&gt;&#xA;&lt;p&gt;&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>PACMHCI - Engineering Interactive Computing Systems, June 2023 issue online</title>
      <link>https://www.krisluyten.net/news/2023/06/21/eics2023/</link>
      <pubDate>Wed, 21 Jun 2023 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2023/06/21/eics2023/</guid>
      <description>&lt;p&gt;Together with &lt;a href=&#34;https://scholar.google.com/citations?user=oqU-gx4AAAAJ&amp;amp;hl=en&#34;&gt;Carmen Santoro&lt;/a&gt; I edited the &lt;a href=&#34;https://eics.acm.org/pacm/&#34;&gt;PACMHCI EICS&lt;/a&gt; issue, with papers presented at &lt;a href=&#34;https://eics.acm.org/2023/&#34;&gt;EICS 2023&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>PACMHCI - engineering interactive computing systems, june 2023: Editorial introduction</title>
      <link>https://www.krisluyten.net/publications/luyteneicsintro2023/</link>
      <pubDate>Fri, 16 Jun 2023 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/luyteneicsintro2023/</guid>
      <description>&lt;p&gt;Welcome to this issue of the Proceedings of the ACM on Human-Computer Interaction, bringing together contributions from the community on Engineering Interactive Computing Systems (EICS). The EICS track of the PACM-HCI is the primary venue for research contributions at the intersection of Human-Computer Interaction (HCI) and Software Engineering. This year, over the three rounds of submissions, for the issue of PACM-HCI we received 68 valid submissions (out of 90 submissions in total), of which we carefully selected 19 papers, bringing our acceptance rate to 27.9%. The result of this selection process is presented in this issue of the Proceedings of the ACM.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Paper accepted for CHI&#39;2023</title>
      <link>https://www.krisluyten.net/news/2023/04/20/chi2023-paper/</link>
      <pubDate>Thu, 20 Apr 2023 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2023/04/20/chi2023-paper/</guid>
      <description>&lt;p&gt;Our paper on measurement patterns for makers and manufacturing was accepted for the CHI&#39;2023 conference. Co-authors are Raf Ramakers, Danny Leen, Jeeeun Kim, Steven Houben and Tom Veuskens.&lt;/p&gt;</description>
    </item>
    <item>
      <title>News Article on the Intelligible Interactive System research unit at EDM</title>
      <link>https://www.krisluyten.net/news/2023/03/09/an-interview-with-the-intelligible-interactive-systems-research-unit/</link>
      <pubDate>Thu, 09 Mar 2023 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2023/03/09/an-interview-with-the-intelligible-interactive-systems-research-unit/</guid>
      <description></description>
    </item>
    <item>
      <title>History in motion: Interactive 3D animated visualizations for understanding and exploring the modeling history of 3D CAD designs</title>
      <link>https://www.krisluyten.net/publications/veuskens2023/</link>
      <pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/veuskens2023/</guid>
      <description>&lt;p&gt;We present History in Motion (HiM), an interactive visualization tool that enables CAD designers to interactively explore the design history of 3D CAD models. In contrast to manually exploring the modeling history of a CAD project, HiM finds relevant modeling features for geometry elements selected by the designer. We contribute a novel 3D interactive animation that visualizes how the modeling features interact, and are used on top of the CAD model, to realize the geometry. A control panel allows for a deeper exploration of the modeling features, with shortcuts for making modifications.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Measurement patterns: User-oriented strategies for dealing with measurements and dimensions in making processes</title>
      <link>https://www.krisluyten.net/publications/ramakerslklhv23/</link>
      <pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/ramakerslklhv23/</guid>
      <description>&lt;p&gt;The majority of errors in making processes can be traced back to errors in dimensional specifications. While technical aspects of measurement, such as precision and speed, have been extensively studied in metrology, the user aspects of measurement have received significantly less attention. While little research exists that specifically addresses the user aspects of handling dimensions, various systems have been built that embed new interactive modalities, processes, and techniques which significantly impact how users deal with dimensions or conduct measurements. However, these features are mostly hidden in larger system contributions. To uncover and articulate these techniques, we conducted a holistic literature survey on measurement practices in crafting techniques and systems for rapid prototyping. Based on this survey, we contribute 10 measurement patterns, which describe reusable elements and solutions for common difficulties when dealing with dimensions throughout workflows for making physical artifacts.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The impact of an incremental and iterative teaching method on student learning and motivation</title>
      <link>https://www.krisluyten.net/publications/notermans2023/</link>
      <pubDate>Sun, 01 Jan 2023 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/notermans2023/</guid>
      <description>&lt;p&gt;We present a study on an iterative and incremental teaching and assessment method. Our approach focuses on skill development, early feedback and individual growth. We present the effects on the learning process and motivation of students.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Reality/Virtuality Continuum for Future Life and Work (Science Pub Panel)</title>
      <link>https://www.krisluyten.net/news/2022/11/12/a-panel-talk-on-vr/</link>
      <pubDate>Sat, 12 Nov 2022 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/news/2022/11/12/a-panel-talk-on-vr/</guid>
      <description>&lt;p&gt;I was part of a local panel discussion on the Reality/Virtuality Continuum for Future Life and Work.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Choreobot: A reference framework and online visual dashboard for supporting the design of intelligible robotic systems</title>
      <link>https://www.krisluyten.net/publications/deurzenbl22/</link>
      <pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/deurzenbl22/</guid>
      <description>&lt;p&gt;As robots are equipped with software that makes them increasingly autonomous, it becomes harder for humans to understand and control these robots. Human users should be able to understand and, to a certain extent, predict what the robot will do. The software that drives a robotic system is often very complex, hard to understand for human users, and there is only limited support for ensuring robotic systems are also intelligible. Adding intelligibility to the behavior of a robotic system improves the predictability, trust, safety, usability, and acceptance of such autonomous robotic systems. Applying intelligibility to the interface design can be challenging for developers and designers of robotic systems, as they are expert users in robot programming but not necessarily experts in interaction design. We propose Choreobot, an interactive, online, and visual dashboard to use with our reference framework to help identify where and when adding intelligibility to the interface design is required, desired, or optional. The reference framework and accompanying input cards allow developers and designers of robotic systems to specify a usage scenario as a set of actions and, for each action, capture the context data that is indispensable for revealing when feedforward is required. The Choreobot interactive dashboard generates a visualization that presents this data on a timeline for the sequence of actions that make up the usage scenario. A set of heuristics and rules is included that highlights where and when feedforward is desired. Based on these insights, the developers and designers can adjust the interactions to improve the interaction for the human users working with the robotic system.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Context-aware support of dexterity skills in cross-reality environments</title>
      <link>https://www.krisluyten.net/publications/ceyssensfl22/</link>
      <pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/ceyssensfl22/</guid>
      <description>&lt;p&gt;Within our work, we apply context-awareness to determine how AR/VR technology should adapt instructions based on the context to suit user needs. We focus on situations where the user must carry out a complex manual activity that requires additional information to be present during the activity to achieve the desired result. To this end, the emphasis is on activities that require fine-motor skills and in-depth expertise and training, for which XR is a powerful tool to support and guide users performing these tasks. The contexts we detect include user intentions, environmental conditions, and activity progressions. Our work builds on these contexts with the main focus on determining how XR should adapt for the end-user from a usability perspective. The feedback we request from ISMAR consists of input in detection, usability, and simulation categories, together with how to balance these categories to create real-time and user-friendly systems. The next steps of our work will consider how the content should adjust based on the cognitive load, activity space, and environmental conditions.&lt;/p&gt;</description>
    </item>
    <item>
      <title>De impact van een incrementele en iteratieve onderwijsmethode op het leerproces en de motivatie van studenten</title>
      <link>https://www.krisluyten.net/publications/luytennotermans2022/</link>
      <pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/luytennotermans2022/</guid>
      <description>&lt;p&gt;This paper describes the incremental and iterative teaching method of the course Human-AI Interaction in the master of informatics at UHasselt. Central to the method is the development of skills, in particular applying what students have learned and motivating their choices. The teaching method consists of a number of lectures, five group assignments with feedback moments and pass/fail assessment, and an individual project. There were no strict deadlines or a traditional exam. With a pass on every group assignment, students receive a score of 12/20. They knew this before the start of the exam period. After that, students could choose to complete an individual project for the remaining 8/20 points. During weekly feedback moments, the students discussed the group assignments with the teaching team, on which the pass/fail assessment was based. In case of a fail, students received feedback on how they could improve the assignments. The feedback was personalized based on the performance and evolution of each student group. Students could iterate on and resubmit each group assignment multiple times in order to grow towards a pass. The teaching method is based on insights from the High Impact Learning that Lasts model (HILL model), mastery learning, and self-determination theory. The seven building blocks of the HILL model, and especially assessment for learning and assessment as learning (Dochy &amp;amp; Segers, 2018), are visible in the teaching method. Aspects of mastery learning are also recognizable (Garner, Denny, &amp;amp; Luxton-Reilly, 2019), such as the possibility to keep improving group assignments. We therefore expect a positive effect of the teaching method on the learning process of students. Starting from self-determination theory and the three basic needs for motivation, namely autonomy, relatedness, and competence, there are also elements present that can positively influence the motivation of students (Deci &amp;amp; Ryan, 1980; 2008).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Engineering interactive computing systems 2022: Editorial introduction</title>
      <link>https://www.krisluyten.net/publications/luytenpqw22/</link>
      <pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/luytenpqw22/</guid>
      <description>&lt;p&gt;The Engineering Interactive Computing Systems (EICS) track of the Proceedings of the ACM on Human-Computer Interaction (PACM-HCI) is the primary venue for research contributions at the intersection of Human-Computer Interaction (HCI) and Software Engineering. EICS 2022 is the fourteenth edition of the EICS conference; however, our community was the first to organize a scientific gathering to foster and exchange research ideas and contributions on how to engineer the effective interactive aspects of a computing system. In the seventies of the previous century, the Conference on Command Languages explored the emerging primary technologies to interact with computing systems, namely command languages. Since then, this conference has evolved into the Engineering HCI conference, and the same community organized sibling conferences such as CADUI (Computer-Aided Design of User Interfaces), Tamodia (Tasks, Models and Diagrams) and DSV-IS (Design Specification and Verification of Interactive Systems). These separate venues merged into a single ACM SIGCHI sponsored conference, EICS, in 2010 (see Fig. 1). This conference became the primary venue for rigorous contributions, and the dissemination of research results, at the interconnection between user interface design, software engineering and computational interaction.&lt;/p&gt;</description>
    </item>
    <item>
      <title>FortClash: Predicting and mediating unintended behavior in home automation</title>
      <link>https://www.krisluyten.net/publications/coppersvl22/</link>
      <pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/coppersvl22/</guid>
      <description>&lt;p&gt;Smart home inhabitants can specify trigger-condition-action rules to control the home&#39;s behavior. As the number of rules and their complexity grow, however, so does the probability of issues such as inconsistencies and redundancies. These can lead to unintended behavior, including security vulnerabilities and wasted resources, which harms the inhabitants&#39; trust in the system. Existing approaches to handle unintended behavior typically require inhabitants to define all-encompassing, permanent solutions by modifying the rules. Although this is fitting in certain situations, some unforeseen situations might occur. We argue that the user always must have the last word to avoid unwanted behaviors, without altering the overall behavior. With FortClash, we present an approach to predict many different types of unintended behavior, and contribute four novel mechanisms to mediate them that rely on making one-time exceptions. With FortClash, inhabitants gain a new tool to deal with unintended behavior in the short-term that is compatible with existing long-term approaches such as editing rules.&lt;/p&gt;</description>
    </item>
    <item>
      <title>HCI and worker well-being in manufacturing industry</title>
      <link>https://www.krisluyten.net/publications/geurtsrlhwjp22/</link>
      <pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/geurtsrlhwjp22/</guid>
      <description>&lt;p&gt;Operators&#39; well-being is a key factor in the success of industrial production processes. Even though research has studied well-being aspects of industry, such as supporting and improving ergonomics, there is still a long way to go to achieve a sustainable and healthy work context for the manufacturing industry. We believe the Human-Computer Interaction community can contribute by developing research on worker well-being in real-life settings. This workshop intends to offer a venue for HCI researchers who focus on worker well-being in the manufacturing industry and other industry domains.&lt;/p&gt;</description>
    </item>
    <item>
      <title>LaserSVG: Responsive laser-cutter templates</title>
      <link>https://www.krisluyten.net/publications/hellerlasersvg2022/</link>
      <pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/hellerlasersvg2022/</guid>
      <description>&lt;p&gt;Laser cutters take vector data as input for the shapes they cut or engrave; however, re-using a given design with a different material or on a different machine requires adapting the template. Unfortunately, vector drawings lack the semantic information required for an automated adjustment to new parameters, making manual adjustment a tedious and error-prone process for end-users. We present LaserSVG, a standard-compliant vector-based file format, software library, and authoring tool to specify, generate, exchange and re-use responsive laser-cutting templates. With LaserSVG, designers can easily turn their vector drawings into parametric templates that end-users can easily adjust to new materials or production parameters. Our tools provide various functions for parametric design that allow end-users and designers to adapt objects while ensuring overall consistency of the results.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Work-a-pose: Ergonomic feedback and posture improvement interfaces for long-term sustainable work</title>
      <link>https://www.krisluyten.net/publications/vandeurzenworkapose2022/</link>
      <pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/vandeurzenworkapose2022/</guid>
      <description>&lt;p&gt;Non-ergonomic postures and the resulting musculoskeletal disorders are key factors in worker disability and well-being. This underlines the importance of designing ergonomic work environments and educating workers in performing tasks ergonomically. We present Work-a-Pose to increase awareness of non-ergonomic postures and promote long-term sustainable work postures. To this end, we combine camera-based posture tracking with the automatic application of ergonomic guidelines. Glanceable visualizations highlight the worker&#39;s posture and potential ergonomic risks. A complementary, personal tool provides a more detailed overview of the worker&#39;s ergonomic score and motivates the worker to strive for a healthy work posture through simple gamification techniques.&lt;/p&gt;</description>
    </item>
    <item>
      <title>An interactive design space for wearable displays</title>
      <link>https://www.krisluyten.net/publications/hellertl21/</link>
      <pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/hellertl21/</guid>
      <description>&lt;p&gt;The promise of on-body interactions has led to widespread development of wearable displays. They manifest themselves in highly variable shapes and form, and are realized using technologies with fundamentally different properties. Through an extensive survey of the field of wearable displays, we characterize existing systems based on key qualities of displays and wearables, such as location on the body, intended viewers or audience, and the information density of rendered content. We present the results of this analysis in an open, web-based interactive design space that supports exploration and refinement along various parameters. The design space, which currently encapsulates 129 cases of wearable displays, aims to inform researchers and practitioners on existing solutions and designs, and enable the identification of gaps and opportunities for novel research and applications. Further, it seeks to provide them with a thinking tool to deliberate on how the displayed content should be adapted based on key design parameters. Through this work, we aim to facilitate progress in wearable displays, informed by existing solutions, by providing researchers with an interactive platform for discovery and reflection.&lt;/p&gt;</description>
    </item>
    <item>
      <title>HapticPanel: An open system to render haptic interfaces in virtual reality for manufacturing industry</title>
      <link>https://www.krisluyten.net/publications/deurzengwvl21/</link>
      <pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/deurzengwvl21/</guid>
      <description>&lt;p&gt;Virtual Reality (VR) allows simulation of machine control panels without physical access to the machine, enabling easier and faster initial exploration, testing, and validation of machine panel designs. However, haptic feedback is indispensable if we want to interact with these simulated panels in a realistic manner. We present HapticPanel, an encountered-type haptic system that provides realistic haptic feedback for machine control panels in VR. To ensure a realistic manipulation of input elements, the user&#39;s hand is continuously tracked during interaction with the virtual interface. Based on which virtual element the user intends to manipulate, a motorized panel with stepper motors moves a corresponding physical input element in front of the user&#39;s hand, enabling realistic physical interaction.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Model-based engineering of feedforward usability function for GUI widgets</title>
      <link>https://www.krisluyten.net/publications/navarrepclv21/</link>
      <pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/navarrepclv21/</guid>
      <description>&lt;p&gt;Feedback and feedforward are two fundamental mechanisms that support users&#39; activities while interacting with computing devices. While feedback can be easily solved by providing information to the users following the triggering of an action, feedforward is much more complex as it must provide information before an action is performed. For interactive applications where making a mistake has more impact than just reduced user comfort, correct feedforward is an essential step toward correctly informed, and thus safe, usage. Our approach, Fortunettes, is a generic mechanism providing a systematic way of designing feedforward that addresses both action and presentation problems. Including a feedforward mechanism significantly increases the complexity of the interactive application, making it harder for developers to detect and correct defects. We build upon an existing formal notation based on Petri Nets for describing the behavior of interactive applications and present an approach that allows for adding correct and consistent feedforward.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Semi-automatic extraction of digital work instructions from CAD models</title>
      <link>https://www.krisluyten.net/publications/gors202139/</link>
      <pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/gors202139/</guid>
      <description>&lt;p&gt;Currently, process engineers use documents or authoring tools to bring assembly instructions to the work floor. This is a time-consuming task, as instructions need to be created for each assembly operation. Furthermore, the engineer needs to be familiar with the assembly sequence. To assist the engineer, a tool was developed that i) uses a heuristic based on visibility, part similarity and proximity to semi-automatically determine the assembly sequence from a CAD model and ii) generates, according to the computed sequence, digital work instructions including visualizations and animations extracted from the CAD model. In essence, the assembly sequence generation works in reverse: it determines the order in which components can be removed from the assembly by evaluating whether the visibility of a component is obstructed by the remaining assembly. The reversed order is then returned as the assembly sequence. During this process the engineer can modify the proposed sequence, add annotations and alter the visualizations of the proposed instructions, i.e., images or 3D animations. We illustrate that the developed tool effectively supports process engineers and speeds up the creation of digital work instructions through several industrial validation cases, e.g., the assembly of a weaving machine.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Attracktion: Field evaluation of multi-track audio as unobtrusive cues for pedestrian navigation</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_mhci_helleral20/</link>
      <pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_mhci_helleral20/</guid>
      <description>&lt;p&gt;Listening to music while being on the move is common in our headphone society. However, if we want assistance in navigation from our smartphone, existing approaches either demand exclusive playback through the headphones or impact the listening experience of the music. We present a field evaluation of Attracktion, a spatial audio navigation system that leverages the access to single stems in a multi-track recording to minimize the impact on the listening experience. We compared Attracktion against current turn-by-turn navigation instructions in a field-study with 22 users and found that users perceived acoustic overlays with additional navigation information to have no impact on the listening experience. In terms of path efficiency, errors, and mental workload, Attracktion is on par with spoken turn-by-turn navigation instructions, and users liked it for the aspect of serendipity.&lt;/p&gt;</description>
    </item>
    <item>
      <title>FORTNIoT: Intelligible predictions to improve user understanding of smart home behavior</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_imwut_coppersvl20/</link>
      <pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_imwut_coppersvl20/</guid>
      <description>&lt;p&gt;Ubiquitous environments, such as smart homes, are becoming more intelligent and autonomous. As a result, their behavior becomes harder to grasp and unintended behavior becomes more likely. Researchers have contributed tools to better understand and validate an environment&#39;s past behavior (e.g. logs, end-user debugging), and to prevent unintended behavior. There is, however, a lack of tools that help users understand the future behavior of such an environment. Information about the actions it will perform, and why it will perform them, remains concealed. In this paper, we contribute FORTNIoT, a well-defined approach that combines self-sustaining predictions (e.g. weather forecasts) and simulations of trigger-condition-action rules to deduce when these rules will trigger in the future and what state changes they will cause to connected smart home entities. We implemented a proof-of-concept of this approach, as well as a visual demonstrator that shows such predictions, including causes and effects, in an overview of a smart home&#39;s behavior. A between-subject evaluation with 42 participants indicates that FORTNIoT predictions lead to a more accurate understanding of the future behavior, more confidence in that understanding, and more appropriate trust in what the system will (not) do. We envision a wide variety of situations where predictions about the future are beneficial to inhabitants of smart homes, such as debugging unintended behavior and managing conflicts by exception, and hope to spark a new generation of intelligible tools for ubiquitous environments.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Impact of situational impairment on interaction with wearable displays</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_mhci_hellervgl20/</link>
      <pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_mhci_hellervgl20/</guid>
      <description>&lt;p&gt;The number of wearable devices that we carry increases, with smaller companion devices like smartwatches providing quick access for simple tasks. These devices are, however, not necessarily in direct sight of the user and during everyday activities, it is unlikely, even undesirable, that the user constantly focuses on or interacts with these screens. Furthermore, interaction is often limited because our hands are occupied carrying or holding items such as bags, papers, boxes, or tools. In this paper, we evaluate how encumbrance affects, among others, the time it takes to perceive and react to a notification depending on the placement of the companion device. Our experimental results can assist designers in choosing the right device for the task.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Individualising graphical layouts with predictive visual search models</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_tiis_todijlo20/</link>
      <pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_tiis_todijlo20/</guid>
      <description>&lt;p&gt;In domains where users are exposed to large variations in visuo-spatial features among designs, they often spend excess time searching for common elements (features) on an interface. This article contributes individualised predictive models of visual search, and a computational approach to restructure graphical layouts for an individual user such that features on a new, unvisited interface can be found quicker. It explores four technical principles inspired by the human visual system (HVS) to predict expected positions of features and create individualised layout templates: (I) the interface with highest frequency is chosen as the template; (II) the interface with highest predicted recall probability (serial position curve) is chosen as the template; (III) the most probable locations for features across interfaces are chosen (visual statistical learning) to generate the template; (IV) based on a generative cognitive model, the most likely visual search locations for features are chosen (visual sampling modelling) to generate the template. Given a history of previously seen interfaces, we restructure the spatial layout of a new (unseen) interface with the goal of making its features more easily findable. The four HVS principles are implemented in Familiariser, a web browser that automatically restructures webpage layouts based on the visual history of the user. Evaluation of Familiariser (using visual statistical learning) with users provides first evidence that our approach reduces visual search time by over 10&lt;/p&gt;</description>
    </item>
    <item>
      <title>Rataplan: Resilient automation of user interface actions with multi-modal proxies</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_imwut_veuskenslr20/</link>
      <pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_imwut_veuskenslr20/</guid>
      <description>&lt;p&gt;We present Rataplan, a robust and resilient pixel-based approach for linking multi-modal proxies to automated sequences of actions in graphical user interfaces (GUIs). With Rataplan, users demonstrate a sequence of actions and answer human-readable follow-up questions to clarify their desire for automation. After demonstrating a sequence, the user can link a proxy input control to the action which can then be used as a shortcut for automating a sequence. Alternatively, output proxies use a notification model in which content is pushed when it becomes available. As an example use case, Rataplan uses keyboard shortcuts and tangible user interfaces (TUIs) as input proxies, and TUIs as output proxies. Instead of relying on available APIs, Rataplan automates GUIs using pixel-based reverse engineering. This ensures our approach can be used with all applications that offer a GUI, including web applications. We implemented a set of important strategies to support robust automation of modern interfaces that have a flat and minimal style, have frequent data and state changes, and have dynamic viewports.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Enhancing patient motivation through intelligibility in cardiac tele-rehabilitation</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_iwc_sankaranlhdc19/</link>
      <pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_iwc_sankaranlhdc19/</guid>
      <description>&lt;p&gt;Physical exercise training and medication compliance are primary components of cardiac rehabilitation. When rehabilitating independently at home, patients often fail to comply with their prescribed medication and find it challenging to interpret exercise targets or be aware of the expected efforts. Our work aims to assist cardiac patients in understanding their condition better, promoting medication adherence and motivating them to achieve their exercise targets in a tele-rehabilitation setting. We introduce a patient-centric intelligible visualization approach to present prescribed medication and exercise targets to patients. We assessed efficacy of intelligible visualizations on patients&#39; comprehension in two lab studies. We evaluated the impact on patient motivation and health outcomes in field studies. Patients were able to adhere to medication prescriptions, manage their physical exercises, monitor their progress and gained better self-awareness on how they achieved their rehabilitation targets. Patients confirmed that the intelligible visualizations motivated them to achieve their targets better. We observed an improvement in overall physical activity levels and health outcomes of patients.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Fortune nets for fortunettes: Formal, petri nets-based engineering of feedforward for GUI widgets</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_fm_navarrepclv19/</link>
      <pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_fm_navarrepclv19/</guid>
      <description>&lt;p&gt;Feedback and feedforward are two fundamental mechanisms that support users&#39; activities while interacting with computing devices. While feedback can be easily solved by providing information to the users following the triggering of an action, feedforward is much more complex as it must provide information before an action is performed. Fortunettes is a generic mechanism providing a systematic way of designing feedforward that addresses both action and presentation problems. Including a feedforward mechanism significantly increases the complexity of the interactive application, making it harder for developers to detect and correct defects. This paper proposes the use of an existing formal notation for describing the behavior of interactive applications and shows how to exploit that formal model to extend the behavior with feedforward. We use a small login example to demonstrate the process and the results.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Fortunettes: Feedforward about the future state of GUI widgets</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_pacmhci_copperslvnpg19/</link>
      <pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_pacmhci_copperslvnpg19/</guid>
      <description>&lt;p&gt;Feedback is commonly used to explain what happened in an interface. &#39;What if&#39; questions, on the other hand, remain mostly unanswered. In this paper, we present the concept of enhanced widgets capable of visualizing their future state, which helps users to understand what will happen without committing to an action. We describe two approaches to extend GUI toolkits to support widget-level feedforward, and illustrate the usefulness of widget-level feedforward in a standardized interface to control the weather radar in commercial aircraft. In our evaluation, we found that users require fewer clicks to achieve tasks and are more confident about their actions when feedforward information is available. These findings suggest that widget-level feedforward is highly suitable in applications the user is unfamiliar with, or when high confidence is desirable.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Improving the translation environment for professional translators</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_informatics_vandeghinstevab19/</link>
      <pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_informatics_vandeghinstevab19/</guid>
      <description>&lt;p&gt;When using computer-aided translation systems in a typical, professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view, as well as from a purely technological side. This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments we performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project.&lt;/p&gt;</description>
    </item>
    <item>
      <title>JigFab: Computational fabrication of constraints to facilitate woodworking with power tools</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_leenvlr19/</link>
      <pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_leenvlr19/</guid>
      <description>&lt;p&gt;We present JigFab, an integrated end-to-end system that supports casual makers in designing and fabricating constructions with power tools. Starting from a digital version of the construction, JigFab achieves this by generating various types of constraints that configure and physically aid the movement of a power tool. Constraints are generated for every operation and are custom to the work piece. Constraints are laser cut and assembled together with predefined parts to reduce waste. JigFab&#39;s constraints are used according to an interactive step-by-step manual. JigFab internalizes all the required domain knowledge for designing and building intricate structures, consisting of various types of finger joints, tenon &amp;amp; mortise joints, grooves, and dowels. Building such structures is normally reserved for artisans or automated with advanced CNC machinery.&lt;/p&gt;</description>
    </item>
    <item>
      <title>TaskHerder: A wearable minimal interaction interface for mobile and long-lived task execution</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_hellerl19/</link>
      <pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_hellerl19/</guid>
      <description>&lt;p&gt;Notifications have become a core component of the smartphone as our ubiquitous companion. Many of these only require minimal interaction, for which the smartwatch is a helpful companion device. However, its design and placement are influenced by its traditional ancestors. For applications where the user is constrained because of a specific usage situation, or performs tasks with both hands simultaneously, interaction with the smartwatch can be cumbersome. In this paper, we propose a wearable armstrap for minimal interaction in long-lived tasks. Placed around the elbow, it is outside the hands&#39; proximal working space, which reduces interference. Its flexible e-ink display offers screen space for overview information at minimal energy consumption for longer uptime. We designed the wearable for a professional use-case, meaning that it can easily be placed over protective clothing as its flexible round shape easily adjusts to various diameters. Capacitive touch sensing allows gesture input even under rough conditions, e.g., with gloves.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Exploring the role of artefacts to coordinate design meetings</title>
      <link>https://www.krisluyten.net/publications/lopez2018exploring/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/lopez2018exploring/</guid>
      <description>&lt;p&gt;Design artefacts are vital to communicate design outcomes, both in remote and co-located settings. However, it is unclear how artefacts are used to mediate interactions between designers and stakeholders of the design process. The purpose of this paper is to explore how professional design teams use artefacts to guide and capture discussions involving multidisciplinary stakeholders while they work in a co-located setting. An earlier draft of this paper was published in the Proceedings of the European Conference on Cognitive Ergonomics (ECCE 2017). This work adds substantial clarification of the methodology followed, further details and photographs of the case studies, and an extended discussion about our findings and their relevance for designing interactive systems. We report the observations of six design meetings in three different projects, involving professional design teams that follow a user-centered design approach. Meetings with stakeholders are instrumental for design projects. However, design teams face the challenge of synthesizing large amounts of information, often in a limited time, and with minimal common ground between meeting attendees. We found that all the observed design meetings had a similar structure consisting of a series of particular phases, in which design activities were organized around artefacts. These artefacts were used as input to disseminate and gather feedback on previous design outcomes, or as output to collect and process a variety of perspectives. We discuss the challenges faced by design teams during design meetings, and propose three design directions for interactive systems to coordinate design meetings revolving around artefacts.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Familiarisation: Restructuring layouts with visual learning models</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_iui_todijlo18/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_iui_todijlo18/</guid>
      <description>&lt;p&gt;In domains where users are exposed to large variations in visuo-spatial features among designs, they often spend excess time searching for common elements (features) in familiar locations. This paper contributes computational approaches to restructuring layouts such that features on a new, unvisited interface can be found quicker. We explore four concepts of familiarisation, inspired by the human visual system (HVS), to automatically generate a familiar design for each user.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Intellingo: An intelligible translation environment</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_coppers0lclvv18/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_coppers0lclvv18/</guid>
      <description>&lt;p&gt;Translation environments offer various translation aids to support professional translators. However, translation aids typically provide only limited justification for the translation suggestions they propose. In this paper we present Intellingo, a translation environment that explores intelligibility for translation aids, to enable more sensible usage of translation suggestions. We performed a comparative study between an intelligible version and a non-intelligible version of Intellingo. The results show that although adding intelligibility does not necessarily result in significant changes to the user experience, translators can better assess translation suggestions without a negative impact on their performance. Intelligibility is preferred by translators when the additional information it conveys benefits the translation process and when this information is not part of the translator&#39;s readily available knowledge.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Re-thinking traceability: A prototype to record and revisit the evolution of design artefacts</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_group_lopezrlhc18/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_group_lopezrlhc18/</guid>
      <description>&lt;p&gt;Keeping track of design processes is a cumbersome task due to the apparently unconstrained and unstructured nature of creative work. Traceability is fundamental to revisit and reflect on the design narratives that describe artefact evolution. In this paper, we aim to identify what characteristics are necessary to facilitate traceability of creative design processes. To this end, we use a functional prototype to connect artefacts, design rationale, and decisions in a shared workspace. We evaluated this prototype for 15 weeks with six pairs of students engaged in a user-centered design project. Our findings show that having a lean repository of artefacts annotated with design rationale can facilitate tracking progress in different phases of the process. We found that creating a record of the participants&#39; design work is useful for reflection and team agreement, for ensuring consistency of evolving artefacts, and for planning future steps in the design project.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Silicone devices: A scalable DIY approach for fabricating self-contained multi-layered soft circuits using microfluidics</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_nagelsrld18/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_nagelsrld18/</guid>
      <description>&lt;p&gt;We present a scalable Do-It-Yourself (DIY) fabrication workflow for prototyping highly stretchable yet robust devices using a CO2 laser cutter, which we call Silicone Devices. Silicone Devices are self-contained and thus embed components for input, output, processing, and power. Our approach scales to arbitrary complex devices as it supports techniques to make multi-layered stretchable circuits and buried VIAs. Additionally, high-frequency signals are supported as our circuits consist of liquid metal and are therefore highly conductive and durable. To enable makers and interaction designers to prototype a wide variety of Silicone Devices, we also contribute a stretchable sensor toolkit, consisting of touch, proximity, sliding, pressure, and strain sensors. We demonstrate the versatility and novel opportunities of our technique by prototyping various samples and exploring their use cases. Strain tests report on the reliability of our circuits and preliminary user feedback reports on the user-experience of our workflow by non-engineers.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Smart computer-aided translation environment (SCATE) : highlights</title>
      <link>https://www.krisluyten.net/publications/vandenghinste_scate2018/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/vandenghinste_scate2018/</guid>
      <description>&lt;p&gt;We present key results of SCATE (Smart Computer-Aided Translation Environment). The project investigated algorithms, user interfaces and methods that can contribute to the development of more efficient tools for translation work.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Smart makerspace: A web platform implementation</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_ijet_lickstl18/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_ijet_lickstl18/</guid>
      <description>&lt;p&gt;Makerspaces are creative and learning environments, home to activities such as fabrication processes and Do-It-Yourself (DIY) tasks. However, containing equipment that is not commonly seen or handled, these spaces can look rather challenging to novice users. This paper is based on the Smart Makerspace research from Autodesk, which uses a smart workbench as an immersive instructional space for DIY tasks. Having its functionalities in mind and trying to overcome some of its limitations, we approach the concept by building an immersive instructional space as a web platform. The platform, introduced to users in a makerspace, received feedback that reflects its potential for novice and intermediate users, for creating facilitators and encouraging these users.&lt;/p&gt;</description>
    </item>
    <item>
      <title>SmartObjects: Sixth workshop on interacting with smart objects</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_0003sggflbdm18/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_0003sggflbdm18/</guid>
      <description></description>
    </item>
    <item>
      <title>Towards tool-support for robot-assisted product creation in fab labs</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_hcse_0001dvrl18/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_hcse_0001dvrl18/</guid>
      <description>&lt;p&gt;Collaborative robot-assisted production has great potential for high-variety, low-volume production lines. These types of production lines are common in personal fabrication settings as well as in several types of flexible production lines. Moreover, many assembly tasks are in fact hard for a single user or a single robot to complete, and benefit greatly from a fluent collaboration between both. However, programming such systems is cumbersome, given the wide variation of tasks and the complexity of instructing a robot how it should move and operate in collaboration with a human user. In this paper we explore the case of collaborative assembly for personal fabrication. Based on a CAD model of the envisioned product, our software analyzes how the product can be composed from a set of standardized pieces and suggests a series of collaborative assembly steps to complete it. The proposed tool removes the need for the end-user to perform additional programming of the robot. We use a low-cost robot setup that is accessible and usable for typical personal fabrication activities in Fab Labs and Makerspaces. Participants in a first experimental study testified that our approach leads to a fluent collaborative assembly process. Based on this preliminary evaluation, we present next steps and potential implications.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Capturing design decision rationale with decision cards</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_interact_lopezrhlc17/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_interact_lopezrhlc17/</guid>
      <description>&lt;p&gt;In the design process, designers make a wide variety of decisions that are essential to transform a design from a conceptual idea into a concrete solution. Recording and tracking design decisions, a first step to capturing the rationale of the design process, are tasks that until now have been considered cumbersome and too constraining. We used a holistic approach to design, deploy, and verify decision cards: a low-threshold tool to capture, externalize, and contextualize design decisions during early stages of the design process. We evaluated the usefulness and validity of decision cards with both novice and expert designers. Our exploration yields valuable insights into how such decision cards are used and into the type of information that practitioners document as design decisions, and highlights the properties that make a recorded decision useful for supporting awareness and traceability of the design process.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Coaching compliance: A tool for personalized e-coaching in cardiac rehabilitation</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_interact_sankaranhdlc17/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_interact_sankaranhdlc17/</guid>
      <description>&lt;p&gt;Patient coaching is integral to cardiac rehabilitation programs to help patients understand and cope better with their condition and become active participants in their care. Remote patient monitoring technologies and tele-monitoring solutions have proven to be effective and paved the way for novel remote rehabilitation approaches. Nonetheless, these solutions focus largely on monitoring patients without a specific focus on coaching patients. Additionally, these systems lack personalization and a deeper understanding of individual patient needs. In our demonstration, we present a tool to personalize e-coaching based on individual patient risk factors, adherence rates and personal preferences of patients using a tele-rehabilitation solution. We developed the tool after conducting a workshop and multiple brainstorms with various caregivers involved in coaching cardiac patients to connect their perspectives with patient needs. It was integrated into a comprehensive tele-rehabilitation application.&lt;/p&gt;</description>
    </item>
    <item>
      <title>DICE-R: Defining human-robot interaction with composite events</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_berghl17/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_berghl17/</guid>
      <description></description>
    </item>
    <item>
      <title>SmartObjects: Fifth workshop on interacting with smart objects</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_iui_schnelle_walka017/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_iui_schnelle_walka017/</guid>
      <description></description>
    </item>
    <item>
      <title>StrutModeling: A low-fidelity construction kit to iteratively model, test, and adapt 3D objects</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_uist_leenrl17/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_uist_leenrl17/</guid>
      <description>&lt;p&gt;We present StrutModeling, a computationally enhanced construction kit that enables users without a 3D modeling background to prototype 3D models by assembling struts and hub primitives in physical space. Physical 3D models are immediately captured in software and result in readily available models for 3D printing. Given the concrete physical format of StrutModels, modeled objects can be tested and fine-tuned in the presence of existing objects and specific needs of users. StrutModeling avoids puzzling with pieces by contributing an adjustable strut and universal hub design. Struts can be adjusted in length and snap to magnetic hubs in any configuration. As such, arbitrarily complex models can be modeled, tested, and adjusted during the design phase. In addition, the embedded sensing capabilities allow struts to be used as measuring devices for lengths and angles, and tune physical mesh models according to existing physical objects.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The path-to-purchase is paved with digital opportunities: An inventory of shopper-oriented retail technologies</title>
      <link>https://www.krisluyten.net/publications/willems2017228/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/willems2017228/</guid>
      <description>&lt;p&gt;This study focuses on innovative ways to digitally instrument the servicescape in bricks-and-mortar retailing. In the present digital era, technological developments allow for augmenting the shopping experience and capturing moments-of-truth along the shopper&#39;s path-to-purchase. This article provides an encompassing inventory of retail technologies resulting from a systematic screening of three secondary data sources, over 2008&amp;ndash;2016: (1) the academic marketing literature, (2) retailing-related scientific ICT publications, and (3) business practices (e.g., publications from retail labs and R&amp;amp;D departments). An affinity diagram approach allows for clustering the retail technologies from an HCI perspective. Additionally, a categorization of the technologies takes place in terms of the type of shopping value that they offer, and the stage in the path-to-purchase in which they prevail. This in-depth analysis results in a comprehensive inventory of retail technologies that allows for verifying the suitability of these technologies for targeted in-store shopper marketing objectives (cf. the resulting online faceted-search repository at &lt;a href=&#34;http://www.retail-tech.org&#34;&gt;http://www.retail-tech.org&lt;/a&gt;). The findings indicate that the majority of the inventoried technologies provide cost savings, convenience and utilitarian value, whereas few offer hedonic or symbolic benefits. Moreover, at present the earlier stages of the path-to-purchase appear to be the most instrumented. The article concludes with a research agenda.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Untangling design meetings: Artefacts as input and output of design activities</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_ecce_lopezlvc17/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_ecce_lopezlvc17/</guid>
      <description>&lt;p&gt;Design meetings with multidisciplinary stakeholders are instrumental for design projects. However, design teams face the challenges of synthesizing large amounts of information, often in a limited time, and with minimal common ground. We investigate these challenges through in-the-wild observations of six design meetings in three different projects, with professional design teams that follow a user-centered design methodology. We found that all the observed design meetings had a similar structure consisting of particular phases, in which design activities were organized around artefacts. These artefacts were used as input to disseminate and gather feedback on previous design outcomes, or as output to collect and process a variety of perspectives. From these findings, we synthesize practical guidelines to optimize artefact-based interactions during design meetings.&lt;/p&gt;</description>
    </item>
    <item>
      <title>A grounded approach for applying behavior change techniques in mobile cardiac tele-rehabilitation</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_petra_sankaranfhdlc16/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_petra_sankaranfhdlc16/</guid>
      <description>&lt;p&gt;In mobile tele-rehabilitation applications for Coronary Artery Disease (CAD) patients, behavior change plays a central role in influencing better therapy adherence and prevention of disease recurrence. However, creating sustainable behavior change that holds a beneficial impact over a prolonged period of time remains an important challenge. In this paper we discuss various models and frameworks related to persuasion and behavior change, and investigate how to incorporate these with a multidisciplinary user-centered design approach for creating a mobile tele-rehabilitation application. By implementing different concepts that contribute to behavior change and applying a set of distinct persuasive design patterns, we were able to translate the high-level goals of behavior theory into a mobile application that explicitly incorporates behavior change techniques and also offers a good overall user experience. We evaluated our system, HeartHab, in a lab setting and show that our approach leads to a high user acceptance and willingness to use the system in daily activities.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Back on bike: The BoB mobile cycling app for secondary prevention in cardiac patients</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_mhci_geurtshdlc16/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_mhci_geurtshdlc16/</guid>
      <description>&lt;p&gt;Persons who have suffered from a cardiac disease are often advised to integrate a sufficient level of physical exercise into their daily life. Initially, cardiac rehabilitation takes place in a closely monitored setting in a hospital or a rehabilitation center. Sustaining the effort once the patient has left the ambulatory, supervised environment is a challenge, and drop-out rates are high. Emerging approaches such as telemonitoring and telerehabilitation have shown the potential to support the cardiac patient in adhering to the advised physical exercise. However, most telerehabilitation solutions only support a limited range of physical exercise, such as step-counting during walking. We propose BoB (Back on Bike), a mobile application that guides cardiac patients while cycling. Design choices are explained according to three pillars: ease of use, reducing fear, and direct and indirect motivation. In this paper, we report the results from a field study with cardiac patients.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Calculating and visualising energy expenditure to monitor physical activity in tele-rehabilitation</title>
      <link>https://www.krisluyten.net/publications/sankaran_calculating2016/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/sankaran_calculating2016/</guid>
      <description>&lt;p&gt;We have developed an approach that presents patients with an intelligible, user-friendly yet correct visualisation to check progress and verify adherence to the prescribed physical exercise program. Integrated in a comprehensive, mobile self-monitoring app, this patient-centric approach facilitates keeping patients motivated and engaged while rehabilitating remotely.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Hasselt: Rapid prototyping of multimodal interactions with composite event-driven programming</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_ijpop_cuencablc16/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_ijpop_cuencablc16/</guid>
      <description></description>
    </item>
    <item>
      <title>Hidden in plain sight: An exploration of a visual language for near-eye out-of-focus displays in the peripheral view</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_luytendrcv16/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_luytendrcv16/</guid>
      <description>&lt;p&gt;In this paper, we set out to find what encompasses an appropriate visual language for information presented on near-eye out-of-focus displays. These displays are positioned in a user&#39;s peripheral view, very near to the user&#39;s eyes, for example on the inside of the temples of a pair of glasses. We explored the usable display area, the role of spatial and retinal variables, and the influence of motion and interaction for such a language. Our findings show that a usable visual language can be accomplished by limiting the possible shapes and by making clever use of orientation and meaningful motion. We found that especially motion is very important to improve perception and comprehension of what is being displayed on near-eye out-of-focus displays, and that perception is further improved if direct interaction with the content is allowed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Integrating serious games and tangible objects for functional handgrip training: A user study of handly in persons with multiple sclerosis</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_acmdis_vandermaesenwfl16/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_acmdis_vandermaesenwfl16/</guid>
      <description></description>
    </item>
    <item>
      <title>Proceedings of the 8th ACM SIGCHI symposium on engineering interactive computing systems, EICS 2016, brussels, belgium, june 21-24, 2016</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_2016/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_2016/</guid>
      <description></description>
    </item>
    <item>
      <title>Purpose-centric appropriation of everyday objects as game controllers</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_todidbfkl16/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_todidbfkl16/</guid>
      <description>&lt;p&gt;Generic multi-button controllers are the most common input devices used for video games. In contrast, dedicated game controllers and gestural interactions increase immersion and playability. Room-sized gaming has opened up possibilities to further enhance the immersive experience, and provides players with opportunities to use full-body movements as input. We present a purpose-centric approach to appropriating everyday objects as physical game controllers, for immersive room-sized gaming. Virtual manipulations supported by such physical controllers mimic real-world function and usage. Doing so opens up new possibilities for interactions that flow seamlessly from the physical into the virtual world. As a proof-of-concept, we present a &#39;Tower Defence&#39; styled game, that uses four everyday household objects as game controllers, each of which serves as a weapon to defend the base of the players from enemy bots. Players can use 1) a mop (or a broom) to sweep away enemy bots directionally; 2) a fan to scatter them away; 3) a vacuum cleaner to suck them; 4) a mouse trap to destroy them. Each controller is tracked using a motion capture system. A physics engine is integrated in the game, and ensures virtual objects act as though they are manipulated by the actual physical controller, thus providing players with a highly-immersive gaming experience.&lt;/p&gt;</description>
    </item>
    <item>
      <title>ReHappy: The house elf that serves your rehabilitation exercises</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_coninxwll16/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_coninxwll16/</guid>
      <description>&lt;p&gt;Intense and frequent motor training is essential for persons with neurological disorders such as MS and stroke. Technology-based rehabilitation has been proven to be beneficial for specific patient groups, as it is effective for muscle strength and active range of motion of the upper limbs. Personalized training in technology-supported rehabilitation setups using motivational techniques such as serious games has the potential to make repetitive training efforts more endurable. Most neurological rehabilitation approaches suffer from a strict separation between training scenarios and activities of daily living, and have difficulties bridging the gap between exercising on a functional level and performing on the level of activities of daily living. To improve the integration of motor skill training in a daily living context, we propose an approach and proof-of-concept implementation of the training device ReHappy, a tangible character that engages patients in performing additional training that complements their daily activities.&lt;/p&gt;</description>
    </item>
    <item>
      <title>SCWT: A joint workshop on smart connected and wearable things</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_iui_schnelle_walkal16/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_iui_schnelle_walkal16/</guid>
      <description></description>
    </item>
    <item>
      <title>Storyboards as a lingua franca in multidisciplinary design teams</title>
      <link>https://www.krisluyten.net/publications/dblp_books_sp_16_haesenvlc16/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_books_sp_16_haesenvlc16/</guid>
      <description>&lt;p&gt;Design, and in particular user-centered design processes for interactive systems, typically involve multidisciplinary teams. The different and complementary perspectives of the team members enrich the design ideas and decisions, and the involvement of all team members is needed to achieve a user interface for a system that carefully considers all aspects, ranging from user needs to technical requirements. The difficulty is getting all team members involved in the early stages of design and communicating design ideas and decisions in a way that all team members can understand them and use them in an appropriate way in later stages of the process. This chapter describes the COMuICSer storyboarding technique, which presents the scenario of use of a future system in a way that is understandable for each team member, regardless of their background. Based on an observational study in which multidisciplinary teams collaboratively created storyboards during a co-located session, we present recommendations for the facilitation of co-located collaborative storyboarding sessions for multidisciplinary teams and digital tool support for this type of group work.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Toward specifying human-robot collaboration with composite events</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_ro_man_berghllc16/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_ro_man_berghllc16/</guid>
      <description></description>
    </item>
    <item>
      <title>Whom-i-approach: A system that provides cues on approachability of bystanders for blind users</title>
      <link>https://www.krisluyten.net/publications/tubiblio98343/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/tubiblio98343/</guid>
      <description>&lt;p&gt;Body posture is one of many visual cues used by sighted persons to determine if someone would be open to initiating a conversation. These cues are inaccessible for individuals with blindness, leading to difficulties when deciding whom to approach for eventual assistance. Current camera technologies, such as depth cameras, make it possible to automatically scan the environment to assess the approachability of nearby persons. We present Whom-I-Approach, a system that translates postures of bystanders into a measure of approachability and communicates this information using auditory and tactile cues. The system scans the environment and determines the approachability, based on body posture, of the persons in the vicinity of the user. Efficiency as well as perceived system usability and psychosocial attitudes are measured in a user study, showing the potential to improve competence for users with blindness prior to engagement in social interactions.&lt;/p&gt;</description>
    </item>
    <item>
      <title>A user study for comparing the programming efficiency of modifying executable multimodal interaction descriptions: A domain-specific language versus equivalent event-callback code</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_oopsla_cuencablc15/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_oopsla_cuencablc15/</guid>
      <description></description>
    </item>
    <item>
      <title>Augmenting social interactions: Realtime behavioural feedback using social signal processing techniques</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_damiantbsla15/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_damiantbsla15/</guid>
      <description>&lt;p&gt;Nonverbal and unconscious behaviour is an important component of daily human-human interaction. This is especially true in situations such as public speaking, job interviews or information-sensitive conversations, where researchers have shown that an increased awareness of one&#39;s behaviour can improve the outcome of the interaction. With wearable technology, such as Google Glass, we now have the opportunity to augment social interactions and provide realtime feedback on one&#39;s behaviour in an unobtrusive way. In this paper we present Logue, a system that provides realtime feedback on the presenter&#39;s openness, body energy and speech rate during public speaking. The system analyses the user&#39;s nonverbal behaviour using social signal processing techniques and gives visual feedback on a head-mounted display. We conducted two user studies, with a staged and a real presentation scenario, which showed that Logue&#39;s feedback was perceived as helpful and had a positive impact on the speaker&#39;s performance.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Empirical study: Comparing Hasselt with C# to describe multimodal dialogs</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_models_cuencablc15/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_models_cuencablc15/</guid>
      <description>&lt;p&gt;Previous research has proposed guidelines for creating domain-specific languages for modeling human-machine multimodal dialogs. One of these guidelines suggests the use of multiple levels of abstraction so that the descriptions of multimodal events can be separated from the human-machine dialog model. In line with this guideline, we implemented Hasselt, a domain-specific language that combines textual and visual models, each of them aiming at describing different aspects of the intended dialog system. We conducted a user study to measure whether the proposed language provides benefits over equivalent event-callback code. During the user study participants had to modify the Hasselt models and the equivalent C# code. The completion times obtained for C# were on average shorter, although the difference was not statistically significant. Subjective responses were collected using standardized questionnaires and an interview, which both indicated that participants saw value in the proposed models. We provide possible explanations for the results and discuss some lessons learned regarding the design of the empirical study.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Gestu-wan - an intelligible mid-air gesture guidance system for walk-up-and-use displays</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_interact_rovelodvlc15/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_interact_rovelodvlc15/</guid>
      <description>&lt;p&gt;We present Gestu-Wan, an intelligible gesture guidance system designed to support mid-air gesture-based interaction for walk-up-and-use displays. Although gesture-based interfaces have become more prevalent, there is currently very little uniformity with regard to gesture sets and the way gestures can be executed. This leads to confusion, bad user experiences and users who would rather avoid than engage in interaction using mid-air gesturing. Our approach improves the visibility of gesture-based interfaces and facilitates execution of mid-air gestures without prior training. We compare Gestu-Wan with a static gesture guide, which shows that it can help users with both performing complex gestures as well as understanding how the gesture recognizer works.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Graphical toolkits for rapid prototyping of multimodal systems: A survey</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_iwc_cuencacvl15/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_iwc_cuencacvl15/</guid>
      <description></description>
    </item>
    <item>
      <title>Hasselt UIMS: A tool for describing multimodal interactions with composite events</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_cuencablc15/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_cuencablc15/</guid>
      <description></description>
    </item>
    <item>
      <title>Helaba: A system to highlight design rationale in collaborative design processes</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_cdve_lopezhlc15/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_cdve_lopezhlc15/</guid>
      <description>&lt;p&gt;Design activities associated with the ideation phase of design processes require mutual understanding and clear communication based on artefacts. However, this is often a challenge for remote and multidisciplinary teams due to the lack of ad hoc tools for this purpose. Our approach is to address these limitations by explicitly connecting pieces of information related to design rationale, feedback, and evolution with the artefacts that are the subject of communication. We propose Helaba, a system that creates a shared workspace to support communication revolving around design artefacts and activities within multidisciplinary teams. Helaba supports design communication and rationale, and potentially leads to more satisfying outcomes from the design process.&lt;/p&gt;</description>
    </item>
    <item>
      <title>PaperPulse: An integrated approach for embedding electronics in paper designs</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_ramakerstl15/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_ramakerstl15/</guid>
      <description>&lt;p&gt;We present PaperPulse, a design and fabrication approach that enables designers without a technical background to produce standalone interactive paper artifacts by augmenting them with electronics. With PaperPulse, designers overlay pre-designed visual elements with widgets available in our design tool. PaperPulse provides designers with three families of widgets designed for smooth integration with paper, for a total of 20 different interactive components. We also contribute a logic demonstration and recording approach, Pulsation, that allows for specifying functional relationships between widgets. Using the final design and the recorded Pulsation logic, PaperPulse generates layered electronic circuit designs, and code that can be deployed on a microcontroller. By following automatically generated assembly instructions, designers can seamlessly integrate the microcontroller and widgets in the final paper artifact.&lt;/p&gt;</description>
    </item>
    <item>
      <title>PaperPulse: An integrated approach for embedding electronics in paper designs</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_siggraph_ramakerstl15/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_siggraph_ramakerstl15/</guid>
      <description></description>
    </item>
    <item>
      <title>PaperPulse: An integrated approach for embedding electronics in paper designs</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_siggraph_ramakerstl15a/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_siggraph_ramakerstl15a/</guid>
      <description></description>
    </item>
    <item>
      <title>PaperPulse: An integrated approach to fabricating interactive paper</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_ramakerstl15a/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_ramakerstl15a/</guid>
      <description></description>
    </item>
    <item>
      <title>Proxemic flow: Dynamic peripheral floor visualizations for revealing and mediating large surface interactions</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_interact_vermeulenlcmb15/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_interact_vermeulenlcmb15/</guid>
      <description></description>
    </item>
    <item>
      <title>SmartObjects: Fourth workshop on interacting with smart objects</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_iui_schnelle_walkam15/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_iui_schnelle_walkam15/</guid>
      <description></description>
    </item>
    <item>
      <title>Study and analysis of collaborative design practices</title>
      <link>https://www.krisluyten.net/publications/gutierrez_cdp2015/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/gutierrez_cdp2015/</guid>
      <description>&lt;p&gt;Current digital design tools with high connectivity offer a wide range of possibilities for both co-located and remote collaborative design activities. However, from the point of view of the conventional collaborative design practices we identified with practitioners and design companies, these tools lack integrated and comprehensive support during the ideation phase. Consequently, we propose a reference framework with solutions for supporting collaboration among professional designers with digital tools in the early stages of design.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Web-powered virtual site exploration based on augmented 360 degree video via gesture-based interaction</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_tvx_wijnantsrdqll15/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_tvx_wijnantsrdqll15/</guid>
      <description></description>
    </item>
    <item>
      <title>A domain-specific textual language for rapid prototyping of multimodal interactive systems</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_cuencablc14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_cuencablc14/</guid>
      <description>&lt;p&gt;There are currently toolkits that allow the specification of executable multimodal human-machine interaction models. Some provide domain-specific visual languages with which a broad range of interactions can be modeled, but at the expense of bulky diagrams. Others instead interpret concise specifications written in existing textual languages, even though their non-specialized notations prevent the productivity improvement achievable through domain-specific ones. We propose a domain-specific textual language and its supporting toolkit; they both overcome the shortcomings of the existing approaches while retaining their strengths. The language provides notations and constructs specially tailored to compactly declare the event patterns raised during the execution of multimodal commands. The toolkit detects the occurrence of these patterns and invokes the functionality of a back-end system in response.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Break-it, hack-it, make-it: The &#39;hack-a-thing&#39; workshop series as a showcase for the integration of creative thinking processes into FabLab Genk</title>
      <link>https://www.krisluyten.net/publications/dreessenhackathing2014/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dreessenhackathing2014/</guid>
      <description>&lt;p&gt;FabLabs are mostly known for their problem-solving approach since they allow people to develop and perfect a prototype of &#39;almost any product&#39;, using the available infrastructure, facilities and know-how (Mandavilli, 2006). Since 2012, FabLab Genk too has become a hotbed for problem-solving activities. FabLab Genk is situated in a creative context and is used by many media, arts and design students, researchers, designers and artists, for creating a wide variety of physical objects that they could otherwise only imagine. However, we noticed that the creative thinking processes that take place before the actual problem-solving do not take place within the environment of FabLab Genk. As a way of including these creative thinking processes into its environment, FabLab Genk organised a series of workshops called &#39;Hack-a-Thing&#39;. This paper shows how &#39;Hack-a-Thing&#39; proved to be a setup that facilitates new ways of learning and creative thinking in the environment of FabLab Genk. First, this paper illustrates that the &#39;Hack-a-Thing&#39; workshop series allowed FabLab Genk to become an environment that fosters a new, more informal and creative form of learning. Second, this paper shows how &#39;Hack-a-Thing&#39; stimulated a more creative way of using and thinking, particularly about alternative relationships with technological objects.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Exploring social augmentation concepts for public speaking using peripheral feedback and real-time behavior analysis</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_ismar_damiantbsla14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_ismar_damiantbsla14/</guid>
      <description></description>
    </item>
    <item>
      <title>Game of tones: Learning to play songs on a piano using projected instructions and games</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_raymaekersvlc14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_raymaekersvlc14/</guid>
      <description>&lt;p&gt;Learning to play a musical instrument such as the piano requires a substantial amount of practice and perseverance in learning to read and play from sheet music. Our interactivity demo allows people to learn to play songs without requiring sheet music reading skills. We project a graphical notation on top of a piano that indicates what key(s) need to be pressed and create a feedback loop that monitors the player&#39;s performance. We implemented The Augmented Piano (TAP), which is a straightforward combination of a physical piano with our alternative notation projected on top. Piano Attack (PAT) extends TAP with a shooting game that continuously provides game-based incentives for learning to play the piano.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Interdisciplinary design of a pervasive fall handling system: A case study</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_ph_berghlaejb14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_ph_berghlaejb14/</guid>
      <description></description>
    </item>
    <item>
      <title>Investigating the effects of using biofeedback as visual stress indicator during video-mediated collaboration</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_tanslc14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_tanslc14/</guid>
      <description>&lt;p&gt;During remote video-mediated assistance, instructors often guide workers through problems and instruct them to perform unfamiliar or complex operations. However, the workers&#39; performance might deteriorate due to stress. We argue that providing biofeedback to the instructor can improve communication and lead to lower stress. This paper presents a thorough investigation of the mental workload and stress perceived by twenty participants, paired up in an instructor-worker scenario, performing remote video-mediated tasks. The interface conditions differ in task, facial and biofeedback communication. Two self-report measures are used to assess mental workload and stress. Results show that pairs reported lower mental workload and stress when instructors used biofeedback compared to interfaces with a facial view. Significant correlations were found between task performance and reduced stress (i.e. increased task engagement and decreased worry) for instructors, and between task performance and declining mental workload (i.e. increased performance) for workers. Our findings provide insights to advance video-mediated interfaces for remote collaborative work.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Multi-viewer gesture-based interaction for omni-directional video</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_ruizvlac14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_ruizvlac14/</guid>
      <description>&lt;p&gt;Omni-directional video (ODV) is a novel medium that offers viewers a 360º panoramic recording. This type of content will become more common within our living rooms in the near future, seeing that immersive display technologies such as 3D television are on the rise. However, little attention has been given to how to interact with ODV content. We present a gesture elicitation study in which we asked users to perform mid-air gestures that they consider appropriate for ODV interaction, both for individual and collocated settings. We are interested in the gesture variations and adaptations that come forth from individual and collocated usage. To this end, we gathered quantitative and qualitative data by means of observations, motion capture, questionnaires and interviews. This data resulted in a user-defined gesture set for ODV, alongside an in-depth analysis of the variation in gestures we observed during the study.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Paddle: Highly deformable mobile devices with physical controls</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_ramakerssl14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_ramakerssl14/</guid>
      <description>&lt;p&gt;Touch screens have been widely adopted in mobile devices. Although touch input is very flexible in that it can be used for a wide variety of applications on mobile devices, touch screens do not provide physical affordances, encourage eyes-free use or utilize the full dexterity of our hands, due to the lack of physical controls. On the other hand, physical controls are often tailored to the task at hand, making them less flexible and therefore less suitable for general-purpose use in mobile settings. In this paper, we show how to combine the flexibility of touch screens with the physical qualities that real-world controls provide in a mobile context. We do so using a deformable device that can be transformed into various special-purpose physical controls. We present Paddle, a highly deformable device that can be transformed into different shapes. Paddle bridges the gap between differently sized mobile devices available nowadays, such as phones and tablets. Additionally, Paddle demonstrates a novel opportunity for deformable devices to transform into differently shaped physical controls that provide clear physical affordances for the task at hand. Physical controls have the advantage of exploiting people&#39;s innate abilities for manipulating physical objects in the real world. We designed and implemented a prototype system whose engineering principles are based on the design of the Rubik&#39;s Magic, a folding plate puzzle. Additionally, we explore the interaction techniques enabled by this concept and conduct an in-depth study to evaluate our transformable physical controls. Our findings show that these physical controls provide several benefits over traditional touch interaction techniques commonly used on mobile devices.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Paddle: Highly deformable mobile devices with physical controls</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_ramakerssl14a/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_ramakerssl14a/</guid>
      <description>&lt;p&gt;Paddle is a highly deformable mobile device that leverages engineering principles from the design of the Rubik&#39;s Magic, a folding plate puzzle. The various transformations supported by Paddle bridge the gap between differently sized mobile devices available nowadays, such as phones, armbands, tablets and game controllers. Besides this, Paddle can be transformed into different physical controls in only a few steps, such as peeking options, a ring to scroll through lists and a book-like form factor to leaf through pages. These special-purpose physical controls have the advantage of providing clear physical affordances and exploiting people&#39;s innate abilities for manipulating objects in the real world. We investigated the benefits of these interaction techniques in detail in [1]. In contrast to traditional touch screens, physical controls are usually less flexible and therefore less suitable for mobile settings. Paddle shows how mobile devices can be designed to incorporate physical controls and thus combine the flexibility of touch screens with the physical qualities that real-world controls provide. Our current prototype is tracked with an optical tracking system and uses a projector to provide visual output. In the future, we envision devices similar to Paddle that are entirely self-contained, using tiny integrated displays.&lt;/p&gt;</description>
    </item>
    <item>
      <title>PhysiCube: Providing tangible interaction in a pervasive upper-limb rehabilitation system</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_tei_vandermaesenwlc14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_tei_vandermaesenwlc14/</guid>
      <description></description>
    </item>
    <item>
      <title>Raising awareness on smartphone privacy issues with SASQUATCH, and solving them with PrivacyPolice</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_mobiquitous_bonnelql14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_mobiquitous_bonnelql14/</guid>
      <description>&lt;p&gt;Smartphones leak personal information about their owners when they connect to the Internet. Despite recent coverage of these issues in popular media, raising awareness remains problematic since the leaks are largely invisible to users. We designed a system, SASQUATCH, consisting of a network scanner and a public display, to draw visitors&#39; attention and inform them about these issues. SASQUATCH first gathers private information about previous whereabouts, and then shows an anonymized version of this data on the public display to draw the visitor&#39;s attention. Next, SASQUATCH offers an interactive component that allows people to view the information their own smartphone is leaking in private, and then provides solutions (including a fully-automated smartphone application) for securing against future privacy leaks. A set of initial field trials has shown that SASQUATCH is highly effective in raising awareness.&lt;/p&gt;</description>
    </item>
    <item>
      <title>ReHoblet - A home-based rehabilitation game on the tablet</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_healthcom_vandermaesenrlc14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_healthcom_vandermaesenrlc14/</guid>
      <description></description>
    </item>
    <item>
      <title>SmartObjects: Third workshop on interacting with smart objects</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_iui_schnelle_walkahrblm14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_iui_schnelle_walkahrblm14/</guid>
      <description></description>
    </item>
    <item>
      <title>Suit up!: Enabling eyes-free interactions on jacket buttons</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_todil14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_todil14/</guid>
      <description>&lt;p&gt;We present a new interaction space for wearables by integrating interactive elements, in the form of buttons, into outdoor clothing, specifically jackets and coats. Interactive buttons, or &amp;quot;iButtons&amp;quot;, allow users to perform specific tasks using subtle, inconspicuous gestures. They are intended for outdoor settings, where reaching for a mobile phone or another device may not be convenient or appropriate. Different types of buttons serve dedicated functions, and appropriate placement of these buttons makes them easily accessible without requiring visual contact. By adding context sensitivity, these buttons can also be repurposed to fit other functions. By linking multiple buttons, it is possible to create workflows for specific tasks. We provide a description of an initial iButton design space and highlight some scenarios to illustrate the envisioned usage of interactive buttons.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The design of slow-motion feedback</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_acmdis_vermeulenlcm14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_acmdis_vermeulenlcm14/</guid>
      <description>&lt;p&gt;The misalignment between the timeframe of systems and that of their users can cause problems, especially when the system relies on implicit interaction. It makes it hard for users to understand what is happening and leaves them little chance to intervene. This paper introduces the design concept of slow-motion feedback, which can help to address this issue. A definition is provided, together with an overview of existing applications of this technique.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The EICS 2014 doctoral consortium</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_nigayl14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_nigayl14/</guid>
      <description></description>
    </item>
    <item>
      <title>The role of physiological cues during remote collaboration</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_presence_tanlbsc14/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_presence_tanlbsc14/</guid>
      <description>&lt;p&gt;Empathic communication allows individuals to perceive and understand the feelings and emotions of the person with whom they are interacting. This could be particularly important during remote collaboration (such as remote assistance or distance learning) to enhance the social and emotional understanding of geographically distributed partners. However, supporting awareness in remote collaboration is very challenging, especially when the interaction with remote parties conveys less information than a physical interaction. We explore the effect of a visualization using physiological cues that allows users to interpret the emotional behaviors of the remote parties with whom they are interacting in real time. The proposed visual representation allows users to infer emotional patterns from physiological cues that can potentially influence their communication approach toward a more aggressive style or toward maintaining a passive and peaceful interaction. We conducted a study involving participants who were paired up for a collaborative assessment task, interacting via voice only, videoconference, or a visual representation of the physiological measurements. Participants perceived higher group cohesiveness when using our visual representation than when using voice-only interaction. Further analysis shows that the visual representation significantly increases the positive affect score (i.e., participants are perceived to be more alert and demonstrate less distress) during remote collaboration. We discuss the possibilities of the proposed visual representation to support empathic communication during remote collaboration, and the benefits to the remote partners of having positive affect and group cohesiveness.&lt;/p&gt;</description>
    </item>
    <item>
      <title>&#34;With a little help from my friends&#34;: Context-aware help and guidance using the social network</title>
      <link>https://www.krisluyten.net/publications/dblp_series_sci_mahmudlc13/</link>
      <pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_series_sci_mahmudlc13/</guid>
      <description></description>
    </item>
    <item>
      <title>ACM SIGCHI symposium on engineering interactive computing systems, EICS&#39;13, London, United Kingdom - June 24-27, 2013</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_2013/</link>
      <pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_2013/</guid>
      <description></description>
    </item>
    <item>
      <title>Activity-centric support for ad hoc knowledge work: A case study of co-activity manager</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_houbenbvlc13/</link>
      <pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_houbenbvlc13/</guid>
      <description>&lt;p&gt;Modern knowledge work consists of both individual and highly collaborative activities that are typically composed of a number of configuration, coordination and articulation processes. The desktop interface today, however, provides very little support for these processes and rather forces knowledge workers to adapt to the technology. We introduce co-Activity Manager, an activity-centric desktop system that (i) provides tools for ad hoc dynamic configuration of a desktop working context, (ii) supports both explicit and implicit articulation of ongoing work through a built-in collaboration manager and (iii) provides the means to coordinate and share working context with other users and devices. In this paper, we discuss the activity theory informed design of co-Activity Manager and report on a 14 day field deployment in a multi-disciplinary software development team. The study showed that the activity-centric workspace supports different individual and collaborative work configuration practices and that activity-centric collaboration is a two-phase process consisting of an activity sharing and per-activity coordination phase.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Assessing the support provided by a toolkit for rapid prototyping of multimodal systems</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_cuencavcl13/</link>
      <pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_cuencavcl13/</guid>
      <description></description>
    </item>
    <item>
      <title>Bro-cam: Improving game experience with empathic feedback using posture tracking</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_persuasive_tansslc13/</link>
      <pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_persuasive_tansslc13/</guid>
      <description>&lt;p&gt;In today&#39;s videogames, user feedback is often provided through raw statistics and scoreboards. We envision that incorporating empathic feedback matching the player&#39;s current mood will improve the overall gaming experience. In this paper we present Bro-cam, a novel system that provides empathic feedback to players based on their body postures. Different body postures of the players are used as an indicator of their openness. From their level of openness, Bro-cam profiles the players into different personality types, ranging from introvert to extrovert. Empathic feedback is then automatically generated and matched to their preferences for certain humoristic feedback statements. We use a depth camera to track the player&#39;s body postures and movements during the game and analyze these to provide customized feedback. We conducted a user study involving 32 players to investigate their subjective assessment of the empathic game feedback. Semi-structured interviews reveal that participants were positive about the empathic feedback and that Bro-cam significantly improves their game experience.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Crossing the bridge over Norman&#39;s gulf of execution: Revealing feedforward&#39;s true identity</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_vermeulenlhc13/</link>
      <pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_vermeulenlhc13/</guid>
      <description>&lt;p&gt;Feedback and affordances are two of the most well-known principles in interaction design. Unfortunately, the related and equally important notion of feedforward has not been given as much consideration. Nevertheless, feedforward is a powerful design principle for bridging Norman&#39;s Gulf of Execution. We reframe feedforward by disambiguating it from related design principles such as feedback and perceived affordances, and identify new classes of feedforward. In addition, we present a reference framework that provides a means for designers to explore and recognize different opportunities for feedforward.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Empathic television experiences with second screens</title>
      <link>https://www.krisluyten.net/publications/willaert_empathictv2013/</link>
      <pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/willaert_empathictv2013/</guid>
      <description>&lt;p&gt;The television remains a central hub in the home environment. We believe that in order to maintain this central role, future TVs will need to incorporate empathic features. These will be delivered by interacting with other personal devices in the home and with services in the cloud. This position paper illustrates the common as well as the individual views of several Belgian partners working around a common scenario in the ITEA2 &#39;Empathic Products&#39; project.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Development of a storyboard language for people with mild dementia</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_hmd_luytenmv13/</link>
      <pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_hmd_luytenmv13/</guid>
      <description></description>
    </item>
    <item>
      <title>Finding a needle in a haystack: An interactive video archive explorer for professional video searchers</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_mta_haesenmlcbtppm13/</link>
      <pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_mta_haesenmlcbtppm13/</guid>
      <description>&lt;p&gt;Professional video searchers typically have to search for particular video fragments in a vast video archive that contains many hours of video data. Without the right video archive exploration tools, this is a difficult and time-consuming task that induces hours of video skimming. We propose the video archive explorer, a video exploration tool that provides visual representations of automatically detected concepts to facilitate individual and collaborative video search tasks. This video archive explorer is developed by employing a user-centred methodology, which ensures that the tool is more likely to fit the needs of end users. A qualitative evaluation with professional video searchers shows that the combination of automatic video indexing, interactive visualisations and user-centred design can result in increased usability, user satisfaction and productivity.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Informing intelligent user interfaces by inferring affective states from body postures in ubiquitous computing environments</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_iui_tanslc13/</link>
      <pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_iui_tanslc13/</guid>
      <description>&lt;p&gt;Intelligent User Interfaces can benefit from having knowledge of the user&#39;s emotion. However, current implementations for detecting affective states often constrain the user&#39;s freedom of movement by instrumenting her with sensors. This prevents affective computing from being deployed in naturalistic and ubiquitous computing contexts. In this paper, we present a novel system called mASqUE, which uses a set of association rules to infer someone&#39;s affective state from their body postures. This is done without any user instrumentation and using off-the-shelf and inexpensive commodity hardware: a depth camera tracks the body posture of the users, and their postures are also used as an indicator of their openness. By combining the posture information with physiological sensor measurements we were able to mine a set of association rules relating postures to affective states. We demonstrate the possibility of inferring affective states from body postures in ubiquitous computing environments, and our study also provides insights into how this opens up new possibilities for IUIs to access the affective states of users from body postures in a nonintrusive way.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Introduction to the special issue on interaction with smart objects</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_tiis_schreiberlmbh13/</link>
      <pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_tiis_schreiberlmbh13/</guid>
      <description></description>
    </item>
    <item>
      <title>Liftacube: A prototype for pervasive rehabilitation in a residential setting</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_petra_vandermaesenwclg13/</link>
      <pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_petra_vandermaesenwclg13/</guid>
      <description></description>
    </item>
    <item>
      <title>SmartObjects: Second workshop on interacting with smart objects</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_iui_schnelle_walkahlblm13/</link>
      <pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_iui_schnelle_walkahlblm13/</guid>
      <description></description>
    </item>
    <item>
      <title>Timisto: A technique to extract usage sequences from storyboards</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_vogthlcm13/</link>
      <pubDate>Tue, 01 Jan 2013 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_vogthlcm13/</guid>
      <description>&lt;p&gt;Storyboarding is a technique that is often used for the conception of new interactive systems. A storyboard illustrates graphically how a system is used by its users and what a typical context of usage is. Although the informal notation of a storyboard stimulates creativity and makes it easy to understand for everyone, it is more difficult to integrate into further steps of the engineering process. We present an approach, &amp;quot;Time In Storyboards&amp;quot; (Timisto), to extract valuable information on how various interactions with the system are positioned in time with respect to each other. Timisto does not interfere with the creative process of storyboarding, but maximizes the structured information about time that can be deduced from a storyboard.&lt;/p&gt;</description>
    </item>
    <item>
      <title>A cognitive network for intelligent environments</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_imis_liqll12/</link>
      <pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_imis_liqll12/</guid>
      <description></description>
    </item>
    <item>
      <title>Ambient intelligence - third international joint conference, AmI 2012, Pisa, Italy, November 13-15, 2012. Proceedings</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_ami_2012/</link>
      <pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_ami_2012/</guid>
      <description></description>
    </item>
    <item>
      <title>Carpus: A non-intrusive user identification technique for interactive surfaces</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_uist_ramakersvlcs12/</link>
      <pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_uist_ramakersvlcs12/</guid>
      <description>&lt;p&gt;Interactive surfaces have great potential for co-located collaboration because of their ability to track multiple inputs simultaneously. However, the multi-user experience on these devices could be enriched significantly if touch points could be associated with a particular user. Existing approaches to user identification are intrusive, require users to stay in a fixed position, or suffer from poor accuracy. We present a non-intrusive, high-accuracy technique for mapping touches to their corresponding user in a collaborative environment. By mounting a high-resolution camera above the interactive surface, we are able to identify touches reliably without any extra instrumentation, and users are able to move around the surface at will. Our technique, which leverages the back of users&#39; hands as identifiers, supports walk-up-and-use situations in which multiple people interact on a shared surface.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Co-activity manager: Integrating activity-based collaboration into the desktop interface</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_avi_houbenvlc12/</link>
      <pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_avi_houbenvlc12/</guid>
      <description>&lt;p&gt;Activity-Based Computing (ABC) has been proposed as an organisational structure for local desktop management and knowledge work. Knowledge work, however, typically occurs in partially overlapping subgroups and involves the use of multiple devices. We introduce co-Activity Manager, an ABC approach that (i) supports activity sharing for multiple collaborative contexts, (ii) integrates collaborative tools into the activity abstraction and (iii) supports multiple devices through seamlessly integrated cloud support for document and activity storage. Our 14 day field deployment in a multidisciplinary software development team showed that activity sharing is used as a starting point for long-term collaboration while integrated communication tools and cloud support are used extensively during the collaborative activities. The study also showed that activities are used in different ways ranging from project descriptions to to-do lists, thereby confirming that a document-driven activity roaming model seems to be a good match for collaborative knowledge work.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Informing the design of situated glyphs for a care facility</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_vl_vermeulenksklc12/</link>
      <pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_vl_vermeulenksklc12/</guid>
      <description></description>
    </item>
    <item>
      <title>Mixed-initiative context filtering and group selection for improving ubiquitous help systems</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_isami_mahmudlc12/</link>
      <pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_isami_mahmudlc12/</guid>
      <description></description>
    </item>
    <item>
      <title>Novel applications integrate location and context information</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_pervasive_strobbeloddtdl12/</link>
      <pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_pervasive_strobbeloddtdl12/</guid>
      <description></description>
    </item>
    <item>
      <title>O brother, where art thou located?: Raising awareness of variability in location tracking for users of location-based pervasive applications</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_jlbs_aksenovlc12/</link>
      <pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_jlbs_aksenovlc12/</guid>
      <description></description>
    </item>
    <item>
      <title>Putting dementia into context - A selective literature review of assistive applications for users with dementia and their caregivers</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_hcse_vogtlbcm12/</link>
      <pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_hcse_vogtlbcm12/</guid>
      <description>&lt;p&gt;People with dementia face a decline of their cognitive functions, including memory impairment and difficulty orienting in time and space. Assistive applications can ease the effects of dementia by assuming and supporting impaired functions. Context-awareness is an accepted paradigm for assistive applications. It enables interactive systems to react appropriately to situations that occur during the daily routines of people with dementia. However, there is currently no recommended framework to view symptoms of dementia in terms of context and context-awareness. The aim of this paper is to inform designers in the early design stage of assistive applications how the requirements and needs of people with dementia can be represented in a context-aware application. Based on a systematic literature review, we elicit which context types are linked to the needs of people with dementia and their caregivers and how they are used in existing assistive applications in dementia care. Our focus is on applications evaluated and assessed with people with dementia. We also classify these assistive applications by the offered context-aware services. We observe that these should not be limited to the realm of the local residence; context types that are valuable in-house can, to a certain extent, also be leveraged outside a local residence. We believe the proposed framework is a tool for application builders and interface designers to accomplish an informed design of systems for people with dementia.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Understanding complex environments with the feedforward torch</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_ami_vermeulenlc12/</link>
      <pubDate>Sun, 01 Jan 2012 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_ami_vermeulenlc12/</guid>
      <description>&lt;p&gt;In contrast with design flaws that occur in user interfaces, design flaws in physical spaces have a much higher cost and impact. Software is in fact fairly easy to change and update, in contrast with legacy physical constructions, where updating the physical appearance is often not an option. We present the Feedforward Torch, a mobile projection system that targets the augmentation of legacy hardware with feedforward information. Feedforward explains to users what the results of their action will be, and can thus be seen as the opposite of feedback. A first user study suggests that providing feedforward in these environments could improve their usability.&lt;/p&gt;</description>
    </item>
    <item>
      <title>A Unified Approach to Uncertainty-Aware Ubiquitous Localisation of Mobile Users</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_ijitwe_aksenovlc11/</link>
      <pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_ijitwe_aksenovlc11/</guid>
      <description></description>
    </item>
    <item>
      <title>A unified scalable model of user localisation with uncertainty awareness for large-scale pervasive environments (best paper)</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_ngmast_aksenovlc11/</link>
      <pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_ngmast_aksenovlc11/</guid>
      <description>&lt;p&gt;Localisation has become a standard feature in many mobile applications. Numerous techniques for both indoor and outdoor location tracking are available today, providing a diversity of ways positioning information can be delivered to a mobile application (e.g., a location-based service). Factors such as the variation of precision over time and covered areas, or the difference in quality and reliability, make the adoption of several techniques for one application cumbersome. This work presents an approach that models the capabilities of localisation systems and then uses this model to build a unified view on localisation, with special attention paid to the uncertainty coming from different localisation conditions and its presentation to the user. We discuss technical considerations, challenges and issues of the approach and report on a user study on users&#39; acceptance of the suggested behaviour of an application based on the approach. The results of the study showed the feasibility of the approach and revealed users&#39; preference towards automatic yet informed changes they experienced while using the application.&lt;/p&gt;</description>
    </item>
    <item>
      <title>CAP3: Context-sensitive abstract user interface specification</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_berghlc11/</link>
      <pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_berghlc11/</guid>
      <description>&lt;p&gt;Despite the fact that many proposals have been made for abstract user interface models, they have not been given a detailed context in which they should or could be used in a user-centered design process. This paper presents a clear role for the abstract user interface model in user-centered and model-based development, provides an overview of the stakeholders that may create and/or use abstract user interface models, and presents a modular abstract user interface modeling language, CAP3, that makes relations with other models explicit and builds on the foundation of existing abstract user interface models. The proposed modeling notation is supported by a tool, applied to case studies from literature and in several projects, and validated based on state-of-the-art knowledge on domain-specific modeling languages and visual notations.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Exploring the design space for situated glyphs to support dynamic work environments</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_pervasive_kawsarvslk11/</link>
      <pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_pervasive_kawsarvslk11/</guid>
      <description></description>
    </item>
    <item>
      <title>GRIP: Get better results from interactive prototypes</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_berghshlc11/</link>
      <pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_berghshlc11/</guid>
      <description></description>
    </item>
    <item>
      <title>GUIDE2ux: A GUI design environment for enhancing the user experience</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_meskenslslcm11/</link>
      <pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_meskenslslcm11/</guid>
      <description></description>
    </item>
    <item>
      <title>iDiscover: Towards the next generation of contextualised mobile museum guides</title>
      <link>https://www.krisluyten.net/publications/luytenkui11/</link>
      <pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/luytenkui11/</guid>
      <description>&lt;p&gt;In this paper we present a conceptual reference framework &amp;ndash; the iDiscover framework &amp;ndash; to make informed decisions on integrating technology in a museum environment in order to enhance the visitor experience. Our framework presents four dimensions of context that have proven to be indispensable for situating the most appropriate solutions for a specific cultural heritage site: the degree of mobility, the degree of personalisation, the degree of interaction with the environment and the degree of social interactions. Three cases are described in which we successfully used this reference framework for creating mobile guides that fit both with the context of use and with the needs of the cultural heritage institute.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Proceedings of the 3rd ACM SIGCHI symposium on engineering interactive computing systems, EICS 2011, Pisa, Italy, June 13-16, 2011</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_2011/</link>
      <pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_2011/</guid>
      <description></description>
    </item>
    <item>
      <title>Second workshop on engineering patterns for multi-touch interfaces</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_luytenvwbn11/</link>
      <pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_luytenvwbn11/</guid>
      <description></description>
    </item>
    <item>
      <title>Squeeze me and i&#39;ll change: An exploration of frustration-triggered adaptation for multimodal interaction</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_3dui_octaviacl11/</link>
      <pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_3dui_octaviacl11/</guid>
      <description>&lt;p&gt;Complex 3D interaction in virtual environments may inhibit user interaction and cause frustration. Supporting adaptivity based on the detected user frustration can be considered as one promising solution to enhance user interaction. Our work proposes to provide adaptive assistance to users who are frustrated during their interaction with 3D user interfaces in virtual environments. The obtrusiveness of physiological measurements to detect frustration inspired us to investigate the pressure patterns exerted on a 3D input device for this purpose. The experiment presented in this paper has shown great potential for utilizing finger pressure measures as an alternative to physiological measures to indicate user frustration during interaction. Furthermore, the findings in this particular context showed that adaptation of haptic interaction was effective in increasing the user&#39;s performance and making users feel less frustrated in performing their tasks in the 3D environment.&lt;/p&gt;</description>
    </item>
    <item>
      <title>User driven evolution of user interface models - the FLEPR approach</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_interact_hennigblb11/</link>
      <pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_interact_hennigblb11/</guid>
      <description>&lt;p&gt;In model-based user interface development, models at different levels of abstraction are used. While ideas may initially only be expressed in more abstract models, modifications and improvements according to user&#39;s feedback will likely be made at the concrete level, which may lead to model inconsistencies that need to be fixed in every iteration. Transformations form the bridge between these models. Because one-to-one mappings between models cannot always be defined, these transformations are completely manual or they require manual post-treatment. We propose interactive but automatic transformations to address the mapping problem while still allowing designer&#39;s creativity. To manage consistency and semantic correctness within and between models and therefore to foster iterative development processes, we are combining these with techniques to track decisions and modifications and techniques of intra- and inter-model validation. Our approach has been implemented for abstract and concrete user interface models using Eclipse-based frameworks for model-driven engineering. Our approach and tool support is illustrated by a case study.&lt;/p&gt;</description>
    </item>
    <item>
      <title>User modeling approaches towards adaptation of users&#39; roles to improve group interaction in collaborative 3D games</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_hci_octaviabcql11/</link>
      <pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_hci_octaviabcql11/</guid>
      <description></description>
    </item>
    <item>
      <title>Using storyboards to integrate models and informal design knowledge</title>
      <link>https://www.krisluyten.net/publications/dblp_series_sci_haesenbmlddc11/</link>
      <pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_series_sci_haesenbmlddc11/</guid>
      <description>&lt;p&gt;Model-driven development of user interfaces has become increasingly powerful in recent years. Unfortunately, model-driven approaches have the inherent limitation that they cannot handle the informal nature of some of the artifacts used in truly multidisciplinary user interface development such as storyboards, sketches, scenarios and personas. In this chapter, we present an approach and tool support for multidisciplinary user interface development bridging informal and formal artifacts in the design and development process. Key features of the approach are the usage of annotated storyboards, which can be connected to other models through an underlying meta-model, and cross-toolkit design support based on an abstract user interface model.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Workshop on interacting with smart objects</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_iui_hartmannslbm11/</link>
      <pubDate>Sat, 01 Jan 2011 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_iui_hartmannslbm11/</guid>
      <description></description>
    </item>
    <item>
      <title>Comparing user interaction with low and high fidelity prototypes of tabletop surfaces</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_nordichi_derbovenrvgsl10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_nordichi_derbovenrvgsl10/</guid>
      <description></description>
    </item>
    <item>
      <title>Comparing user interaction with low and high fidelity prototypes of tabletop surfaces</title>
      <link>https://www.krisluyten.net/publications/derbovenrvgsl10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/derbovenrvgsl10/</guid>
      <description>&lt;p&gt;This paper describes a comparative study of the usage of low-fidelity and high-fidelity prototyping for the creation of multi-user multi-touch interfaces. The multi-touch interface presented in this paper allows users to collaboratively search for existing multimedia content, create new compositions with this content, and finally integrate it in a layout for presenting it. The study we conducted consists of a series of parallel user tests using both low-fidelity and high-fidelity prototypes to inform the design of the multi-touch interface. Based on a comparison of the two test sessions, we found that one should be cautious in generalising high-level user interactions from a low-fidelity towards a high-fidelity prototype. However, the low-fidelity prototype approach presented proved to be very valuable for generating design ideas concerning both high and low-level user interactions on a multi-touch tabletop.&lt;/p&gt;</description>
    </item>
    <item>
      <title>D-macs: Building multi-device user interfaces by demonstrating, sharing and replaying design actions</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_uist_meskenslc10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_uist_meskenslc10/</guid>
      <description>&lt;p&gt;Multi-device user interface design mostly implies creating a suitable interface for each targeted device, using a diverse set of design tools and toolkits. This is a time consuming activity, concerning a lot of repetitive design actions without support for reusing this effort in later designs. In this paper, we propose D-Macs: a design tool that allows designers to record their design actions across devices, to share these actions with other designers and to replay their own design actions and those of others. D-Macs lowers the burden in multi-device user interface design and can reduce the necessity for manually repeating design actions.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Dazed and confused considered normal: An approach to create interactive systems for people with dementia</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_hcse_mahmudvlsbc10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_hcse_mahmudvlsbc10/</guid>
      <description>&lt;p&gt;In Western society, the elderly represent a rapidly growing demographic group. For this group, dementia has become an important cause of dependencies on others and causes difficulties with independent living. Typical symptoms of the dementia syndrome are decreased location awareness and difficulties in situating one&#39;s activities in time, thus hindering long term plans and activities. We present our approach in creating an interactive system tailored for the needs of the early phases of the dementia syndrome. Given the increasing literacy with mobile technologies in this group, we propose an approach that exploits mobile technology in combination with the physical and social context to support prolonged independent living. Our system strengthens the involvement of caregivers through the patient&#39;s social network. We show that applications for people suffering from dementia can be created by explicitly taking into account context in the design process. Context dependencies that are defined in an early stage in the development process are propagated as part of the runtime behavior of the interactive system.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Draw me a storyboard: Incorporating principles and techniques of comics to ease communication and artefact creation in user-centred design.</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_bcshci_haesenmlc10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_bcshci_haesenmlc10/</guid>
      <description>&lt;p&gt;Storyboards are used in user-centred design (UCD) to clarify a scenario that describes the future use of a system. Although there are many styles of storyboarding, the graphical notation and language are very accessible for all team members of a multidisciplinary team. This paper describes how principles and techniques from comics can facilitate storyboarding in our COMuICSer approach and tool. COMuICSer formalises the way that storyboards are created, while preserving creative aspects of storyboarding. In combination with tool support for COMuICSer, this simplifies the relation of storyboards with other artefacts created in UCD such as structured models and UI designs and supports communication in multidisciplinary teams.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Engineering patterns for multi-touch interfaces</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_luytenvwbiw10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_luytenvwbiw10/</guid>
      <description></description>
    </item>
    <item>
      <title>Geo-social interaction: Context-aware help in large scale public spaces</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_ami_mahmudayplcb10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_ami_mahmudayplcb10/</guid>
      <description></description>
    </item>
    <item>
      <title>Jelly: A multi-device design environment for managing consistency across devices</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_avi_meskenslc10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_avi_meskenslc10/</guid>
      <description>&lt;p&gt;When creating applications that should be available on multiple computing platforms, designers have to cope with different design tools and user interface toolkits. Incompatibilities between these design tools and toolkits make it hard to keep multi-device user interfaces consistent. This paper presents Jelly, a flexible design environment that can target a broad set of computing devices and toolkits. Jelly enables designers to copy parts of a user interface from one device to another and to maintain the different user interfaces in concert using linked editing. Our approach lowers the burden of designing multi-device user interfaces by eliminating the need to switch between different design tools and by providing tool support for keeping the user interfaces consistent across different platforms and toolkits.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Multi-user multi-touch setups for collaborative learning in an educational setting</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_cdve_schneiderdlvbrv10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_cdve_schneiderdlvbrv10/</guid>
      <description></description>
    </item>
    <item>
      <title>On a journey from message to observable pervasive application</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_cisis_vanderhulstlc10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_cisis_vanderhulstlc10/</guid>
      <description></description>
    </item>
    <item>
      <title>On stories, models and notations: Storyboard creation as an entry point for model-based interface development with UsiXML</title>
      <link>https://www.krisluyten.net/publications/luytenusixml2010/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/luytenusixml2010/</guid>
      <description>&lt;p&gt;Storyboards are excellent tools to create a high level specification of an interactive system. Because of the emphasis on graphical depiction they are both an accessible means for communicating the requirements and properties of an interactive system and allow the specification of complex context-aware systems while avoiding the need for technical details. We present a storyboard meta-model that captures the high level information from a storyboard and allows relating this information with other models that are common for engineering interactive systems. We show that a storyboard can be used as an entry point for using UsiXML models. Finally, this approach is accompanied by a tool set to make the connection between the storyboard model, UsiXML models and the program code required for maintaining these connections throughout the engineering process.&lt;/p&gt;</description>
    </item>
    <item>
      <title>PerCraft: Towards live deployment of pervasive applications</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_intenv_vanderhulstlc10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_intenv_vanderhulstlc10/</guid>
      <description></description>
    </item>
    <item>
      <title>Pervasive maps: Explore and interact with pervasive environments</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_percom_vanderhulstlc10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_percom_vanderhulstlc10/</guid>
      <description>&lt;p&gt;Efficient discovery of nearby devices and services is one of the preconditions to obtain a usable pervasive environment. Typical user interfaces in these environments hide the heterogeneity of the environment from end-users, which often makes it hard to perceive the provided functionality. We present Pervasive Maps, an approach and tool that allows users to create an intuitive user interface for exploring and controlling the environment. Pervasive Maps offers user-oriented views on the user&#39;s environment based on pictures of this environment. We show how users can model, explore and finally interact with complex pervasive environments using Pervasive Maps.&lt;/p&gt;</description>
    </item>
    <item>
      <title>PervasiveCrystal: Asking and answering why and why not questions about pervasive computing applications</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_intenv_vermeulenvlc10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_intenv_vermeulenvlc10/</guid>
      <description>&lt;p&gt;Users often become frustrated when they are unable to understand and control a pervasive computing environment. Previous studies have shown that allowing users to pose why and why not questions about context-aware applications resulted in better understanding and stronger feelings of trust. Although why and why not questions have been used before to aid in debugging and to clarify graphical user interfaces, it is currently not clear how they can be integrated into pervasive computing systems. We explain in detail how we have extended an existing pervasive computing framework with support for why and why not questions. This resulted in PervasiveCrystal, a system for asking and answering why and why not questions in pervasive computing environments.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Rewiring strategies for changing environments</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_isami_lauriervpl10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_isami_lauriervpl10/</guid>
      <description>&lt;p&gt;A typical pervasive application executes in a changing environment: people, computing resources, software services and network connections come and go continuously. A robust pervasive application needs to adapt to this changing context as long as there is an appropriate rewiring strategy that guarantees correct behavior. We combine the MERODE modeling methodology with the ReWiRe framework for creating interactive pervasive applications that can cope with changing environments. The core of our approach is a consistent environment model, which is essential to create (re)configurable context-aware pervasive applications. We aggregate different ontologies that provide the required semantics to describe almost any target environment. We present a case study that shows an interactive pervasive application for media access that incorporates parental control on media content and can migrate between devices. The application builds upon models of the run-time environment represented as system states for dedicated rewiring strategies.&lt;/p&gt;</description>
    </item>
    <item>
      <title>SemSon - connecting ontologies and web applications</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_webist_vanderhulstlc10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_webist_vanderhulstlc10/</guid>
      <description></description>
    </item>
    <item>
      <title>Where people and cars meet: Social interactions to improve information sharing in large scale vehicular networks</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_sac_yasarmplcb10/</link>
      <pubDate>Fri, 01 Jan 2010 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_sac_yasarmplcb10/</guid>
      <description></description>
    </item>
    <item>
      <title>Ambient compass: One approach to model spatial relations</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_hci_aksenovvlc09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_hci_aksenovvlc09/</guid>
      <description></description>
    </item>
    <item>
      <title>Answering why and why not questions in ubiquitous computing</title>
      <link>https://www.krisluyten.net/publications/vermeulen_whywhynot2009/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/vermeulen_whywhynot2009/</guid>
      <description>&lt;p&gt;Users often find it hard to understand and control the behavior of a Ubicomp system. This gives rise to usability problems and can lead to loss of user trust, which may hamper the acceptance of these systems. We are extending an existing Ubicomp framework to allow users to pose why and why not questions about its behavior. Initial experiments suggest that these questions are easy to use and could help users in understanding how Ubicomp systems work.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Context aware help and guidance for large-scale public spaces</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_smap_mahmudlc09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_smap_mahmudlc09/</guid>
      <description></description>
    </item>
    <item>
      <title>Coping with variability of location sensing in large-scale ubicomp environments</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_icumt_aksenovlc09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_icumt_aksenovlc09/</guid>
      <description>&lt;p&gt;The work addresses the problem of coping with a diversity of location tracking techniques available in ubiquitous computing environments. We investigate how this diversity can be embedded in the environment in a way that typical difficulties coming from using location-awareness are hidden. We present an approach to improve location-awareness of these environments by means of integrating the knowledge about different location systems into an existing framework for designing pervasive environments in the form of an ontology. Emerging challenges are also discussed in the context of continuous and smooth communication.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Edit, inspect and connect your surroundings: A reference framework for meta-UIs</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_vanderhulstslmc09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_vanderhulstslmc09/</guid>
      <description>&lt;p&gt;Discovering and unlocking the full potential of complex pervasive environments is still approached in application-centric ways. A set of statically deployed applications often defines the possible interactions within the environment. However, the increasing dynamics of such environments require a more versatile and generic approach which allows the end-user to inspect, configure and control the overall behavior of such an environment. A meta-UI addresses these needs by providing the end-user with an interactive view on a physical or virtual environment which can then be observed and manipulated at runtime. The meta-UI bridges the gap between the resource providers and the end-users by abstracting a resource&#39;s features as executable activities that can be assembled at runtime to reach a common goal. In order to allow software services to automatically integrate with a pervasive computing environment, the minimal requirements of the environment&#39;s meta-UI must be identified and agreed on. In this paper we present Meta-STUD, a goal- and service-oriented reference framework that supports the creation of meta-UIs for usage in pervasive environments. The framework is validated using two independent implementation approaches designed with different technologies and focuses.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Get your requirements straight: Storyboarding revisited</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_interact_haesenlc09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_interact_haesenlc09/</guid>
      <description>&lt;p&gt;Current user-centred software engineering (UCSE) approaches provide many techniques to combine know-how available in multidisciplinary teams. Although the involvement of various disciplines is beneficial for the user experience of the future application, the transition from a user needs analysis to a structured interaction analysis and UI design is not always straightforward. We propose storyboards, enriched by metadata, to specify functional and non-functional requirements. Accompanying tool support should facilitate the creation and use of storyboards. We used a meta-storyboard for the verification of storyboarding approaches.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Human-centered engineering of interactive systems with the user interface markup language</title>
      <link>https://www.krisluyten.net/publications/dblp_series_hci_helmsslvacv09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_series_hci_helmsslvacv09/</guid>
      <description></description>
    </item>
    <item>
      <title>I bet you look good on the wall: Making the invisible computer visible</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_ami_vermeulenslc09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_ami_vermeulenslc09/</guid>
      <description>&lt;p&gt;The design ideal of the invisible computer, prevalent in the vision of ambient intelligence (AmI), has led to a number of interaction challenges. The complex nature of AmI environments together with limited feedback and insufficient means to override the system can result in users who feel frustrated and out of control. In this paper, we explore the potential of visualizing the system state to improve user understanding. We use projectors to overlay the environment with a graphical representation that connects sensors and devices with the actions they trigger and the effects those actions produce. We also provided users with a simple voice-controlled command to cancel the last action. A small first-use study suggested that our technique could indeed improve understanding and support users in forming a reliable mental model.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Introduction to the special issue on UIDL for next-generation user interfaces</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_tochi_shaerjgl09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_tochi_shaerjgl09/</guid>
      <description></description>
    </item>
    <item>
      <title>Mobiele ICT en erfgoed: De bezoekerservaring verrijken met mobiele gidsen</title>
      <link>https://www.krisluyten.net/publications/klde09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/klde09/</guid>
      <description>&lt;p&gt;In recent years, mobile computers have become increasingly popular as a means of enriching the visitor experience, and these devices have found their way into museums and other heritage institutions. The advantages of using mobile computers are clear: information can be presented dynamically through the device, without disturbing the physical space itself. Moreover, the presentation can make use of multimedia: a mobile computer can show a variety of media, such as photos, audio and video fragments, text, &amp;hellip; Because the visitor can interact with the device in various ways, interactive games also become possible. The dynamics and autonomy that can be achieved with these devices furthermore allow users to keep their own pace and to obtain information tailored to their personal interest profile.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Photo-based user interfaces: Picture it, tag it, use it</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_otm_vanderhulstlc09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_otm_vanderhulstlc09/</guid>
      <description></description>
    </item>
    <item>
      <title>Plug-and-design: embracing mobile devices as part of the design environment</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_eics_meskenslc09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_eics_meskenslc09/</guid>
      <description></description>
    </item>
    <item>
      <title>Plug-and-design: Embracing mobile devices as part of the design environment</title>
      <link>https://www.krisluyten.net/publications/meskens_eics2009/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/meskens_eics2009/</guid>
      <description>&lt;p&gt;Due to the large number of mobile devices that continue to appear on the consumer market, mobile user interface design becomes increasingly important. The major issue with many existing mobile user interface design approaches is the time and effort that is needed to deploy a user interface design to the target device. In order to address this issue, we propose the plug-and-design tool that relies on a continuous multi-device mouse pointer to design user interfaces directly on the mobile target device. This shortens iteration time since designers can continuously test and validate each design action they take. Using our approach, designers can empirically learn the specifics of a target device, which will help them while creating user interfaces for devices they are not familiar with.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Shortening user interface design iterations through realtime visualisation of design actions on the target device</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_vl_meskenslc09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_vl_meskenslc09/</guid>
      <description></description>
    </item>
    <item>
      <title>Supporting multidisciplinary teams and early design stages using storyboards</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_hci_haesenmlc09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_hci_haesenmlc09/</guid>
      <description>&lt;p&gt;Current tools for multidisciplinary teams in user-centered software engineering (UCSE) provide little support for the different approaches of the various disciplines in the project team. Although multidisciplinary teams are getting more and more involved in UCSE projects, an efficient approach to communicate clearly and to pass results of a user needs analysis to other team members without loss of information is still missing. Based on previous experiences, we propose storyboards as a key component in such tools. Storyboards contain sketched information of users, activities, devices and the context of a future application. The comprehensible and intuitive notation and accompanying tool support presented in this paper will enhance communication and efficiency within the multidisciplinary team during UCSE projects.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The design of context-specific educational mobile games</title>
      <link>https://www.krisluyten.net/publications/kg1kl09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/kg1kl09/</guid>
      <description></description>
    </item>
    <item>
      <title>The five commandments of activity-aware ubiquitous computing applications</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_hci_mahmudvlc09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_hci_mahmudvlc09/</guid>
      <description></description>
    </item>
    <item>
      <title>UIML based design of multimodal interactive applications with strict synchronization requirements</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_achi_lerouxvtdtl09/</link>
      <pubDate>Thu, 01 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_achi_lerouxvtdtl09/</guid>
      <description>&lt;p&gt;As the variety in network service platforms and end user devices grows rapidly, content providers must constantly adapt their production system to support these new technologies. In this paper, we present a middleware platform for deploying highly interactive (television) applications over a diverse collection of networks and end user devices. As the user interface of such interactive applications may vary depending on the capabilities of the different target devices, our middleware uses UIML for the description of generic user interfaces. Our middleware platform also provides pluggable support for new networks. A factor that highly complicates the design is the need for strict synchronization between an interactive application and video or audio data that is broadcast. In order to support a maximum of functionality, downloadable application logic is used to provide the interactive services. As a test case, an evaluation setup was built, targeting both set-top boxes and mobile phones.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Collaborative gaming in the gallo-roman museum to increase attractiveness of learning cultural heritage for youngsters</title>
      <link>https://www.krisluyten.net/publications/luyten_collabgaming2008/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/luyten_collabgaming2008/</guid>
      <description></description>
    </item>
    <item>
      <title>Design by example of graphical user interfaces adapting to available screen size</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_cadui_demeuremlc08/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_cadui_demeuremlc08/</guid>
      <description></description>
    </item>
    <item>
      <title>Eunomia: Toward a framework for multi-touch information displays in public spaces</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_bcshci_cuypersstlb08/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_bcshci_cuypersstlb08/</guid>
      <description></description>
    </item>
    <item>
      <title>Ghosts in the interface: Meta-user interface visualizations as guides for multi-touch interaction</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_tabletop_vanackendlc08/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_tabletop_vanackendlc08/</guid>
      <description>&lt;p&gt;Multi-touch large display interfaces are becoming increasingly popular in public spaces. These spaces impose specific requirements on the accessibility of the user interfaces: most users are not familiar with the interface and expectations with regard to user experience are very high. Multi-touch interaction beyond the traditional move-rotate-scale interactions is often unknown to the public and can become exceedingly complex. We introduce TouchGhosts: visual guides that are embedded in the multi-touch user interface and that demonstrate the available interactions to the user. TouchGhosts are activated while using an interface, providing guidance on the fly and within the context-of-use. Our approach allows reconfigurable strategies to be defined that decide how or when a TouchGhost should be activated and which particular visualization will be presented to the user.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Gummy for multi-platform user interface designs: Shape me, multiply me, fix me, use me</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_avi_meskensvlc08/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_avi_meskensvlc08/</guid>
      <description>&lt;p&gt;Designers still often create a specific user interface for every target platform they wish to support, which is time-consuming and error-prone. The need for a multi-platform user interface design approach that designers feel comfortable with increases as people expect their applications and data to go where they go. We present Gummy, a multi-platform graphical user interface builder that can generate an initial design for a new platform by adapting and combining features of existing user interfaces created for the same application. Our approach makes it easy to target new platforms and keep all user interfaces consistent without requiring designers to considerably change their work practice.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Meta-gui-builders: Generating domain-specific interface builders for multi-device user interface creation</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_luytenmvc08/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_luytenmvc08/</guid>
      <description></description>
    </item>
    <item>
      <title>Mobile photography within a social context</title>
      <link>https://www.krisluyten.net/publications/dblp_series_sci_luytenttc08/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_series_sci_luytenttc08/</guid>
      <description></description>
    </item>
    <item>
      <title>MuiCSer: A multi-disciplinary user-centered software engineering process to increase the overall user experience</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_iceis_haesenlcbr08/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_iceis_haesenlcbr08/</guid>
      <description>&lt;p&gt;In this paper we present an incremental and user-centered process to create suitable and usable user interfaces. Validation is done throughout the process by prototyping; the prototypes evolve from low-fidelity to the final user interface. Applications developed with this process are more likely to correspond to users&#39; expectations. Furthermore, the process takes into account the need for sustainable evolution often required by modern software configurations, by combining traditional software engineering with a user-centered approach. We think our approach is beneficial in its scope, since it considers evolving software beyond the deployment stage and supports a multi-disciplinary team.&lt;/p&gt;</description>
    </item>
    <item>
      <title>MuiCSer: A process framework for multi-disciplinary user-centred software engineering processes</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_tamodia_haesencbl08/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_tamodia_haesencbl08/</guid>
      <description></description>
    </item>
    <item>
      <title>Reasoning over spatial relations for context-aware distributed user interfaces</title>
      <link>https://www.krisluyten.net/publications/aksenov_mrc2008/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/aksenov_mrc2008/</guid>
      <description>&lt;p&gt;Considering the number of devices a user owns nowadays, a distributed user interface can become increasingly important. This requires reasoning techniques that allow making predictions of future values in the spatial model, because these devices can be expected to change their location during usage. Our primary attention is devoted to the problem of re-distributing user interfaces in a constantly changing environment, so that a change in spatial topology, i.e. in the way the devices are located relative to one another, is detected in time and interpreted properly, resulting in redistribution of the user interface the devices are sharing.&lt;/p&gt;</description>
    </item>
    <item>
      <title>ReWiRe: Creating interactive pervasive systems that cope with changing environments by rewiring</title>
      <link>https://www.krisluyten.net/publications/vanderhulst_rewire2008/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/vanderhulst_rewire2008/</guid>
      <description>&lt;p&gt;The increasing complexity of pervasive computing environments puts current software development methods to the test. There is a large variation in the types of hardware that need to be addressed. Moreover, there is no guarantee that the environment will not evolve, making the software developed for the initial environment deprecated and in need of updates or reconfiguration. Software deployed in such an environment should be sufficiently dynamic to cope with new environment configurations, even while the system is in use. This goes beyond coping with new contexts of use and building context-aware systems: while most approaches are mainly focused on how the software behavior adapts according to the changing context in a fixed environment, our approach, ReWiRe, allows the environment configuration to change over time.&lt;/p&gt;</description>
    </item>
    <item>
      <title>ReWiRe: Designing reactive systems for pervasive environments</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_dsvis_vanderhulstlc08/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_dsvis_vanderhulstlc08/</guid>
      <description>&lt;p&gt;The design of interactive software that populates an ambient space is a complex and ad-hoc process with traditional software development approaches. In an ambient space, important building blocks can be both physical objects within the user&#39;s reach and software objects accessible from within that space. However, putting many heterogeneous resources together to create a single system mostly requires writing a large amount of glue code before such a system is operational. Besides, users all have their own needs and preferences to interact with various kinds of environments which often means that the system behavior should be adapted to a specific context of use while the system is being used. In this paper we present a methodology to orchestrate resources on an abstract level and hence configure a pervasive computing environment. We use a semantic layer to model behavior and illustrate its use in an application.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Toward multi-disciplinary model-based (re)design of sustainable user interfaces</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_dsvis_berghhlnc08/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_dsvis_berghhlnc08/</guid>
      <description></description>
    </item>
    <item>
      <title>Training social learning skills by collaborative mobile gaming in museums</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_acmace_schroyengltrcfm08/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_acmace_schroyengltrcfm08/</guid>
      <description></description>
    </item>
    <item>
      <title>User interface description languages for next generation user interfaces</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_shaerjgl08/</link>
      <pubDate>Tue, 01 Jan 2008 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_shaerjgl08/</guid>
      <description></description>
    </item>
    <item>
      <title>A comparison between decision trees and markov models to support proactive interfaces</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_dexaw_boeckvlc07/</link>
      <pubDate>Mon, 01 Jan 2007 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_dexaw_boeckvlc07/</guid>
      <description></description>
    </item>
    <item>
      <title>A web-based central gateway infrastructure in the automotive after-sales market - business interoperability through the web</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_iceis_houbenlcs07/</link>
      <pubDate>Mon, 01 Jan 2007 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_iceis_houbenlcs07/</guid>
      <description>&lt;p&gt;The Block Exemption Regulation of the European Commission was enacted in 2002 with the goal of strengthening competition between dependent and independent repairers in the automotive after-sales market. The FP6 MYCAREVENT project embraces these goals while triggering new business opportunities by establishing a mobile accessible infrastructure as a single gateway to different kinds of resources. This information procurement framework allows customers to find specific vehicle repair and diagnostic data from different car manufacturers and 3rd parties in the same way. In order to provide a higher degree of accessibility, extensibility and adaptivity, our service-oriented infrastructure presented in this paper is web-based and consists of three main components: Mobile Clients, Service Portal and Remote Services. New communication and multimedia technologies are invoked to improve interoperability, usability and maintenance of the underlying Mobile Service World. In this paper we focus on the architecture of our highly flexible procurement infrastructure. Standardized elements and methodologies ensure an integrated solution and enable easy expandability with new content, services and components.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Ad-hoc co-located collaborative work with mobile devices</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_mhci_luytenvc07/</link>
      <pubDate>Mon, 01 Jan 2007 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_mhci_luytenvc07/</guid>
      <description></description>
    </item>
    <item>
      <title>Beyond mere information provisioning: A handheld museum guide based on social activities and playful learning</title>
      <link>https://www.krisluyten.net/publications/schroyenmuseumguide2007/</link>
      <pubDate>Mon, 01 Jan 2007 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/schroyenmuseumguide2007/</guid>
      <description>&lt;p&gt;During a museum visit, social interaction can improve intellectual, social, personal and cultural development. With the advances in technology, the use of personal mobile handheld devices &amp;ndash; such as Personal Digital Assistants (PDAs) &amp;ndash; that replace the traditional paper guidebooks is becoming a common sight at various heritage sites all over the world. This technology often leads to problems such as isolating visitors from their companions and distracting visitors away from their surroundings. We believe careful design of mobile applications and taking advantage of low-cost networking infrastructure can avoid such isolation of the visitor from his or her surroundings and encourage interaction with both surroundings and companions. In this paper, we describe our approach to create a mobile handheld guide that supports the learning process by exploiting social interaction between visitors and subtly matching the content and concepts shown on the handheld guide with what can be found in the museum.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Focus&#43;roles: Socio-organizational conflict resolution in collaborative user interfaces</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_hci_vanackenrlc07/</link>
      <pubDate>Mon, 01 Jan 2007 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_hci_vanackenrlc07/</guid>
      <description></description>
    </item>
    <item>
      <title>Making bits and atoms talk today: A practical architecture for smart object interaction</title>
      <link>https://www.krisluyten.net/publications/vermeulen_dipso2007/</link>
      <pubDate>Mon, 01 Jan 2007 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/vermeulen_dipso2007/</guid>
      <description>&lt;p&gt;Bringing together the physical and digital worlds has been the subject of research for some time now. In particular, a number of successful prototypes that link physical objects with digital information (often called smart object systems) have already been presented. However, a generally accepted architecture to design such systems has not yet emerged. This paper presents a reusable and practical framework for developing smart object applications today. At the basis of our approach lies the use of Semantic Web technology to drive interaction between the physical and digital worlds. We used this framework to develop SemaNews, a novel application that combines the advantages of digital news feeds with those of physical newspapers. To verify the reusability of our architecture, we built a second prototype in a different application domain: STalkingObjects provides the basic components of a store of the future.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Middleware for ubiquitous service-oriented spaces on the web</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_aina_vanderhulstlc07/</link>
      <pubDate>Mon, 01 Jan 2007 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_aina_vanderhulstlc07/</guid>
      <description></description>
    </item>
    <item>
      <title>Runtime personalization of multi-device user interfaces: Enhanced accessibility for media consumption in heterogeneous environments by user interface adaptation</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_smap_bruninxrlc07/</link>
      <pubDate>Mon, 01 Jan 2007 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_smap_bruninxrlc07/</guid>
      <description>&lt;p&gt;The diversity of end-user devices in combination with a growing user base poses important challenges for providing easy access to the huge amount of content and services currently available. Each device has its typical set of capabilities and characteristics that must be taken into account to create an appropriate user interface that provides interactive access to multimedia data and services. Furthermore, end-users also have their specific requirements that influence the accessibility of data and services for individual access. The approach we present in this paper is geared towards the idea of universal access to interactive multimedia data and services for everyone, independent of the user characteristics or end-user device capabilities. For this purpose we combine user and device models with high-level user interface description languages in order to decouple the interface presentation from its platform, and to generate the most suitable interface on a per-user, per-device basis making use of the semantics provided by the user and device profiles.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Service-interaction descriptions: Augmenting services with user interface models</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_ehci_vermeulenvclc07/</link>
      <pubDate>Mon, 01 Jan 2007 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_ehci_vermeulenvclc07/</guid>
      <description>&lt;p&gt;Semantic service descriptions have paved the way for flexible interaction with services in a mobile computing environment. Services can be automatically discovered, invoked and even composed. On the contrary, the user interfaces for interacting with these services are often still designed by hand. This approach poses a serious threat to the overall flexibility of the system. To make the user interface design process scale, it should be automated as much as possible. We propose to augment service descriptions with high-level user interface models to support automatic user interface adaptation. Our method builds upon OWL-S, an ontology for Semantic Web Services, by connecting a collection of OWL-S services to a hierarchical task structure and selected presentation information. This allows end-users to interact with services on a variety of platforms.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Supporting social interaction: A collaborative trading game on PDA</title>
      <link>https://www.krisluyten.net/publications/vanloontradinggame2007/</link>
      <pubDate>Mon, 01 Jan 2007 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/vanloontradinggame2007/</guid>
      <description>&lt;p&gt;ARCHIE is a research project in which the educational staff of the Gallo-Roman Museum collaborates with the Human-Computer Interaction research group of the Expertise Centre for Digital Media (Hasselt University) in the context of the expansion of the museum. The starting point of this interdisciplinary collaboration is our strong belief that handheld guides are a promising medium to enhance visitors&#39; learning experiences in a museum and strengthen the experience of a group visit. In this paper we present a first application: a collaborative trading game for (school)groups of children; from conceptual stage towards final implementation and conclude with user test results. We designed the museum game so that every player is dependent on the concrete actions of other players; only through social interaction and cooperation can they come to a good result.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Task models and diagrams for user interface design, 5th international workshop, TAMODIA 2006, Hasselt, Belgium, October 23-24, 2006. Revised papers</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_tamodia_2006/</link>
      <pubDate>Mon, 01 Jan 2007 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_tamodia_2006/</guid>
      <description></description>
    </item>
    <item>
      <title>Task-based prediction of interaction patterns for ambient intelligence environments</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_hci_verpoortenlc07/</link>
      <pubDate>Mon, 01 Jan 2007 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_hci_verpoortenlc07/</guid>
      <description></description>
    </item>
    <item>
      <title>A generic approach for multi-device user interface rendering with UIML</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_cadui_luytentvc06/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_cadui_luytentvc06/</guid>
      <description></description>
    </item>
    <item>
      <title>A prototype-driven development process for context-aware user interfaces</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_tamodia_clerckxvlc06/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_tamodia_clerckxvlc06/</guid>
      <description></description>
    </item>
    <item>
      <title>A situation-aware mobile system to support fire brigades in emergency situations</title>
      <link>https://www.krisluyten.net/publications/10_1007_11915072_105/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/10_1007_11915072_105/</guid>
      <description>&lt;p&gt;In a firefighter emergency mission it is essential for the members of a fire brigade to get an intelligent and reliable overview of the complete situation, presented according to the role of each member. In this paper we report on the design and development of a system to support a fire brigade on site with a set of mobile services that offers a role-based focus+context user interface. It provides the required overview of the emergency situation according to the user task and context, while life-saving information is emphasized. The implementation of a context-rule-based decision module enhances the visualization of required information. Interaction with the user interface is designed for use in the wild, which in this case comes down to providing a &amp;quot;fat finger&amp;quot; interface that allows firemen to interact with the user interface on site with their gloves on.&lt;/p&gt;</description>
    </item>
    <item>
      <title>A situation-aware mobile system to support fire brigades in emergency situations</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_otm_luytenwcnm06/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_otm_luytenwcnm06/</guid>
      <description></description>
    </item>
    <item>
      <title>A task-driven user interface architecture for ambient intelligent environments</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_iui_clerckxvlc06/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_iui_clerckxvlc06/</guid>
      <description></description>
    </item>
    <item>
      <title>ARCHIE: Disclosing a museum by a socially-aware mobile guide</title>
      <link>https://www.krisluyten.net/publications/luytenarchie2006/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/luytenarchie2006/</guid>
      <description>&lt;p&gt;We present ARCHIE, a research project which aims to discover how handheld guides can be used as powerful instruments to enhance the visitor&#39;s learning experience. Although mobile devices are becoming a common aid to support a museum visit, they often lead to an individualized experience. However, most people do not visit a museum alone, and recent research has pointed out that social interaction is a prerequisite for an intensified and improved learning process. To accommodate the shortcomings in many of the current solutions, we are designing a platform that enables us to create a socially-aware handheld guide that stimulates interaction between group members. They can communicate with each other either directly (by voice) or indirectly (by collaborative games) by means of their mobile guides. Besides the aforementioned communication possibilities, handheld guides can also provide a way to present personalized content. By using a personal profile, it is possible to adapt the interface and tailor the information to the needs and interests of every visitor. The combination of personalized content and interfaces, communication channels between visitors in the same group and support for localization might lead to an innovative mobile guide that integrates with the museum as well as with other visitors. Our platform enables social, and, in many cases, playful interactions with other visitors in the same group. At the same time the context-awareness (proximity and personalization) increases the involvement of the visitor with the content presented in the museum.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Constraint adaptability of multi-device user interfaces</title>
      <link>https://www.krisluyten.net/publications/luyten_multipleui_2006/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/luyten_multipleui_2006/</guid>
      <description>&lt;p&gt;Methods to support the creation of multi-device user interfaces typically use some type of abstraction of the user interface design. To retrieve the final user interface from the abstraction a transformation will be applied that specializes the abstraction for a particular target platform. The User Interface Markup Language (UIML) offers a way to create multi-device user interface descriptions while maintaining the consistency of certain aspects of a user interface across platforms. We extended the UIML language with support for layout constraints. Designers can create layout templates based on constraints that limit the ways a user interface can rearrange across platforms. This results in a higher degree of consistency and reusability of interface designs.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Designing distributed user interfaces for ambient intelligent environments using models and simulations</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_cg_luytenbvc06/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_cg_luytenbvc06/</guid>
      <description></description>
    </item>
    <item>
      <title>Designing for interaction: Socially-aware museum handheld guides</title>
      <link>https://www.krisluyten.net/publications/luytennodem2006/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/luytennodem2006/</guid>
      <description>&lt;p&gt;We present ARCHIE, an interdisciplinary research project of the Expertise Centre for Digital Media (Hasselt University) and the Gallo-Roman Museum of Tongeren (Province of Limburg) which aims to discover how a handheld guide can be used to enhance the museum learning experience. Because we stress the important role of social interaction as a prerequisite for intellectual, social, personal and cultural development, one of the main objectives of the ARCHIE project is to encourage and stimulate interaction with the museum, the PDA and fellow visitors. Designing for interaction, however, requires a mental switch. To this end, we developed a first application: a collaborative trading game.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Extending social networks with implicit human-human interaction</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_otm_clerckxhlc06/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_otm_clerckxhlc06/</guid>
      <description>&lt;p&gt;This paper describes a framework to enable implicit interaction between mobile users in order to establish and maintain social networks according to the preferences and needs of each individual. A user model is proposed which can be constructed by the user and appended with information regarding the user&#39;s privacy preferences. Design choices and tool support regarding the framework are discussed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>High-level modeling of multi-user interactive applications</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_tamodia_berghlc06/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_tamodia_berghlc06/</guid>
      <description></description>
    </item>
    <item>
      <title>PhotoFOAF: A community building service driven by socially-aware mobile imaging</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_smap_thystlc06/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_smap_thystlc06/</guid>
      <description></description>
    </item>
    <item>
      <title>Seamless interaction between multiple devices and meeting rooms</title>
      <link>https://www.krisluyten.net/publications/cardinaels2006seamless/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/cardinaels2006seamless/</guid>
      <description>&lt;p&gt;Meetings often suffer from the inability of participants to be physically present in one room. Moreover, with current networking technologies, meeting environments can be distributed over multiple rooms. The goal of the iConnect project is to provide collaboration services while interconnecting both collocated and remote users. We focus on smooth engagement by allowing participants to share arbitrary data through heterogeneous input devices and displays.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Telebuddies on the move: Social stitching to enhance the networked gaming experience</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_netgames_luytenthc06/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_netgames_luytenthc06/</guid>
      <description></description>
    </item>
    <item>
      <title>Telebuddies: Social stitching with interactive television</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_chi_luytenthc06/</link>
      <pubDate>Sun, 01 Jan 2006 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_chi_luytenthc06/</guid>
      <description>&lt;p&gt;In this paper we report on our work to enable &amp;quot;laid-back&amp;quot; social interactions using television as a primary interaction medium. By integrating semantic web techniques with interactive television we were able to create smart applications that can run as extensions of television shows and stimulate groups of users to communicate. Groups are based on the shared characteristics that can be found for subsets of spectators. Communication between spectators is brought about at two levels: direct communication like instant messaging and indirect communication like cooperating in a team to win a quiz. Our system does not necessarily require a new television format, but is able to reuse existing television shows and to &amp;quot;socialize&amp;quot; them so they can be re-broadcasted with support for group interaction.&lt;/p&gt;</description>
    </item>
    <item>
      <title>A component-based infrastructure for pervasive user interaction</title>
      <link>https://www.krisluyten.net/publications/rigole_component2005/</link>
      <pubDate>Sat, 01 Jan 2005 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/rigole_component2005/</guid>
      <description>&lt;p&gt;Since a growing number of different mobile computing devices are used in pervasive and ubiquitous environments, the need to adopt new approaches for designing and implementing pervasive interactive software with minor effort is emerging. In this paper we present a process that facilitates the design of next-generation interactive software for pervasive environments. We created a distributed runtime infrastructure that enables the distribution of software components on heterogeneous, networked and embedded hardware systems. Some of these components or compositions of components will require interaction by human users from a large range of different devices. To make the deployment of consistent and functional User Interfaces in these pervasive environments easier, Interaction Components are introduced into the runtime infrastructure which enable the presentation of component and service behavior to human users.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Context-sensitive user interfaces for ambient environments: Design, development and deployment</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_dagstuhl_luytenvbc05/</link>
      <pubDate>Sat, 01 Jan 2005 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_dagstuhl_luytenvbc05/</guid>
      <description></description>
    </item>
    <item>
      <title>Designing interactive systems in context: From prototype to deployment</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_bcshci_clerckxlc05/</link>
      <pubDate>Sat, 01 Jan 2005 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_bcshci_clerckxlc05/</guid>
      <description></description>
    </item>
    <item>
      <title>Distributed user interface elements to support smart interaction spaces</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_ism_luytenc05/</link>
      <pubDate>Sat, 01 Jan 2005 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_ism_luytenc05/</guid>
      <description></description>
    </item>
    <item>
      <title>Interactive data units: A framework to support rich graphical data presentations on heterogeneous devices</title>
      <link>https://www.krisluyten.net/publications/houben2005idu/</link>
      <pubDate>Sat, 01 Jan 2005 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/houben2005idu/</guid>
      <description>&lt;p&gt;The use of mobile computing systems continues to increase among a wider diversity of end-users. On desktop computers, a lot of research in the area of interactive visualization of graphical information has been done, but there is a growing opportunity to create scalable and animated graphical data presentations on heterogeneous mobile devices. Latest trends in the mobile phone and PDA community (e.g. games and MMS) emphasize the demand for better support of such rich graphical data that is scalable over multiple platforms. In this paper we present mobile services for the automotive after sales market that support the user in coping with the increasing functionality and complexity of cars and their repair procedures. We introduce a generic framework to support the retrieval and visualization of operating instructions and car repair information according to the device, end-user (or consumer of the data) and car repair guidelines. At its core, the framework provides the concept of Interactive Data Units (IDUs): graphical data blocks with a clean separation of structure, style and animation.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Light-weight distributed web interfaces: Preparing the web for heterogeneous environments</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_icwe_vandervelpenvlc05/</link>
      <pubDate>Sat, 01 Jan 2005 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_icwe_vandervelpenvlc05/</guid>
      <description>&lt;p&gt;In this paper we show an approach that allows web interfaces to be dynamically distributed among several interconnected heterogeneous devices in an environment to support the tasks and activities the user performs. The approach uses a light-weight HTTP-based daemon as a distribution manager and RelaxNG schemas to describe the service user interfaces offered by native applications. From these service descriptions, the XHTML-based user interface is generated.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Profile-aware multi-device interfaces: An MPEG-21-based approach for accessible user interfaces</title>
      <link>https://www.krisluyten.net/publications/luyten2005profile/</link>
      <pubDate>Sat, 01 Jan 2005 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/luyten2005profile/</guid>
      <description>&lt;p&gt;The wide diversity of consumer devices has led to new methodologies and techniques to make digital content available over a broad range of devices with minimal effort. In particular the design of the interactive parts of a system has been the subject of a lot of research efforts because these parts are the most visible and are critical for the usability (and thus use) of a system. One thing that is missing in many current approaches is the ability to combine these new methodologies and techniques with a user-centric approach to ensure preferences from and requirements for a specific user are taken into account besides the device adaptations. In this paper we analysed the applicability of MPEG-21, part 7: Digital Item Adaptation, for the adaptation of a user interface to user characteristics. We show how the high-level XML-based user interface description language UIML in combination with an MPEG-21-based user profile enables designers to create accessible and personalised multi-device user interfaces. Using this combination results in user interfaces that can be deployed on a broad range of devices while taking into account user preferences with minimal effort. This approach enhances accessibility to digital items on various platforms, since all interactions with digital items should be supported by an appropriate user interface.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Task modeling for ambient intelligent environments: Design support for situated task executions</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_tamodia_luytenvc05/</link>
      <pubDate>Sat, 01 Jan 2005 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_tamodia_luytenvc05/</guid>
      <description></description>
    </item>
    <item>
      <title>Building user interfaces with tasks, dialogs and XML</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_cadui_coninx04/</link>
      <pubDate>Thu, 01 Jan 2004 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_cadui_coninx04/</guid>
      <description>&lt;p&gt;We present two ongoing research efforts, both of which aim to support the use of models for designing (multi- and multiple-device) User Interfaces. The first tool, a part of the Dygimes framework, shows how context and tasks can be combined. It allows prototype interfaces to be generated from context-sensitive task models. It builds upon a runtime environment in Java and an XML-based High Level User Interface Description (HLUID) language to generate the prototype interface. The second tool, uiml.net, experiments with another HLUID and another runtime environment to generate interfaces. Both tools are work in progress.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Developing user interfaces with XML: Advances on user interface description language</title>
      <link>https://www.krisluyten.net/publications/luyten_uixml2004/</link>
      <pubDate>Thu, 01 Jan 2004 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/luyten_uixml2004/</guid>
      <description>&lt;p&gt;This collection highlights advancements in User Interface Description Languages (UIDLs), focusing on XML-based solutions for device-independent and context-sensitive user interface development. Contributions cover a range of topics including dynamic generation of multimodal interfaces, model-driven UIDL integration, and language extensibility for multi-device scenarios. Innovations in UIDL frameworks, such as UIML and USIXML, showcase methods to enhance reusability, adaptability, and scalability in HCI tools. Practical applications, case studies, and evaluations of UIDL frameworks underscore their potential in improving usability, accessibility, and integration across diverse computing environments.&lt;/p&gt;</description>
    </item>
    <item>
      <title>DynaMo-AID: A design process and a runtime architecture for dynamic model-based user interface development</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_ehci_clerckxlc04/</link>
      <pubDate>Thu, 01 Jan 2004 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_ehci_clerckxlc04/</guid>
      <description></description>
    </item>
    <item>
      <title>Generating context-sensitive multiple device interfaces from design</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_cadui_clerckxlc04/</link>
      <pubDate>Thu, 01 Jan 2004 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_cadui_clerckxlc04/</guid>
      <description>&lt;p&gt;This paper shows a technique that allows adaptive user interfaces, spanning multiple devices, to be rendered from the task specification at runtime taking into account the context of use. The designer can specify a task model using the ConcurTaskTrees Notation and its context-dependent parts, and deploy the user interface immediately from the specification. By defining a set of context-rules in the design stage, the appropriate context-dependent parts of the task specification will be selected before the concrete interfaces will be rendered. The context will be resolved by the runtime environment and does not require any manual intervention. This way the same task specification can be deployed for several different contexts of use. Traditionally, a context-sensitive task specification only took into account a variable single deployment device. This paper extends this approach as it takes into account task specifications that can be executed by multiple co-operating devices.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The mapping problem back and forth: Customizing dynamic models while preserving consistency</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_tamodia_clerckxlc04/</link>
      <pubDate>Thu, 01 Jan 2004 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_tamodia_clerckxlc04/</guid>
      <description>&lt;p&gt;Model-Based User Interface Development uses a multitude of models which are related in one way or another. Usually there is some kind of process that starts with the design of the abstract models and progresses gradually towards the more concrete models, resulting in the final user interface when the design process is complete. Progressing from one model to another involves transforming the model and mapping pieces of information contained in the source model onto the target model. Most existing development environments propose solutions that apply these steps (semi-)automatically in one way only (from abstract to concrete models). Manual intervention that changes the target model (e.g. dialog model) to the designer&#39;s preferences is not reflected in the source model (e.g. task model), thus this step can introduce inconsistencies between the different models. In this paper, we identify some rules that can be manually applied to the model after a transformation has taken place. The effects on the target and source models are shown, together with how the different models involved in the transformation can be updated accordingly to ensure consistency between models.&lt;/p&gt;</description>
    </item>
    <item>
      <title>UIML.NET: An open UIML renderer for the .net framework</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_cadui_luytenc04/</link>
      <pubDate>Thu, 01 Jan 2004 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_cadui_luytenc04/</guid>
      <description>&lt;p&gt;As the diversity of available computing devices increases it becomes more difficult to adapt User Interface development to support the full range of available devices. One of the difficulties is the different GUI libraries: to use an alternative library or device one is forced to redevelop the interface completely for the alternative GUI library. To overcome these problems the User Interface Mark-up Language (UIML) specification has been proposed, as a way of gluing the interface design to different GUI libraries in different environments without further effort. In contrast with other approaches UIML has matured and has some implementations proving its usefulness. We introduce the first UIML renderer for the .Net framework, a framework that can be accessed by different kinds of programming languages and can use different kinds of widget sets. We show that its properties, among them its reflection mechanism, are suitable for the development of a reusable and portable UIML renderer. The suitability for multi-device rendering is discussed in comparison with our own multi-device UI framework Dygimes. The focus is on how layout management can be generalised in the specification to allow the GUI to adapt to different screen sizes.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Derivation of a dialog model from a task model by activity chain extraction</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_dsvis_luytenccv03/</link>
      <pubDate>Wed, 01 Jan 2003 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_dsvis_luytenccv03/</guid>
      <description>&lt;p&gt;Over the last few years, Model-Based User Interface Design has become an important tool for creating multi-device User Interfaces. By providing information about several aspects of the User Interface, such as the task for which it is being built, different User Interfaces can be generated for fulfilling the same needs although they have a different concrete appearance. In the process of making User Interfaces with a Model-Based Design approach, several models can be used: a task model, a dialog model, a user model, a data model, etc. Intuitively, using more models provides more (detailed) information and will create more appropriate User Interfaces. Nevertheless, the designer must take care to keep the different models consistent with respect to each other. This paper presents an algorithm to extract the dialog model (partially) from the task model. A task model and dialog model are closely related because the dialog model defines a sequence of user interactions, an activity chain, to reach the goal postulated in the task specification. We formalise the activity chain as a State Transition Network, and in addition this chain can be partially extracted out of the task specification. The designer benefits from this approach since the task and dialog model are consistent. This approach is useful in automatic User Interface generation where several different dialogs are involved: the transitions between dialogs can be handled smoothly without explicitly implementing them.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Dygimes: Dynamically generating interfaces for mobile computing devices and embedded systems</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_mhci_coninxlvbc03/</link>
      <pubDate>Wed, 01 Jan 2003 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_mhci_coninxlvbc03/</guid>
      <description>&lt;p&gt;Constructing multi-device interfaces still presents major challenges, despite all efforts of the industry and several academic initiatives to develop usable solutions. One approach, which is finding its way into general use, is XML-based User Interface descriptions to generate suitable User Interfaces for embedded systems and mobile computing devices. Another important solution is Model-based User Interface design, which evolved into a very suitable but academic approach for designing multi-device interfaces. We introduce a framework, Dygimes, which uses XML-based User Interface descriptions in combination with selected models, to generate User Interfaces for different kinds of devices at runtime. With this framework task specifications are combined with XML-based User Interface building blocks to generate User Interfaces that can adapt to the context of use. The design of the User Interface and the implementation of the application code can be separated, while smooth integration of the functionality and the User Interface is supported. The resulting interface is location independent: it can migrate over devices while invoking functionality using standard protocols.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Runtime transformations for modal independent user interface migration</title>
      <link>https://www.krisluyten.net/publications/dblp_journals_iwc_luytenlcr03/</link>
      <pubDate>Wed, 01 Jan 2003 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_journals_iwc_luytenlcr03/</guid>
      <description>&lt;p&gt;The usage of computing systems has evolved dramatically over the last years. Starting from a low level procedural usage, in which a process for executing one or several tasks is carried out, computers now tend to be used in a problem oriented way. Future computer usage will be more centered around particular services, and will not be focused on platforms or applications. These services should be independent of the technology used to interact with them. In this paper an approach will be presented which provides a uniform interface to such services, without any dependence on modality, platform or programming language. Through the usage of general user interface descriptions, presented in XML, and converted using XSLT, a uniform framework is presented for runtime migration of user interfaces. As a consequence, future services will become easily extensible for all kinds of devices and modalities. Special attention goes out to a component-based software development approach. Services represented by and grouped in components can offer a special interface for modal- and device-independent rendering. Components become responsible for describing their own possibilities and constraints for interacting. An implementation serving as a proof of concept, a runtime conversion of a joystick in a 3D virtual environment into a 2D dialog-based user interface, is developed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Migratable user interface descriptions in component-based development</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_dsvis_luytenvc02/</link>
      <pubDate>Tue, 01 Jan 2002 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_dsvis_luytenvc02/</guid>
      <description>&lt;p&gt;In this paper we describe how a component-based approach can be combined with a user interface (UI) description language to get more flexible and adaptable UIs for embedded systems and mobile computing devices. We envision a new approach for building adaptable user interfaces for embedded systems, which can migrate from one device to another. Adaptability to the device constraints is especially important for adding reusability and extensibility to UIs for embedded systems: this way they are ready to keep pace with new technologies.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Specifying user interfaces for runtime modal independent migration</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_cadui_luytenlcr02/</link>
      <pubDate>Tue, 01 Jan 2002 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_cadui_luytenlcr02/</guid>
      <description>&lt;p&gt;The usage of computing systems has evolved dramatically over the last years. Starting from a low level procedural usage, in which a process for executing one or several tasks is carried out, computers now tend to be used in a problem oriented way. Future computer usage will be more centered around particular services, and will not be focused on platforms or applications. These services should be independent of the technology used to interact with them. In this paper an approach will be presented which provides a uniform interface to such services, without any dependence on modality, platform or programming language. Through the usage of general user interface descriptions, presented in XML, and converted using XSLT, a uniform framework is presented for runtime migration of user interfaces. As a consequence, future services will become easily extensible for all kinds of devices and modalities. An implementation serving as a proof of concept, a runtime conversion of a joystick in a 3D virtual environment into a 2D dialog-based user interface, is developed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>An XML-based runtime user interface description language for mobile computing devices</title>
      <link>https://www.krisluyten.net/publications/dblp_conf_dsvis_luytenc01/</link>
      <pubDate>Mon, 01 Jan 2001 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/publications/dblp_conf_dsvis_luytenc01/</guid>
      <description>&lt;p&gt;In a time when mobile computing devices and embedded systems gain importance, too much time is spent reinventing user interfaces for each new device. To enhance future extensibility and reusability of systems and their user interfaces we propose a runtime user interface description language, which can cope with constraints found in embedded systems and mobile computing devices. XML seems to be a suitable tool to do this, when combined with Java. Following the evolution of Java towards XML, it is logical to introduce the concept applied to mobile computing devices and embedded systems.&lt;/p&gt;</description>
    </item>
    <item>
      <title>About me</title>
      <link>https://www.krisluyten.net/about/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://www.krisluyten.net/about/</guid>
      <description>&lt;p&gt;&lt;span class=&#34;button&#34;&gt;&lt;a href=&#34;#biography&#34;&gt;Biography&lt;/a&gt;&lt;/span&gt; &lt;span class=&#34;button&#34;&gt;&lt;a href=&#34;#academic-professional-data&#34;&gt;Academic/Professional Data&lt;/a&gt;&lt;/span&gt;&#xA;&lt;span class=&#34;button&#34;&gt;&lt;a href=&#34;#contact-info&#34;&gt;Contact Info&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;biography&#34;&gt;Biography&lt;/h2&gt;&#xA;&lt;div class=&#34;about-bio-row&#34;&gt;&#xA;&lt;p&gt;I am a full professor in Computer Science at &lt;a href=&#34;https://www.uhasselt.be&#34;&gt;Hasselt University&lt;/a&gt; and deputy managing director of the research institute &lt;a href=&#34;https://www.uhasselt.be/en/edm&#34; target=&#34;_blank&#34;&gt;Digital Future Lab&lt;/a&gt;, a &lt;a href=&#34;https://www.flandersmake.be/en&#34;&gt;Flanders Make&lt;/a&gt; core lab. I am also one of the Flemish Principal Investigators for the &lt;a href=&#34;https://www.flandersairesearch.be/en&#34;&gt;Flanders AI Research Program&lt;/a&gt;. I founded the Intelligible Interactive Systems research unit, which has been part of the Digital Future Lab since 2018.&lt;/p&gt;&#xA;&lt;div class=&#34;about-logos&#34;&gt;&#xA;  &lt;div class=&#34;about-logos-row&#34;&gt;&#xA;    &lt;a href=&#34;https://www.uhasselt.be/en/instituten-en/digital-future-lab&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;&lt;img src=&#34;./img/dfl-logo.jpg&#34; alt=&#34;Digital Future Lab&#34;&gt;&lt;/a&gt;&#xA;    &lt;a href=&#34;https://www.flandersairesearch.be/en&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;&lt;img src=&#34;./img/fair-logo.svg&#34; alt=&#34;Flanders AI Research&#34;&gt;&lt;/a&gt;&#xA;  &lt;/div&gt;&#xA;  &lt;div class=&#34;about-logos-row&#34;&gt;&#xA;    &lt;a href=&#34;https://www.flandersmake.be/en&#34; target=&#34;_blank&#34; rel=&#34;noopener noreferrer&#34;&gt;&lt;img src=&#34;./img/flanders-make-logo.png&#34; alt=&#34;Flanders Make&#34;&gt;&lt;/a&gt;&#xA;  &lt;/div&gt;&#xA;&lt;/div&gt;&#xA;&lt;/div&gt;&#xA;&lt;p&gt;I received M.Sc. degrees in both Knowledge Engineering and Computer Science from the transnational University Limburg, a joint university of Hasselt University, Belgium and Maastricht University, the Netherlands, in 2000. I was awarded a PhD in computer science in 2004 from Hasselt University, Belgium. In 2006, I was appointed as a professor at the same university.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
