HUCAPP 2018 Abstracts


Area 1 - Agents and Human Interaction

Full Papers
Paper Nr: 2
Title: Analyzing Eye-gaze Interaction Modalities in Menu Navigation

Authors: R. Grosse, D. Lenne, I. Thouvenin and S. Aubry

Abstract: While eye-gaze interaction has proven to work well for disabled users, it is still far from being integrated into general-purpose interfaces. In order to design a wearable interface for military products, several eye-based modalities were tested. We proposed a new modality named Relocated DwellTime, which aims at giving the user more control than existing modalities. We then designed an experimental observation task representative of military use, in which four eye-based interaction modalities were tested (two eye-only and two multimodal methods using an external physical button). The experiment evaluated the effect of two types of menus, circular and linear, on eye-gaze interaction performance. Significant results were observed regarding interaction modalities: the modality adding a physical button proved significantly more efficient than the eye-only methods in this context, and instant opening of menus was well accepted, contrary to hypotheses in the literature. No impact of the menu type was observed.
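The abstract does not detail the Relocated DwellTime mechanism; as a rough sketch of the baseline it builds on, a classic dwell-time trigger can be expressed as follows. All names, thresholds and coordinates are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a classic dwell-time trigger (not the paper's
# Relocated DwellTime): a menu item is selected once gaze has rested
# on it continuously for DWELL_MS milliseconds.

DWELL_MS = 600  # dwell threshold; real systems tune this per task

def dwell_select(samples, hit_test):
    """samples: iterable of (timestamp_ms, x, y) gaze samples.
    hit_test: maps (x, y) to a menu item id, or None.
    Returns the first item selected by dwell, or None."""
    current, start = None, None
    for t, x, y in samples:
        item = hit_test(x, y)
        if item != current:            # gaze moved to a new target
            current, start = item, t   # restart the dwell timer
        elif item is not None and t - start >= DWELL_MS:
            return item                # dwelled long enough: select
    return None

# Example: a single button occupying x in [0, 100), y in [0, 50)
hit = lambda x, y: "open_menu" if 0 <= x < 100 and 0 <= y < 50 else None
gaze = [(t, 50, 25) for t in range(0, 700, 50)]  # a 700 ms fixation
print(dwell_select(gaze, hit))  # -> "open_menu"
```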

Area 2 - Haptic and Multimodal Interaction

Full Papers
Paper Nr: 6
Title: The Role of Haptics in User Input for Simple 3D Interaction Tasks - An Analysis of Interaction Performance and User Experience

Authors: Mirjam Augstein, Thomas Neumayr, Stephan Vrecer, Werner Kurschl and Josef Altmann

Abstract: Traditionally, input devices allowed for at least a certain degree of haptic experience by involving direct physical contact between user and device. Recently, touchless interaction has gained popularity through readily available, cheap devices such as the Leap Motion controller or the Microsoft Kinect. Usually, these devices support more than two degrees of freedom and are thus especially suitable for interaction tasks in three-dimensional space. However, besides the high potential that lies within touchless input techniques, they also involve new challenges (e.g., the lack of borders and natural haptic guidance). In this paper, we aim at identifying the potentials and limitations inherent to three input techniques that involve varying amounts of haptics (i.e., touchful, touchless and semi-touchless input). We present a study conducted with 25 users that focuses on simple input tasks in a 3D interaction space and analyzes objective interaction performance metrics (e.g., regularity or time) and subjective User Experience aspects (e.g., dependability or efficiency). It reveals parallels as well as contrasts between the users’ actual interaction performance and their perceived UX (e.g., several metrics suggested that haptic input outperforms touchless input, while differences regarding UX were not significant). The results are intended to inform other researchers when designing interactive environments.

Paper Nr: 12
Title: A Multimodal Positive Computing System for Public Speaking - Evaluating User Responses to Avatar and Video Speaker Representations

Authors: Fiona Dermody and Alistair Sutherland

Abstract: We have developed a multimodal Positive Computing system with real-time feedback for public speaking. The system uses the Microsoft Kinect to detect voice, body pose, facial expressions and gestures, and gives users feedback on their performance while they are rehearsing a speech. We compare two versions of the system: one displays a live video stream of the user; the other displays a computer-generated avatar that represents the user’s body movements and facial expressions. In both versions, visual feedback is displayed in proximity to the speaking modality it relates to; in all other aspects, the two versions are identical. We found that users rated the video version of the system as more distracting, as they focussed on their physical appearance rather than on their speaking performance when using it.

Paper Nr: 28
Title: Impact of Auditory Distractions on Haptic Messages Presented Under the Foot

Authors: Landry D. Chapwouo T. and Bob-Antoine J. Menelas

Abstract: Compared to vision and audition, the communication capabilities of the haptic channel remain underexploited. In this paper, we investigate the impact of auditory distractions on the learning of haptic messages presented under the plantar surface of the foot. From a set of six haptic messages designed to be easily differentiable from one another, participants had to select four. With and without the presence of auditory distractions, we evaluated the completion time and the number of iterations required to reach an identification rate greater than 95%. For both measures, we observed that the presence of auditory distractions was detrimental to users’ performance.

Short Papers
Paper Nr: 25
Title: Design of Multimodal Interaction with Mobile Devices - Challenges for Visually Impaired and Elderly Users

Authors: Michela Ferron, Nadia Mana, Ornella Mich and Christopher Reeves

Abstract: This paper presents two early studies aimed at investigating issues concerning the design of multimodal interaction - based on voice commands and mid-air gestures - with mobile technology specifically designed for visually impaired and elderly users. These studies were carried out on a new device that enables enhanced speech recognition (interpreting lip movements) and mid-air gesture interaction on Android devices (smartphone and tablet PC). The initial findings and challenges raised by these novel interaction modalities are discussed. They mainly centre on issues of feedback and feedforward, the avoidance of false positives, and point-of-reference or orientation issues regarding the device and the mid-air gestures.

Paper Nr: 29
Title: Reaction Time to Vibrotactile Messages on Different Types of Soil

Authors: Landry D. Chapwouo T. and Bob-Antoine J. Menelas

Abstract: This study investigates the Reaction Time (RT) to vibrotactile messages presented under the plantar surface of the foot on different types of soil. We determine whether reaction time varies while walking on different types of soil (mobile situation). Six young participants (n=6) aged between 21 and 28 took part in this first study, in which they walked on five types of soil (concrete, carpet, foam, gravel, and sand). The methodology includes 360 repeated measures. The findings consistently revealed a decrease in reaction time to vibrotactile messages when walking on the three deformable soils (foam, gravel, and sand).

Area 3 - Interaction Techniques and Devices

Full Papers
Paper Nr: 30
Title: Summarising Academic Presentations using Linguistic and Paralinguistic Features

Authors: Keith Curtis, Gareth J. F. Jones and Nick Campbell

Abstract: We present a novel method for the automatic generation of video summaries of academic presentations using linguistic and paralinguistic features. Our investigation is based on a corpus of academic conference presentations. Summaries are first generated based on keywords taken from transcripts created using automatic speech recognition (ASR). We then augment spoken phrases by incorporating scores for audience engagement, comprehension and speaker emphasis. We evaluate the effectiveness of the summaries generated for individual presentations through eye-tracking of participants as they watch summaries and full presentations, and through questionnaires completed by participants at the end of the eye-tracking studies. We find that automatically generated summaries tend to maintain the user’s focus and attention for longer, with users losing focus much less often than with full presentations.
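The authors' exact scoring scheme is not given in the abstract; one plausible reading of "augmenting keyword summaries with engagement, comprehension and emphasis scores" is a weighted ranking of ASR segments under a length budget, as in the following sketch. Weights, field names and the budget are assumptions for illustration.

```python
# Illustrative sketch (not the authors' exact method): rank ASR segments
# by a weighted sum of a keyword score and paralinguistic scores, then
# keep the top segments until a summary-length budget is reached.

def summarise(segments, budget_s, w_kw=0.5, w_eng=0.2, w_comp=0.15, w_emp=0.15):
    """segments: list of dicts with start, end (seconds) and scores in [0, 1]."""
    def score(s):
        return (w_kw * s["keyword"] + w_eng * s["engagement"]
                + w_comp * s["comprehension"] + w_emp * s["emphasis"])
    chosen, used = [], 0.0
    for seg in sorted(segments, key=score, reverse=True):
        dur = seg["end"] - seg["start"]
        if used + dur <= budget_s:   # greedy fill of the time budget
            chosen.append(seg)
            used += dur
    return sorted(chosen, key=lambda s: s["start"])  # restore original order

segs = [
    {"start": 0, "end": 20, "keyword": .9, "engagement": .4, "comprehension": .8, "emphasis": .2},
    {"start": 20, "end": 45, "keyword": .2, "engagement": .9, "comprehension": .5, "emphasis": .9},
    {"start": 45, "end": 60, "keyword": .1, "engagement": .2, "comprehension": .3, "emphasis": .1},
]
print([(s["start"], s["end"]) for s in summarise(segs, budget_s=45)])
# -> [(0, 20), (20, 45)]
```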

Short Papers
Paper Nr: 11
Title: Development of an Eco-Driving Simulation Training System with Natural and Haptic Interaction in Virtual Reality Environments

Authors: Konstantinos Gardelis, Aris S. Lalos and Kostantinos Moustakas

Abstract: Road transport is one of the major causes of environmental pollution. Among the actions individuals can take to reduce the greenhouse-gas emissions associated with personal transportation is operating their current vehicles more efficiently. Behavioral theory strongly suggests that the most important educational element in changing driver behavior is direct, immediate and continuous feedback while driving. Gamification has been positioned as a powerful approach, tool, or set of techniques that guides targeted behavior change and improves the way various activities are undertaken, so that those involved begin to take the desired actions while experiencing more fun, enjoyment, and pleasure in their tasks. Building on this direction, we present the conceptual approach of an eco-driving simulation system that aims to train drivers to follow eco-driving rules by simulating augmented reality technology in virtual driving games. The proposed system provides: i) an efficient way to study the effect of AR games that monitor driving behavior and deliver personalized action plans, helping users maintain a green driving style without distracting them from safe driving, and ii) a multiplayer gaming environment where users can monitor the evolution of their eco-driving score, set missions, and invite others to participate collaboratively or competitively.
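The paper's scoring formula is not specified in the abstract; a hypothetical eco-driving score in the same spirit could penalize the events eco-driving rules typically target, as sketched below. All thresholds and weights are illustrative assumptions.

```python
# Hypothetical eco-driving score: start from 100 and subtract penalties
# for harsh acceleration, harsh braking and over-revving.

HARSH_ACCEL = 2.5   # m/s^2, assumed threshold
HARSH_BRAKE = -3.0  # m/s^2, assumed threshold
RPM_LIMIT = 2500    # assumed over-revving limit

def eco_score(samples):
    """samples: list of (accel_mps2, rpm) readings taken while driving."""
    score = 100.0
    for accel, rpm in samples:
        if accel > HARSH_ACCEL or accel < HARSH_BRAKE:
            score -= 2.0            # penalize harsh longitudinal events
        if rpm > RPM_LIMIT:
            score -= 0.5            # penalize driving in high revs
    return max(score, 0.0)

trip = [(1.0, 1800), (3.1, 2900), (0.4, 2100), (-3.5, 1500)]
print(eco_score(trip))  # 100 - 2 - 0.5 - 2 = 95.5
```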

Paper Nr: 15
Title: ARGI: Augmented Reality for Gesture-based Interaction in Variable Smart Environments

Authors: Jonas Sorgalla, Jonas Fleck and Sabine Sachweh

Abstract: Modern information and communication technology holds the potential to foster the well-being and independent living of the elderly. However, smart households that support older residents are often overwhelming in their interaction possibilities. Therefore, users demand a single, remote way to interact with their environment. This work presents such a way: using free-space gestures to interact with virtual objects in augmented reality in order to control a smart environment. For expandability and reliability, the implementation of the approach relies on Eclipse SmartHome, a prevalent open-source framework for home automation, and the Microsoft HoloLens.
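The glue between a recognised gesture and the home-automation backend is not described in the abstract. Eclipse SmartHome exposes a REST API; the sketch below follows the openHAB convention of POSTing a plain-text command to /rest/items/&lt;item&gt;, and the host, item and gesture names are assumptions, not taken from the paper.

```python
# Sketch: forward a recognised mid-air gesture to a SmartHome item via
# an openHAB-style REST call. Endpoint and names are assumptions.
import requests

BASE = "http://smarthome.local:8080/rest/items"  # hypothetical host

GESTURE_TO_COMMAND = {            # gesture id -> (item, command)
    "tap_lamp_hologram": ("LivingRoom_Lamp", "ON"),
    "swipe_down_blinds": ("LivingRoom_Blinds", "DOWN"),
}

def on_gesture(gesture_id):
    """Send the command mapped to a gesture to the backend."""
    item, command = GESTURE_TO_COMMAND[gesture_id]
    resp = requests.post(f"{BASE}/{item}", data=command,
                         headers={"Content-Type": "text/plain"}, timeout=2)
    resp.raise_for_status()

on_gesture("tap_lamp_hologram")  # e.g. after the HoloLens reports a tap
```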

Paper Nr: 17
Title: Studying Natural Human-computer Interaction in Immersive Virtual Reality: A Comparison between Actions in the Peripersonal and in the Near-action Space

Authors: Chiara Bassano, Fabio Solari and Manuela Chessa

Abstract: Interacting in immersive virtual reality is a challenging and open issue in human-computer interaction. Here, we describe a system to evaluate the performance of a low-cost setup that does not require wearing devices to manipulate virtual objects. In particular, we consider the Leap Motion device and assess its performance in two situations: reaching and grasping in the peripersonal space, and in the near-action space, i.e., when users stand and can move their arms to reach objects on a desk. We show that these two situations are similar in terms of user performance, indicating a possible use of such a device in a wide range of reaching tasks in immersive virtual reality.

Paper Nr: 26
Title: A 2-by-6-Button Japanese Software Keyboard for Tablets

Authors: Kei Takei and Hiroshi Hosobe

Abstract: More and more people are using tablets. When they enter text, they usually use software keyboards. However, such software keyboards are not specialized for tablets. In this paper, we propose a software keyboard for tablets. Tailored to the user’s index, middle, and third fingers on a tablet screen, the keyboard consists of 2-by-6 buttons. It is focused on Japanese kana characters and enables the user to enter a character usually with two strokes. We present the results of the experiment we performed to evaluate the proposed keyboard by comparing it with a standard QWERTY software keyboard. The results indicate a trade-off between the average time needed to enter a character and the average error rate for entering a word: the proposed keyboard resulted in a lower average error rate but a longer average time per character.
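The paper's actual button layout is not given in the abstract; a natural two-stroke encoding of kana, shown here purely as an assumed illustration, lets the first stroke pick a consonant row of the kana table and the second stroke pick a vowel column.

```python
# Hypothetical two-stroke kana encoding (not the paper's layout): the
# first button press selects a consonant row, the second selects the
# vowel column (0..4 corresponding to a, i, u, e, o).

ROWS = {  # first stroke -> consonant row (a subset, for illustration)
    0: "あいうえお", 1: "かきくけこ", 2: "さしすせそ",
    3: "たちつてと", 4: "なにぬねの", 5: "はひふへほ",
    6: "まみむめも", 7: "や ゆ よ", 8: "らりるれろ",
    9: "わ  をん",
}

def two_stroke_kana(first, second):
    """Return the kana selected by a (first, second) button pair."""
    ch = ROWS[first][second]
    return None if ch == " " else ch  # gaps in the table are unused

print(two_stroke_kana(1, 2))  # く (k-row, "u" column)
print(two_stroke_kana(7, 1))  # None: the や-row has no "i" kana
```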

Paper Nr: 8
Title: Intelligent and Flexible Worker Assistance Systems - Assembly Assistance Platform for Planning Assisted Assembly and Rework as Well as Execution of a Worker-Centered Assistance

Authors: Rainer Mueller, Matthias Vette-Steinkamp, Leenhard Hoerauf, Christoph Speicher and Attique Bashir

Abstract: In assembly, rework stations are often barely automated workplaces where no technical equipment supports the worker, validates process execution or documents process results. By contrast, worker assistance systems are used to guide and support the worker through manual assembly. Nevertheless, the concept of providing information to the worker is limited to screen-based output media, and assistance systems are compatible only with supplier-selected devices. The objective of the research and development presented in this paper is an interactive worker assistance system that combines qualification-based worker support with intelligent process validation, especially for rework stations. Besides the worker assistance system, a planning environment is developed for the efficient generation of work plans and content for the worker assistance system.

Paper Nr: 19
Title: Definition of Gesture Interactions based on Temporal Relations

Authors: Dominik Rupprecht, Daniel Künkel, Rainer Blum and Birgit Bomsdorf

Abstract: Taxonomies reported in the literature and in technical instructions define the terms gesture and gesture interaction similarly, but differ in semantics in ways that may lead to misunderstandings. In a heterogeneous development team, however, a common understanding of concepts and notions is of utmost importance. In this paper, we present an approach toward a more universal definition of gesture interactions and gesture types. We define a notation for gesture interactions using a specific combination of the temporal intervals of gesture execution, user interface feedback and system functionality, so as to effectively consider all three perspectives. We do not introduce a completely different approach, but extend and combine existing work.
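The paper's own notation is not reproduced in the abstract; to show the underlying idea of relating the three temporal intervals, the sketch below classifies interval pairs with a few Allen-style relations. Choosing Allen's interval algebra here is an assumption about the representation, and the intervals are invented examples.

```python
# Illustrative sketch: a coarse classification of the temporal relation
# between two intervals (e.g., gesture execution vs. UI feedback).
from collections import namedtuple

Interval = namedtuple("Interval", "start end")

def relation(a, b):
    """A coarse Allen-style relation between intervals a and b."""
    if a.end < b.start:   return "before"
    if a.end == b.start:  return "meets"
    if a.start == b.start and a.end == b.end: return "equals"
    if a.start >= b.start and a.end <= b.end: return "during"
    if a.start < b.start < a.end < b.end:     return "overlaps"
    return "other"

gesture  = Interval(0.0, 1.2)   # seconds: user performs the gesture
feedback = Interval(0.3, 1.2)   # UI feedback shown while gesture runs
function = Interval(1.2, 1.5)   # system function fires at gesture end

print(relation(feedback, gesture))  # during: continuous feedback
print(relation(gesture, function))  # meets: function triggered on release
```

Different combinations of such relations would then characterize different gesture-interaction types, e.g., continuous feedback during execution versus a function triggered only when the gesture ends.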

Area 4 - Theories, Models and User Evaluation

Full Papers
Paper Nr: 16
Title: PEAR: Prototyping Expressive Animated Robots - A Framework for Social Robot Prototyping

Authors: Etienne Balit, Dominique Vaufreydaz and Patrick Reignier

Abstract: Social robots are transitioning from lab experiments to commercial products, creating new needs for prototyping and design tools. In this paper, we present a framework to facilitate the prototyping of expressive animated robots. We start by reviewing the design of existing social robots in order to define a set of basic components of social robots. We then show how to extend an existing 3D animation software package to enable the animation of these components. By composing these basic components, robots of various morphologies can be prototyped and animated. We demonstrate the capabilities of the presented framework through two case studies.

Short Papers
Paper Nr: 10
Title: Eye and Mouse Coordination During Task: From Behaviour to Prediction

Authors: Alexandre Milisavljevic, Kevin Hamard, Coralie Petermann, Bernard Gosselin, Karine Doré-Mazars and Matei Mancas

Abstract: The study of web users’ behaviour is of crucial importance for understanding how people react when browsing websites. Eye-tracking is a precise tool for this purpose, but it is hard to scale up to a wide range of situations and websites. Mouse-tracking, on the other hand, fulfills these requirements. Unfortunately, as shown in the literature, mouse data provides only a limited approximation of eye position. In this paper, we investigated the relationship between mouse and eye behaviour on several kinds of websites under three different tasks, in order to create models based on these behaviours. Our findings are that 1) Pearson’s correlation on saliency is not suitable for analysing eye and mouse coordination, 2) this coordination is altered according to the task, 3) scroll speed directly influences where the eyes are during the scroll, 4) scroll amplitude varies according to eye position before the scroll, and 5) by using variations along the X axis, it is possible to find the moments where it is easier to model eye location from mouse location.
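In the spirit of finding 5), the moments where mouse position predicts gaze, one could compute Pearson's r between gaze-x and mouse-x traces in a sliding window and flag windows with strong coupling, as sketched below. The window size, threshold and toy traces are assumptions, not values from the paper.

```python
# Sketch: flag time windows where horizontal eye and mouse positions are
# strongly correlated (candidate moments for predicting gaze from mouse).

def pearson(xs, ys):
    """Pearson's r for two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def coupled_windows(eye_x, mouse_x, win=30, thresh=0.8):
    """Yield (start_index, r) for windows with strong eye-mouse coupling."""
    for i in range(0, len(eye_x) - win + 1, win):
        r = pearson(eye_x[i:i + win], mouse_x[i:i + win])
        if r >= thresh:
            yield i, r

eye   = [i * 2.0 for i in range(60)]                  # toy traces
mouse = [i * 2.0 + 5 for i in range(30)] + [0.0] * 30  # coupled, then not
print(list(coupled_windows(eye, mouse)))  # only the first window is flagged
```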

Paper Nr: 22
Title: Automatic Generation of an Operation Procedure Presentation System Reusing User’s Input Data

Authors: Shimon Nakamura, Hajime Iwata, Junko Shirogane and Yoshiaki Fukazawa

Abstract: Users employ software applications to achieve goals. Occasionally they take a wrong operation path due to the complexity of large-scale applications, which requires them to backtrack to the appropriate operation step and re-enter previously input data. This is burdensome for users. Herein, a method is proposed to generate an operation support system that reuses, as much as possible, data previously entered along an inappropriate operation path while navigating users back to the appropriate path. Specifically, our method has an input reuse function that copies previously input data to similar input items, as well as an operation procedure presentation function that highlights the operation procedure from the current step to the goal. This integrated operation support can minimize users’ rework. To generate the system, developers must create an ontology covering the concepts of the label names of input items, the correspondence between input items and label names, an activity diagram of the target application, and the operation procedure. Our system uses this information to compute the similarity of label names between input items, copy input data to similar input items, and present operation procedures to users.
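The paper computes label-name similarity through its ontology of label concepts; as a rough stand-in for that step, the sketch below prefills a new form from previously entered values using plain string similarity (difflib from the Python standard library). The threshold and labels are illustrative assumptions.

```python
# Sketch of the input-reuse idea: copy previously entered values to
# input items whose labels are sufficiently similar. String similarity
# stands in for the paper's ontology-based similarity.
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def reuse_inputs(previous, new_labels, threshold=0.7):
    """previous: {label: value} from the abandoned path.
    Returns prefilled {label: value} for the new form."""
    prefilled = {}
    for label in new_labels:
        best = max(previous, key=lambda p: similarity(p, label), default=None)
        if best is not None and similarity(best, label) >= threshold:
            prefilled[label] = previous[best]
    return prefilled

old = {"Family name": "Tanaka", "E-mail address": "t@example.com"}
print(reuse_inputs(old, ["Family Name", "Email address", "Phone"]))
# -> {'Family Name': 'Tanaka', 'Email address': 't@example.com'}
```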

Paper Nr: 24
Title: Nourishing Problem Solving Skills by Performing HCI Tasks - Relationships between the Methods of Problem Solving (Retrieval, Discovery, or Search) and the Kinds of Acquired Problem Solving Skills

Authors: Muneo Kitajima

Abstract: There are three methods for deriving a solution to a problem a person is facing: 1) retrieving an existing solution from his or her own memory or from available external resources, including human resources, digital resources, and so on; 2) clarifying the constraints to be met and discovering a solution that satisfies them by exploring the problem space; or 3) deriving a solution by applying inference rules successively until the goal state is achieved. This paper describes the distinctive cognitive processes that the respective methods involve when deriving a solution. On the assumption that the ultimately needed problem solving skill is the one that enables a person to solve any problem by himself or herself, without reliance on any external resources, i.e., adaptive problem solving, this paper discusses the implications of the respective methods of problem solving for acquiring the required problem solving skill.