Universität der Bundeswehr München
Here you will find information about selected research projects of our group.
SnapApp is a novel unlock concept for mobile devices that reduces authentication overhead with a time-constrained quick-access option.
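As a rough illustration of such a policy, the sketch below grants a brief session after a simple gesture and falls back to full authentication once a quick-access budget is exhausted. The session length, the budget, and all names are illustrative assumptions, not the parameters of the actual system.

```python
import time

# Hypothetical parameters: illustrative values, not those of the real system.
QUICK_SESSION_SECONDS = 30   # length of one quick-access session
MAX_QUICK_SESSIONS = 5       # quick unlocks allowed before full auth is forced

class LockScreen:
    def __init__(self):
        self.quick_sessions_used = 0
        self.session_expires_at = 0.0

    def quick_unlock(self) -> bool:
        """Grant brief, time-constrained access without full authentication."""
        if self.quick_sessions_used >= MAX_QUICK_SESSIONS:
            return False  # budget exhausted: require PIN/password instead
        self.quick_sessions_used += 1
        self.session_expires_at = time.monotonic() + QUICK_SESSION_SECONDS
        return True

    def full_unlock(self, pin: str, expected_pin: str) -> bool:
        """Regular authentication also resets the quick-access budget."""
        if pin == expected_pin:
            self.quick_sessions_used = 0
            self.session_expires_at = float("inf")
            return True
        return False

    def is_unlocked(self) -> bool:
        return time.monotonic() < self.session_expires_at
```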
We investigate in detail the viability of exploiting thermal imaging to infer PINs and unlock patterns on mobile devices.
We present a data logging concept, tool, and analyses to facilitate studies of everyday mobile touch keyboard use and free typing behaviour.
SmudgeSafe is a graphical authentication mechanism that mitigates smudge attacks by applying geometric transformations to images.
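A minimal sketch of the core idea, assuming Pillow for image handling: a random rotation and translation is applied to the login image each session, so the on-screen locations of the password points, and hence the smudges they leave, differ between logins. The parameter ranges are illustrative, not those of the published system.

```python
import random
from PIL import Image

def transformed_login_image(image: Image.Image):
    """Return a randomly rotated and shifted copy of the login image,
    together with the parameters needed to map touch points back."""
    angle = random.uniform(-30, 30)   # rotation in degrees (assumed range)
    dx = random.randint(-40, 40)      # horizontal shift in pixels (assumed)
    dy = random.randint(-40, 40)      # vertical shift in pixels (assumed)
    transformed = image.rotate(angle, translate=(dx, dy))
    return transformed, (angle, dx, dy)

# At login, the stored parameters are used to map the user's touch points
# back into the original image's coordinate system before comparing them
# against the stored password point locations.
```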
As a first step towards seamless VR authentication, this paper investigates the direct transfer of well-established concepts (PIN, Android unlock patterns) into VR.
We investigate body motion as a behavioural biometric for virtual reality. In particular, we examine which motions are suitable for identifying a user.
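A minimal sketch of how such an identification pipeline can look, assuming scikit-learn and simple summary-statistic features; the actual feature set and classifier used in the paper may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def motion_features(samples: np.ndarray) -> np.ndarray:
    """samples: (n_frames, n_channels) tracked head/controller signals.
    Returns simple summary statistics as a feature vector."""
    return np.concatenate([
        samples.mean(axis=0),
        samples.std(axis=0),
        np.abs(np.diff(samples, axis=0)).mean(axis=0),
    ])

# X: one feature vector per recorded session, y: the user who produced it.
# clf = RandomForestClassifier().fit(X, y)
# identity = clf.predict([motion_features(new_session)])
```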
A common objective of context-aware computing systems is to predict how user interfaces impact user performance given the user's current cognitive capabilities. We address this by exploiting the fact that cognitive workload influences smooth pursuit eye movements.
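A minimal sketch of the measurable quantity this relies on: the correlation between the gaze trajectory and a moving stimulus. The premise is that increased workload degrades pursuit and lowers this correlation; the aggregation and threshold below are assumptions for illustration.

```python
import numpy as np

def pursuit_correlation(gaze_xy: np.ndarray, stimulus_xy: np.ndarray) -> float:
    """gaze_xy, stimulus_xy: (n_samples, 2) trajectories at the same rate."""
    rx = np.corrcoef(gaze_xy[:, 0], stimulus_xy[:, 0])[0, 1]
    ry = np.corrcoef(gaze_xy[:, 1], stimulus_xy[:, 1])[0, 1]
    return (rx + ry) / 2

def workload_estimate(correlation: float, threshold: float = 0.8) -> str:
    # Assumed mapping: degraded pursuit suggests elevated workload.
    return "high" if correlation < threshold else "low"
```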
We present EyePACT, a method that compensates for the input error caused by parallax on public displays. Our method uses a display-mounted depth camera to detect the user's 3D eye position in front of the display and combines it with the detected touch location to predict the perceived touch location on the surface.
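The geometric core of such a compensation can be pictured as a ray cast from the eye through the finger onto the display plane. The paper's actual prediction model may be more elaborate, so treat the following as an idealised version with assumed coordinates: the display surface is the plane z = 0 and the touch overlay sits at a small positive z offset.

```python
import numpy as np

def perceived_touch(eye: np.ndarray, touch: np.ndarray) -> np.ndarray:
    """eye: 3D eye position, touch: 3D finger position on the touch overlay.
    Returns the point on the display plane (z = 0) the user aimed at."""
    direction = touch - eye
    t = -eye[2] / direction[2]       # solve eye_z + t * dir_z = 0
    return eye + t * direction       # intersection with the display plane

eye = np.array([0.10, -0.05, 0.60])    # eye 60 cm in front of the display
touch = np.array([0.00, 0.00, 0.02])   # finger on overlay, 2 cm parallax gap
print(perceived_touch(eye, touch))
```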
We rigorously compare modalities for cue-based authentication on situated displays. In particular, we provide the first comparison between touch, mid-air gestures, and calibration-free gaze using a state-of-the-art authentication concept.
We present the results of a user survey in which we investigate actual stories about shoulder surfing on mobile devices from both users and observers.
Common user authentication methods on smartphones, such as lock patterns, PINs, or passwords, impose a trade-off between security and password memorability. Image-based passwords have been proposed as a secure and usable alternative.
We show how reading text on large displays can be used to enable gaze interaction in public spaces.
We present EyeScout, an active eye tracking system that combines an eye tracker mounted on a rail system with a computational method to automatically detect and align the tracker with the user’s lateral movement.
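The alignment can be pictured as a simple feedback loop. The sketch below uses a proportional controller with assumed gain and speed limit; it is not the actual system's control code.

```python
KP = 2.0          # proportional gain (assumed)
MAX_SPEED = 0.5   # carriage speed limit in m/s (assumed)

def rail_speed(user_x: float, carriage_x: float) -> float:
    """Move the rail carriage toward the user's lateral (x) position."""
    error = user_x - carriage_x
    speed = KP * error
    return max(-MAX_SPEED, min(MAX_SPEED, speed))
```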
We present GravitySpot – an approach that makes sweet spots flexible by actively guiding users to arbitrary target positions in front of displays using visual cues.
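A minimal sketch of the mapping such guidance needs: the intensity of an on-screen cue (for example, blur) as a function of the user's distance to the target position. The function shape and radius are illustrative assumptions.

```python
import math

def cue_strength(user_pos, target_pos, falloff_radius=1.5):
    """Returns 0.0 at the sweet spot, approaching 1.0 far away from it."""
    dx = user_pos[0] - target_pos[0]
    dy = user_pos[1] - target_pos[1]
    distance = math.hypot(dx, dy)
    return min(distance / falloff_radius, 1.0)

# e.g. blur_radius = cue_strength(user, spot) * MAX_BLUR
```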
We introduce TapScript, a dynamic font personalisation framework that adapts a finger-drawn font according to user behaviour and context, such as finger placement, device orientation, and movement, resulting in a handwritten-looking font.
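As a toy illustration of context-dependent font adaptation, the sketch below perturbs the sampled stroke points of a glyph in proportion to sensed device movement; the noise model and scaling factor are assumptions, not the framework's actual rendering.

```python
import random

def adapt_stroke(points, movement_magnitude):
    """points: list of (x, y) stroke samples of a finger-drawn glyph.
    Stronger device movement yields a shakier, more handwritten line."""
    jitter = movement_magnitude * 0.5   # assumed scaling factor
    return [(x + random.gauss(0, jitter), y + random.gauss(0, jitter))
            for x, y in points]
```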
We propose actuated navigation, a new kind of pedestrian navigation in which the user does not need to attend to the navigation task at all.
We present EngageMeter – a system that allows fine-grained information on audience engagement to be obtained implicitly from multiple brain-computer interfaces (BCI) and to be fed back to presenters both in real time and post hoc.
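One widely used way to derive an engagement score from EEG band powers is the index E = β / (α + θ). Whether EngageMeter computes exactly this index is an assumption, so the snippet is illustrative only.

```python
def engagement_index(alpha_power: float, beta_power: float,
                     theta_power: float) -> float:
    """Classic EEG engagement index: E = beta / (alpha + theta)."""
    return beta_power / (alpha_power + theta_power)
```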
We propose EmotionActuator, a proof-of-concept system to investigate the transmission of emotional states in which the recipient performs emotional gestures to understand and interpret the state of the sender. We call this kind of communication embodied emotional feedback, and present a prototype implementation.
We designed an interactive installation that responds to the incidental movements of passersby with visual feedback to communicate its interactivity.
We explore cylindrical displays as a possible form factor for novel public displays. We present a prototype and report on a user study comparing the influence of display shape on user behaviour and user experience between flat and cylindrical displays.