OL25 Manual Assembly in Mixed Reality facilitated by subtle visual guidance
Assembling or disassembling things is traditionally supported by paper-based, step-by-step visual instructions, e.g. for IKEA furniture. Augmented Reality (AR) is an alternative guidance platform that, if done well, offers benefits over paper-based instructions: digital instructions can be superimposed directly on the real-world items of interest, reducing the need for mental rotation and other cognitively demanding work, and an AR system can potentially recognize assembly phases automatically and advance to the next instruction step without the user having to turn a page.
However, highly visible digital guidance, e.g. in the form of arrows or text, can be distracting and add to an already information-dense environment, given that more and more human tasks are supported by digital(ized) tools and machinery that _also_ demand attention. What if AR guidance could be made more subtle, for example by using barely visible visual stimuli to draw the individual's attention to certain objects, thereby increasing the chances of making the right decision, such as choosing which piece to assemble next?
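As an illustration of what such a stimulus could look like, below is a minimal, hypothetical sketch in Unity-style C# (the keyword list at the end mentions both Unreal and C#; the same idea would translate to Unreal materials). It applies a low-amplitude brightness modulation to one candidate piece; the amplitude and frequency values are placeholders that would need to be tuned in pilot studies, not established parameters.

```csharp
using UnityEngine;

// Minimal sketch (assumed Unity-style C#): applies a barely visible brightness
// modulation to one candidate assembly piece. Amplitude and frequency are
// placeholder values, not validated stimulus parameters.
public class SubtleHighlight : MonoBehaviour
{
    public float amplitude = 0.05f;  // ~5 % brightness swing, kept small on purpose
    public float frequency = 8f;     // modulation frequency in Hz

    private Renderer targetRenderer;
    private Color baseColor;

    void Start()
    {
        targetRenderer = GetComponent<Renderer>();
        baseColor = targetRenderer.material.color;
    }

    void Update()
    {
        // Low-amplitude sinusoidal modulation around the original color.
        float pulse = amplitude * Mathf.Sin(2f * Mathf.PI * frequency * Time.time);
        targetRenderer.material.color = baseColor * (1f + pulse);
    }

    void OnDisable()
    {
        // Restore the unmodulated color when the cue is switched off.
        if (targetRenderer != null)
        {
            targetRenderer.material.color = baseColor;
        }
    }
}
```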
The aim of this thesis project is to develop a Mixed Reality test environment for subtle AR guidance, in which controlled experiments comparing different kinds of subtle visual stimuli can be run.
The test environment should consist of spatially tracked physical assembly blocks on a table and a subtle-stimulus generator running on a state-of-the-art Mixed Reality headset with embedded eye tracking, such as the Meta Quest Pro or Varjo VR-1.
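Building on the headset's eye tracking, the cue can be made gaze-contingent so that it is suppressed as soon as the user looks toward the cued block and never becomes overt. The sketch below assumes the same Unity-style setup as above; gazeSource is a hypothetical stand-in for whatever eye-gaze pose the headset SDK (e.g. on the Meta Quest Pro) exposes, not a reference to a specific API.

```csharp
using UnityEngine;

// Minimal sketch of gaze-contingent gating: the subtle cue is disabled as soon
// as the user's gaze approaches the cued block, and re-enabled when attention
// drifts away. "gazeSource" is a hypothetical stand-in for the eye-gaze pose
// provided by the headset SDK, not a specific API.
public class GazeGatedCue : MonoBehaviour
{
    public Transform gazeSource;         // combined eye-gaze origin and direction
    public SubtleHighlight highlight;    // the stimulus component from the previous sketch
    public float suppressionAngle = 5f;  // degrees; placeholder threshold

    void Update()
    {
        Vector3 toTarget = transform.position - gazeSource.position;
        float angle = Vector3.Angle(gazeSource.forward, toTarget);

        // Keep the modulation running only while the gaze is away from the target.
        highlight.enabled = angle > suppressionAngle;
    }
}
```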
Method: human-centered, iterative prototyping and evaluation.
Keywords: Unreal, C#, Virtual Reality, Augmented Reality, Human-Computer Interaction