Open Lab

Student projects

This page shows a list of project proposals for students at University West. Most of the projects can be performed in a group as part of a course you are taking (confirm with the course responsible) and/or as a thesis project ("examensarbete") on your study programme. In fact, it is not uncommon to first engage in a project you are interested in as a course project, and then continue working on it as part of your BSc/MSc thesis.

Note: The list of project ideas is continuously updated. Some proposals might be outdated, and some are not yet listed here but are in the works. If you see something interesting, please email the contact person; you will get the current status of that project and potentially hear about similar ones on the horizon.

  • Newest proposals are shown at the top; the oldest and those already taken are at the end.
  • If you are interested in a project already “taken”, don’t hesitate to get in touch with the contact person: sometimes a similar project can be arranged.
  • Some proposals are described in English, some in Swedish. The language used for the (thesis) report is decided together with the supervisor.

Aim/Goal/Research question

Assembling or disassembling things is traditionally supported by paper-based, step-by-step visual instructions, e.g. for assembling IKEA furniture. Augmented Reality (AR) is an alternative guidance platform which, done well, could have benefits over paper-based instructions. AR can superimpose digital instructions directly on the real-world items of interest, reducing the need for mental rotation and other cognitively demanding work, and AR systems can potentially recognize assembly phases automatically and move on to the next instruction step accordingly, without the user having to turn a page.  
However, highly visible digital guidance, e.g. in the form of arrows or text, can sometimes be distracting and add to an already information-dense environment, given that more and more human tasks are supported by digital(ized) tools and machinery which _also_ tend to demand attention. What if AR guidance could be made more subtle, e.g. by using barely visible visual stimuli to draw the individual's attention to certain objects, thereby increasing the chances of making the right decision when, say, choosing which piece to assemble next?  
The aim of this thesis project is to develop a Mixed Reality test environment for subtle AR guidance, in which controlled experiments comparing different kinds of subtle visual stimuli can be conducted.  
The test environment should consist of spatially tracked physical assembly blocks on a table, and a subtle-stimuli generator running on a state-of-the-art Mixed Reality headset with embedded eye tracking, such as the Meta Quest Pro or Varjo VR-1.
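One plausible core of such a stimuli generator is a function that keeps a highlight visible only in peripheral vision and fades it out as the gaze approaches the target, so the cue never becomes consciously salient. The sketch below illustrates this idea; the function name, the peak opacity, and the fade angle are illustrative assumptions, not part of the proposal.

```python
import math

def subtle_highlight_opacity(gaze_dir, target_dir,
                             peak_opacity=0.15, fade_angle_deg=10.0):
    """Opacity of a barely visible highlight on a candidate assembly block.

    The stimulus is strongest in peripheral vision and fades out as the
    gaze approaches the target, so the user never clearly 'sees' the cue.
    Both directions are 3D unit vectors (e.g. from headset eye tracking).
    """
    # Angle between gaze direction and direction to the target object.
    dot = sum(g * t for g, t in zip(gaze_dir, target_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if angle >= fade_angle_deg:
        return peak_opacity                        # periphery: full (subtle) cue
    return peak_opacity * angle / fade_angle_deg   # fade towards fixation
```

In a real system this would run per frame, fed by the headset's eye-tracking API, and drive the alpha channel of an overlay rendered on the tracked block.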

Aim/Goal/Research question

This master’s thesis seeks to employ artificial intelligence to analyze event logs from breakdowns of industrial machines. The study aims to discover the root cause of a machine or robot breakdown using the specific stop events generated from sensor-based data in the machine or robot. Event log data from two industrial companies will be utilized, supplemented by insights obtained through interviews with engineering technicians and operators. The thesis also includes validating the effectiveness of the proposed AI-based approach through testing with datasets and real-world information. The thesis project is part of the Restart II project.  
Expected results: an enhanced decision model for supporting root-cause analysis of breakdowns of automated manufacturing systems, incorporating sensor-based system data for analyzing deviations from normal behavior, together with validation and evaluation results demonstrating the effectiveness of the approach on diverse datasets.  
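To make the task concrete, a naive frequency baseline for such event-log analysis could rank the events that occur shortly before each breakdown. The sketch below is a minimal stand-in, not the AI model the thesis would develop; the event codes and window size are assumptions.

```python
from collections import Counter

def candidate_root_causes(event_log, breakdown="BREAKDOWN", window=3):
    """Rank events by how often they occur shortly before a breakdown.

    event_log: chronological list of event codes from one machine.
    Returns a Counter over events seen within `window` steps before each
    breakdown event -- a crude baseline a learned model should beat.
    """
    counts = Counter()
    for i, ev in enumerate(event_log):
        if ev == breakdown:
            counts.update(event_log[max(0, i - window):i])
    return counts
```

A real approach would weigh such co-occurrence statistics against insights from the technician interviews, since the most frequent preceding event is not necessarily the causal one.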

Aim/Goal/Research question

The latest developments within HMD-based Augmented Reality and Context-Aware Systems point towards a potential future where people could wear nudging AI+AR glasses (see figure) that almost invisibly help them perform both professional and everyday tasks. Apart from ethical and privacy challenges (do we really want our perception to be invisibly manipulated in this way?), a number of advanced system functionalities would need to be engineered. Core mechanisms include manipulation of how the individual perceives the surrounding environment (subtle visual attention guidance), driven by a story generation process which, on the fly and using available real-world and digital objects, gradually leads users of the device towards a better understanding of a phenomenon and/or simply completion of the task at hand.

The aim of this project is to develop a first version of a story generation component based on Large Language Model (LLM) technologies, as used in, for instance, OpenAI’s ChatGPT. The story generation is intended to be performed both a) in order for the system itself to make sense of the current situation, and b) in order to help the user of the system better make sense of the situation through subtle attention guidance (nudging). More concretely:
  1. Can an LLM take in signals from the environment and from cognitive-load measures (such as pupil dilation and EEG) and create a rich story about what is going on?
  2. Based on the above, can an LLM suggest what to do in order to nudge the user towards action XYZ?
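A first step towards both questions is serializing the sensor state into a prompt for the model. The sketch below only builds such a prompt string; the field names and sensor values are invented placeholders, and the actual call to a model API is deliberately left out.

```python
def build_story_prompt(objects, gaze_target, pupil_dilation_mm, eeg_load):
    """Assemble a prompt asking an LLM to narrate the current situation.

    Inputs stand in for real sensor streams (scene recognition, eye
    tracking, EEG); how they are obtained is outside this sketch.
    """
    return (
        "You are the reasoning core of a pair of AR nudging glasses.\n"
        f"Visible objects: {', '.join(objects)}.\n"
        f"User is currently looking at: {gaze_target}.\n"
        f"Pupil dilation: {pupil_dilation_mm:.1f} mm; "
        f"EEG workload index: {eeg_load:.2f}.\n"
        "1) Describe what the user is likely trying to do.\n"
        "2) Suggest one object to subtly highlight next, and why."
    )
```

The two numbered items map onto the two halves of the prompt: the description answers question 1, and the highlight suggestion is the raw material for the nudge in question 2.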

Aim/Goal/Research question

In the developing field of AI-enhanced Augmented Reality (AR) and Virtual Reality (VR), the concept of “nudging” (subtle guidance to influence human cognition and behavior) presents both promise and challenge. Can these methods help us understand and manage complex issues, improving lives or helping us deal with societal problems? Do we really want our perception to be invisibly manipulated in this way? To properly investigate such issues, a range of advanced functionalities must be developed. These include systems that alter how individuals perceive their environment through subtle guidance, and techniques that generate and steer evolving narratives to help users gain a better understanding of complex subjects in real time.

This project aims to investigate these dynamics in the context of immersive VR-based climate data visualization. The objective is to develop a VR component that leverages AI-driven nudging to assist users in navigating and interpreting complex information. A central focus is the creation of narratives within the data: as users navigate the visualization, the system will direct their attention to key elements, forming narratives that enhance understanding. By exploring these issues, the project aims to contribute to ongoing research in AI-enhanced nudging techniques for educational and decision-making applications in AR/VR environments.

Aim/Goal/Research question

The latest developments within HMD-based Augmented Reality and Context-Aware Systems point towards a potential future where people could wear nudging AI+AR glasses that almost invisibly help them perform both professional and everyday tasks. Apart from ethical and privacy challenges (do we really want our perception to be invisibly manipulated in this way?), a number of advanced system functionalities would need to be engineered. Core mechanisms include manipulation of how the individual perceives the surrounding environment (subtle visual attention guidance), driven by a story generation process which, on the fly and using available real-world and digital objects, gradually leads users of the device towards a better understanding of a phenomenon and/or simply completion of the task at hand. An even more fundamental mechanism for these envisioned AI+AR nudging glasses would be a system component which constructs and maintains a model of the user’s immediate surroundings, e.g. determining what physical/digital objects are present in front of the user, which of these are currently attended to, etc.

The aim of this project is to develop a first version of such a component running on a state-of-the-art Augmented Reality headset.

Aim/Goal/Research question

Prototype and investigate how VR and/or AR can be used to orient and introduce citizens to public buildings and services in Trollhättan. This may include exploring use cases, designing user interactions, and work on replicating real environments using, for example, photogrammetry. Questions to explore include what potential users need, want, and/or can make use of, and/or what technical solutions are feasible or suitable in specific contexts.

Aim/Goal/Research question

Develop end-to-end ice-charting software that creates ice-concentration maps from radar images using AI.
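As a minimal illustration of what "ice concentration" means here, the sketch below thresholds a 2D radar backscatter grid into an ice mask and reports the ice fraction per coarse cell. It is a stand-in for the AI model the project would actually train; the threshold and cell size are arbitrary assumptions.

```python
def ice_concentration(backscatter, threshold=0.5, cell=2):
    """Turn a 2D radar backscatter grid into a coarse ice-concentration map.

    Pixels above `threshold` are treated as ice; the concentration of each
    cell x cell block is the fraction of ice pixels in it (0.0 to 1.0),
    mirroring the percentage-concentration cells of an ice chart.
    """
    h, w = len(backscatter), len(backscatter[0])
    out = []
    for r in range(0, h, cell):
        row = []
        for c in range(0, w, cell):
            block = [backscatter[i][j]
                     for i in range(r, min(r + cell, h))
                     for j in range(c, min(c + cell, w))]
            row.append(sum(v > threshold for v in block) / len(block))
        out.append(row)
    return out
```

An AI-based pipeline would replace the fixed threshold with a learned segmentation model, but the output format (a per-cell concentration map) would be the same.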

Aim/Goal/Research question

A comparative review of deep learning algorithms for Synthetic Aperture Radar Automatic Target Recognition (SAR ATR): find limitations and gaps in the existing work, and modify some existing work to address some of those limitations.

Aim/Goal/Research question

Radar data analysis using various statistical methods:
  • Visualisation of the data using PCA, kPCA, UMAP, etc.
  • Design of a classifier based on micro-Doppler and/or PCA features.
  • Design of a deep-learning classifier based on the above features.
  • (Optional) Use of a Kalman filter to track drones.
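The first step above can be sketched with plain PCA via a covariance eigendecomposition, projecting radar feature vectors (e.g. micro-Doppler spectra) to two dimensions for plotting. The synthetic input shape is an assumption; kPCA or UMAP would be drop-in alternatives from scikit-learn or umap-learn.

```python
import numpy as np

def pca_project(features, n_components=2):
    """Project feature vectors onto their top principal components.

    features: (n_samples, n_features) array-like, e.g. per-target
    micro-Doppler feature vectors. Returns PC scores for visualisation.
    """
    X = np.asarray(features, dtype=float)
    Xc = X - X.mean(axis=0)                # center each feature
    cov = Xc.T @ Xc / (len(X) - 1)         # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return Xc @ top                        # scores in PC space
```

The resulting 2D scores could feed directly into the classifier-design steps, either as input features themselves or as a sanity-check visualisation of class separability.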