Data Compression of Multi-Modal Sensor Data Depending on Power and Time Consumption

Topic Area

As part of a research project, the IAS is developing a sensor system for a clamping tool in order to implement predictive maintenance based on the recorded data. These clamping tools have a highly complex structure and consist of many individual components that together ensure that the clamped element is firmly seated in the lathe. One essential group of components is the collets, which clamp the component in the axial direction. The aim of this work is to record the movement patterns of the clamps and to transmit the data in an optimised way so that a failure of the clamps can be detected from this data.

Task

Optimising the recording and transmission of this data poses a complex problem with diametrically opposed characteristics:
1. The energy efficiency of the data transmission, bearing in mind that the microcontroller itself requires approximately three times as much energy.
2. The energy and computing time required to compress the data; note that longer computing times do not automatically translate into energy savings.
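As a rough illustration of this trade-off, the following sketch compares the energy needed to transmit a raw sensor block against compressing it first. All energy figures, compression ratios and timings are hypothetical placeholders; the real values would have to be measured on the target microcontroller and radio module.

```python
# Back-of-the-envelope comparison of "send raw" vs. "compress, then send".
# All parameter values are hypothetical placeholders; the real figures must
# be measured on the target microcontroller and radio module.

E_TX_PER_BYTE = 2.0e-6   # energy per transmitted byte in J (assumed)
P_CPU = 30.0e-3          # CPU power while compressing in W (assumed)


def energy_raw(n_bytes: int) -> float:
    """Energy to transmit a raw sensor block."""
    return n_bytes * E_TX_PER_BYTE


def energy_compressed(n_bytes: int, ratio: float, t_compress: float) -> float:
    """Energy to compress for t_compress seconds and transmit the smaller block."""
    return P_CPU * t_compress + (n_bytes / ratio) * E_TX_PER_BYTE


if __name__ == "__main__":
    block = 4096  # bytes of multi-modal sensor data per transmission (assumed)
    for ratio, t in [(2.0, 0.05), (4.0, 0.20), (8.0, 0.80)]:
        print(f"ratio {ratio:4.1f}, {t * 1000:4.0f} ms CPU: "
              f"raw {energy_raw(block) * 1e3:.2f} mJ vs. "
              f"compressed {energy_compressed(block, ratio, t) * 1e3:.2f} mJ")
```

With these assumed numbers, fast moderate compression saves energy, while the slowest, most aggressive setting spends more energy on computation than it saves in transmission, mirroring the point that longer computing times do not automatically pay off.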

Knowledge

Robust knowledge in:
- Computer science
- Microcontroller programming
- Digital signal processing

Contact Person

Sebastian Baum

Style Transfer between Images for Training Robust Image Classification Models

Topic Area

State-of-the-art image models like Deep Convolutional Neural Networks exceed human performance in tasks such as image classification. However, their performance is more sensitive to changes in the image data, even if those changes are small. This is why robustness, the ability of a model to maintain its performance when confronted with corrupted data, is a popular and well-recognized research direction. Data augmentation techniques are among the most effective methods for training robust models. They add to the variety of the training data, in many cases increasing robustness against common random corruptions along the way. However, the types of data augmentations available are limited. Recent studies have looked at exchanging the textures within training images altogether in order to make models learn more general features within images. This method of so-called style transfer has proven effective in improving robustness. However, as ever more powerful transfer models are being developed and the potential data variety from style and texture exchanges is endless, there are open research opportunities to explore in style transfer for training data augmentation.

Task

The task of this Master Thesis is to carry out applied research on finding and developing effective style transfer approaches. The goal is to evaluate these approaches with respect to the robustness they provide when used for training data augmentation of image classifiers. The following tasks are to be carried out:
- Literature research on style or texture transfer between images for Machine Learning tasks
- Literature and software repository research on style or texture transfer models
- Integration of promising style transfer approaches into an existing image classification training pipeline (a possible integration is sketched below)
- Development of methods for style transfer in training data, both between training images and from external datasets
- Training of image classification models in Pytorch with and without style transfer data augmentation
- Comparison of style transfer approaches among each other and against the state of the art with regard to common corruption robustness
- Evaluation and discussion of the methods' effectiveness with regard to accuracy, robustness and computational effort
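As a minimal sketch of such an integration, a pretrained style-transfer network could be wrapped as a training-time transform. The interface styled = style_model(content, style), the style image bank and the application probability are assumptions for illustration, not a prescribed design.

```python
import random

import torch
from torch import nn


class StyleTransferAugment(nn.Module):
    """Wraps a pretrained style-transfer network as a training-time
    augmentation. Assumed interface: styled = style_model(content, style)."""

    def __init__(self, style_model: nn.Module, style_bank: torch.Tensor, p: float = 0.5):
        super().__init__()
        self.style_model = style_model.eval()   # keep the transfer net frozen
        self.style_bank = style_bank            # style images, shape [S, C, H, W]
        self.p = p                              # probability of applying the transfer

    @torch.no_grad()
    def forward(self, img: torch.Tensor) -> torch.Tensor:
        if random.random() > self.p:
            return img                          # keep the original sample unchanged
        style = self.style_bank[random.randrange(len(self.style_bank))]
        styled = self.style_model(img.unsqueeze(0), style.unsqueeze(0))
        return styled.squeeze(0)
```

Applying the transfer only with probability p keeps a share of clean samples in every epoch, which is commonly done to preserve clean accuracy alongside robustness.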

Knowledge

- Python
- Principles of data loading / processing in Machine Learning (esp. Pytorch) pipelines

Contact Person

Georg Siedel

RoboDog Fetch – Detection and Retrieving

Topic Area

This project features a robotic dog (UniTree Go1 Edu) capable of playing fetch, turning a classic game into an engaging learning experience about robotics. Designed for demonstrations at events like Girls' Day and tryScience, our initiative aims to spark students' interest in technology. By merging engineering with programming, the RoboDog project aims to develop a robotic demonstrator capable of throwing and retrieving a tennis ball, showcasing the integration of AI and robotics in playful, interactive tasks. This functionality demonstrates the practical application of robotics in everyday activities and encourages students to think creatively about the future of technology and their potential role in it. Through this project, we hope to inspire a new generation of robot enthusiasts.

Task

This thesis focuses on three elements of the RoboDog Fetch project: detection of the ball, as well as gripping and returning it.
1. Detect a nearby tennis ball using computer vision algorithms.
2. Pick up the tennis ball from different locations.
3. Return the ball to a human.
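For step 1, a simple colour-based detector is one possible starting point. The sketch below uses OpenCV; the HSV thresholds are rough assumptions that would have to be tuned for the Go1's cameras and lighting, and in the final system the frames would typically arrive via a ROS image topic.

```python
import cv2
import numpy as np

# Rough HSV range for the yellow-green of a tennis ball (assumed values,
# to be tuned for the actual camera and lighting conditions).
LOWER = np.array([25, 80, 80])
UPPER = np.array([45, 255, 255])


def detect_ball(frame_bgr):
    """Return (x, y, radius) of the most likely tennis ball in the frame, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # Remove small speckles before looking for contours.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    return (int(x), int(y), int(radius)) if radius > 5 else None
```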

Knowledge

ROS, Computer Vision.

Contact Person

Joachim Grimstad

Dealing with Different Data Structures for AI Applications

Topic Area

As part of the SPP2422 "Data-driven process modelling in forming technology", the IAS has been assigned the task of estimating the deformation of three-dimensional objects using artificial intelligence. Specifically, this involves deep-drawn sheet metal that takes the shape of a can (see Figure 1, left). After the deep-drawing process and the removal of excess material, the mechanical stress in the object leads to elastic springback, which causes the component to deform. As part of the project, the IAS intends to learn and estimate this springback using machine learning methods.

Task

One challenge is that, as in many applications, the first training phase is carried out using synthetic data and the second with real data. These data structures differ, which is why the following questions need to be analysed (option 2 is sketched below):
1. Should the synthetic data be aligned with the real data so that the model does not have to adapt?
2. Should both data sets be adapted, using preprocessing to generate a common type of data on which to train the model?
3. Does it make sense to choose a model architecture that can process both data structures?
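To make option 2 concrete, the following sketch maps both synthetic and real samples to a common point-based representation before training. The field names, shapes and resampling strategy are assumptions for illustration only.

```python
import numpy as np
import torch
from torch.utils.data import Dataset


class CommonFormatDataset(Dataset):
    """Sketch of option 2: both synthetic (simulation) and real (measured)
    samples are resampled to a fixed number of points and normalised, so
    that a single model can train on either source. Field names and shapes
    are assumptions for illustration."""

    def __init__(self, samples, n_points: int = 2048):
        # Each sample: {"points": (N, 3) array, "springback": (N, 3) array}
        self.samples = samples
        self.n_points = n_points

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        s = self.samples[idx]
        pts, target = s["points"], s["springback"]
        # Resample to a fixed size regardless of the original resolution.
        choice = np.random.choice(len(pts), self.n_points, replace=len(pts) < self.n_points)
        pts, target = pts[choice], target[choice]
        # Zero-mean, unit-variance normalisation of the geometry.
        pts = (pts - pts.mean(axis=0)) / (pts.std(axis=0) + 1e-8)
        return torch.from_numpy(pts).float(), torch.from_numpy(target).float()
```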

Knowledge

Robust knowledge in:
- Python (Pytorch, Jax, Numpy)
- Artificial intelligence (deep learning)
- Mathematics

Contact Person

Sebastian Baum

Conceptualization of a reinforcement learning approach for human-assisted root cause analysis in software-defined systems

Topic Area

Modern system development is characterized by increased customer requirements and greater market and time pressure. The required innovations are created on the one hand by a higher proportion of software in products and on the other hand by networking more and more previously independent systems, resulting in heterogeneous and therefore more complex IT structures overall. This is also reflected in the automotive industry, where new business models are being developed around software-defined vehicles. Modern E/E architectures enable the vehicle to communicate with its environment and to collect data during operation, which manufacturers can then use to improve driving or comfort services. To realize such a data loop, automated analysis of the software in operation is key. A central challenge is to link the events occurring in the system in order to determine the cause of any errors. Conventional approaches fail to take into account temporal behavior as well as contextual information that could be provided by a system engineer. Therefore, a reinforcement learning approach is to be developed in this thesis that can incorporate the system engineer's knowledge as well as information about system updates into the automated linking of events.
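Purely as an illustration of how engineer feedback could enter such an approach, the sketch below uses a tabular Q-learning update in which the engineer's confirmation or rejection of a proposed event link serves as the reward. The state and action encodings, hyperparameters and feedback values are assumptions, not part of the project specification.

```python
import random
from collections import defaultdict

# Minimal sketch: the agent proposes causal links between system events and
# receives the engineer's feedback as reward. Encodings are placeholders.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
q_table = defaultdict(float)   # (state, link) -> estimated value


def choose_link(state, candidate_links):
    """Epsilon-greedy choice of the next event link to propose."""
    if random.random() < EPSILON:
        return random.choice(candidate_links)
    return max(candidate_links, key=lambda a: q_table[(state, a)])


def update(state, link, next_state, engineer_feedback: float, next_candidates):
    """engineer_feedback: e.g. +1 if the engineer confirms the proposed link,
    -1 if it is rejected, 0 if no feedback is given."""
    best_next = max((q_table[(next_state, a)] for a in next_candidates), default=0.0)
    td_target = engineer_feedback + GAMMA * best_next
    q_table[(state, link)] += ALPHA * (td_target - q_table[(state, link)])
```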

Task

- Analysis of existing approaches for root cause analysis
- Development of a custom approach in the context of a software-defined DevOps environment
- Integration into an analysis platform for distributed cloud systems
- Evaluation of the approach using a dedicated data set and comparison with conventional methods

Knowledge

- Very good conceptual skills
- Prior knowledge of deep learning and Markov chains
- Basic knowledge of software engineering and IT systems
- Good programming skills
- Very good English skills

Contact Person

Matthias Weiss

Conceptualization of a reinforcement learning approach for self-adaptive anomaly detection in software-defined systems

Topic Area

Modern system development is characterized by increased customer requirements and greater market and time pressure. The required innovations are created on the one hand by a higher proportion of software in products and on the other hand by networking more and more previously independent systems, resulting in heterogeneous and therefore more complex IT structures overall. This is also reflected in the automotive industry, where new business models are being developed around software-defined vehicles. Modern E/E architectures enable the vehicle to communicate with its environment and to collect data during operation, which manufacturers can then use to improve driving or comfort services. To realize such a data loop, automated analysis of the software in operation is key. In order to detect changes or emerging errors at an early stage, the incoming data must be continuously analyzed for anomalies. A particular challenge is posed by the high system dynamics, which require the anomaly detection methods to be continuously updated in order to always issue reliable alarms. Since this has so far involved high manual effort, a self-adaptive approach is to be developed within the scope of this work, by means of which suitable anomaly detectors can be selected and configured automatically on the basis of the system properties.
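One conceivable way to frame the automatic selection of detectors, sketched here only for illustration, is a multi-armed bandit in which each arm is a candidate detector configuration and the reward is a proxy for alarm quality. The detector names and the reward signal are assumptions.

```python
import random

# Sketch of detector selection as an epsilon-greedy bandit. Each "arm" is a
# candidate anomaly detector configuration; the reward could be, e.g.,
# validated alarms minus false alarms per data window (assumed signal).

DETECTORS = ["z_score_window_64", "isolation_forest", "lstm_autoencoder"]
counts = {d: 0 for d in DETECTORS}
values = {d: 0.0 for d in DETECTORS}
EPSILON = 0.1


def select_detector():
    """Epsilon-greedy selection of the detector to run on the next data window."""
    if random.random() < EPSILON:
        return random.choice(DETECTORS)
    return max(DETECTORS, key=lambda d: values[d])


def record_reward(detector: str, reward: float):
    """Incremental mean update once the alarm quality of a window is known."""
    counts[detector] += 1
    values[detector] += (reward - values[detector]) / counts[detector]
```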

Task

- Analysis of existing approaches for self-adaptive anomaly detection
- Development of a custom approach in the context of a software-defined DevOps environment
- Integration into an analysis platform for distributed cloud systems
- Evaluation of the approach using a dedicated data set and comparison with conventional methods

Knowledge

- Very good conceptual skills
- Prior knowledge of deep learning and signal processing
- Basic knowledge of software engineering and IT systems
- Programming skills in Python
- Very good English skills

Contact Person

Matthias Weiss

Systematic testing of AI-based systems

Topic Area

Testing of AI-based systems such as autonomous vehicles is challenging due to the vast number of situations and scenarios. Brute force is expensive and leaves gaps, as we see in practice. We therefore use synthetic data for AI-driven testing. This data covers real-world scenarios to train autonomous systems in a simulation-based environment. The training success is evaluated in a data loop and enhanced to close blind spots and unknown knowns. This thesis aims to integrate a requirements and test engine into an automated test system.

Task

The goal of the thesis is to integrate existing parts of the system into a fully running system. The integration comprises verification and validation checks for the existing parts. Professional tools such as DOORS shall be used for industry-scale AI-based testing of autonomous systems.

Knowledge

- Knowledge in Python
- Industry-scale software engineering and tools
- Ability to work independently
- Passion for clean, high-quality code
- Ability to integrate your work with other parts of the system

Contact Person

Christof Ebert

Design and implementation of a software complexity assistant system using Digital Twin

Topic Area

The aim of the project is to identify the different drivers of complexity within this project and to quantify them with appropriate measures. We focus on changes to the software part. Specifically, we want a methodology that predicts the complexity of changing a given software module. In order to assist the management of the Digital Twin, an assistant system is to be created that assesses the software complexity based on the described aspects. The feasibility of the assistant system will be evaluated on the software stack of the Digital Twin. Since we will have five different implementations of the same problem from the lab courses, test data will be available to check the assessment results for plausibility.

Task

The Master Project should first analyze the literature for drivers of complexity and established complexity measurement methods in order to derive a methodology that identifies the complexity drivers and quantifies them (one possible measure is sketched below). Moreover, an assistant system will be developed within the project that assesses the complexity and visualizes the results. The assistant system will be evaluated on different variants of the Digital Twin.
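As a small illustration of what such a quantification could look like, the sketch below estimates cyclomatic complexity per function from branching nodes in the Python AST. The chosen node types, the aggregation and the assumption that the assessed modules are Python sources are illustrative only; the actual complexity drivers and measures are to be derived from the literature.

```python
import ast

# Rough approximation of cyclomatic complexity per function, counted from
# branching nodes in the AST. The set of counted node types is an assumption.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.BoolOp, ast.ExceptHandler)


def cyclomatic_estimate(source: str) -> dict:
    """Return {function_name: estimated complexity} for a Python module."""
    tree = ast.parse(source)
    results = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            results[node.name] = 1 + branches
    return results


if __name__ == "__main__":
    # "digital_twin_module.py" is a hypothetical module of the assessed stack.
    with open("digital_twin_module.py") as f:
        print(cyclomatic_estimate(f.read()))
```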

Knowledge

- Independent, scientific work
- Very good math and programming skills
- Good English skills

Contact Person

Golsa Ghasemi