Seminar Applied Artificial Intelligence


Selected topics from the field of socio-technical knowledge (see the topics of the lecture Collaborative Intelligence). The seminar teaches students how to write and present a scientific paper on a specific topic. Students are also introduced to conducting a literature review of scientific papers. The final presentations take place as a block event. More details will be given in the obligatory introductory meeting.


This seminar is offered to both Bachelor and Master students. Registration via OLAT is required for this seminar; the access code will be given in the introductory meeting. As we provide each student with a topic and a tutor, the number of seminar places is limited according to the number of topics available.


You can find all course materials, news and information in OLAT. The access code for the OLAT course will be given in the introductory meeting.


The seminars of all students take place as a block event at the end of the semester.

There will be a mandatory introductory online meeting via BigBlueButton (BBB), where the course organization and topics of the seminar will be presented, and the OLAT access code will be published:

During the lecture period, students will work on the topics. Discussions with their supervisor will take place individually. We offer a mandatory one-hour course about scientific writing and working with LaTeX.

The paper is to be written in English and should be at least 10 pages long (Bachelor: 8 pages). The presentations, also given in English, take place at the end of the semester and last about 25 minutes each (Bachelor: 20 minutes), including questions. Students must use the provided mandatory LaTeX template for their seminar paper; an optional template is available for the presentation slides.
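For orientation only, a seminar paper skeleton in LaTeX typically looks like the sketch below. This is a generic illustration, not the mandatory template: the actual template provided in OLAT takes precedence, and all section names, package choices, and file names here are assumptions.

```latex
\documentclass[a4paper,11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{graphicx} % for figures
\usepackage{cite}     % for citations

\title{Seminar Topic Title}
\author{Student Name}

\begin{document}
\maketitle

\begin{abstract}
One-paragraph summary of the surveyed literature and the main findings.
\end{abstract}

\section{Introduction}
\section{Related Work}
\section{Comparison and Discussion}
\section{Conclusion}

\bibliographystyle{plain}
\bibliography{references} % entries in a hypothetical references.bib
\end{document}
```

Compiling this with `pdflatex` (plus `bibtex` for the bibliography) produces a basic article-style paper; the OLAT template will define the exact layout and required sections.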


The list of topics, each with a short description and the corresponding supervisor, can be found below. After registering on OLAT, you will be asked to fill in a topic preference survey so that we can match you to your preferred topics. Please wait until you are assigned to your topic before contacting the supervisors.

  • On Pre-Training of Deep Neural Networks for Tabular Data (Dominique Mercier) While pre-training has proven to be powerful in the image and natural language domains, it has been less prominent for tabular data. One reason for this is that every tabular dataset has its own defined structure, making it more difficult to pre-train networks using a different dataset. However, self-supervised learning in particular has been shown to bridge this gap by training the network on the target dataset directly. The goal of this seminar is to present some of the existing pre-training approaches for deep learning on tabular data and compare them.
  • The Current State of ROI Segmentation for Acne & Eczema Images (Adriano Lucieri) In many real-world imaging applications, the actually relevant image areas are very small compared to the overall raw image sizes. Region of Interest (ROI) segmentation is a very important first step to ensure the quality and performance of the downstream processing pipeline. In contrast to most medical imaging domains such as MRI, X-ray, and histology, clinical skin photography is the most accessible, but also the least standardised imaging domain. This makes it particularly important to apply good ROI segmentation before applying the downstream Deep Learning tasks. The goal of this seminar is to investigate and compare the current state-of-the-art methods for ROI segmentation of clinical eczema and acne images.
  • Diffusion Models for Time Series Data Generation (Philipp Engler) Diffusion models have become the go-to method for generative tasks. Popular image generation methods like DALL·E 2 and Stable Diffusion are based on this approach. Training of diffusion models is much more stable than with generative adversarial networks, and diffusion models do not suffer from mode collapse like GANs do. We are interested in diffusion models in the time series domain. Time series data has some special characteristics, like trends, periodic patterns, temporal relations, and possibly variable length or missing data, which models need to be able to deal with. The goal of the seminar is therefore to find and compare recent diffusion-based methods for time series generation. They shall be compared regarding the formulation of the diffusion models, the architectures used, and possibly the use of conditioning to steer the data generation.
  • Synthetic data generation with Large Language Models (LLMs) (Johannes Ruf) In recent times, LLMs such as Llama 2, Falcon-180B, and GPT-4 have demonstrated immense potential in natural language processing. However, they also exhibit notable shortcomings, including hallucinations and the production of biased content. A majority of these issues stem from the flawed data on which the LLMs are trained, leading to the "garbage in, garbage out" phenomenon. Typically, these models are trained on vast amounts of web data, which undergo only basic, automated filtering. Additionally, the finetuning of these models, especially with the use of RLHF, relies heavily on human crowdworkers, making the process both costly and prone to inaccuracies. A promising solution is synthetic data generation, which offers cleaner, controlled datasets. This seminar aims to explore pioneering research on generating and utilizing synthetic data for and with LLMs, shedding light on its potential to redefine the landscape of natural language processing.
  • Diffusion Models for Document Analysis (Saifullah Saifullah) Diffusion models have recently gained attention in the field of document synthesis, binarization, and generative tasks. However, numerous recent applications of these models can still be extended to the domain of documents. The objective of this seminar is to identify the latest research across various facets of document analysis that have harnessed diffusion models, conduct a comprehensive comparison across these areas, and pinpoint those domains where significant enhancements through diffusion are still achievable. In summary, the seminar aims to provide an overview of the most recent applications of diffusion models in document analysis and propose potential avenues for further research.
  • Deep learning-based urban flood prediction (Dinesh Natarajan) Climate change is causing significant changes to the Earth's climate patterns, leading to intense and frequent precipitation events that increase the risk of extreme weather events such as flash floods. Flash floods are especially dangerous because they occur with little to no warning, leaving limited time for safety precautions and emergency response. In the field of hydrodynamics, conventional flood simulators are used to estimate the impact of floods, but they are often slow and computationally expensive. To enable faster and more extensive flood prediction, deep learning methods have been used to replace the conventional flood simulators. These flood prediction models take as input data about rainfall events, the topography of the area, soil type, and other factors that affect how water flows through the landscape, and predict the depth and velocity of the flood water caused by the rainfall. Such models help predict the impact of flood water during extreme rainfall events in areas at risk of flash floods. In this seminar, the goal is to extensively study the various state-of-the-art deep learning architectures used for urban flood prediction with limited available data, and their adaptability to new regions.
  • Survey on dense text-to-image generation (Stanislav Frolov) Despite astonishing progress, generating realistic images of complex scenes remains a challenging problem. Recently, layout-to-image synthesis approaches have attracted much interest by conditioning the generator on a list of bounding boxes and corresponding class labels. However, previous approaches are very restrictive because the set of labels is fixed a priori. Meanwhile, text-to-image synthesis methods have substantially improved and provide a flexible way for conditional image generation. In this work, recent dense text-to-image (DT2I) synthesis models should be surveyed and their challenges/limitations discussed.
  • Spatially Explicit Machine Learning (Francisco Mena)
  • Intelligent Tutoring Systems (Jayasankar Santhosh, pre-assigned) Intelligent tutoring systems (ITSs) are a class of advanced learning technologies that provide students with step-by-step guidance during complex problem-solving practice and other learning activities. Many comprehensive reviews have suggested that ITSs can improve student learning when compared to alternative learning technologies or conventional classroom instruction. The use of data analytics, cognitive modeling, and various instructional strategies distinguishes ITSs from traditional educational approaches, making them a promising tool for enhancing the effectiveness of learning experiences across diverse subjects and levels of education. The primary objective of this seminar is to explore present trends, progress, limitations, and challenges associated with the development of ITSs in contrast to traditional teaching methods.
  • Super-Resolution Microscopy Image Reconstruction (Nabeel Khalid) Super-resolution microscopy image reconstruction is a technique used to enhance the spatial resolution of microscopic images beyond the diffraction limit of light. The diffraction limit is a fundamental constraint in optical microscopy, which limits the ability to distinguish fine details in closely spaced objects. Super-resolution microscopy overcomes this limit and enables researchers to visualize biological structures and processes at a higher level of detail. In this seminar, the student will review state-of-the-art methods for enhancing the spatial resolution of microscopic images using computational techniques and discuss the challenges and applications in biological and materials science.
  • A Comprehensive Review of Knowledge Incorporation Techniques for Improved Cell Segmentation in Microscopic Images (Nabeel Khalid) Knowledge incorporation refers to the process of integrating prior knowledge, domain expertise, or contextual information into a computational system, algorithm, or decision-making process to improve its performance, accuracy, or relevance. This literature review will provide an in-depth examination of various techniques and approaches that incorporate prior knowledge, domain expertise, and contextual information to enhance the accuracy and reliability of cell segmentation in microscopic images. The review will encompass a wide range of knowledge incorporation methods and their applications in the field of cell biology and medical imaging.
  • Beyond Square Patches: Adaptive token generation for vision transformers (Tobias Nauen) Transformers constitute the state of the art in NLP and CV. Unlike NLP, where tokens are generated based on word structure, the standard approach in CV is to divide the image into a raster of square patches, regardless of its content. This procedure allocates computational power to different parts of the image based on their size, not their importance to the prediction. Furthermore, it could make it harder for the model to learn the semantic content of the image. In this seminar, we are interested in the ongoing research on dynamic, content-based tokenization of images to make learning easier and to better distribute computational resources. The student will investigate different approaches to dynamically tokenizing images and evaluate their effectiveness based on model performance.
  • A systematic review of AI-based predictive tools for enhancer RNA (eRNA) prediction (Ahtisham Fazeel Abbasi) Utilising artificial intelligence (AI) methodologies is a transformative approach to improving the identification and understanding of the functional roles of enhancer RNAs (eRNAs), which have become important factors in gene regulation. Various AI techniques have been used to identify eRNAs and decipher the intricate workings of the associated regulatory networks. This topic aims to provide a thorough examination of the methodologies, difficulties, and potential applications associated with AI-driven enhancer RNA prediction. In particular, in the context of advancing the understanding of diseases, it aims to provide an in-depth perspective on the current landscape of genomic research on eRNAs and its broad implications.
  • A survey of anonymization techniques for large scale natural image datasets (Abdul Hannan Khan) With the increase in demand for public data for training DNNs, the concern about protecting identities in natural images has intensified. Natural image datasets contain a large number of human faces, and it becomes hard to obtain consent from everyone who is clearly identifiable in the images. Various anonymization techniques exist to address this issue, including techniques based on a camera only and on a camera plus an additional sensor such as IR. The goal of this seminar is to summarize image anonymization techniques and developments in the field over time.
  • A survey of uncertainty estimation in remote sensing regression tasks (Miro Miranda Lorenz)Remote sensing has revolutionized the way we collect and analyze geospatial data, enabling a wide range of applications, including land cover classification, vegetation monitoring, and climate change analysis. In regression tasks, remote sensing data is used to predict continuous variables such as temperature, soil moisture, or agricultural yield values. However, accurate predictions alone are insufficient; understanding the uncertainty associated with these predictions is equally crucial for informed decision-making and robust model deployment. This seminar aims to investigate methods and techniques for uncertainty estimation in remote sensing regression tasks. The goal will be to review and categorize existing uncertainty estimation methods applicable to remote sensing regression tasks.
  • A review on generalist models in computer vision (Duway Nicolas Lesmes Leon) Foundation models (FMs) are currently the well-established approach to developing machine learning models. FMs are trained on large amounts of data to capture features and understand the data domain at a general level. The main goal is to use FMs as the basis for downstream models that solve specific tasks. More recently, researchers have started to focus their efforts on developing so-called generalist models (GMs), which, unlike FMs, are trained to handle several specific tasks without any further process such as fine-tuning. This seminar will focus on reviewing current approaches to generalist modelling in computer vision. The objective of this work is to list, describe, and compare the foundations of the selected GMs.
  • GNN-based Information Retrieval (Mahta Bakhshizadeh) Graph Neural Networks (GNNs) have recently shown great potential in representing graph-structured data, especially for ML/DL tasks. Hence, they have been utilized in various fields such as bioinformatics (discovering new antibiotics), social networks, etc. One of the recent innovative applications of GNNs is information/document retrieval using the semantic relationships between concepts, which can be seen as graph data. This seminar topic aims to explore recent approaches towards GNN-based Information Retrieval.
  • Enhancing Transparency in XAD (Explainable Anomaly Detection) (Ensiye Tahaei) "Enhancing Transparency in XAD" navigates the fascinating intersection of Explainable Anomaly Detection and natural language interpretations of Deep Neural Network (DNN) decisions within time-series data. This seminar underscores the pivotal role of accessible, natural language explanations in demystifying the complex decision-making processes of DNNs. Venturing through innovative approaches, we aim to spark a comprehensive study, highlighting the challenges and triumphs in marrying technological adeptness with clear, interpretable communication in anomaly detection systems.
  • Image Super-Resolution for Medical Imaging (Brian Moser) Image Super-Resolution is the task of enhancing low-resolution images into high-resolution images. While the formulation of the task is straightforward, the problem is inherently ill-posed: many high-resolution images can be valid for any given low-resolution image. In this seminar, we will explore the progress of image Super-Resolution in a specific domain: medical imaging and its applications.
  • Satellite Image Super-Resolution (Brian Moser) Image Super-Resolution is the task of enhancing low-resolution images into high-resolution images. The goal of this survey is to explore the domain of aerial data for image super-resolution (SR). The task is to identify typical datasets, data structures (multi-spectral images), and the standard training pipeline, and to highlight the differences to classical single-image or multi-image SR. Next, various approaches and architectures should be explored to identify the capabilities of state-of-the-art methods, with their strengths and weaknesses, and to find interesting research avenues for future work, which can form an opportunity to work further on this topic.
  • Weakly Supervised 3D Instance Segmentation (Fabian Schmeisser) The segmentation of object instances in three-dimensional images is a topic that finds application in a wide variety of tasks, such as automated driving, medical imaging, etc. Most 3D instance segmentation methods rely on accurate ground truth masks, which are especially time- and resource-consuming to produce for three-dimensional images. Here, weak supervision methods prove to be extremely valuable as they can substantially reduce the painstaking process of manually annotating full ground truth masks. The aim of this seminar topic is to create a survey of methods which use weak ground truth to produce full 3D instance segmentation masks, preferably for domains where images contain multiple smaller objects.
  • Weakly Supervised 3D Cell Tracking (Fabian Schmeisser) Tracking the paths and behaviors of cells in microscopic images is an essential component in pharmaceutical and biomedical research, helping researchers to develop treatments for a wide variety of diseases. Modern microscopic imaging methods are capable of producing more high-quality data than is feasible for human experts to analyze manually, and pseudo-3D images generated with Z-stack acquisition are especially tiresome to analyze manually. Weak ground truth provides an additional way of reducing the time and resources necessary to compile datasets that can be used to train Deep Learning methods that automate the tracking of individual cells. The aim of this topic is to compile a survey of modern, state-of-the-art approaches for tracking individual cell instances in 3D images, using fully 3D or pseudo-3D (2.5D) deep learning methods employing partial or weak ground truth.
  • Finetuning Methods for LLMs (Pervaiz Khan) Large Language Models (LLMs) have shown success in various NLP tasks such as text generation, text summarization, etc. However, their performance is sometimes sub-optimal on new data, so one needs to finetune them on that data. Finetuning LLMs differs from traditional finetuning methods, as LLMs require huge computational resources. Several methods exist in the literature to reduce the computational cost associated with finetuning LLMs. The aim of this topic is to study the various finetuning methods for LLMs and the pros and cons of each.
  • Survey on LLMs (Pervaiz Khan) Large Language Models (LLMs) are a type of language model that has achieved general-purpose language understanding and generation. They have achieved this ability by using large amounts of training data and billions of parameters. The aim of this seminar topic is to study the latest LLMs, their architectural details and training methods, and to compare some of them.
  • Peptide classification methods (Muhammad Nabeel Asim) Following the effectiveness of peptides in both the pharmaceutical and cosmeceutical industries, researchers are classifying peptides into different classes using various criteria to explore their potential functional horizons more deeply. The prime objective of this survey is to summarize the diverse types of machine and deep learning methods designed for peptide classification.
  • Deep learning for Requirement Engineering (Summra Saleem, pre-assigned) Following the success of artificial intelligence approaches in diverse application areas (energy, NLP, and bioinformatics), the software development industry is trying to utilize the power of deep learning methods to develop more accurate and reliable software. Specifically, requirement engineering through deep learning based approaches is an active area of research. The prime objective of this seminar is to explore deep learning approaches to empower the process of requirement analysis.
  • Use case study of HMDs from application until analysis (Ko Watanabe) Head-mounted displays (HMDs) are one of the innovations of the current world. They make it possible to visualize things that do not physically exist, supporting further learning. In physics, for example, people use HMDs to display lasers so that their path or frequency can be understood visually. Your task is to summarize work that has been done using HMDs: what kinds of applications exist in research, how their systems are analyzed, and what open tasks remain.
  • Representation of uncertainties arising from insufficient or unknown data quality (Christoph Balada) Graph Neural Networks (GNNs) are a comparatively new research direction in the field of Machine Learning. Unlike normal neural networks, they make use of the underlying structure of the data. Whether it be navigation, social networks, financial networks, smart grids, or chemical molecules, Graph Neural Networks show promising results. However, GNNs still suffer from different shortcomings, and two major research trends are trying to overcome these. The more modern of the two approaches is networks without message passing. Your task is to summarize and briefly describe the various trends in message-passing-free networks.
  • Exploring Fairness Criteria in Algorithmic Decisions and Their Applicability in a Smart City Context (Julia Mayer, pre-assigned) In the ever-evolving landscape of Smart Cities, algorithmic decisions play a crucial role in shaping urban environments. However, the potential biases and fairness concerns embedded in these algorithms demand careful scrutiny. This seminar delves into the investigation of fairness criteria in algorithmic decisions, specifically examining their relevance and applicability within the unique dynamics of a Smart City.
  • Automated Knowledge Graph Generation using Large Language Models (Marc Gänsler) Knowledge graphs have emerged as powerful tools for organizing and representing structured information in a way that both humans and machines can understand. They provide a valuable foundation for numerous applications, including semantic search, recommendation systems, and question-answering platforms. The interconnected nodes and edges of knowledge graphs enable a comprehensive view of relationships between entities. However, the process of creating knowledge graphs is a labor-intensive task that requires significant human effort and domain expertise. Large Language Models can play a crucial role here. They possess a profound understanding of linguistic taxonomy, ontological hierarchies, word embeddings, and contextual relationships between topics. Leveraging these models in the automated generation of knowledge graphs holds the promise of efficiently converting unstructured text into structured knowledge representations. The goal here is to conduct a comprehensive literature review about automated knowledge graph generation using Large Language Models. The review should encompass existing methodologies, tools, and research findings, as well as analyze the opportunities, challenges, and potential applications in this evolving field.

NOTE: Topics marked as pre-assigned have already been assigned to students working with their supervisors. These topics will not be part of the topic preference survey.