This catalogue contains the main use cases and applications of the AIMS5.0 project, which provides reusable tools for implementing AI applications. To get started with the AI Toolbox, check the getting-started guide.
Geofencing is the problem of triggering actions when a person or an object enters or leaves a predefined area. To implement a geofencing application, please check the Sentinel Tool.
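The core of any geofencing application is deciding whether a position lies inside the fenced area and firing an action on boundary crossings. The following is a minimal sketch of that logic using a ray-casting point-in-polygon test; the fence coordinates and callback names are illustrative and are not part of the Sentinel Tool's API.

```python
# Minimal geofencing sketch: ray-casting point-in-polygon plus
# enter/leave event detection on a stream of positions.

def inside(point, polygon):
    """Return True if the (x, y) point lies inside the polygon (vertex list)."""
    x, y = point
    hits = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                hits += 1
    return hits % 2 == 1

def watch(positions, fence, on_enter, on_leave):
    """Fire callbacks whenever consecutive positions cross the fence boundary."""
    was_inside = False
    for pos in positions:
        now_inside = inside(pos, fence)
        if now_inside and not was_inside:
            on_enter(pos)
        elif was_inside and not now_inside:
            on_leave(pos)
        was_inside = now_inside

fence = [(0, 0), (4, 0), (4, 4), (0, 4)]    # a square geofence
track = [(-1, 2), (1, 2), (3, 2), (5, 2)]   # object moving left to right
events = []
watch(track, fence,
      on_enter=lambda p: events.append(("enter", p)),
      on_leave=lambda p: events.append(("leave", p)))
print(events)  # [('enter', (1, 2)), ('leave', (5, 2))]
```

In a real deployment the positions would come from a live localization stream rather than a list, but the inside/outside state machine stays the same.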
With the advent of large language models (LLMs), chatbots are capable of imitating human behavior. While LLMs have broad general knowledge, answering questions about specialized knowledge requires adapting the language model. For context-based question answering, a typical method is prompt injection, i.e. retrieval-augmented generation (RAG). To get started with the method, see the LLM Tool and the Context Injection Tool.
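The retrieval-augmented pattern can be sketched in a few lines: rank context passages against the question, then inject the best ones into the prompt sent to the model. The word-overlap scoring below is a deliberate simplification (production systems typically use embedding similarity), and all names are illustrative rather than the Context Injection Tool's actual API; the LLM call itself is omitted.

```python
# Sketch of retrieval-augmented generation (RAG): retrieve relevant
# passages, then inject them into the prompt as context.

def retrieve(question, passages, k=2):
    """Rank passages by word overlap with the question; return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(passages,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question, passages):
    """Inject the retrieved context into the prompt for the language model."""
    context = "\n".join(f"- {p}" for p in retrieve(question, passages))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {question}\nAnswer:")

docs = [
    "The Sentinel Tool implements geofencing triggers.",
    "The LLM Tool wraps large language models for question answering.",
    "The 6D Pose Estimation Tool estimates object pose from camera images.",
]
prompt = build_prompt("Which tool implements geofencing?", docs)
print(prompt)
```

The point of the pattern is that the model answers from injected context instead of relying on fine-tuned weights, which keeps the specialized knowledge easy to update.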
To determine an object's pose in the camera frame of a robotic arm for various robotic problems, check the 6D Pose Estimation Tool.
This use case focuses on leveraging artificial intelligence to improve both product development and manufacturing process optimization. One core aspect of the use case is the generation of human-like robot motion for automating user trials of personal care devices. Instead of relying exclusively on large-scale human testing to collect sensor data, the approach explores the use of generative AI combined with symbolic AI methods to synthesize realistic movement patterns. A central research objective is to define and evaluate what constitutes “human-like” motion and to determine whether hybrid AI techniques can reliably replicate natural user behavior for validation and algorithm development purposes.
The second major component of UC02 addresses manufacturing process intelligence, particularly the identification of defect-prone production paths. In complex manufacturing environments where products can follow multiple processing routes, certain paths may introduce higher defect rates. The use case applies process mining to historical production data in order to construct enriched process models that incorporate resource usage and defect information. Building on these structured models, machine learning techniques are envisioned to support root cause analysis and to predict which paths are likely to produce defects, thereby enabling proactive interventions before quality issues escalate.
From an AI applicability standpoint, UC02 exemplifies a hybrid AI paradigm that integrates generative modeling, symbolic reasoning, and data-driven analytics. AI is used not only for synthetic data generation and robotic motion modeling but also for predictive analysis and operational optimization in manufacturing systems. The use case demonstrates how AI can reduce experimental costs, accelerate development cycles, enhance product quality, and support more informed decision-making across the industrial value chain.
UC03 focuses on bringing AI directly to industrial edge environments for real-time machine monitoring and predictive maintenance. The main purpose is to avoid the limitations of cloud-only analytics, especially when low latency, limited bandwidth, intermittent connectivity, and data privacy constraints make centralized processing impractical. In this use case, AI is embedded into edge devices so that sensor data can be processed locally and faults, anomalies, or performance degradation can be detected early. This supports more autonomous operation and improves resilience in manufacturing, energy, and transportation contexts.
The AI usage in UC03 is based on lightweight and interpretable methods that are suitable for constrained industrial hardware. The document mentions decision trees for real-time classification of operational states and anomaly detection from multivariate sensor data. It also includes rule-based models for threshold monitoring, alerting, and expert-style decision support. In addition, TinyML neural networks are used for pattern recognition and anomaly detection on time-series data, while incremental learning techniques allow the models to adapt continuously as machine behaviour changes over time.
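The rule-based threshold monitoring mentioned above is well suited to constrained edge hardware because it is cheap to evaluate and fully interpretable. A minimal sketch of such a monitor follows; the rule set, channel names, and limits are invented for illustration and do not come from the Stream Analyse platform.

```python
# Minimal rule-based threshold monitor for multivariate sensor samples,
# as might run on an edge device. Rules map a sensor channel to limits
# and an alert message.

RULES = [
    {"channel": "vibration_rms", "max": 4.5, "alert": "excess vibration"},
    {"channel": "bearing_temp_c", "max": 80.0, "alert": "bearing overheating"},
    {"channel": "oil_pressure_bar", "min": 1.2, "alert": "low oil pressure"},
]

def check(sample):
    """Return the list of alerts triggered by one multivariate sensor sample."""
    alerts = []
    for rule in RULES:
        value = sample.get(rule["channel"])
        if value is None:
            continue  # channel not present in this sample
        if "max" in rule and value > rule["max"]:
            alerts.append(rule["alert"])
        if "min" in rule and value < rule["min"]:
            alerts.append(rule["alert"])
    return alerts

sample = {"vibration_rms": 5.1, "bearing_temp_c": 72.0, "oil_pressure_bar": 0.9}
print(check(sample))  # ['excess vibration', 'low oil pressure']
```

In practice such rules sit alongside the decision-tree and TinyML models, providing a transparent first line of alerting that domain experts can audit directly.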
The main AI tools in this use case are the Stream Analyse platform and SA Studio. Stream Analyse provides the environment for deploying advanced analytics and machine learning models directly on industrial edge devices such as PLCs, embedded systems, and gateways. SA Studio is used for creating and configuring the model pipelines. Altogether, the use case combines edge AI deployment, efficient model design, and practical industrial tool support to enable local, robust, and production-oriented intelligent monitoring.

UC05 focuses on automatic tool breakage detection during precision machining of large parts. The AI usage is centered on analyzing high-frequency vibration signals from the machining process in order to recognize repetitive operational patterns, detect anomalies, and determine whether the tool is broken or not. The goal is to support real industrial machining environments with an AI-based monitoring solution that can reduce manual inspection effort and enable faster detection of failures during production.
The main algorithms used in UC05 combine signal processing, supervised learning, and unsupervised pattern discovery. Harmonics analysis via FFT is used to extract features from vibration signals, where the number and structure of peaks help distinguish normal and broken tool conditions. On top of these FFT-derived features, several supervised binary classification algorithms are applied, including Random Forest, K-Nearest Neighbors, Logistic Regression, and Support Vector Machines. In parallel, motif discovery and clustering methods are used in an unsupervised way to segment repetitive machining time series, relying on techniques such as change point detection and distance-based clustering of subsequences.
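The idea that the number of spectral peaks separates normal from broken tool conditions can be illustrated directly. The sketch below uses a plain O(n²) DFT for clarity (a real pipeline would use an FFT library) and feeds synthetic vibration signals through a simple peak counter; signal parameters and the threshold are invented, and in UC05 such peak features would go on to classifiers like Random Forest or SVM rather than being used alone.

```python
# Sketch of the FFT-harmonics feature idea: compute the magnitude
# spectrum of a vibration window and count dominant peaks.

import math

def dft_magnitudes(signal):
    """Magnitude spectrum of a real signal (first half of the DFT bins)."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im) / n)
    return mags

def count_peaks(mags, threshold=0.1):
    """Count bins that exceed a threshold and dominate both neighbours."""
    return sum(1 for i in range(1, len(mags) - 1)
               if mags[i] > threshold
               and mags[i] >= mags[i - 1] and mags[i] >= mags[i + 1])

n = 256
# "Normal" tool: one strong harmonic; "broken" tool: extra harmonics appear.
normal = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]
broken = [math.sin(2 * math.pi * 8 * t / n)
          + 0.6 * math.sin(2 * math.pi * 16 * t / n)
          + 0.5 * math.sin(2 * math.pi * 24 * t / n) for t in range(n)]

print(count_peaks(dft_magnitudes(normal)))  # 1
print(count_peaks(dft_magnitudes(broken)))  # 3
```

The peak count (and peak positions or heights) then becomes a compact feature vector, which is exactly the kind of FFT-derived input the supervised classifiers in this use case consume.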
From a tooling and deployment perspective, the use case also emphasizes grouping repetitive executions based on process metadata such as Tool ID, Program Name, and Execution Block, then integrating the resulting algorithms as Dockerized microservices for flexible deployment. This shows that UC05 is not only about applying AI models, but also about operationalizing them in an industrial setting. Overall, the use case combines classical signal analysis with machine learning and modular software deployment to create a practical AI solution for machining process monitoring.

This use case targets a flexible robotic feeding system that can automatically place randomly distributed bottles of different sizes, weights, and types into conveyor sockets. The AI role is broader here than in the previous cases, because it spans perception, code generation, and motion planning. The overall objective is to make the robotic system modular and reconfigurable, so that it can adapt to new tasks and operate with a high degree of automation in a changing industrial setting.
A major AI component is machine vision. For 6DoF pose estimation, the document describes a hybrid perception method that combines RTDETRx detections, MobileSAM segmentation, RGB images, and 3D point cloud data from an Intel RealSense D415 sensor. Point cloud fitting and fitness-score calculation are then used to evaluate object alignment and support collision-aware assessment. In addition, socket detection is handled with a YOLOv8m neural network trained on labeled conveyor images, allowing the system to distinguish empty and occupied bottle sockets despite strong visual similarity between the conveyor and the bottles. ArUco markers are also used for calibration and reference positioning during evaluation.
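The fitness score used to evaluate object alignment can be understood as the fraction of model points that find a nearby observed point once a candidate pose is applied. The sketch below illustrates that idea with a brute-force nearest-neighbour search and a translation-only pose; the threshold, point sets, and simplifications are mine, not the use case's actual implementation.

```python
# Illustrative point-cloud fitness score: fraction of transformed model
# points matched by a scene point within a distance threshold.

def fitness(model_pts, scene_pts, translation, threshold=0.05):
    """Fraction of transformed model points with a scene point nearby."""
    tx, ty, tz = translation
    matched = 0
    for (x, y, z) in model_pts:
        px, py, pz = x + tx, y + ty, z + tz   # apply candidate pose (translation only)
        d2 = min((px - sx) ** 2 + (py - sy) ** 2 + (pz - sz) ** 2
                 for (sx, sy, sz) in scene_pts)
        if d2 <= threshold ** 2:
            matched += 1
    return matched / len(model_pts)

model = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0), (0.1, 0.1, 0.0)]
scene = [(1.0, 1.0, 0.0), (1.1, 1.0, 0.0), (1.0, 1.1, 0.0)]

good_pose = (1.0, 1.0, 0.0)   # aligns 3 of the 4 model points
bad_pose = (0.0, 0.0, 0.0)    # leaves the model far from the scene
print(fitness(model, scene, good_pose))  # 0.75
print(fitness(model, scene, bad_pose))   # 0.0
```

A high fitness indicates a plausible pose hypothesis; in the use case this score feeds the collision-aware assessment of candidate object alignments.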
Another important AI direction in this use case is code synthesis for robot task specification. The project explores iterative prompt-engineering with large language models and code-generation models, as well as diffusion-based code generation trained from scratch. The purpose is to let higher-level robot control policies be specified with less manual effort, improving reconfigurability for new robotic tasks. The generated code can be tested in simulation before deployment, which helps manage hallucinations and constraint violations from generative models.
For robot motion planning, the use case evaluates both differentiable physics and reinforcement learning. The differentiable physics approach is implemented with the Nimble Physics simulator and is described as producing smooth, fast, and accurate trajectories for the bottle insertion task with very low computation per action. This is being integrated with pose estimation, code synthesis, and robot control into a closed-loop system. In parallel, reinforcement learning is explored for the same task, with Proximal Policy Optimization as the main algorithm, tested in NVIDIA Isaac Gym and also in Nimble Physics for comparison. Overall, the AI tools and algorithms in this use case include RTDETRx, MobileSAM, YOLOv8m, LLM-based code generation, diffusion models for code synthesis, Nimble Physics, NVIDIA Isaac Gym, PPO, ROS-based control integration, and domain randomization for sim-to-real transfer.
Here, the AI is used in two complementary parts of luminaire manufacturing: understanding manual assembly work and optimizing automated 3D printing operations. On the assembly side, the goal is to observe workstation video streams, identify the individual assembly steps, measure their duration, and detect bottlenecks, deviations, or anomalies in the workflow. This turns manual process observation into a data-driven activity that can support process optimization, quality improvement, better planning, and potentially even product redesign for easier assembly.
The main AI method for assembly tracking is Human Activity Recognition applied to video data. The use case specifically employs VideoMAE, a transformer-based masked autoencoder model for video, which is first pre-trained and then fine-tuned on a custom annotated dataset collected in the Signify factory. The resulting model provides a temporal breakdown of the assembly process into recognized steps and durations. The workflow is also supported by AI-assisted annotation, where initial models help pre-label larger datasets before human review, reducing the manual effort needed to build industrial training data.
A second AI component addresses the scheduling of 3D printing for luminaire components in a make-to-order setting. The objective is to minimize filament waste while also reducing tardiness in order fulfillment. For reel utilization, the problem is formulated as a bin-packing-like decision problem, and the document applies a hybrid approach that combines reinforcement learning with heuristic rules. The RL agent learns from historical order distributions how to assign print jobs to partially used filament reels, while heuristics decide when a new reel must be introduced. For machine scheduling, the planned methods include heuristics, priority rules such as Earliest Due Date, and potentially additional RL-based optimization for printer allocation.
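The heuristic side of this scheduler can be sketched compactly: sequence jobs by Earliest Due Date, then assign each job's filament demand to the first partially used reel that can still cover it, opening a new reel only when none fits. The RL component that learns reel assignment from historical order distributions is omitted here, and all job names and quantities are invented.

```python
# Sketch of an EDD + first-fit reel-assignment heuristic for
# make-to-order 3D printing. Filament amounts are in arbitrary units.

def schedule(jobs, reel_capacity=1000):
    """Return (job order by EDD, reel assignment, remaining filament per reel)
    for jobs given as (name, due_date, filament_needed) tuples."""
    ordered = sorted(jobs, key=lambda j: j[1])   # Earliest Due Date first
    reels = []                                   # remaining filament per open reel
    assignment = {}
    for name, _due, need in ordered:
        for i, left in enumerate(reels):
            if left >= need:                     # first open reel that still fits
                reels[i] -= need
                assignment[name] = i
                break
        else:                                    # no open reel fits: start a new one
            reels.append(reel_capacity - need)
            assignment[name] = len(reels) - 1
    return [j[0] for j in ordered], assignment, reels

jobs = [("lamp_shade", 5, 700), ("bracket", 2, 400), ("diffuser", 3, 550)]
order, assignment, reels = schedule(jobs)
print(order)       # ['bracket', 'diffuser', 'lamp_shade']
print(assignment)  # {'bracket': 0, 'diffuser': 0, 'lamp_shade': 1}
```

The hybrid approach described in the use case would replace the first-fit rule with a learned reel-selection policy while keeping heuristics for decisions such as when a new reel must be introduced.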
Overall, the AI toolbox in this use case combines transformer-based computer vision for shop-floor process understanding with reinforcement learning and heuristics for production scheduling. This makes the use case broader than a single-model application, because it links perception of human work with optimization of automated manufacturing steps. The result is an AI-driven manufacturing platform that aims to improve efficiency, reduce both time and material waste, and increase operational adaptability in a customized production environment.

In this use case, AI is applied to human-aware robotics, especially for navigation in spaces shared with people. The core idea is to predict how humans and groups will move, so that robots can plan socially aware paths instead of reacting only to current positions. The framework models the robot, nearby obstacles, and humans together as sequential graph data, then forecasts the next scene state. These predicted future human positions are injected into the robot’s cost map, which allows the navigation system to choose safer and more socially appropriate paths around people.
The main AI algorithms here are graph neural network based trajectory prediction methods. The document specifically mentions the use of a T-GCN layer from the PyTorch Geometric Temporal library to encode historical state sequences, and GCNConv layers from the PyTorch Geometric library as the graph decoder. It also notes planned use of GATConv to exploit attention mechanisms. For robot navigation and planning, the system uses ROS move_base together with path planning methods such as A* and Time Elastic Band, while deadlock-free multi-robot coordination is supported through conflict-based search. The training and testing data come from the TBD Pedestrian Data Collection, which provides rosbags, videos, and human labels.
A second AI direction in the same use case concerns physiological-signal-driven human robot interaction through an EEG-supported exoskeleton setup. Here the aim is to use non-invasive physiological signals such as EEG and EMG for movement intention detection and external device control. This expands the use case beyond navigation into collaborative robotics and assistive control. Overall, the AI toolbox for UC08 combines graph-based trajectory forecasting, robot planning and scheduling in ROS, multi-agent path finding, and brain-computer interface concepts for intention-aware human-machine interaction.
In this use case, AI serves as an energy intelligence layer for semiconductor manufacturing. The aim is to better understand how energy is consumed across fab buildings, cleanrooms, and highly energy-intensive batch tools, then use that knowledge to support more efficient operation under strict power and sustainability constraints. Rather than relying on a dense deployment of smart meters everywhere, the approach uses machine learning to infer broader energy-flow patterns from a smaller set of carefully selected measurement points. This makes it possible to build a “golden tool” strategy, where representative tools and areas provide enough insight for wider energy management and planning.
The AI side combines machine learning with scheduling-oriented optimization. ML models are trained on synchronized production and energy data to uncover hidden patterns, characterize consumption behavior, and support forecasting and management decisions. Alongside this, the use case introduces innovative scheduling rules to better align production with energy availability and operational constraints. The supporting toolchain is built around smart power meters with high-frequency sampling, precise UTC-based synchronization, strong data storage capacity, and GPU-backed processing, explicitly including hardware such as the NVIDIA Tesla P40 for faster analysis and model development. Altogether, the use case applies AI not as a standalone model, but as a practical decision-support and optimization capability for energy-aware fab operation.
In this use case, AI is embedded into a next-generation Manufacturing Execution System that is intended to coordinate information flows across multiple semiconductor fabs more effectively. The goal is to support semiconductor manufacturing automation with transparent AI components while also harmonizing the software engineering process across sites through the “MES Airbus” concept. RAMI architecture is named as a key integration framework for building this distributed and scalable MES foundation.
The AI usage centers on embedding intelligent functions into a redesigned, real-time-capable MES. The use case explicitly mentions AI-based anomaly detection as one of the required capabilities, alongside advanced production system visualizations and function patterns that help handle large data volumes and highly variable production conditions. The main technical elements named in the use case are the AI-enabled MES itself, the MES Airbus concept for distributed software engineering, RAMI architecture, and AI-based anomaly detection integrated into semiconductor manufacturing automation.
This use case applies AI to indoor food production, with the goal of automating the plant product lifecycle in a flexible, decentralized, and scalable way. The framework is designed for industrial indoor vertical farming environments that rely on interconnected IoT devices, digital infrastructure, and automated control of growing conditions. AI is used to streamline automation, strengthen monitoring, and improve production efficiency, while the overall system is built with strong attention to security, safety, and compliance across the full lifecycle.
The main technical elements named in the use case are AI software for indoor food production, interconnected IoT devices, a service-oriented architecture, Eclipse Arrowhead for service orchestration, NERVE services, and a PREEMPT_RT patched Linux kernel to support real-time operation. These components are used in control loops such as humidity and nutrient management, where reliable orchestration and continuous monitoring are essential for stable crop growth and efficient operation.
This use case applies AI to the optimization of semiconductor manufacturing processes, where production behaviour is shaped by complex and non-linear relationships. The goal is to improve production efficiency by reducing variability, minimizing information distortion, and streamlining physical flows across manufacturing operations. A key aspect of the approach is that machine learning is not used in isolation. Instead, it is combined with human expertise so that data-driven models can support strategic process improvements while still benefiting from domain knowledge that is not fully captured in historical production data.
The AI methods named in the use case include XGBoost together with SHAP for time-series prediction and interpretation of work-in-progress, as well as CNNs and linear regression for pattern recognition tasks. Dijkstra’s algorithm is also included for shortest-path related optimization, and Proximal Policy Optimization is used in connection with a novel simulation framework developed by TUD for holistic modelling of fab capacities and automated material handling systems. This simulation environment generates training data for AI models and supports the discovery of correlations between job release decisions and upcoming work-in-progress levels. Overall, the use case combines predictive modelling, explainability, simulation-based learning, and optimization-oriented algorithms to support more efficient and sustainable semiconductor production.
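Of the algorithms listed above, Dijkstra's algorithm is the most self-contained, so a compact reference implementation is sketched below over a weighted adjacency dictionary. The toy graph is purely illustrative; how the fab's physical flows map onto nodes and edge weights is not specified in this summary.

```python
# Dijkstra's shortest-path algorithm over a {node: [(neighbour, weight)]}
# adjacency dictionary, using a binary heap as the priority queue.

import heapq

def dijkstra(graph, source):
    """Return shortest distances from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry, already improved
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Toy transport network: nodes are stations, weights are travel times.
graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

In a material-handling context, the resulting distance map supports routing decisions such as choosing the fastest path for a transport job between two stations.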
This use case applies AI to overhead wafer transportation and automated material handling in semiconductor factories. The goal is to modernize the hardware and software architecture of the OHT demonstrator in Dresden and Villach, while also improving uptime, OEE, and cycle times of wafer transport and storage systems. AI is used here for predictive maintenance and automated data analysis, with the aim of detecting and predicting hardware malfunctions from sensor data before they lead to unplanned breakdowns. An additional part of the concept is the use of integrated cameras and software on transportation systems to enable more precise monitoring, vehicle identification, and control of shuttles across factories.
The technical approach explicitly refers to a machine learning based software architecture built on modernized sensing and monitoring infrastructure. The main AI-relevant elements named in the use case are predictive maintenance from sensor data, integrated cameras for autonomous vehicle identification and recognition, and software-supported monitoring of AMHS components. The document also highlights the importance of suitable acceleration sensors, improved data collection setups, and new infrastructure for precise malfunction detection. Overall, the use case combines sensor-driven machine learning, camera-supported monitoring, and upgraded transport-system software to make high-volume wafer transportation more reliable, automated, and sustainable.
This use case applies AI to virtual commissioning in semiconductor automation, with digital twins used to shorten commissioning time, reduce lead time until handover, and allow software development to start before the physical hardware is fully available. The digital twins represent automation products such as robot cells and make it possible to test code, simulate moving elements, and integrate hardware earlier into the customer’s MES. This also supports software adjustments and feature development after delivery, without requiring constant access to the physical system.
The main AI-related tools and methods named in the use case are digital twins, virtual commissioning concepts, AI-based software concepts, simulation-based code testing, MES integration, and advanced control algorithms for robotic operation. The use case also mentions integration of robots into the mobile HERO-SCOUT system, where arm sensitivity changes with payload weight and therefore requires sophisticated control algorithms for precise and reliable behavior. Overall, the AI role in UC14 is to enable earlier testing, faster software iteration, and more flexible automation engineering through digital twin based development and commissioning.
This use case is about taking machine learning models from experimentation into continuous industrial operation at semiconductor-fab scale. The aim is to deploy AI that can monitor complex processes in real time, detect subtle anomalies in multivariate sensor streams, support predictive maintenance, and remain usable in high-variability production environments where labeled fault data is scarce. A strong emphasis is placed on production readiness, which means the models must not only perform well, but also be interpretable, maintainable over time, and capable of handling drift in process and equipment behaviour. The work is carried out in the context of semiconductor manufacturing at LFoundry and Infineon, where defect prevention, process stability, and yield protection are critical.
The AI usage is organized into several complementary streams. One stream focuses on anomaly detection and virtual inspection in industrial data streams, where multivariate models are used to identify deviations early enough to reduce scrap and prevent failures. Another stream focuses on corrosion-defect classification from images, using deep learning to recognize defect patterns in a specialized visual inspection setting. A third stream addresses defect-density image classification through a scalable ML pipeline that supports data collection, labeling, version control, retraining, deployment, and monitoring in production. Across these streams, the goal is to build AI-supported diagnostics that run continuously in production and can scale across tools, machines, and sites with limited manual reconfiguration.
Several concrete AI algorithms and tools are named in the use case. For multivariate time-series anomaly detection, the document introduces AcME-AD, an actionable and explainable anomaly-detection framework that uses reconstruction scores and variable-level contributions to identify deviations and indicate likely root causes. Extended B-ALIF is also used as an unsupervised anomaly-detection method, enriched with process-specific thresholds and dynamic baselines to improve robustness and reduce false alarms in variable industrial settings. For image-based inspection, the use case applies CNN-based deep learning models to classify corrosion defects and other defect-related image patterns. In addition, transfer learning from state-of-the-art pretrained models is used in the defect-density classification pipeline to improve scalability and model performance.
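The reconstruction-score idea behind AcME-AD, scoring a sample against a reconstruction and attributing the error to individual variables so a likely root cause can be indicated, can be illustrated with a greatly simplified stand-in. The sketch below reconstructs each sample from per-variable training means and reports squared-error contributions per variable; this is a didactic analogue, not the AcME-AD algorithm itself, and all data is synthetic.

```python
# Simplified analogue of reconstruction-based anomaly scoring with
# variable-level contributions for root-cause indication.

def fit_baseline(train):
    """Per-variable means over training samples (lists of equal length)."""
    n = len(train)
    dims = len(train[0])
    return [sum(row[d] for row in train) / n for d in range(dims)]

def score(sample, baseline):
    """Total anomaly score and per-variable contributions vs. the baseline."""
    contributions = [(x - m) ** 2 for x, m in zip(sample, baseline)]
    return sum(contributions), contributions

train = [[1.0, 10.0, 0.5], [1.2, 9.8, 0.5], [0.8, 10.2, 0.5]]
baseline = fit_baseline(train)            # per-variable means: [1.0, 10.0, 0.5]

total, contrib = score([1.1, 10.1, 3.5], baseline)
worst = max(range(len(contrib)), key=lambda d: contrib[d])
print(round(total, 2), worst)  # 9.02 2 -> variable 2 drives the anomaly
```

The variable-level breakdown is what makes such a score actionable: instead of a bare alarm, the operator sees which sensor channel is responsible for the deviation.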
A major part of UC15 is the operational toolchain around the models. The use case explicitly relies on MLOps principles, including containerized training and inference environments, model versioning, automated retraining pipelines, CI/CD workflows, monitoring dashboards, and traceable lifecycle management integrated into existing IT/OT infrastructures. Synthetic fault injection is used for robustness testing when real fault data is limited, and domain experts remain involved in the loop to make sure the alerts and explanations are meaningful for process engineers and operators. Altogether, the use case combines explainable anomaly detection, deep-learning-based visual inspection, transfer learning, and a production-grade MLOps stack to turn AI into a scalable and operational diagnostic capability for semiconductor manufacturing.
This use case applies AI to the development of emerging RF components and systems for avionic and SATCOM applications across multiple RF bands. The goal is to support faster design optimization and more efficient fabrication of components such as inductors and RF MEMS, while also reducing time, engineering effort, consumables, and waste during development and production.
The AI usage is described in terms of machine learning enabled design and process optimization. The use case explicitly states that the partners are implementing machine learning techniques together with methodologies and protocols that accelerate component design and improve fabrication procedures. The named technical scope therefore includes machine learning for RF component optimization, AI-enabled design methodologies, and ML-supported fabrication protocols for inductors and RF MEMS.
This use case applies AI to contamination monitoring and cleanroom control in semiconductor manufacturing. The aim is to detect corrosive gases and airborne molecular contaminants early enough to prevent cross contamination, corrosion, and yield loss in both highly automated Infineon cleanrooms and the more flexible pilot-fab environment at Fraunhofer IISB. A central part of the use case is building a data landscape where contamination data, sensor measurements, and contextual MES information are stored together or linked, so that users and fab management can work with integrated visualizations and better operational insight.
The AI usage is focused on prediction and root cause analysis built on top of advanced sensor infrastructure. The technical elements explicitly mentioned include static sensors with Meander structures, Si-based AMC sensors, optical sensors from Picarro, APA302 systems from Pfeiffer Vacuum, Atmocube sensor systems, and offline analytical measurements such as IC, GC-MS, and VPD-ICP-MS. These sensing and data-integration components provide the basis for machine learning models that support automated cleanroom monitoring, contamination prediction, and analysis of likely causes behind abnormal cleanroom conditions.
This use case applies AI to semantic modelling for semiconductor supply-chain optimization, with the goal of connecting economic, ecological, and societal perspectives in a single knowledge structure. The approach uses a semantic data model as the basis for AI-driven optimization of sustainable supply-chain decisions, including work on the ecological operating curve and Available-to-Promise related planning. Multiple simulations are created in AnyLogic and then transformed into ontologies, so that the resulting knowledge can be queried, reused, and connected to the project’s Open Access Platform and broader semantic-web initiatives.
The main AI usage is in AI-assisted ontology generation. Large language models are used to help human experts create ontology classes, structure semantic concepts, and support the generation of more complete knowledge representations from standards and related domain material. The document explicitly names ChatGPT 4 Turbo for this purpose and also refers to research around GPT-3.5 and Flan-T5 in ontology-related tasks. In addition, the use case mentions knowledge-graph creation support through detailed prompting and refers to ANGEL as a methodology that combines ontological hierarchies with the generative capabilities of LLMs.
A central point of UC18 is that the AI does not replace domain expertise. Human experts are required to validate semantic classes, descriptions, and relationships, because logical consistency, ambiguity, synonym identification, and concept labelling remain difficult for LLMs in such a precise industrial domain. The named tool and method stack therefore includes AnyLogic for simulation, semantic web technologies, ontology modelling, the Open Access Platform, ChatGPT 4 Turbo, GPT-3.5, Flan-T5, and AI-assisted knowledge-graph and ontology generation with human validation in the loop.
This use case applies AI to automation and digital support in medical-technology manufacturing. The goal is to improve flexibility and productivity while keeping human workers in the loop, especially in production environments where manual assembly, process variation, and strict quality requirements make full automation difficult. The use case therefore targets AI-supported human–machine collaboration rather than isolated automation alone.
The available document text provides only limited detail on the exact AI algorithms and software tools for UC19. It identifies the use case within the “AI supporting and strengthening human and manufacturing cycles” group, but the retrieved section does not name concrete model families or toolchains in the same way as several other use cases do. Based on the available material, the factual summary is that UC19 uses AI to support human-centered manufacturing processes in medical-technology production, with the emphasis on strengthening collaboration between operators and manufacturing systems.
This use case targets the harmonization of equipment test strategies across two 300 mm high-volume semiconductor factories that are intended to operate as one virtual fab. AI and advanced software are used to consolidate and coordinate test-wafer related monitoring processes that are currently distributed across different tools, databases, and sites. The practical objective is to reduce the number of test wafers needed, lower cost, and still preserve the strict quality requirements expected in health and automotive production.
The main technical focus is on integrating heterogeneous test and monitoring data into a single platform with a common user interface. The use case explicitly names SPACE statistical process control software, recipe and defect density databases, NPW data, and APC as key sources that must be connected and harmonized. Based on the available text, the AI usage is therefore centered on data integration, unified monitoring, and advanced software-supported analysis rather than on a specifically named standalone ML algorithm. The named tools and system elements are the common monitoring platform, shared UI, SPACE, recipe and defect-density databases, NPW, APC, and the broader software stack needed to support cross-factory harmonized test strategy management.