Overview
This one-day training course introduces the foundational principles for ensuring the functional safety of systems utilizing Artificial Intelligence (AI), grounded in the ISO/IEC TS 22440 series. Participants will explore how AI technology integrates with established functional safety frameworks—such as IEC 61508 and ISO 26262—and how machine learning (ML) challenges traditional deterministic safety assumptions.
The training addresses the specialized approach required to manage the inherently dynamic nature of AI decision-making. Participants will examine the AI safety life cycle, classification schemes for AI software components, and essential mitigation techniques such as diverse redundancy and runtime monitoring. The course also provides a framework for quantifying residual failures and validating non-deterministic software through statistical performance assessments.
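To give a flavor of the statistical reasoning covered in the course, the sketch below estimates how many independent, failure-free test cases are needed to claim a target per-demand failure probability at a given confidence level, using the standard zero-failure (success-run) binomial bound. This is a minimal illustration only, not material reproduced from ISO/IEC TS 22440; the function name and example numbers are assumptions made for this sketch.

```python
import math

def required_failure_free_samples(target_failure_rate: float, confidence: float) -> int:
    """Zero-failure binomial bound: smallest n such that (1 - p)^n <= 1 - C,
    i.e. n >= ln(1 - C) / ln(1 - p), where p is the claimed failure
    probability per demand and C is the required confidence level."""
    p = target_failure_rate
    c = confidence
    return math.ceil(math.log(1.0 - c) / math.log(1.0 - p))

# Example: demonstrating a per-demand failure probability below 1e-4 with
# 95 % confidence requires roughly 30,000 failure-free, representative test cases.
print(required_failure_free_samples(1e-4, 0.95))
```

The same bound also makes clear why purely statistical validation becomes impractical for very low failure-rate targets, which is one motivation for the architectural mitigations discussed later in the course.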
Objectives
By the end of this course, participants will be able to:
- Navigate the ISO/IEC TS 22440 series and understand the structure and intent of its requirements, guidance, and application examples
- Classify AI software components using Application Usage Level (AUL) and Software Technology Class (SWTC) to assess risk and complexity
- Identify AI-specific faults and failure modes, including model drift, vague specifications, and insufficient data representativeness
- Design hybrid safety architectures that combine AI-based functionality with non-AI safety mechanisms, backup functions, and plausibility monitors (a minimal sketch of this pattern follows this list)
- Determine the appropriate AI Systematic Capability (AI-SC) level and select corresponding mitigation measures
- Apply verification and validation strategies for non-deterministic systems, including out-of-distribution (OOD) testing and adversarial analysis
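The hybrid-architecture objective above can be illustrated with a small sketch: an AI component proposes a value, a non-AI plausibility monitor checks it against independently derived bounds, and a conservative backup function takes over if the check fails. The names, bounds, and fallback value below are illustrative assumptions, not requirements from the standard.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MonitoredOutput:
    value: float
    accepted: bool  # True if the AI proposal passed the plausibility check

def plausibility_monitor(
    ai_proposal: float,
    lower_bound: float,
    upper_bound: float,
    fallback: Callable[[], float],
) -> MonitoredOutput:
    """Non-AI safety mechanism: accept the AI output only if it lies within
    independently derived bounds; otherwise switch to a conservative,
    non-AI backup function."""
    if lower_bound <= ai_proposal <= upper_bound:
        return MonitoredOutput(value=ai_proposal, accepted=True)
    return MonitoredOutput(value=fallback(), accepted=False)

# Example: an AI speed-limit estimate is accepted only inside a range derived
# from map data; otherwise a conservative default is used instead.
result = plausibility_monitor(ai_proposal=142.0, lower_bound=0.0,
                              upper_bound=130.0, fallback=lambda: 80.0)
print(result)  # MonitoredOutput(value=80.0, accepted=False)
```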
Agenda
Below you will find a tentative schedule for this training course.
- The Intersection of AI & Safety: Bridging the gap between traditional functional safety and AI challenges
- Terminology & Classification: Understanding AI-SCS, Application Usage Levels (AUL A–D), and Software Technology Classes (SWTC I–III)
- The AI Safety Life Cycle: Extending the V-model with continuous monitoring, update strategies, and reassessment
- Hazard & Risk Assessment: Integrating AI-related faults into HARA and deriving appropriate Risk Reduction Factors (RRF)
- AI Fault Analysis: Identifying potential faults in model creation, input handling, and runtime processing
- Architectural Mitigations: Designing safety shields, employing diverse redundancy, and quantifying uncertainty
- Data Quality & Training: Managing Operational Design Domain (ODD) coverage, addressing bias, and mitigating data drift
- Testing & Validation: Applying statistical performance metrics, robustness testing, and residual failure analysis (a simple OOD-check sketch follows this agenda)
- AI-Based Development Tools: Qualifying AI-assisted tools using TIC/TCL classifications and offline tool strategies
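The out-of-distribution (OOD) testing mentioned in the objectives and in the Testing & Validation topic can be illustrated with a very simple baseline: flag inputs whose maximum softmax probability falls below a calibrated threshold and route them to a safe fallback. The function names and the threshold are assumptions for this sketch; production systems typically rely on stronger detectors calibrated against the defined ODD.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    shifted = logits - logits.max()
    exp = np.exp(shifted)
    return exp / exp.sum()

def is_out_of_distribution(logits: np.ndarray, threshold: float = 0.7) -> bool:
    """Maximum-softmax-probability baseline: treat low-confidence predictions
    as potentially outside the training distribution (the ODD), so they can be
    handled by a safe fallback rather than acted on directly."""
    return float(softmax(logits).max()) < threshold

# Example: a near-uniform prediction is flagged as OOD; a confident one is not.
print(is_out_of_distribution(np.array([0.2, 0.1, 0.3])))  # True
print(is_out_of_distribution(np.array([8.0, 0.5, 0.1])))  # False
```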

