ISO/PAS 8800 Walkthrough (Part 3): AI Safety Requirements and V-cycle
04/07/26


This article concludes our ISO/PAS 8800 Walkthrough series by focusing on the development of AI safety requirements, as well as AI system design and verification & validation (V&V). Building on Part 1 (overview) and Part 2 (AI and dataset lifecycles), this final installment explains how safety requirements are derived for AI systems and how they are implemented, verified, and validated within the broader frameworks of ISO 26262 and ISO 21448 (SOTIF). It highlights the iterative nature of AI system development and the integration of AI-specific considerations into established safety engineering practices.

Looking to go deeper? SRES offers ISO 8800 AI Safety Professional (AISP) training in collaboration with SGS-TÜV Saar, as well as AI safety consulting to help organizations implement ISO 8800 alongside ISO 26262 and ISO 21448 (SOTIF) within real-world product development programs.


Introduction

In part 3 of this blog series, we will dive deeper into the development of AI safety requirements, as well as AI system design and verification & validation (V&V).

Let’s briefly revisit the content of the previous parts and how it connects to Part 3 of this blog series.

Blog Series Part 1: ISO/PAS 8800 overview

Icon representing ISO/PAS 8800 standard document for AI safety and functional safety frameworks

Blog Series Part 2: The AI and Dataset Lifecycles

Diagram showing AI safety lifecycle, V-cycle development, and dataset lifecycle including requirements, design, implementation, verification, and validation

This Blog, Part 3: AI Safety requirements and V-cycle

Now, in Part 3, we will look at the details of AI safety requirement development, as well as AI system design and V&V.

The blue-highlighted aspects of the AI lifecycle show the content of Part 3 with respect to the overall AI lifecycle.

Diagram highlighting the AI safety requirements and V-cycle portions of the ISO/PAS 8800 AI lifecycle

1. Development of AI Safety Requirements

Simplified AI lifecycle diagram
Diagram showing AI system architecture with datasets and model parameters alongside the derivation of AI safety requirements from system-level inputs

The right side of this diagram – showing the derivation of AI safety requirements – is based on ISO/PAS 8800. The left side was created for this blog and illustrates a simplified static architecture, with AI-relevant aspects highlighted in blue. Model parameters (i.e., trained weights) are shown separately to emphasize that AI safety requirements also apply to them. ISO 26262 Part 6 applies to the implementation of the AI component software.

The separation between the encompassing system and the derivation of AI Safety requirements emphasizes the novel aspects introduced by AI. Work products generated for AI safety are intended to be integrated with existing functional safety and SOTIF work products, where appropriate.

1.1 Inputs to AI Safety Requirements Derivation

The key inputs for deriving AI safety requirements are illustrated in the following figure.

Diagram showing how AI safety requirements are derived from functional requirements, safety requirements, input space, and influencing factors
1. The functional and safety requirements allocated to the AI system, together with the input space definition of the encompassing system, form the initial starting point. As also outlined in previous parts of this blog series, the safety requirements of the encompassing system are FuSa- and SOTIF-based.
2. AI properties are selected for the AI component based on their importance for the safe functioning of the AI system. AI safety requirements are derived considering AI properties; an example list is given below.
  • AI robustness: Maintain an acceptable level of performance when exposed to foreseeable and relevant perturbations. Examples: camera noise, blur, glare, weather, adversarial attacks (perturbations designed to fool the model while being almost imperceptible).
  • AI generalization capability: Respond correctly to previously unseen data.
  • AI reliability: Ability to perform its task without AI errors, i.e., provide the expected output, under stated conditions for a specified time period.
  • AI resilience: Ability to recover quickly from an incident.
  • AI controllability: Ability of an external agent to override the AI system behavior.
  • AI explainability: Ability to explain the factors behind an AI decision in natural language.
  • AI predictability: Ability of an AI model to reliably indicate whether its prediction can be trusted.
  • AI alignment: Ensures that the AI system behavior is aligned with human behavior.
  • Justified design decisions: Design decisions are justified and negative effects are analyzed.
  • Maintainability: Ability to easily identify operational insufficiencies and countermeasures, and to modify the encompassing system.
  • AI bias and fairness: Addresses AI bias in the model or dataset.
  • Distributional shift over time: Data distributions shift over time (e.g., sensor aging, new object classes on streets), which can lead to performance degradation.
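To make one of these properties concrete, AI robustness can be quantified as the performance drop under a foreseeable perturbation. The following minimal sketch (not taken from ISO/PAS 8800; the toy `model` and function names are illustrative assumptions) measures accuracy on clean inputs versus inputs with simulated Gaussian sensor noise:

```python
import numpy as np

def model(x: np.ndarray) -> np.ndarray:
    """Toy stand-in classifier: predicts 1 when the feature mean exceeds 0."""
    return (x.mean(axis=1) > 0.0).astype(int)

def accuracy(pred: np.ndarray, labels: np.ndarray) -> float:
    return float((pred == labels).mean())

def robustness_drop(x: np.ndarray, labels: np.ndarray,
                    noise_std: float, seed: int = 0) -> float:
    """Accuracy on clean inputs minus accuracy under Gaussian sensor noise."""
    rng = np.random.default_rng(seed)
    clean_acc = accuracy(model(x), labels)
    noisy = x + rng.normal(0.0, noise_std, x.shape)  # simulated camera noise
    noisy_acc = accuracy(model(noisy), labels)
    return clean_acc - noisy_acc
```

A large drop for perturbations within the ODD would indicate that the robustness-related AI safety requirement is not met.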
3. Feedback from safety analyses and insufficiencies detected during operation form the third input source.

1.2 Creation of AI Safety Requirements

The previous section outlined the sources from which AI safety requirements are derived. The following illustration highlights the creation process in blue.

Diagram showing workflow for deriving AI safety requirements using functional requirements, safety requirements, and influencing factors

SOTIF and FuSa requirements of the encompassing system form the starting point. Influencing factors are applied to relevant safety-related properties, which are used to refine the requirements of the encompassing system. The resulting requirements are called AI safety requirements. An AI safety requirement simply denotes a requirement that is allocated to an AI system.

Unlike the downstream requirement derivation known from ISO 26262, the resulting AI safety requirements are at the same hierarchy level as the input requirements: it is a refinement and extension of the requirements, not a derivation process.

1.3 Refinement of the Input Space

The refinement of the input space adapts the vehicle ODD so that it is specific to each AI component.

Diagram showing how input space definition from the encompassing system is refined into AI-specific input space

The refined input space definition is specific to the AI system and corresponds to the ODD.

The refined input space of an object detection system would only detail the types of objects that can occur within the ODD of the vehicle.

For example, consider two AI systems: one for object detection as part of an Automatic Emergency Braking (AEB) function, and another for Lane Keep Assist (LKA). While both AI components may operate within the same ODD, their input spaces differ: the AEB input space focuses on various object types, whereas the LKA input space emphasizes lane line characteristics. The refined input space for lane detection would cover the different lane line types (such as white, yellow, solid, dotted) and road boundaries (such as flat, dropping edge). Both AI systems would have to consider environmental conditions such as rain, ice, or snow.
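The AEB/LKA example can be sketched as simple data structures: both components inherit the environmental conditions of the vehicle ODD, while each carries its own component-specific input characteristics. All class and field names below are illustrative, not taken from ISO/PAS 8800:

```python
from dataclasses import dataclass

@dataclass
class InputSpace:
    component: str
    conditions: set[str]  # environmental conditions inherited from the vehicle ODD
    features: set[str]    # component-specific input characteristics

# Shared vehicle-level ODD conditions (illustrative)
VEHICLE_ODD_CONDITIONS = {"rain", "ice", "snow", "clear"}

aeb_input_space = InputSpace(
    component="AEB object detection",
    conditions=set(VEHICLE_ODD_CONDITIONS),
    features={"pedestrian", "cyclist", "passenger car", "truck"},
)

lka_input_space = InputSpace(
    component="LKA lane detection",
    conditions=set(VEHICLE_ODD_CONDITIONS),
    features={"white line", "yellow line", "solid line", "dotted line",
              "flat boundary", "dropping edge"},
)
```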

2. AI system design and V&V

The V-cycle for product development also exists for AI system development.

Diagram showing AI safety lifecycle integrated with system-level requirements, assurance, operation, and the AI development V-cycle
The numbers in this V-cycle are clause references to ISO/PAS 8800; it references clauses 9 to 13.

2.1 Dataset Considerations

Diagram block labeled Data Consideration representing dataset-related activities in AI safety lifecycle

Data is central for most AI systems, particularly those based on ML.

Diagram showing AI training dataset, validation dataset, and test dataset used for model development and verification and validation

As outlined previously, the dataset lifecycle supports the creation of high-quality datasets, which are typically divided into training, validation, and test datasets.

  • AI training dataset: Used to train the AI component, enabling it to develop the capability to perform its intended task.
  • AI validation dataset: Used to evaluate the performance of different AI model candidates and to tune hyperparameters. Once validated, the AI component should meet the defined AI safety requirements. Note that the term validation dataset originates from the AI community and must not be confused with safety validation in the context of ISO 26262, which refers to vehicle-level testing.
  • AI test dataset: Used to assess the expected performance and generalization of the trained AI model or AI system with unseen data; "unseen" here means data that was not used to train it. Independent metrics are derived from this dataset to demonstrate that acceptance criteria are satisfied.

Note the distinction between dataset validation and an AI validation dataset. Dataset validation focuses on ensuring the correctness and quality of the dataset itself, whereas an AI validation dataset is used to assess the correctness and performance of the trained model.
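The three-way split described above can be sketched as a disjoint partition of the dataset indices (a minimal illustration under assumed split fractions, not a procedure prescribed by the standard):

```python
import numpy as np

def split_dataset(n_samples: int, train_frac: float = 0.7,
                  val_frac: float = 0.15, seed: int = 42):
    """Return disjoint index arrays for the training, validation, and test datasets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(n_samples * train_frac)
    n_val = int(n_samples * val_frac)
    train = idx[:n_train]               # used to fit model parameters
    val = idx[n_train:n_train + n_val]  # used for model selection and tuning
    test = idx[n_train + n_val:]        # held out, "unseen" during development
    return train, val, test
```

Keeping the partitions disjoint is the point: any leakage from the test set into training or tuning invalidates the independence of the metrics derived from it.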

2.2 Design and Implementation

Diagram showing AI system design, component design, implementation, and safety analysis lifecycle stages

At this stage, AI safety requirements and the input space definition have been established.

As discussed in part 2 of this series, ISO/PAS 8800 defines AI system as follows:

AI System Architecture with Pre-processing, Model, and Post-processing

The AI system design level is the superset of AI components and traditional software components. The source code responsible for computing the AI model output must be implemented in accordance with ISO 26262 and thus be free from systematic faults to ensure safety. In addition to the core model inference logic, the AI system typically includes pre-processing and post-processing components, which may be conventional, deterministic software components.

The AI model is one element within the AI system; for ML, it may consist of a backbone (encoder) and one or more task-specific decoder heads.
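The AI system structure described above can be sketched as deterministic pre- and post-processing wrapped around the model inference. All function names here are illustrative assumptions, and the sigmoid "model" is a trivial placeholder for a trained network:

```python
import numpy as np

def pre_process(raw: np.ndarray) -> np.ndarray:
    """Conventional, deterministic step: normalize the raw sensor input."""
    return (raw - raw.mean()) / (raw.std() + 1e-8)

def model_inference(x: np.ndarray) -> np.ndarray:
    """Placeholder for the trained AI model (backbone + decoder heads)."""
    return 1.0 / (1.0 + np.exp(-x))  # toy sigmoid stand-in

def post_process(scores: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Conventional, deterministic step: threshold scores to discrete outputs."""
    return (scores > threshold).astype(int)

def ai_system(raw: np.ndarray) -> np.ndarray:
    """AI system = pre-processing + AI model + post-processing."""
    return post_process(model_inference(pre_process(raw)))
```

Only `model_inference` embodies the trained parameters; the surrounding steps are ordinary software to which ISO 26262 Part 6 applies directly.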

The AI model and its implementation are visualized in the context of the datasets in ISO/PAS 8800 clause 11:

Flowchart of AI training, validation, and testing process with iterative model improvement loop

Safety analysis takes the system and component design as input and evaluates it against the defined AI safety requirements. The outcome of this analysis may necessitate updates to the requirements, the architecture, or the implementation, reinforcing the iterative nature of the development lifecycle.

2.3 AI Component and System Verification

Diagram showing AI component verification and AI system verification and validation stages

Testing should be performed at the lowest practical level, i.e., on AI components when they can be evaluated in isolation. Testing the AI component in isolation, where feasible, allows issues originating in the AI component to be identified early. However, if stand-alone testing is not feasible, testing the AI component at a higher level of integration is acceptable. The integration level at which testing is conducted matters less than achieving sufficient test coverage against requirements and AI properties.

Test cases should be derived with a combination of methods known from ISO 26262, such as requirements analysis, equivalence classes, and boundary value analysis. For ML, the analysis of requirements centers on meeting safety-relevant performance requirements.
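As a small illustration of boundary value analysis, test points can be placed just outside, on, and just inside each boundary of a requirement's valid range (the detection-range requirement below is a hypothetical example, not taken from the standard):

```python
def boundary_values(low: float, high: float, delta: float) -> list[float]:
    """Derive test points around each boundary of an equivalence class,
    e.g. for a requirement 'detect pedestrians between 5 m and 80 m'."""
    return [low - delta, low, low + delta,    # around the lower boundary
            high - delta, high, high + delta]  # around the upper boundary
```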

Testing is performed at different levels of system integration:

  • AI component testing
  • Integrated AI system
  • Integration of the AI system with the encompassing system
  • Post-development validation

Examples of methods for testing AI components include:

  • Statistical testing: Verifying that metrics defined in the AI safety requirements are met. 
  • Data/scenario replay: Using recorded datasets or known pre-crash scenarios to evaluate AI performance. Examples include test drive recordings or simulations of critical events. 
  • Metamorphic testing: Transforming one test case into another, using metamorphic relations that describe how input changes should affect output changes. 
  • Synthetic test case generation: Creating a wide variety of scenarios, including edge cases that are unsafe or impractical to reproduce in the real world. 

  • Testing under resource limitations: Evaluating model performance under constraints, such as specified processing frequency or memory limits.
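To make the metamorphic testing method above concrete, here is a toy sketch (all function names are illustrative assumptions): for an object detector, horizontally mirroring the input image should mirror the predicted bounding spans in x, and any violation of this relation flags a defect without needing ground-truth labels.

```python
import numpy as np

def detect(image: np.ndarray) -> list[tuple[int, int]]:
    """Toy 'detector': returns the (x_min, x_max) column span of nonzero pixels."""
    cols = np.flatnonzero(image.any(axis=0))
    return [(int(cols[0]), int(cols[-1]))] if cols.size else []

def mirror_boxes(boxes: list[tuple[int, int]], width: int) -> list[tuple[int, int]]:
    """Metamorphic relation: mirroring the image mirrors each box in x."""
    return [(width - 1 - x_max, width - 1 - x_min) for x_min, x_max in boxes]

def metamorphic_check(image: np.ndarray) -> bool:
    """Detections on the flipped image must equal the mirrored detections."""
    flipped = image[:, ::-1]
    return detect(flipped) == mirror_boxes(detect(image), image.shape[1])
```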

Summary and further reading

Part 1 of this blog series provided an overview of the clauses and essential work products of ISO/PAS 8800.

Parts 2 and 3 elaborate on certain aspects in more detail:

  • Part 2 covers the AI lifecycle as well as the dataset lifecycle
  • Part 3 (this article) outlines how AI safety requirements are derived and covers AI system design, including verification and validation.

For further reading, the following blog series is recommended; it highlights conceptual aspects of ISO/PAS 8800 as well as certification:

In Building Defensible AI Assurance Arguments in ISO/PAS 8800 (Part 1): Foundations, we examined how ISO/PAS 8800 extends ISO 26262 and ISO 21448 (SOTIF) through refinement of triggering conditions and structured traceability into the machine learning lifecycle.

In Building Defensible AI Assurance Arguments in ISO/PAS 8800 (Part 2): Dataset Governance and Validation, we focused on dataset governance, validation strategy, and the transition from R&D practices to production-grade process discipline.

In Building Defensible AI Assurance Arguments in ISO/PAS 8800 (Part 3): Certification, Audits, and Applicability Beyond Automotive, we address certification, audit expectations, and the broader applicability of ISO/PAS 8800 beyond automotive systems.

Interested in going deeper?

Explore our ISO 8800 training or connect with us about consulting support.


Have insights or questions? Send us an email at info@sres.ai or leave a comment below—we welcome thoughtful discussion from our technical community.

Interested in learning more about our approach? Explore why teams choose SRES training and how we help organizations with consulting support across functional safety, cybersecurity, autonomy safety, and EV development.


© Copyright 2026 SecuRESafe, LLC. All rights reserved.
