
Building Defensible AI Assurance Arguments in ISO/PAS 8800 (Part 1): Foundations
This article begins our three-part series summarizing key points from our recent Fireside Chat with SRES Partners Jody Nelson, Gokul Krithivasan, and Bill Taylor, along with Eduard Dojan of SGS-TÜV Saar, on ISO/PAS 8800 and early implementation experience across OEMs and suppliers deploying machine learning in safety-critical systems.
Watch the full discussion here: What Auditors Really Look For: A Fireside Chat on ISO/PAS 8800 and Its Next Evolution
The Structural Gap That Led to ISO/PAS 8800
Artificial intelligence is now embedded directly within vehicle safety functions. ISO 26262 provides the established functional safety framework for automotive systems, while ISO 21448 (SOTIF) addresses functional insufficiencies and triggering conditions at the vehicle level. Both standards primarily operate at the system or behavioral level rather than at the internal structure of machine learning models. In practice, ML components were often treated as black boxes from a behavioral standpoint. The internal structure of the model — training data, validation data, architecture, post-processing, and supervision — was not explicitly structured within the safety framework.
ISO/PAS 8800 adds that lower abstraction layer. It provides a framework to address AI-specific insufficiencies within machine learning components — particularly those related to model behavior and datasets.
The practical question many organizations now face is not awareness of the standard, but implementation: What does alignment to ISO 8800 look like in practice? What are assessors evaluating?
ISO 8800 Extends ISO 26262 and SOTIF
ISO 8800 is not a standalone framework. It assumes:
- A defined ISO 26262 item
- A completed SOTIF analysis
- System-level triggering conditions already identified
ISO/PAS 8800 builds on these foundations.
One key structural element is the refinement of SOTIF triggering conditions into AI triggering conditions. For example, a SOTIF triggering condition such as reduced camera visibility due to fog may result in a functional insufficiency at the system level. Under ISO 8800, that system-level condition must be refined into model-level phenomena such as distribution shift, degraded feature extraction, or confidence degradation.
From a structural standpoint, one SOTIF triggering condition may decompose into multiple AI triggering conditions. These AI triggering conditions then lead to defined AI output insufficiencies, which trace back to system-level hazards.
This establishes traceability in both directions:
- AI insufficiencies link back to SOTIF triggering conditions and system hazards
- SOTIF triggering conditions are refined into AI triggering conditions
This bidirectional traceability is not merely conceptual. It is expected in audits and assessments. ISO 8800 follows the structural logic of ISO 26262: objectives, requirements, and defined work products. That structural decision was intentional — it enables both process and product certification.
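The decomposition described above can be captured as a simple traceability model. The sketch below is illustrative only: the class names, condition names, and insufficiency labels are our own assumptions, not terminology or work products prescribed by ISO/PAS 8800. It shows how one SOTIF triggering condition can fan out into multiple AI triggering conditions while every record remains traceable back to the system-level hazard.

```python
from dataclasses import dataclass, field

@dataclass
class AITriggeringCondition:
    """A model-level phenomenon refined from a SOTIF triggering condition."""
    name: str
    output_insufficiency: str  # the AI output insufficiency it can cause

@dataclass
class SOTIFTriggeringCondition:
    """A system-level triggering condition linked to a hazard."""
    name: str
    system_hazard: str
    ai_conditions: list = field(default_factory=list)

# Illustrative refinement: fog at the SOTIF level decomposes into
# several model-level phenomena (names are hypothetical).
fog = SOTIFTriggeringCondition(
    name="Reduced camera visibility due to fog",
    system_hazard="Undetected object in the driving path",
)
fog.ai_conditions = [
    AITriggeringCondition("Distribution shift relative to training data",
                          "Missed detection at range"),
    AITriggeringCondition("Degraded feature extraction in low contrast",
                          "Late or unstable object classification"),
]

def trace(tc: SOTIFTriggeringCondition):
    """Emit the full chain hazard <-> SOTIF condition <-> AI condition,
    so the argument can be walked in either direction."""
    return [(tc.system_hazard, tc.name, ai.name, ai.output_insufficiency)
            for ai in tc.ai_conditions]

for row in trace(fog):
    print(" <-> ".join(row))
```

In a real toolchain these records would live in a requirements-management system rather than in code; the point is only that each AI-level artifact carries an explicit link upward, which is what assessors walk during an audit.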
For both process and product assessment, the expectation is:
- A clearly defined AI development process
- Defined work products aligned to ISO 8800 clauses
- Traceability from the ISO 26262 item through SOTIF into ISO 8800 artifacts
- Integration of AI assurance into the overall safety case
The AI work products do not stand alone. They augment the existing functional safety case.
Why an Additional ML Abstraction Layer Is Necessary
A recurring theme in the fireside chat was the difficulty of applying traditional deterministic decomposition to AI-driven functions.
In classical functional safety, engineers can often derive low-level requirements directly from high-level safety goals. Circuits, software units, and system behavior can be analyzed in a structured and relatively deterministic way. The traceability chain from high-level requirement to low-level implementation is clear. For AI-driven functions, that clarity is reduced.
Vehicle-level safety in ADAS and ADS depends heavily on perception and data-dependent model behavior, so traditional requirement decomposition is difficult to express at the ML component level. Demonstrating that low-level ML components collectively achieve high-level safety objectives therefore relies less on requirement-by-requirement tracing and more on validation evidence and statistical demonstration across scenarios.
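To make the shift toward statistical demonstration concrete, here is a minimal sketch of scenario-binned validation evidence. Everything in it is hypothetical: the scenario names, the test outcomes, and the acceptance threshold are placeholders, not values drawn from the standard or from any real test campaign. The point is the shape of the argument: performance is demonstrated per operating condition rather than as a single pass/fail requirement.

```python
from collections import defaultdict

# Hypothetical (scenario, detected) outcomes from a perception test campaign.
results = [
    ("clear_day", True), ("clear_day", True), ("clear_day", True),
    ("fog", True), ("fog", False), ("fog", True), ("fog", True),
    ("night_rain", True), ("night_rain", True), ("night_rain", False),
]
TARGET_RATE = 0.70  # illustrative per-scenario acceptance threshold

# Bin outcomes by scenario, then evaluate each bin against the target.
by_scenario = defaultdict(list)
for scenario, detected in results:
    by_scenario[scenario].append(detected)

for scenario, outcomes in sorted(by_scenario.items()):
    rate = sum(outcomes) / len(outcomes)
    status = "PASS" if rate >= TARGET_RATE else "REVIEW"
    print(f"{scenario}: {rate:.0%} over {len(outcomes)} runs ({status})")
```

A real campaign would involve far larger sample sizes, confidence intervals, and coverage arguments over the operational design domain; this sketch only illustrates why the assurance argument ends up resting on aggregated evidence rather than on a deterministic traceability chain alone.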
Without additional structure at the ML level, traceability can weaken. SOTIF does not define detailed low-level processes for machine learning model development. ISO/PAS 8800 fills that gap by introducing defined AI triggering conditions, AI-specific work products, and lifecycle expectations that restore structural continuity between:
- System-level hazards
- SOTIF triggering conditions
- AI triggering conditions
- ML lifecycle artifacts
In this sense, ISO/PAS 8800 does not replace ISO 26262 or SOTIF. It refines them. It introduces a structure where deterministic decomposition alone is insufficient and strengthens the overall safety argument. That structural integration is what makes a defensible AI assurance argument possible under ISO/PAS 8800.
Next in the Series
In Part 2 of this series, we examine how this structure translates into practice through dataset governance, validation strategy, and the shift from R&D-oriented ML development to production-ready safety assurance — where many organizations encounter their greatest implementation challenges. Click here to read part 2.
Need support applying this in practice? Explore our ISO 8800 training or connect with us about consulting support.
Have insights or questions? Send us an email at info@sres.ai or leave a comment below—we welcome thoughtful discussion from our technical community.
Interested in learning more about our approach? Explore why teams choose SRES training and how we help automotive organizations with consulting support across functional safety, cybersecurity, autonomy safety, and EV development.



