  • Consulting
    • Automotive
      • Functional Safety
      • Cybersecurity
      • Autonomous Product Development
      • Electric Vehicle (EV) Development
      • Assurance of AI-based Tools
    • Physical AI
      • Robotics Safety
      • Assurance of AI-based Tools
    • Responsible AI
      • Responsible Artificial Intelligence
  • Training
    • Functional Safety
    • Cybersecurity
    • ADS and Responsible AI
  • Company
    • Why SRES Training
    • Leadership
    • Partnerships
    • Careers
  • Insights
  • Contact
Let's Talk
Building Defensible AI Assurance Arguments in ISO/PAS 8800 (Part 3): Certification, Audits, and Applicability Beyond Automotive
03/12/26


This article concludes our three-part series summarizing key points from our recent Fireside Chat with SRES Partners Jody Nelson, Gokul Krithivasan, and Bill Taylor, along with Eduard Dojan of SGS-TÜV Saar, on ISO/PAS 8800 and early implementation experience across OEMs and suppliers deploying machine learning in safety-critical systems.

Watch the full discussion here: What Auditors Really Look For: A Fireside Chat on ISO/PAS 8800 and Its Next Evolution


Certification, Audit Expectations, and Applicability Beyond Automotive

In Part 1, we examined how ISO/PAS 8800 extends ISO 26262 and ISO 21448 (SOTIF) through refinement of triggering conditions and structured traceability into the machine learning lifecycle.

In Part 2, we focused on dataset governance, validation strategy, and the transition from R&D practices to production-grade process discipline.

In Part 3, we address certification, audit expectations, and the broader applicability of ISO/PAS 8800 beyond automotive systems.


Certification Under ISO/PAS 8800

Certification under ISO/PAS 8800 is already possible today.

During standardization, there was discussion about how the standard should be structured: whether to build directly from SOTIF-style functional insufficiencies or to follow the structural logic of ISO 26262. The decision was to follow ISO 26262.

As a result, ISO/PAS 8800 is organized around:

  • Defined objectives
  • Structured requirements
  • Specified work products

This structure enables formal auditing and assessment activities for both:

  • Process certification
  • Product certification

Although ISO/PAS 8800 is published as a Publicly Available Specification (PAS), process certification activities are already underway.

In addition to organizational certification activities, professional training programs with certificate exams are also available. For example, SRES offers a 3-day ISO 8800 AI-Safety Professional (AISP) training in collaboration with SGS-TÜV Saar. The training explains how ISO 26262 and ISO 21448 (SOTIF) are supplemented by ISO/PAS 8800 in the context of responsible and safe artificial intelligence, including AI safety management, assurance arguments, dataset lifecycle considerations, safety analysis, and verification and validation of AI systems.


Process Capability Precedes Product Certification

Certification follows the same fundamental logic as ISO 26262.

Organizations must first demonstrate a defined AI development process. That process must:

  • Integrate with the base functional safety process (typically ISO 26262 and ISO 21448)
  • Define work products aligned to ISO/PAS 8800 clauses
  • Establish traceability expectations
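As an illustrative sketch of the "defined work products" expectation, a process definition can be checked for coverage before an audit. The work-product names below are placeholders for discussion, not quotations from the ISO/PAS 8800 clause structure:

```python
# Sketch: verifying that a process definition covers the work products an
# organization has mapped to ISO/PAS 8800. All names here are illustrative
# assumptions, not the standard's actual work-product titles.

REQUIRED_WORK_PRODUCTS = {
    "AI safety requirements specification",
    "Dataset specification and evaluation report",
    "AI verification and validation report",
    "AI safety analysis report",
}

def missing_work_products(defined: set) -> set:
    """Return required work products the process definition does not cover."""
    return REQUIRED_WORK_PRODUCTS - defined

# Hypothetical process definition missing one planned work product
process_definition = {
    "AI safety requirements specification",
    "Dataset specification and evaluation report",
    "AI verification and validation report",
}

print(sorted(missing_work_products(process_definition)))
```

A gap surfaced by a check like this is exactly what an assessor would flag: a clause-aligned work product that the documented process never produces.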

For product certification, assessors evaluate whether:

  • The defined process was followed
  • Work products satisfy ISO/PAS 8800 requirements
  • Traceability flows from the ISO 26262 and SOTIF safety artifacts into AI safety artifacts
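The traceability expectation can be made concrete with a small bidirectional check: every AI safety requirement should have an upstream system-level parent, and every allocated system requirement should have derived AI requirements. All requirement IDs and the link structure below are hypothetical:

```python
# Sketch of a bidirectional traceability check between system-level safety
# requirements (ISO 26262 / SOTIF artifacts) and derived AI safety
# requirements. IDs are invented for illustration.

system_reqs = {"SYS-001", "SYS-002"}
ai_reqs = {"AI-010", "AI-011", "AI-012"}

# Trace links: AI requirement -> parent system-level requirement
upstream = {"AI-010": "SYS-001", "AI-011": "SYS-002"}

def orphaned_ai_reqs() -> set:
    """AI requirements with no upstream system-level parent."""
    return {r for r in ai_reqs if r not in upstream}

def uncovered_system_reqs() -> set:
    """System-level requirements with no derived AI requirement."""
    return system_reqs - set(upstream.values())

print("orphaned:", sorted(orphaned_ai_reqs()))
print("uncovered:", sorted(uncovered_system_reqs()))
```

In a real toolchain this logic lives in a requirements-management system; the point is that traceability must be checkable in both directions, not asserted.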

Process definition is not optional in a certification context. For organizations with strong R&D cultures, this often represents a structural shift toward documented, repeatable, and auditable development practices.


Product Assessment Expectations

Product assessment focuses on coherence of the overall safety argument.

Key expectations include:

  • Traceability from ISO 26262 and ISO 21448 safety artifacts
  • Analysis of SOTIF triggering conditions and functional insufficiencies at the ML level
  • Definition and analysis of AI output insufficiencies and triggering conditions
  • Definition and achievement of ML model Safety Performance Indicators (SPIs) and Key Performance Indicators (KPIs)
  • Corresponding ISO/PAS 8800 work products
  • Integration of AI assurance into the overall safety case
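To illustrate the SPI expectation, a minimal sketch of evaluating one indicator against a target follows. The chosen SPI (a pedestrian false-negative rate on a validation set) and its target value are assumptions for illustration, not values taken from ISO/PAS 8800:

```python
# Sketch: evaluating a Safety Performance Indicator (SPI) for an ML
# perception component. The indicator and target below are hypothetical.

def false_negative_rate(true_positives: int, false_negatives: int) -> float:
    """Fraction of ground-truth positives the model missed."""
    actual_positives = true_positives + false_negatives
    if actual_positives == 0:
        raise ValueError("no ground-truth positives in the evaluation set")
    return false_negatives / actual_positives

SPI_TARGET = 0.01  # hypothetical target: at most 1% missed detections

rate = false_negative_rate(true_positives=4950, false_negatives=50)
print(f"FN rate: {rate:.4f}, target met: {rate <= SPI_TARGET}")
```

Evidence that such indicators were defined up front, measured on governed datasets, and achieved is what ties the ML-level analysis back into the safety case.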

AI artifacts do not stand alone. They augment the functional safety case.

The safety case must present a coherent “whole picture,” where requirements are properly derived, traceability is maintained in both directions, and validation and dataset evidence support the defined assurance argument.

ISO/PAS 8800 introduces AI assurance as an augmentation of the broader safety case rather than a replacement for existing safety standards.


Alignment With Broader AI Standardization

ISO/PAS 8800 references numerous ISO/IEC AI-related standards, including:

  • Terminology standards
  • Classification schemes
  • Toolchain governance standards
  • Neural network robustness assessment standards
  • Data quality standards

One objective of ISO/PAS 8800 is alignment between automotive functional safety practice and broader AI standardization efforts.

Although ISO/PAS 8800 is written in road-vehicle language, it is not limited to automotive applications. If a different base functional safety standard is used (for example, IEC 61508), ISO/PAS 8800 can be applied to the machine learning component with appropriate alignment.

The ML lifecycle concepts introduced by ISO/PAS 8800 are not vehicle-specific. They apply wherever machine learning influences safety-relevant behavior.


Applicability Beyond Automotive

The same structural safety questions arise in domains such as robotics and industrial automation:

  • What hazardous behaviors can occur without malfunction?
  • What insufficiencies can arise from AI behavior?
  • How are these triggering conditions refined and analyzed?

Even before formal standards existed in some domains, engineers relied on first-principles safety thinking to address these issues.

ISO/PAS 8800 formalizes lifecycle structure and traceability for machine learning components within safety-relevant systems.

The underlying challenges — non-deterministic behavior, data dependency, validation limits, and traceability — are not automotive-specific.


Industry Adoption

ISO/PAS 8800 is already being implemented.

Process certifications have been completed, and product certifications are emerging. Adoption varies by region, but formal audit activities are underway.

ISO/PAS 8800 is therefore not theoretical guidance. It is being applied in real development programs and assessed within formal certification frameworks.


Completing the Assurance Argument

Across this three-part series, a consistent structure emerges for a defensible AI assurance argument under ISO/PAS 8800:

  • Defined and followed AI safety development processes
  • Clear traceability from system hazards to AI triggering conditions and output insufficiencies
  • Structured work products aligned to ISO/PAS 8800 clauses
  • Dataset governance and validation evidence integrated into the safety case
  • Coherent augmentation of the functional safety argument

ISO/PAS 8800 does not replace ISO 26262 or SOTIF. It extends them downward into the machine learning lifecycle.

When traceability, documented process adherence, and validation evidence are aligned, the result is not simply compliance. It is a defensible AI safety case grounded in structured lifecycle thinking.


Interested in going deeper?

Explore our ISO 8800 training or connect with us about consulting support.


Have insights or questions? Send us an email at info@sres.ai or leave a comment below—we welcome thoughtful discussion from our technical community.

Interested in learning more about our approach? Explore why teams choose SRES training and how we help automotive organizations with consulting support across functional safety, cybersecurity, autonomy safety, and EV development.

