ISO/PAS 8800 Walkthrough (Part 1): Overview of Clauses and Essential Work Products
03/03/26


This article introduces our ISO/PAS 8800 Walkthrough series, beginning with a structured overview of the standard’s clauses, objectives, and essential work products. It clarifies how ISO/PAS 8800 integrates with ISO 26262 and ISO 21448 (SOTIF), defines AI-specific lifecycle expectations, and outlines the documentation and assurance artifacts required to support defensible AI safety arguments in automotive systems.

Looking to go deeper? SRES offers ISO 8800 AI Safety Professional (AISP) training in collaboration with SGS-TÜV Saar, as well as automotive AI safety consulting to help organizations implement ISO 8800 alongside ISO 26262 and ISO 21448 (SOTIF) within real-world product development programs.


Introduction: The Evolution of Safety and Performance

Avoiding hazards and minimizing injuries have always been fundamental to human progress – especially in the world of transportation. Striking the right balance between safety and performance is a constant negotiation. For example, limiting vehicles to a maximum speed of 10 km/h would reduce the severity of many hazards, but we accept a higher level of risk in exchange for the performance benefits of faster travel.

In the automotive industry, safety has always been a central concern and has become increasingly standardized over time. You might be familiar with ISO 26262 for functional safety, ISO 21448 for Safety of the Intended Functionality (SOTIF), and ISO/SAE 21434 for cybersecurity.

The next stage of vehicle automation is defined by Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS). While existing automotive systems have largely relied on deterministic software, artificial intelligence (AI) is the true enabler of higher levels of autonomy. A complex landscape of standards and regulations is evolving around the development and use of AI. For an overview of the most relevant automotive AI standards, see: Safety and Artificial Intelligence: A Look Into the ISO 8800 Standard.

ISO/PAS 8800 plays a central role in assuring the safety of AI in automotive product development. It provides an AI-specific safety framework that extends the approaches of ISO 26262 and ISO 21448.

An ISO/PAS 8800 Overview

Informative Clauses

The first six clauses provide an introduction and establish the connection to ISO 26262 and ISO 21448. Only the clause headings are listed here:

  • Clause 1: Scope
  • Clause 2: Normative references
  • Clause 3: Terms and definitions
  • Clause 4: Abbreviated terms
  • Clause 5: Requirements for conformity
  • Clause 6: AI within the context of road vehicles system safety engineering and basic concepts

Normative Clauses

For each normative clause, this article summarizes the clause's objective and identifies the essential work products specified as its outputs.

Clause 7: AI safety management

This clause defines an AI reference lifecycle.

The resulting work products include:

  • The AI safety lifecycle definition
  • Updates to all ISO 26262-2 (Management of Functional Safety) work products to incorporate ISO/PAS 8800 aspects by extending “functional safety” to “AI safety”

Examples of affected ISO 26262-2 work products include:

  • Organization-specific rules and processes for functional safety
  • Impact analysis
  • Safety plan
  • Confirmation measure report

Part 2 of this series will discuss the AI lifecycle in greater detail.


Clause 8: Assurance argument of AI systems

The purpose of the assurance argument is to demonstrate that the residual risk of the AI system violating its safety requirements is sufficiently low.

The AI safety assurance argument is conceptually analogous to:

  • The safety case defined in ISO 26262
  • The SOTIF argument defined in ISO 21448

All three assurance artifacts may be integrated into a single, coherent safety case document.
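
To make the structure of such an argument concrete, here is a minimal Python sketch of a GSN-style claim tree. The `Claim` class and the example claims are illustrative assumptions, not artifacts defined by the standard: a claim counts as supported when it has direct evidence, or when all of its sub-claims are supported.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One node in a GSN-style assurance argument: supported if it has
    direct evidence, or if all of its sub-claims are supported."""
    text: str
    evidence: list[str] = field(default_factory=list)
    subclaims: list["Claim"] = field(default_factory=list)

    def is_supported(self) -> bool:
        if self.subclaims:
            return all(c.is_supported() for c in self.subclaims)
        return bool(self.evidence)

# Hypothetical top-level claim, decomposed over the three artifacts above
top = Claim(
    "Residual risk of AI safety requirement violation is sufficiently low",
    subclaims=[
        Claim("Functional safety case holds", evidence=["ISO 26262 safety case"]),
        Claim("SOTIF argument holds", evidence=["ISO 21448 SOTIF report"]),
        Claim("AI safety assurance argument holds"),  # no evidence yet
    ])

print(top.is_supported())  # False until every leaf claim has evidence
```

This mirrors how an integrated safety case can be reviewed: the top claim only stands once every branch of the argument is backed by evidence.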


Clause 9: Derivation of AI safety requirements

The inputs to the derivation of AI safety requirements include:

  • Requirements and the input space of the encompassing system
  • Relevant safety-related properties
  • Feedback from iterative development and validation activities

The resulting work products include:

  • A refined input space
  • AI safety requirements
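
To illustrate what a derived requirement might capture, the Python sketch below defines a hypothetical requirement record (all field names and example values are assumptions for illustration, not content from the standard): a quantified target on a safety-related property, scoped to a refined region of the input space.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AISafetyRequirement:
    """Hypothetical record for a derived AI safety requirement: a
    measurable target on a safety-related property, scoped to a
    refined region of the input space."""
    req_id: str
    safety_property: str   # e.g. detection robustness
    input_space: dict      # refined operating conditions
    metric: str
    threshold: float

# Example: a system-level requirement plus a relevant safety-related
# property yields a measurable AI-level requirement.
r = AISafetyRequirement(
    req_id="AI-SR-001",
    safety_property="detection robustness",
    input_space={"illumination": "day", "speed_kph": (0, 130)},
    metric="pedestrian miss rate",
    threshold=1e-4,
)
print(r.req_id, r.metric, r.threshold)
```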

Part 3 of this series will discuss the derivation of AI safety requirements in greater detail.


Clause 10: Selection of AI technologies, architectural and development measures

The objective of this clause is to select and justify the AI technologies to be used, as well as appropriate architectural and development measures to mitigate functional insufficiencies. This includes identifying architectural and development measures that demonstrate that safety requirements are satisfied in the target execution environment.

The resulting work products include:

  • An architecture defined at the appropriate level of abstraction (AI component or AI system)
  • A defined AI development process
  • The implemented AI component, including training and validation artifacts

Clause 11: Data-related considerations

This clause defines a reference dataset lifecycle and specifies measures to mitigate potential dataset insufficiencies. Once an AI dataset has been created — including annotation for supervised learning — it is partitioned into training, validation, and test datasets.

The resulting work products include:

  • An organization-specific dataset lifecycle
  • Evidence of the outputs of the dataset lifecycle activities (e.g., dataset requirements and verification results)
  • A safety analysis addressing dataset-related risks
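
As a minimal illustration of the partitioning step described above, the Python sketch below (the function name and split ratios are illustrative assumptions, not values from the standard) divides an annotated sample set into disjoint training, validation, and test subsets, using a fixed seed so the split is reproducible as lifecycle evidence.

```python
import random

def split_dataset(samples, seed=42, train=0.7, val=0.15):
    """Partition an annotated dataset into disjoint training,
    validation, and test subsets (the test set gets the remainder)."""
    rng = random.Random(seed)   # fixed seed: reproducible evidence
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_val = int(n * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_set, val_set, test_set = split_dataset(list(range(1000)))
print(len(train_set), len(val_set), len(test_set))  # 700 150 150
```

In practice the split would also need to respect stratification over the refined input space, so that each subset covers the operating conditions named in the dataset requirements.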

Part 2 of this series will discuss the AI dataset lifecycle in greater detail.


Clause 12: Verification and validation of the AI system

The purpose of this clause is to verify the AI safety requirements against the stand-alone AI system. Validation of these requirements is performed within the context of the encompassing system. Note that the term “validation” is used differently in ISO/PAS 8800 compared to other safety standards.

The resulting work products include:

  • AI system verification report
  • Integrated AI system
  • AI system validation report
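
To show what checking a quantified requirement against the stand-alone AI system might look like, here is a hedged Python sketch (the metric, threshold, and report fields are illustrative assumptions): a requirement is verified when the observed miss rate on the test set does not exceed its threshold.

```python
def verify_requirement(predictions, labels, max_miss_rate):
    """Check a quantified AI safety requirement against test results:
    the miss rate (missed positives / actual positives) must not
    exceed the threshold. Returns a minimal report entry."""
    positives = [p for p, y in zip(predictions, labels) if y == 1]
    misses = sum(1 for p in positives if p == 0)
    miss_rate = misses / max(len(positives), 1)
    return {"metric": "miss_rate",
            "observed": miss_rate,
            "threshold": max_miss_rate,
            "passed": miss_rate <= max_miss_rate}

report = verify_requirement(predictions=[1, 1, 0, 1, 0],
                            labels=[1, 1, 1, 1, 0],
                            max_miss_rate=0.05)
print(report["passed"])  # False: observed miss rate is 0.25
```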

Clause 13: Safety analysis of AI systems

The safety analysis of the AI system shall identify faults that could lead to violations of AI safety requirements, determine their potential root causes, and define measures to prevent or control AI errors.

Recommended analysis techniques include:

  • Fault Tree Analysis (FTA)
  • Failure Mode and Effects Analysis (FMEA)
  • System-Theoretic Process Analysis (STPA)
  • Event Tree Analysis (ETA)
  • Bayesian Networks
  • Hazard and Operability Study (HAZOP)

The resulting work product is the AI system safety analysis report.
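
As a small worked example of one recommended technique, the Python sketch below computes the top-event probability of a two-level fault tree under the standard independence assumption (the tree structure and probabilities are hypothetical, purely for illustration).

```python
def or_gate(*probs):
    """P(output) of an OR gate over independent basic events."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(*probs):
    """P(output) of an AND gate over independent basic events."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Hypothetical tree: the requirement is violated if the model
# misclassifies, OR the input is out-of-distribution AND the
# runtime monitor fails to flag it.
p_violation = or_gate(1e-5, and_gate(1e-3, 1e-2))
print(f"{p_violation:.2e}")
```

A quantitative FTA of this kind helps justify the combination of architectural measures (here, a hypothetical out-of-distribution monitor) selected under Clause 10.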


Clause 14: Measures during operation

Process requirements are established to ensure AI safety after deployment, including re-approval of modified AI systems prior to release. An essential input for this process is field data.

The resulting work products include:

  • A process for assuring AI safety during operation
  • Required on-board and off-board safety measures
  • Collected field data
  • Detection and reporting of functional insufficiencies during operation
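
One way such detection could work in principle is a sliding-window monitor over field data. The Python sketch below is a minimal illustration (the window size, assumed error rate, and class name are assumptions, not requirements of the standard): it flags a potential functional insufficiency when the observed error rate exceeds the rate assumed at design time.

```python
from collections import deque

class FieldMonitor:
    """Sliding-window monitor over field data: flags a potential
    functional insufficiency when the observed error rate in the
    last `window` samples exceeds the design-time assumed rate."""
    def __init__(self, window=1000, assumed_rate=0.001):
        self.events = deque(maxlen=window)
        self.assumed_rate = assumed_rate

    def record(self, is_error: bool) -> bool:
        self.events.append(is_error)
        rate = sum(self.events) / len(self.events)
        return rate > self.assumed_rate  # True -> report for review

mon = FieldMonitor(window=100, assumed_rate=0.01)
alerts = [mon.record(i % 20 == 0) for i in range(100)]
print(alerts[-1])  # True: 5% observed error rate exceeds the assumed 1%
```

An alert of this kind would feed the re-approval process: the modified or re-parameterized AI system must be approved again before release.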

Clause 15: Confidence in use of AI development frameworks and software tools used for AI model development

The purpose of this clause is to reduce the risk that errors in off-line AI development frameworks or software tools could compromise the safety of AI models.

The resulting work products include:

  • Evidence of analysis of the AI model creation process
  • Demonstrated confidence in the software tools used
  • Evidence that the AI model creation process has been executed according to the defined procedures
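
A simple building block for such evidence is recording exactly which tool version and configuration produced a model, so a rerun can be compared against the qualified baseline. The Python sketch below is an illustrative assumption of what one evidence record might contain, not a format prescribed by the standard.

```python
import hashlib
import json

def tool_evidence(tool_name, version, config: dict) -> dict:
    """Produce a minimal evidence record for one AI model creation run:
    a digest over the tool identity and configuration that can be
    compared against the qualified baseline."""
    payload = json.dumps({"tool": tool_name, "version": version,
                          "config": config}, sort_keys=True)
    return {"tool": tool_name,
            "version": version,
            "digest": hashlib.sha256(payload.encode()).hexdigest()}

baseline = tool_evidence("trainer", "2.1.0", {"seed": 42})
rerun = tool_evidence("trainer", "2.1.0", {"seed": 42})
print(baseline["digest"] == rerun["digest"])  # True: same tool, same config
```

Any change in tool version or configuration yields a different digest, making deviations from the defined procedure detectable.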

Continued: Part 2

This article provided an overview of the ISO/PAS 8800 clauses and their essential work products. Part 2 takes a closer look at the AI and dataset lifecycles. [Click here to read part 2]


Have insights or questions? Send us an email at info@sres.ai or leave a comment below—we welcome thoughtful discussion from our technical community.

Interested in learning more about our approach? Explore why teams choose SRES training and how we help automotive organizations with consulting support across functional safety, cybersecurity, autonomy safety, and EV development.


© Copyright 2026 SecuRESafe, LLC. All rights reserved.