Autonomous Vehicles and Explainable AI (XAI): A Fresh Look
12/16/24


This article offers an in-depth look at topics related to Autonomous Systems and Responsible AI. It follows up on an article from October 2023 in which we discussed concerns surrounding Explainable Artificial Intelligence (XAI) in Autonomous Vehicles (AVs).

For expert-level training—including certification-based programs—on these topics and more, explore our Automotive trainings and Responsible AI trainings. To learn how we support product development, compliance, and organizational safety goals with consulting support, visit our Autonomous Product Development and Responsible Artificial Intelligence pages—or contact us directly.


The previous blog post concluded with an open question: Does XAI make sense for AVs, and if not, what is necessary?

The ISO/PAS 8800:2024 standard, released on December 13, 2024, offers a definition of XAI: “property of an AI system to express important factors influencing the AI system’s outputs in a way that humans can understand.” The standard highlights the challenges of XAI for the Deep Neural Networks (DNNs) commonly used in automotive applications, noting that pre-hoc global explainability for DNNs can be difficult, while post-hoc local explainability is more achievable through techniques such as local linearization or heatmaps. It also lists XAI measures such as attention or saliency maps, structural coverage of AI components, and identification of software units.
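To make the idea of post-hoc local explainability concrete, here is a minimal sketch of an occlusion-based heatmap: we zero out one patch of the input at a time and record how much the model's score drops. This is a toy illustration only, not a method prescribed by the standard; the `model` below is a hypothetical stand-in (a fixed linear scorer sensitive only to the top-left quadrant), not a real perception network.

```python
import numpy as np

def occlusion_saliency(model, image, patch=4):
    """Post-hoc local explanation: score drop when each patch is occluded.
    High heatmap values mark regions the model's output depends on."""
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one patch
            heat[i // patch, j // patch] = base - model(occluded)
    return heat

# Hypothetical toy "model": only the top-left 4x4 quadrant matters.
weights = np.zeros((8, 8))
weights[:4, :4] = 1.0
model = lambda x: float(np.sum(x * weights))

image = np.ones((8, 8))
heat = occlusion_saliency(model, image)  # only heat[0, 0] is nonzero
```

The same occlusion idea scales to real CNNs by sliding a mask over the input image and plotting the score drops as a heatmap over the original frame.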

The ISO/IEC 22989:2022 and ISO/IEC TR 5469:2024 standards, referenced by ISO/PAS 8800:2024, emphasize the difficulty of achieving XAI in complex neural networks. ISO/IEC 22989:2022 states, “Deep learning neural networks can be problematic since the complexity of the system can make it hard to provide a meaningful explanation of how the system arrives at a decision.” ISO/IEC TR 5469:2024 extends this sentiment: “Generally speaking, even when fully ‘explainable AI’ is not immediately achievable, a methodical and formally documented evaluation of model interpretability is employed in regards to risk, subject to careful consideration of the consequences on functional safety risk that arise from inappropriate decisions…Mitigation is approached through systematic application of the verification and validation process, with careful considerations for the nature of the AI system. Again, ‘explainable AI’ is a future solution, but process-supported solutions are more often available.”

These standards reaffirm the challenges of achieving XAI in complex, multi-layer neural networks. It is true that we cannot express the outputs of Convolutional Neural Networks (CNNs) in a way that a ‘human can understand.’ CNNs, a subset of DNNs, are critical for object detection and image recognition in AVs. However, the takeaway should not be that because this is too difficult, we should do nothing. Rather, although we cannot precisely explain the decisions of CNNs, we can gain confidence in the model by applying proper processes that include adequate verification and validation activities. This gained confidence helps provide assurance for the concerns related to XAI.
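One verification activity mentioned above, structural coverage of AI components, can be sketched as a simple neuron-coverage metric: the fraction of hidden units that activate for at least one input in a test set, indicating how thoroughly the tests exercise the network's internals. This is an illustrative toy under assumed conditions (a single random ReLU layer, a zero activation threshold), not the specific metric ISO/PAS 8800:2024 mandates.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))  # one hidden layer: 16 neurons, 8 inputs

def hidden_activations(x):
    return np.maximum(W @ x, 0.0)  # ReLU activations

def neuron_coverage(test_inputs, threshold=0.0):
    """Fraction of neurons firing (> threshold) for at least one test
    input -- a simple structural-coverage proxy for a neural network."""
    fired = np.zeros(W.shape[0], dtype=bool)
    for x in test_inputs:
        fired |= hidden_activations(x) > threshold
    return fired.mean()

test_inputs = [rng.normal(size=8) for _ in range(50)]
coverage = neuron_coverage(test_inputs)  # between 0.0 and 1.0
```

Low coverage flags parts of the network the test suite never exercises; like code coverage in classical software, a high value builds confidence without claiming to explain any individual decision.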

Establishing an AI Management System, as prescribed in the ISO/IEC 42001:2023 standard, along with applying risk reduction at the model-development level as prescribed in the ISO/PAS 8800:2024 standard, builds this confidence and assurance. These standards also help establish a level of transparency that is achievable even for complex machine learning (ML) models. ISO/IEC 22989:2022 defines transparency at the organizational level as “the property of an organization that appropriate activities and decisions are communicated to relevant stakeholders in a comprehensive, accessible and understandable manner”.


Have insights or questions? Leave a comment below—we welcome thoughtful discussion from our technical community.

Interested in learning more? Join us for one of our upcoming training sessions that delve into the ISO/PAS 8800:2024, ISO/IEC TR 5469:2024 and ISO/IEC 42001:2023 standards.


© Copyright 2025 SecuRESafe, LLC. All rights reserved.
