  • Consulting
    • Automotive
      • Functional Safety & Cybersecurity
      • Electric Vehicle (EV) Development
      • Autonomous Product Development
    • Industrial
      • Industrial Functional Safety
      • IACS Cybersecurity
    • Responsible AI
      • Responsible Artificial Intelligence
  • Training
    • Automotive
    • Industrial
    • Responsible AI
  • Company
    • Why SRES Training
    • Leadership
    • Partnerships
    • Careers
  • Insights
  • Contact
Let's Talk
Autonomous Vehicles and Explainable AI (XAI): A Fresh Look
12/16/24

In October 2023, we discussed the concerns surrounding Explainable Artificial Intelligence (XAI) in Autonomous Vehicles (AVs). We concluded with an open question: Does XAI make sense for AVs, and if not, what is necessary?

The ISO/PAS 8800:2024 standard, released on December 13, 2024, offers a definition of XAI: “property of an AI system to express important factors influencing the AI system’s outputs in a way that humans can understand.” The standard highlights the challenges with XAI for the Deep Neural Networks (DNNs) commonly used in automotive applications, noting that pre-hoc global explainability for DNNs can be difficult, while post-hoc local explainability is more achievable through techniques like local linearization or heatmaps. It provides measures for XAI such as attention or saliency maps, structural coverage of AI components, and identification of software units.
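To make the idea of post-hoc local explainability concrete, here is a minimal sketch of a gradient-style saliency map: the model is locally linearized around a single input by finite differences, and the magnitude of each pixel's influence on the output forms a heatmap. The toy logistic "model", the 4x4 input, and the highlighted pixel are all illustrative assumptions, far simpler than any real AV perception network or production XAI tooling.

```python
import numpy as np

def model(x, w):
    # Toy stand-in for a trained classifier's score: a single logistic
    # unit over a flattened 4x4 "image". A real AV perception DNN is far
    # more complex; this only illustrates the mechanics.
    return 1.0 / (1.0 + np.exp(-np.dot(w, x.ravel())))

def saliency_map(x, w, eps=1e-5):
    # Post-hoc local explanation: linearize the model around this one
    # input via finite differences, then use |d(score)/d(pixel)| as the
    # heatmap value for each pixel.
    base = model(x, w)
    grad = np.zeros(x.size)
    for i in range(x.size):
        x_pert = x.ravel().copy()
        x_pert[i] += eps
        grad[i] = (model(x_pert.reshape(x.shape), w) - base) / eps
    return np.abs(grad).reshape(x.shape)

rng = np.random.default_rng(0)
x = rng.random((4, 4))  # one input "image"
w = np.zeros(16)
w[5] = 3.0              # this toy model only attends to pixel (1, 1)
sal = saliency_map(x, w)
# The heatmap concentrates on the single pixel that drives the score.
```

Note that this explains one decision for one input; it says nothing globally about the model, which is exactly the pre-hoc/post-hoc distinction the standard draws.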

The ISO/IEC 22989:2022 and ISO/IEC TR 5469:2024 standards, referenced by ISO/PAS 8800:2024, emphasize the difficulties of achieving XAI in complex neural networks. ISO/IEC 22989:2022 states, “Deep learning neural networks can be problematic since the complexity of the system can make it hard to provide a meaningful explanation of how the system arrives at a decision.” ISO/IEC TR 5469:2024 extends this sentiment, “Generally speaking, even when fully ‘explainable AI’ is not immediately achievable, a methodical and formally documented evaluation of model interpretability is employed in regards to risk, subject to careful consideration of the consequences on functional safety risk that arise from inappropriate decisions…Mitigation is approached through systematic application of the verification and validation process, with careful considerations for the nature of the AI system. Again, ‘explainable AI’ is a future solution, but process-supported solutions are more often available.”

These standards reaffirm the challenges of achieving XAI in complex neural networks with multiple layers. It is true that we cannot express the outputs of Convolutional Neural Networks (CNNs) in a way that a ‘human can understand.’ CNNs, a subset of DNNs, are critical for object detection and image recognition in AVs. However, the takeaway should not be that because this is too difficult, we should do nothing. Instead, it suggests that although we cannot precisely explain the decisions of CNNs, we can gain more confidence in the model by applying proper processes that include adequate verification and validation activities. This confidence, in turn, provides assurance for the concerns that XAI is meant to address.
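One concrete flavor of such verification activity is the "structural coverage of AI components" mentioned above. A well-known example from the research literature is neuron coverage: the fraction of neurons driven into an activated state by at least one test input. The sketch below is a hypothetical illustration on a single toy ReLU layer; the layer shape, threshold, and random test set are assumptions for demonstration, not anything the standards prescribe.

```python
import numpy as np

def relu_layer(x, W, b):
    # One fully connected ReLU layer standing in for an "AI component".
    return np.maximum(0.0, W @ x + b)

def neuron_coverage(inputs, W, b, threshold=0.0):
    # Fraction of neurons driven above `threshold` by at least one test
    # input -- analogous in spirit to code coverage for software.
    activated = np.zeros(b.shape, dtype=bool)
    for x in inputs:
        activated |= relu_layer(x, W, b) > threshold
    return float(activated.mean())

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 4))   # 8 neurons, 4 input features
b = rng.standard_normal(8)
tests = [rng.standard_normal(4) for _ in range(20)]
cov = neuron_coverage(tests, W, b)  # coverage is a fraction in [0, 1]
```

Low coverage signals that parts of the network were never exercised by the test set, which is useful evidence for the V&V argument even when the individual decisions remain unexplainable.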

Establishing an AI Management System, as prescribed in the ISO/IEC 42001:2023 standard, along with applying risk reduction at the model development level as prescribed in the ISO/PAS 8800:2024 standard, builds this confidence and assurance. These standards also help achieve a level of transparency that is possible even for complex machine learning (ML) models. According to ISO/IEC 22989:2022, transparency at the organizational level is defined as “the property of an organization that appropriate activities and decisions are communicated to relevant stakeholders in a comprehensive, accessible and understandable manner.”

Join us in one of our upcoming training sessions that delve into the ISO/PAS 8800:2024, ISO/IEC TR 5469:2024 and ISO/IEC 42001:2023 standards.

  • Company
  • Careers
  • Contact Us
  • info@sres.ai
  • 358 Blue River Pkwy Unit E-274 #2301, Silverthorne, CO 80498

Services
  • Automotive
  • Industrial
  • Responsible AI
  • Training

Resources
  • Insights
  • Video

Legal
  • Privacy Policy
  • Cookie Policy
  • Terms & Conditions
  • Accessibility
  • Consent Preferences

© Copyright 2025 SecuRESafe, LLC. All rights reserved.
