Autonomous Vehicles and Explainable AI (XAI)?
10/26/23

This article offers an in-depth look at topics related to Autonomous Systems and Responsible AI.

For expert-level training—including certification-based programs—on these topics and more, explore our Automotive trainings and Responsible AI trainings. To learn how we support product development, compliance, and organizational safety goals with consulting support, visit our Autonomous Product Development and Responsible Artificial Intelligence pages—or contact us directly.


As you may have seen in the news, on Tuesday, October 24, 2023, the California Department of Motor Vehicles (DMV) suspended General Motors’ Cruise autonomous vehicles from California public roads. The department’s statement reads:

The California DMV today notified Cruise that the department is suspending Cruise’s autonomous vehicle deployment and driverless testing permits, effective immediately. The DMV has provided Cruise with the steps needed to apply to reinstate its suspended permits, which the DMV will not approve until the company has fulfilled the requirements to the department’s satisfaction. This decision does not impact the company’s permit for testing with a safety driver.

The California DMV reported that Cruise did not show it complete video footage of an October 2, 2023 accident. Per the statement, Cruise provided footage up to the point of impact but failed to provide the footage afterwards, in which the vehicle attempted to pull over while dragging the pedestrian approximately 20 feet before coming to a complete stop. According to Cruise, the vehicle was “achieving a minimal risk condition” by trying to get out of the lane of travel after the collision. Cruise has since stated that it will add this scenario to future simulation test suites.

It didn’t take long for Monday-morning quarterbacks to denounce Cruise’s safety efforts rather than discuss the broader issues we face with autonomous vehicles (AVs) as a society. Is there even an infrastructure or governance framework in place to fully support and accept the safe development and deployment of AVs? As humans, we learn from experience, from doing, from examples. Machine Learning (ML) systems also learn by example.

We know that we can’t fully test all real-world scenarios on a controlled NASCAR track, so what level of risk are we willing to accept so that AV developers and ML systems can gain the necessary experience? For full AVs to ever be a reality, there must be a point where the safety driver is absent. I’m not advocating that we blindly put all AVs on public roads and hope for the best; I’m trying to illustrate that there is a lot stacked against an AV developer with sincere, non-malicious intentions. Our US legal system doesn’t provide a “safe space” for developers to openly discuss these complex issues and work together as a community to solve them. The media and public perception tend to drive toward zero risk. There are no harmonized, binary regulations against which to validate safety. Nobody is willing to hold the hot potato of accountability. In this environment it is very difficult to envision how AVs could ever be accepted without a safety driver. Maybe the only solution is to abandon the application of full autonomy in passenger vehicles?

This is one of many complexities raised by Artificial Intelligence (AI) in general: how do we gain enough trust to remove the human from the loop and rely fully on the machine (if that is ever possible)?

There are some frameworks built around Responsible AI (RAI) intended to increase trust, but I’m uncertain whether they go far enough to support AV development. For instance, some common pillars seen across multiple industries working on RAI are:

  1. Establishment of ethical values and AI principles that are applied across the practice of AI. Organizations are encouraged to be guided by these values and principles, creating a culture that embraces them;
  2. Development of a governance structure for both data and algorithms, including an ethics committee or board, auditing processes, and a means of accountability, ensuring the organization follows its AI principles and engages with the processes it has developed;
  3. Creation of real-world scenarios for the ethics committee to “role play” through, asking, “How do we react to each of these scenarios?”;
  4. Public disclosure of decisions through some form of Explainable AI (XAI) in at least two domains: one that is understandable to the public (including regulators), and another that supports technical communities in improving and understanding the technology and its limitations.

For the last of these, XAI, the intent is to better explain what is happening inside the “black box”. Common examples are AI tools that determine loan approvals or assess potential recidivism risk. In many of these cases, model explainability methods such as Local Interpretable Model-agnostic Explanations (LIME) or SHapley Additive exPlanations (SHAP) can be used to approximate how the “black box” determines its outputs. In concept, this seems like a good idea for AVs: it could help us understand and learn from incidents, make AI more transparent, and build trust with the public and regulators. However, these methods break down as the complexity of the “black box” grows, and they add the effort, cost, and tuning of a parallel model, not to mention that for an AV the XAI model would have to work in real time. I’m unaware of any AV developer today that has published the successful use of XAI. Are we at a point where we (or governments) should demand more explainable and transparent AI? Or is it a pillar that still doesn’t help the industry enough to gain the needed trust? Is society’s risk aversion simply too high to ever accept the idea of driverless AVs altogether? What are your thoughts?
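
To make the idea concrete, here is a minimal sketch of post-hoc explainability with SHAP on a simple tabular model, the kind of offline, low-dimensional setting (like loan approvals) where these methods work reasonably well today. All data, feature names, and parameters below are hypothetical illustrations, not drawn from any AV developer’s system.

```python
# A minimal SHAP sketch on a hypothetical loan-approval model.
# Everything here is illustrative; nothing is from a real system.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical applicant features: income, debt ratio, credit history length.
X = rng.normal(size=(500, 3))
# Synthetic "approval score" driven mostly by the first two features.
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500)

# The "black box": an ensemble whose internals are hard to read directly.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer decomposes each prediction into additive per-feature
# Shapley contributions; note that this is a separate, offline computation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # shape: (5 samples, 3 features)

for i, contrib in enumerate(shap_values):
    # Positive values pushed the prediction up; negative pushed it down.
    print(f"sample {i}: income={contrib[0]:+.3f}, "
          f"debt_ratio={contrib[1]:+.3f}, history={contrib[2]:+.3f}")
```

Even in this toy setting, the explainer is a second artifact that must be built, tuned, and validated alongside the model; scaling that to a perception-and-planning stack running at vehicle frame rates is exactly the open problem described above.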


Have insights or questions? Leave a comment below—we welcome thoughtful discussion from our technical community.

Interested in learning more about our services? Find all upcoming trainings here and all consulting offerings here.

