In October 2023, we discussed the concerns surrounding Explainable Artificial Intelligence (XAI) in Autonomous Vehicles (AVs). We concluded with an open question: Does XAI make sense for AVs, and if not, what is necessary?
The ISO/PAS 8800:2024 standard, released on December 13, 2024, offers a definition of XAI: “property of an AI system to express important factors influencing the AI system’s outputs in a way that humans can understand.” The standard highlights the challenges of applying XAI to the Deep Neural Networks (DNNs) commonly used in automotive applications, noting that pre-hoc global explainability for DNNs can be difficult, while post-hoc local explainability is more achievable through techniques like local linearization or heatmaps. It provides measures for XAI such as attention or saliency maps, structural coverage of AI components, and identification of software units.
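To make that distinction concrete, the sketch below shows one such post-hoc, local technique: a gradient-based saliency map that highlights which pixels of a single input most influenced a model’s prediction. This is our illustration, not material from the standard, and it assumes PyTorch with a torchvision ResNet-18 standing in as a placeholder for an automotive perception CNN.

```python
# Minimal sketch of a post-hoc, local explainability measure: a gradient-based
# saliency map. The model and input here are illustrative placeholders.
import torch
from torchvision import models

model = models.resnet18(weights=None)  # in practice, a trained perception model
model.eval()

# A random tensor stands in for a single normalized camera frame.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
scores = model(image)
predicted_class = scores.argmax(dim=1).item()
scores[0, predicted_class].backward()

# The saliency map is the largest absolute gradient across color channels:
# pixels with larger values influenced this particular prediction more.
saliency = image.grad.abs().max(dim=1).values.squeeze()  # shape: (224, 224)
print(saliency.shape)
```

The result explains only this one prediction for this one input, which is exactly why such techniques are local rather than global explanations of the model’s behavior.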
The ISO/IEC 22989:2022 and ISO/IEC TR 5469:2024 standards, referenced by ISO/PAS 8800:2024, emphasize the difficulties of achieving XAI in complex neural networks. ISO/IEC 22989:2022 states, “Deep learning neural networks can be problematic since the complexity of the system can make it hard to provide a meaningful explanation of how the system arrives at a decision.” ISO/IEC TR 5469:2024 extends this sentiment, “Generally speaking, even when fully ‘explainable AI’ is not immediately achievable, a methodical and formally documented evaluation of model interpretability is employed in regards to risk, subject to careful consideration of the consequences on functional safety risk that arise from inappropriate decisions…Mitigation is approached through systematic application of the verification and validation process, with careful considerations for the nature of the AI system. Again, ‘explainable AI’ is a future solution, but process-supported solutions are more often available.”
These standards reaffirm the challenges of achieving XAI in complex, multi-layered neural networks. Convolutional Neural Networks (CNNs), a subset of DNNs, are critical for object detection and image recognition in AVs, and it is true that we cannot express the factors driving their outputs in a way that “humans can understand.” However, the takeaway should not be that because it is too difficult, we should do nothing. Rather, although we cannot precisely explain the decisions of CNNs, we can gain more confidence in a model by applying proper processes that include adequate verification and validation activities. This confidence, in turn, helps provide assurance against the concerns that motivate XAI.
Establishing an AI Management System, as prescribed in the ISO/IEC 42001:2023 standard, along with applying risk reduction at the model development level, as prescribed in the ISO/PAS 8800:2024 standard, builds this confidence and assurance. These standards also help establish a level of transparency that is achievable even for complex machine learning (ML) models. ISO/IEC 22989:2022 defines transparency at the organizational level as “the property of an organization that appropriate activities and decisions are communicated to relevant stakeholders in a comprehensive, accessible and understandable manner”.
Join us in one of our upcoming training sessions that delve into the ISO/PAS 8800:2024, ISO/IEC TR 5469:2024, and ISO/IEC 42001:2023 standards.