As you may have seen in the news, on Tuesday, October 24, 2023, the California Department of Motor Vehicles (DMV) suspended General Motors’ Cruise autonomous vehicles from California public roads. The department’s statement reads:
The California DMV today notified Cruise that the department is suspending Cruise’s autonomous vehicle deployment and driverless testing permits, effective immediately. The DMV has provided Cruise with the steps needed to apply to reinstate its suspended permits, which the DMV will not approve until the company has fulfilled the requirements to the department’s satisfaction. This decision does not impact the company’s permit for testing with a safety driver.
The California DMV reported that Cruise did not show it complete video footage of an October 2, 2023 accident. Cruise provided footage up to the point of impact but failed to provide the footage afterwards, in which the vehicle attempted to pull over while dragging the pedestrian approximately 20 feet before coming to a complete stop. According to Cruise, the vehicle was “achieving a minimal risk condition” by trying to get out of the lane of travel after a collision. Cruise stated that it will add this scenario to future simulation test suites.
It didn’t take long for several Monday-morning quarterbacks to denounce Cruise’s safety efforts, rather than discuss the broader issues we face with autonomous vehicles (AVs) as a society. Is there even an infrastructure or governance framework to fully support and accept the safe development and deployment of AVs? As humans, we learn from experience, from doing, from examples. Machine Learning (ML) systems also learn by example.
We know that we can’t fully test all real-world scenarios on a controlled NASCAR track, so what level of risk are we willing to accept to allow AV developers and ML systems to gain the necessary experience? For full AVs to ever be a reality, there must be a point where the safety driver is absent. I’m not advocating that we blindly put all AVs on public roads and hope for the best; I’m just trying to illustrate that there is a lot stacked against an AV developer with sincere, non-malicious intentions. Our US legal system doesn’t promote a “safe space” for developers to openly discuss and work together as a community to solve these complex issues. The media and public perception tend to drive toward zero risk. There are no harmonized, pass/fail regulations to validate against. Nobody is willing to hold the hot potato of accountability. In this environment it is very difficult to envision how AVs could ever be accepted without a safety driver. Maybe the only solution is to abandon the application of full autonomy in passenger vehicles?
This is one of many complexities raised by Artificial Intelligence (AI) in general: how do we gain enough trust to remove the human from the loop and fully rely on the machine (if that is ever possible)?
There are some frameworks built around Responsible AI (RAI) intended to increase trust, but I’m uncertain whether they go far enough to support AV development. For instance, some common pillars seen across multiple industries working on RAI are:
- Establishment of ethical values and AI principles, which are applied across the practice of AI. Organizations are encouraged to be guided by these values and principles, creating a culture that embraces them;
- Developing a governance structure for both data and algorithms, including an ethics committee/board, processes for auditing, and a means of accountability, along with the need to ensure the organization follows its AI principles and engages with the processes it has developed;
- The organization should create real-world scenarios for the ethics committee to “role play” through, asking, “How would we react to each of these scenarios?”;
- The need to explain decisions publicly through some form of Explainable AI (XAI) in at least two domains: one that is understandable to the public (including regulators), and another that supports technical communities in improving and better understanding the technology and its limitations.
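The two-domain idea in that last pillar can be sketched with a toy model. The following Python snippet is purely illustrative — the features, weights, and threshold are all invented, not drawn from any real AV stack — but it shows how a single decision could be surfaced once in plain language for the public and once as a per-feature score breakdown for engineers and auditors:

```python
# Hypothetical "two-domain" XAI sketch: one decision, two explanations.
# All feature names, weights, and the threshold are invented for illustration.

WEIGHTS = {"pedestrian_detected": 3.0, "closing_speed_mps": 0.5, "lane_blocked": 1.0}
BRAKE_THRESHOLD = 2.0

def decide(features: dict) -> tuple[bool, dict]:
    """Score the scene with a toy linear model; return (brake?, contributions)."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    return sum(contributions.values()) >= BRAKE_THRESHOLD, contributions

def public_explanation(brake: bool, features: dict) -> str:
    """Plain-language summary aimed at the public and regulators."""
    reason = ("a pedestrian was detected" if features["pedestrian_detected"]
              else "no pedestrian was detected")
    return f"The vehicle {'braked' if brake else 'did not brake'} because {reason}."

def technical_explanation(contributions: dict) -> str:
    """Per-feature score breakdown aimed at a technical audience."""
    parts = ", ".join(f"{k}={v:+.2f}" for k, v in sorted(contributions.items()))
    return f"score={sum(contributions.values()):.2f} (threshold={BRAKE_THRESHOLD}): {parts}"

scene = {"pedestrian_detected": 1, "closing_speed_mps": 4.0, "lane_blocked": 0}
brake, contributions = decide(scene)
print(public_explanation(brake, scene))
print(technical_explanation(contributions))
```

A real system would of course be far more complex, but the design point stands: the same underlying decision record feeds both audiences, so the public-facing account can never drift from what the technical logs actually show.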