Over the past few months, we’ve posted about recent AI incidents involving ethics and safety. Such incidents underscore the need for responsible AI (RAI): developing and using AI in a safe, trustworthy, and ethical manner. RAI is not only being adopted by major AI technology organizations but was also the subject of an Executive Order (E.O.) issued by the Biden Administration in late October 2023: E.O. 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The E.O. states its purpose as follows:
“Artificial Intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.”
But how do we develop responsible AI systems? Until recently, there was no standard addressing how organizations should develop AI systems responsibly. In December 2023, the ISO/IEC 42001 (Information technology – Artificial intelligence – Management system) standard was released; it is intended to help organizations responsibly perform their role with respect to AI systems by establishing an AI management system. The standard’s scope is agnostic to industry, product, service, and even organization size.
ISO/IEC 42001 takes AI accountability head-on, stating that top management is responsible for demonstrating leadership and commitment with respect to the AI management system; in other words, for establishing the organization’s culture. The standard takes a top-down approach. Top management must formally express the organization’s intentions and direction through an AI policy, which provides the framework for AI objectives. The AI objectives must be measurable and show that specific results were achieved. Top management is also responsible for assigning responsibility and authority for relevant roles to ensure the AI management system meets its requirements. Part of this is creating governing bodies in the form of committees or boards, such as an Ethics Board or an Algorithm and Data Risk Committee.
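To make “measurable AI objectives” concrete, here is a minimal sketch of how an organization might record objectives and check whether their targeted results were achieved. The schema, field names, and example values are our own illustration; ISO/IEC 42001 does not prescribe any particular format.

```python
from dataclasses import dataclass

@dataclass
class AIObjective:
    """One measurable AI objective under the organization's AI policy.
    Field names are illustrative, not prescribed by ISO/IEC 42001."""
    name: str        # what the objective is
    metric: str      # how the objective is measured
    target: float    # the specific result to be achieved
    measured: float  # the most recent measurement
    owner: str       # role assigned responsibility by top management

    def achieved(self) -> bool:
        # An objective only counts as met when the measured value
        # reaches or exceeds its target.
        return self.measured >= self.target

# Hypothetical objectives tied to the AI policy
objectives = [
    AIObjective("Model documentation coverage",
                "% of production models with completed model cards",
                target=100.0, measured=92.5, owner="Head of ML Engineering"),
    AIObjective("Incident triage readiness",
                "% of AI incidents triaged within 24 hours",
                target=95.0, measured=97.0, owner="Algorithm and Data Risk Committee"),
]

for obj in objectives:
    status = "met" if obj.achieved() else "NOT met"
    print(f"{obj.name}: {obj.measured} vs. target {obj.target} ({status}), owner: {obj.owner}")
```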
Another important aspect of the ISO/IEC 42001 standard is the planning and implementation of AI risk assessments, AI risk treatments, and AI system impact assessments. The AI risk assessment identifies risks that could prevent the AI objectives from being achieved and assesses the potential consequences if those risks were to materialize. The organization must then define options to address these risks and determine all controls necessary to implement the AI risk treatment in support of its AI objectives. The AI system impact assessment determines the potential consequences of deploying the AI system and must consider reasonably foreseeable misuse.
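As an illustration of how such a risk register might be tracked in practice, the sketch below scores each risk on hypothetical 1-to-5 likelihood and consequence scales and attaches a treatment option and its implementing controls. The likelihood-times-consequence scoring is a common convention, not something mandated by the standard.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """An entry in a hypothetical AI risk register."""
    description: str
    likelihood: int      # 1 (rare) .. 5 (almost certain), illustrative scale
    consequence: int     # 1 (negligible) .. 5 (severe), illustrative scale
    treatment: str       # e.g. "mitigate", "avoid", "transfer", "accept"
    controls: list[str]  # controls implementing the risk treatment

    def score(self) -> int:
        # A common (non-mandated) prioritization: likelihood x consequence.
        return self.likelihood * self.consequence

risks = [
    AIRisk("Training data drift degrades accuracy for a user subgroup",
           likelihood=4, consequence=4, treatment="mitigate",
           controls=["scheduled drift monitoring", "subgroup performance dashboards"]),
    AIRisk("Foreseeable misuse: model repurposed for unvetted high-stakes decisions",
           likelihood=2, consequence=5, treatment="mitigate",
           controls=["documented intended-use policy", "API access review"]),
]

# Surface the highest-scoring risks first so treatment effort follows exposure.
for r in sorted(risks, key=lambda r: r.score(), reverse=True):
    print(f"[{r.score():>2}] {r.description} -> {r.treatment}: {', '.join(r.controls)}")
```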
Those involved in the AI management system must have appropriate competencies and an awareness of the AI policy and of their role in supporting compliance. The organization is also required to have appropriate internal and external communications related to its AI management system. Communication is an important aspect of transparency and trustworthiness.
Finally, the ISO/IEC 42001 standard requires, at a minimum, cyclic internal audits to confirm that the organization meets both its own AI management system requirements and the requirements of ISO/IEC 42001. Everything contained within the AI management system is expected to follow a continual improvement process.
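Below is a minimal sketch of what tracking those cyclic audits and the continual improvement loop could look like, assuming a hypothetical annual cadence (the standard requires audits at planned intervals but does not fix a specific frequency):

```python
from datetime import date, timedelta

# Hypothetical cadence; ISO/IEC 42001 requires audits at planned
# intervals but does not mandate a specific frequency.
AUDIT_INTERVAL = timedelta(days=365)

# Each entry records an internal audit and how many of its
# findings (nonconformities) have been closed so far.
audit_log = [
    {"date": date(2023, 3, 1), "findings": 4, "closed": 4},
    {"date": date(2024, 2, 15), "findings": 2, "closed": 1},
]

def next_audit_due(log):
    # The next audit is due one interval after the most recent one.
    return max(entry["date"] for entry in log) + AUDIT_INTERVAL

def open_nonconformities(log):
    # Continual improvement: every finding must be tracked to closure.
    return sum(entry["findings"] - entry["closed"] for entry in log)

print("Next internal audit due by:", next_audit_due(audit_log))
print("Open nonconformities to address:", open_nonconformities(audit_log))
```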
For further guidance on responsible AI, visit the SRES website for training and workshops related to RAI aligned with the ISO/IEC 42001:2023 standard: https://sres.ai/training/automotive-adas-and-av-responsible-ai-training/.