Responsible AI requires an AI Management System – A look into ISO/IEC 42001:2023
03/08/24


This article offers an in-depth look at topics related to Responsible AI.

For expert-level training—including certification-based programs—on these topics and more, explore our Responsible AI trainings. To learn how we support product development, compliance, and organizational safety goals with consulting support, visit our Responsible Artificial Intelligence page—or contact us directly.


Over the past few months, we've posted about recent AI incidents involving ethics and safety. Such incidents underscore the need for responsible AI (RAI), the practice of employing AI in a safe, trustworthy, and ethical manner. RAI is not only being adopted by major AI technology organizations; it was also the subject of an Executive Order (E.O.) issued by the Biden Administration in late October 2023: E.O. 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The E.O. states its purpose as follows:

“Artificial Intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.”

But how do we develop responsible AI systems? Until recently, there hadn't been a standard addressing how organizations should develop AI systems responsibly. In December 2023, the ISO/IEC 42001 (Information technology – Artificial intelligence – Management system) standard was released. It is intended to help organizations responsibly perform their roles with respect to AI systems by establishing an AI management system, and its scope is agnostic to industry, product, service, and even organization size.

ISO/IEC 42001 takes AI accountability head on with a top-down approach: top management is responsible for demonstrating leadership and commitment with respect to the AI management system, in other words for establishing the organization's culture. Top management must formally express the organization's intentions and direction through an AI policy, which provides the framework for AI objectives. The AI objectives need to be measurable and show that specific results were achieved. Top management is also responsible for assigning responsibility and authority for relevant roles to ensure the AI management system's compliance. Part of this is creating governing bodies in the form of committees or boards, such as an Ethics Board or an Algorithm and Data Risk Committee.
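
To make the idea of measurable AI objectives with assigned ownership more concrete, here is a minimal sketch, assuming a simple record per objective with a quantified target and a responsible role. The class, field names, and example values are our own illustrative assumptions; ISO/IEC 42001 defines what an objective must satisfy, not how it is recorded.

```python
from dataclasses import dataclass

@dataclass
class AIObjective:
    """One measurable objective framed by the AI policy (illustrative fields only)."""
    description: str      # what the organization intends to achieve
    metric: str           # how achievement is measured
    target: float         # the measurable threshold to reach
    current_value: float  # latest measured value
    owner_role: str       # role assigned responsibility and authority for this objective

    def achieved(self) -> bool:
        # An objective counts as achieved when the measured value meets its target.
        return self.current_value >= self.target

# Hypothetical objectives an AI policy might frame; values are made up.
objectives = [
    AIObjective(
        description="Complete an AI impact assessment before every deployment",
        metric="percent of deployed AI systems with a completed impact assessment",
        target=100.0,
        current_value=92.0,
        owner_role="Algorithm and Data Risk Committee",
    ),
    AIObjective(
        description="Keep staff awareness of the AI policy current",
        metric="percent of in-scope staff with up-to-date RAI training",
        target=95.0,
        current_value=97.0,
        owner_role="Ethics Board",
    ),
]

for obj in objectives:
    status = "achieved" if obj.achieved() else "not yet achieved"
    print(f"{obj.metric}: {obj.current_value} / {obj.target} ({status}), owner: {obj.owner_role}")
```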

Another important aspect of the ISO/IEC 42001 standard is the planning and implementation of AI risk assessments, AI risk treatments, and AI system impact assessments. The AI risk assessment identifies risks that could prevent the AI objectives from being achieved and assesses the potential consequences if the identified risks were to materialize. The organization needs to define options to address these risks and determine all controls necessary to implement the AI risk treatment and meet its AI objectives. The AI system impact assessment determines the potential consequences of the AI system's deployment and must consider foreseeable misuse.
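
As an illustration of how these three activities fit together, the sketch below models one risk-register entry that links an identified risk to its assessed consequence, the controls selected as treatment, and an impact assessment that records foreseeable misuse. The class and field names are assumptions made for this example; they are not terminology or a format defined by the standard.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative structures only; ISO/IEC 42001 specifies what the assessments must
# cover, not how the records are stored or named.

@dataclass
class AISystemImpactAssessment:
    affected_parties: list[str]        # individuals or groups the deployment could affect
    potential_consequences: list[str]  # possible consequences of deploying the AI system
    foreseeable_misuse: list[str]      # misuse scenarios that must be considered

@dataclass
class AIRiskEntry:
    risk: str                          # what could prevent an AI objective from being achieved
    related_objective: str             # the AI objective the risk threatens
    consequence: str                   # assessed consequence if the risk materializes
    likelihood: str                    # e.g. "low", "medium", "high"
    treatment_controls: list[str] = field(default_factory=list)  # controls chosen to treat the risk
    impact_assessment: Optional[AISystemImpactAssessment] = None

# Hypothetical register entry for an automotive perception system.
entry = AIRiskEntry(
    risk="Training data under-represents rare weather conditions",
    related_objective="Object detection meets its defined performance target",
    consequence="Missed detections in conditions absent from the training data",
    likelihood="medium",
    treatment_controls=["Dataset coverage analysis", "Operational design domain restrictions"],
    impact_assessment=AISystemImpactAssessment(
        affected_parties=["vehicle occupants", "other road users"],
        potential_consequences=["collision caused by a missed detection"],
        foreseeable_misuse=["engaging the feature outside its operational design domain"],
    ),
)

print(f"Risk: {entry.risk}")
print(f"Treatment controls: {', '.join(entry.treatment_controls)}")
```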

Those involved in the AI management system must have appropriate competence and awareness of the AI policy and of their role in supporting compliance. The organization is also required to have appropriate internal and external communications related to its AI management system. Communication is an important aspect of transparency and trustworthiness.

Finally, ISO/IEC 42001 requires the organization, at a minimum, to conduct internal audits at planned intervals to confirm that it meets both its own AI management system requirements and the requirements of ISO/IEC 42001. Everything contained within the AI management system is expected to follow a continual improvement process.
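
One simple way to picture the cyclic audit requirement is as a recurring schedule check over the areas of the AI management system, with findings feeding the next improvement cycle. The sketch below assumes a 12-month interval and made-up audit areas purely for illustration; the standard requires audits at planned intervals but does not prescribe a cadence.

```python
from datetime import date, timedelta

# Assumed 12-month internal audit interval; ISO/IEC 42001 leaves the audit
# programme's frequency and scope to the organization.
AUDIT_INTERVAL = timedelta(days=365)

# Hypothetical audit areas with made-up dates of the last completed internal audit.
last_audited = {
    "AI policy and objectives": date(2023, 3, 15),
    "AI risk assessment and treatment": date(2023, 11, 2),
    "AI system impact assessments": date(2022, 12, 20),
}

def audit_status(today: date) -> None:
    """Print which areas are due for their next internal audit."""
    for area, last in last_audited.items():
        due = last + AUDIT_INTERVAL
        status = "OVERDUE" if due < today else f"next audit due {due.isoformat()}"
        print(f"{area}: last audited {last.isoformat()} ({status})")

audit_status(date(2024, 3, 8))
```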

For further guidance on responsible AI, visit the SRES website for training and workshops related to RAI aligned with the ISO/IEC 42001:2023 standard: https://sres.ai/training/automotive-adas-and-av-responsible-ai-training/.


Have insights or questions? Leave a comment below—we welcome thoughtful discussion from our technical community.

Interested in learning more about our services? Find all upcoming trainings here and all consulting offerings here.

