Short series: Introduction to the ISO/IEC 42001 Standard
08/08/24

ISO/IEC 42001 establishes our AI Management System and emphasizes the ethical and responsible use of AI. In this episode we discuss the layout of the standard and dig into the risk-based approach prescribed by the standard.

Transcript (auto-generated)

Jody Nelson, SRES Shorts. In this episode, I want to give a brief introduction to the ISO/IEC 42001 standard. So what is the 42001? Well, it establishes the AI management system. This is very similar to how the IATF 16949 establishes the QMS, or quality management system, for us in automotive. But the 42001 is specific to AI.

It emphasizes the ethical and responsible use of AI, what we commonly refer to as responsible AI, and it's applicable to all industries and organizations of all sizes. So, what are some of the benefits of RAI beyond the current headlines? Well, Harvard Business Review reported that most AI projects actually fail, claiming a failure rate as high as 80%.

Now, the Boston Consulting Group found that companies that prioritize scaling their RAI programs over scaling their AI capabilities experience nearly 30 percent fewer AI failures. That can have a big impact, not just on quality, but also on cost.

So, a brief overview of the standard. This is directly out of the standard, clauses 5 through 10. But before we jump in, I'd like to reorder this, and I recommend that those of you getting into the 42001 reorder it as well. I moved seven, Support, up next to five, Leadership. This is our organizational setup. We're establishing our leadership, our accountability, our AI policies, our roles and responsibilities, our resources, the competencies we need, and the communication processes.

So that gets set up up front. Then I grouped six and eight together. This is our risk-based approach. And then you'll see I made a small switch: I brought the AI risk assessment up next to the AI system impact assessment, and I'll talk about that in a little bit. And then the last portion is the actual deployment.

So evaluating and monitoring the AI that's being deployed, and then having a continual improvement process. Where I'm going to focus in this episode is the AI policy and AI objectives, because those lead into performing our risk assessment and our system impact assessment, which then lead into determining what our AI risk treatment plan will be.
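
To make that regrouping concrete, here is a minimal sketch of it as a plain data structure (Python used only as notation; the clause titles follow ISO's harmonized structure for management system standards, so treat the exact wording as an assumption):

# A sketch of the clause regrouping described above. Clause titles follow
# ISO's harmonized structure; the exact wording in 42001 may differ.
regrouped_clauses = {
    "organizational setup": {5: "Leadership", 7: "Support"},
    "risk-based approach":  {6: "Planning", 8: "Operation"},  # 8.2 risk assessment paired with 8.4 impact assessment
    "deployment":           {9: "Performance evaluation", 10: "Improvement"},
}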

What is our AI policy? This is our overall framework, where we outline the high-level principles, guidelines, and strategic direction of our organization for either the development or the use of AI systems. This includes our business strategy. What are our organizational values? What is our culture? And what is our appetite for risk?

So what is our risk environment? What amount of risk are we willing to pursue? We're going to discuss our legal requirements here, and any impacts to relevant parties as well. Next are our AI objectives. These are specific, measurable goals, set within the context of the AI policy, that guide the implementation and management of our AI system.

These are very common topics that you see when you look into RAI, such as fairness, accountability, traceability, reliability, safety, privacy, and security. Now, we're going to take that AI policy, we're going to take those AI objectives, and they become the inputs to our risk assessment.
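
Before we get there, a quick sketch of what "specific, measurable" can look like in practice; the metric names and thresholds are hypothetical illustrations, not values from the standard:

# Hypothetical AI objectives expressed as measurable targets. The metric
# names and thresholds are illustrative assumptions, not from the standard.
ai_objectives = {
    "fairness":    {"metric": "demographic parity gap", "target_max": 0.05},
    "reliability": {"metric": "model accuracy",         "target_min": 0.95},
    "privacy":     {"metric": "re-identification rate", "target_max": 0.0},
}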

And again, we're going to combine the AI risk assessment in 8.2 with the AI system impact assessment of 8.4. So our first step is to list out all the risks that could violate the AI objectives. Then we assess the potential consequences for individuals, groups of individuals, and societies. Then we analyze the likelihood of these risks occurring, and based on that we prioritize the risks and establish a risk level for each of them.
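
As a minimal sketch of that assessment flow, assuming a simple likelihood-times-severity scoring scheme (neither the scheme nor the example risks come from the standard; both are illustrative assumptions):

from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str   # the risk that could violate an AI objective
    objective: str     # the AI objective it threatens, e.g. "fairness"
    severity: int      # consequence for individuals, groups, society (1-5)
    likelihood: int    # how likely the risk is to occur (1-5)

    @property
    def risk_level(self) -> int:
        # Simple ordinal scheme: likelihood x severity. ISO/IEC 42001 does
        # not prescribe a formula; this is an assumption for illustration.
        return self.likelihood * self.severity

def prioritize(risks):
    # Highest risk level first, so treatment effort goes where it matters most.
    return sorted(risks, key=lambda r: r.risk_level, reverse=True)

risks = [
    AIRisk("biased hiring recommendations", "fairness", severity=4, likelihood=3),
    AIRisk("training data leaks personal data", "privacy", severity=5, likelihood=2),
]
for r in prioritize(risks):
    print(f"{r.risk_level:>2}  {r.description} (objective: {r.objective})")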

We then apply the appropriate AI risk treatments based on that risk level. Now, what is the AI risk treatment? Well, these are the processes and controls that we establish to mitigate these AI risks. Within the 42001, it identifies a minimum of 37 controls that we have to evaluate as part of our treatment. We can add more, but the standard provides us these 37. And just to give you an example of some of them, I'll read through a few. The organization should define, document, and implement data management processes.
The organization should determine and document details about the acquisition and selection of data, so making sure we get good data. The next one is regarding data quality: ensuring that the data used to develop and operate the AI system meets all of our requirements. And again, remember, we're tracing a lot of these attributes back to our AI objectives.

And then the organization should identify and document objectives to guide the responsible development of AI.

And then lastly, as shown here, we have to ensure that we have a responsible approach to developing this AI system, one that considers the end customer, the end user.
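
To ground the data-management and data-quality controls in something concrete, here is a minimal sketch of a documented data-quality check. The specific fields and threshold are illustrative assumptions; the standard asks that data requirements be defined and documented, not that these particular checks be used:

# Illustrative data-quality gate for an AI system's training data. The fields
# and threshold are assumptions for illustration; ISO/IEC 42001 asks that data
# requirements be defined and documented, not that these checks be used.

def check_data_quality(records, required_fields, max_missing_ratio=0.05):
    """Return (passed, findings) for a batch of training records."""
    findings = []
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) in (None, ""))
        ratio = missing / len(records) if records else 1.0
        if ratio > max_missing_ratio:
            findings.append(
                f"field '{field}': {ratio:.1%} missing (limit {max_missing_ratio:.0%})"
            )
    return (not findings), findings

records = [
    {"age": 34, "income": 52000, "label": "approve"},
    {"age": None, "income": 48000, "label": "deny"},
]
passed, findings = check_data_quality(records, ["age", "income", "label"])
print("passed" if passed else "failed", findings)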

 
