Walking on Thin Ice: The Risks of Ignoring Responsible AI in AI System Deployments
08/13/24


This article offers an in-depth look at topics related to Responsible AI.

For expert-level training—including certification-based programs—on these topics and more, explore our Responsible AI trainings. To learn how we support product development, compliance, and organizational safety goals with consulting support, visit our Responsible Artificial Intelligence page—or contact us directly.


Roughly two years after the launch of OpenAI’s ChatGPT, the event that marked the start of the current era of mass AI adoption, AI incidents have surged over 70%, per the 2024 AI Index Report. The trend shows up in the headlines: “A Self-Driving Cruise Robot Taxi Reportedly Struck and Dragged a Pedestrian 20 Feet”; “Rite Aid Facial Recognition Disproportionately Misidentified Minority Shoppers as Shoplifters.” Such headlines highlight a disturbing reality: organizations are deploying AI systems with insufficient oversight, and those systems are causing significant harm. Naturally, this leads us to ask: how do we control AI systems to prevent or reduce negative impacts?

Let’s look closer at an incident involving Meta’s AI systems. The company uses AI to moderate content on its social media platforms, including ad space. Since March, Meta has been under federal investigation for running ads that promote online marketplaces for illicit drugs, ads that violate its own policies. Yet a review by the nonprofit Tech Transparency Project found more than 450 ads promoting illicit drugs in Meta’s ad library between March and June. Tragically, the same promotion of drugs resulted in the death of 15-year-old Elijah Ott, who purchased drugs laced with fentanyl. The conclusion is cut and dried: this AI system cannot safely moderate Meta’s social media platforms.

But suppose we were to implement a similar system for a similar social media platform. How would we go about making it a safer AI system? Interest has amassed in a promising approach: responsible AI (RAI). ISO defines it as “an approach to developing and deploying artificial intelligence from both an ethical and legal standpoint; the goal is to employ AI in a safe, trustworthy, and ethical way.” That definition is vague, however; it refers to an “approach” that is never defined. Luckily, we have guidance in the form of ISO standards:
  1. ISO/IEC 22989 – Artificial intelligence concepts and terminology
  2. ISO/IEC 5338 – AI system life cycle processes
  3. ISO/IEC 23894 – Artificial intelligence – Guidance on risk management
  4. ISO/IEC 38507 – Governance implications of the use of artificial intelligence by organizations
Concepts from the above standards were chosen and used to create:
  1. ISO/IEC 42001 – Artificial intelligence – Management system
The standard centers on the AI Management System (AIMS), its primary work product. The AIMS itself is structured around four main work products:
  1. AI policies – define the organization’s approach to RAI and ensure alignment with organizational values and legal requirements; everything within the scope of the AIMS must align with the AI policy
  2. AI objectives – measurable (where practicable), AI system-specific results derived from the AI policies; every AI system must achieve its respective AI objectives
  3. RAI processes and controls – processes that allow the AIMS to interface with the AI life cycle, ensuring compliance with AI policies and objectives
  4. AI governance – the organizational structure, internal audit, and management review
[Figure: Diagrammatic representation of the AIMS as described in ISO/IEC 42001:2023]
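To make the relationships between these work products concrete, they can be sketched as a small traceability model: controls implement objectives, and objectives derive from policies. This is an illustrative sketch, not part of ISO/IEC 42001; all class and variable names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AIPolicy:
    name: str
    statement: str

@dataclass
class AIObjective:
    # Every AI objective is derived from an AI policy.
    name: str
    policy: AIPolicy

@dataclass
class RAIControl:
    # Every RAI process/control implements an AI objective.
    name: str
    objective: AIObjective

def traces_to_policy(control: RAIControl) -> AIPolicy:
    """Governance check: walk a control back to the policy it supports."""
    return control.objective.policy

safety = AIPolicy("Safety", "No unacceptable risk of harm to users.")
oversight = AIObjective("Human oversight", policy=safety)
review_gate = RAIControl("Manual review of flagged ads", objective=oversight)

assert traces_to_policy(review_gate) is safety
```

The point of the sketch is the chain itself: a control that cannot be traced back to a policy has no place inside the scope of the AIMS.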

With the necessary work products defined, we can apply processes and controls to prevent harm to users. To address this problem, we start at the core of the AIMS and establish an AI policy on the topic of harm, or risk of harm, to users. For example:

Safety – the AI system should pose no unacceptable risk of physical or psychological harm to any individuals, groups, or societies that are involved in the development, provision, or use of AI systems.

Proceeding to the AI objectives, we need a measurable, monitorable goal that realizes this safety policy. For example:

Human oversight – when performing inference, the AI system shall always be supported by competent personnel who validate and flag incorrect model outputs and who make the final decision to classify an ad as conforming to or in violation of policy.
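A minimal sketch of how this objective might be enforced in code, for our hypothetical ad-moderation system (function and parameter names are illustrative): the model score only advises, and a human makes the final call.

```python
def moderate_ad(model_score: float, reviewer_verdict, threshold: float = 0.5) -> dict:
    """Human-oversight gate: the model flags, a competent reviewer decides."""
    ai_flag = model_score >= threshold          # advisory signal only
    if reviewer_verdict is None:
        # No competent reviewer available: fail safe, never auto-publish.
        decision = "withheld_pending_review"
    else:
        # The human makes the final call; the AI flag is kept for audit.
        decision = "violation" if reviewer_verdict else "conforming"
    return {"ai_flag": ai_flag, "decision": decision}
```

For example, `moderate_ad(0.91, None)` withholds the ad rather than auto-approving it, which is exactly the behavior the objective mandates when no reviewer is in the loop.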

RAI processes and controls allow us to implement the AI objective of human oversight. These include:

Risk management processes (preemptive) to:

  1. Assess the likelihood and consequences of risk
  2. Determine and implement a control (a process to modify risk) to mitigate the risk to acceptable levels
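The two steps above can be sketched with a classic likelihood × consequence matrix; the scales and the acceptance threshold here are hypothetical, chosen only to illustrate the decision rule.

```python
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
CONSEQUENCE = {"minor": 1, "serious": 2, "severe": 3}
ACCEPTABLE_LEVEL = 3   # hypothetical risk-acceptance threshold

def risk_level(likelihood: str, consequence: str) -> int:
    """Step 1: score a risk as likelihood x consequence."""
    return LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]

def requires_control(likelihood: str, consequence: str) -> bool:
    """Step 2: a control (a process to modify risk) is required
    whenever the scored risk exceeds the acceptable level."""
    return risk_level(likelihood, consequence) > ACCEPTABLE_LEVEL

# A drug-marketplace ad slipping past moderation: plausible and severe.
assert requires_control("possible", "severe")    # 2 * 3 = 6 > 3, treat
assert not requires_control("rare", "serious")   # 1 * 2 = 2 <= 3, accept
```

In a real assessment the scales, threshold, and risk criteria would come from the organization's own risk management process, not from a hard-coded table.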

Nonconformity assessment and corrective action (reactive) to:

  1. Deal with the consequences of the violation
  2. Determine measures to control and correct the violation of AI objectives
  3. Determine the root cause(s) of the violation of the AI objectives
  4. Determine whether similar violations exist and correct those nonconformities
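These reactive steps can be sketched as a corrective-action record that refuses to close until containment, correction, root-cause analysis, and the check for similar violations are all complete. The field and class names are illustrative, not taken from the standard.

```python
from dataclasses import dataclass, field

@dataclass
class CorrectiveAction:
    description: str
    consequences_handled: bool = False               # step 1
    correction_applied: bool = False                 # step 2
    root_causes: list = field(default_factory=list)  # step 3
    similar_cases_checked: bool = False              # step 4

    def can_close(self) -> bool:
        # A nonconformity closes only when every step is done.
        return (self.consequences_handled
                and self.correction_applied
                and bool(self.root_causes)
                and self.similar_cases_checked)

nc = CorrectiveAction("Drug-marketplace ad served despite policy")
assert not nc.can_close()                 # nothing done yet
nc.consequences_handled = True
nc.correction_applied = True
nc.root_causes.append("Classifier missed coded slang for drug names")
nc.similar_cases_checked = True
assert nc.can_close()                     # all four steps complete
```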

Finally, everything is overseen by AI governance, spearheaded by top management, which is responsible not only for establishing a culture of RAI but also for creating a governance structure that ensures the organization’s compliance with ISO/IEC 42001. Beyond compliance, the governance structure shall also make clear which roles are accountable for the actions and consequences of the AI system.

We have now established RAI from the requirements side. But what happens when one of those requirements is broken? What are concrete examples of handling a nonconformity, treating risk, and establishing governance? At SRES we partner with clients to integrate RAI at the organizational, management, and engineering levels through the AIMS and technical processes. Learn more about RAI through our training aligned to the ISO/IEC 42001:2023 standard.


Have insights or questions? Leave a comment below—we welcome thoughtful discussion from our technical community.

Interested in learning more about our services? Find all upcoming trainings here and all consulting offerings here.


© Copyright 2025 SecuRESafe, LLC. All rights reserved.
