  • Consulting
    • Automotive
      • Functional Safety
      • Cybersecurity
      • Autonomous Product Development
      • Electric Vehicle (EV) Development
      • Assurance of AI-based Tools
    • Physical AI
      • Robotics Safety
      • Assurance of AI-based Tools
    • Responsible AI
      • Responsible Artificial Intelligence
  • Training
    • Functional Safety
    • Cybersecurity
    • ADS and Responsible AI
  • Company
    • Why SRES Training
    • Leadership
    • Partnerships
    • Careers
  • Insights
  • Contact
Short series: Responsible AI
09/26/23

What is Responsible AI (RAI) and why is it important?

Transcript (auto-generated)

Jody Nelson here at SRES Shorts. I'd like to discuss Responsible AI, also known as RAI. Although there is no universal definition of RAI, many large organizations, such as Google, Microsoft, and IBM, publicly present it on their websites and emphasize its importance. It is important for us to understand all the possible issues, limitations, and unintended consequences of both our AI data and the AI model itself. A lot of Responsible AI also has to deal with culture.

We need to establish organization-wide ethical values and AI principles, and then monitor how those principles are applied in actual practice, generally through some form of audit. When we deal with the cultural aspects, we are not monitoring just the AI product itself, meaning its outputs; we are also monitoring the management that built those products. This is very important. In doing so, we need humans involved, actual people, and they have to carry some form of accountability. We also need subject matter experts who understand the AI architecture and the organizational strategy for AI.

Additionally, we want some kind of ethics board, some kind of review of what is going on, to make sure we meet our principles and establish our values correctly. So this is not just the ML coders. Certain themes are common across many organizations and their principles. Generally, we talk about transparency and explainability. These are very critical, although we have to caution that they can also raise cybersecurity concerns, so we have to take that into consideration as well. Other things to consider are fairness, accountability, as mentioned before, and the privacy of the user.

Short series: we want to hear from you
09/27/23

Short series: ISA/IEC 62443
09/25/23

Insight Categories

  • Autonomous Systems (19)
  • Electric Mobility (3)
  • News (14)
  • Videos (11)
  • Functional Safety (31)
  • Responsible AI (21)
  • Cybersecurity (5)
Most Recent
  • SRES SafeStack | January 2026
    01/06/26
  • SRES SafeStack | December 2025
    12/01/25
  • SRES SafeStack | November 2025
    11/03/25
  • From Evidence to Argument: Using GSN to Structure AV Safety Cases
    01/16/26
  • CES Wrap-Up 2026: The Humanoid Robot Safety Question
    01/15/26
  • Company
  • Careers
  • Contact Us
  • info@sres.ai
  • 358 Blue River Pkwy Unit E-274 #2301
    Silverthorne, CO 80498

Services
  • Automotive
  • Physical AI
  • Responsible AI
  • Training

Resources
  • Insights
  • Video

Legal
  • Privacy Policy
  • Cookie Policy
  • Terms & Conditions
  • Training Terms & Cancellation Policy
  • Accessibility
  • Consent Preferences

© Copyright 2025 SecuRESafe, LLC. All rights reserved.
