Short series: Responsible AI
09/26/23

What is Responsible AI (RAI) and why is it important?

Transcript (auto-generated)

Jody Nelson here with SRES Shorts. I'd like to talk a little bit about Responsible AI, also known as RAI. Although there is no universal definition of RAI, we do see many large organizations like Google, Microsoft, and IBM publicly present it on their websites and highlight the importance of Responsible AI. It is important for us to understand all the possible issues, limitations, and unintended consequences of both our AI data and the AI model itself. Now, a great deal of Responsible AI comes down to culture.

We need to establish organization-wide ethical values and AI principles, and then we need to monitor how those AI principles are being applied in actual practice, generally through some form of audit. When we're dealing with the cultural aspects of this, we're not monitoring just the AI product itself and its outputs; we're also monitoring the management that built those products. This is very important for us. In doing so, we need humans involved, actual people, and they have to have some form of accountability. We also need subject matter experts who understand the AI architecture and the organizational strategy for AI.

Additionally, we want some kind of ethics board, some kind of review of what's going on, to make sure we meet our principles and are establishing our values correctly. So this is not just ML coders. Certain themes are common across a lot of organizations and their principles. Generally, we talk about transparency and explainability. These are very critical, although we have to caution that they can also raise cybersecurity concerns, so we have to take that into consideration as well. Other things to consider are fairness, accountability, as I mentioned before, and privacy of the user.
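
The governance loop described in the transcript (organization-wide principles, named accountable humans, and recurring audits of both the product and the management behind it) is often tracked in something as simple as a reviewed registry. The sketch below is a minimal illustration only, not an SRES tool or a method from the video; the principle names, owners, and audit cadences are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Principle:
    """One organization-wide AI principle with a named accountable owner."""
    name: str                      # e.g. "transparency", "fairness", "privacy"
    accountable_owner: str         # an actual person, not just "the ML team"
    audit_cadence_days: int        # how often evidence must be reviewed
    last_audit: Optional[date] = None
    findings: list = field(default_factory=list)

    def is_overdue(self, today: date) -> bool:
        """True if the principle was never audited or the audit cadence has lapsed."""
        if self.last_audit is None:
            return True
        return (today - self.last_audit).days > self.audit_cadence_days

# Hypothetical registry an ethics/review board might walk through each quarter.
registry = [
    Principle("transparency & explainability", "A. Reviewer", 90, date(2023, 6, 1)),
    Principle("fairness", "B. Owner", 90),
    Principle("privacy of the user", "C. Officer", 30, date(2023, 9, 15)),
]

today = date(2023, 9, 26)
for p in registry:
    status = "OVERDUE" if p.is_overdue(today) else "ok"
    print(f"{p.name:32s} owner={p.accountable_owner:12s} audit status: {status}")
```

The point is simply that every principle has an actual person attached to it and a date by which someone must look at the evidence again, which is what an ethics or review board can then audit.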
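
Of the principles listed at the end (transparency, explainability, fairness, accountability, privacy), fairness in particular is usually backed by a concrete measurement in practice. As a loosely illustrative sketch only, since the video does not prescribe any metric, the snippet below computes a demographic parity difference, one common fairness measure, over a model's binary decisions grouped by a protected attribute; the data and the tolerance are made up.

```python
from collections import defaultdict

def demographic_parity_difference(decisions, groups):
    """Largest gap in positive-decision rate between any two groups.

    decisions: iterable of 0/1 model outcomes (1 = favourable decision)
    groups:    iterable of protected-attribute labels, same length
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit data: 0/1 decisions for two groups of applicants.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_difference(decisions, groups)
print(f"positive rate by group: {rates}, gap = {gap:.2f}")
if gap > 0.2:  # tolerance chosen by the ethics board, not a standard value
    print("fairness check failed -> escalate to the accountable owner")
```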
