  • Consulting
    • Automotive
      • Functional Safety & Cybersecurity
      • Electric Vehicle (EV) Development
      • Autonomous Product Development
    • Industrial
      • Industrial Functional Safety
      • IACS Cybersecurity
    • Responsible AI
      • Responsible Artificial Intelligence
  • Training
    • Automotive
    • Industrial
    • Responsible AI
  • Company
    • Why SRES Training
    • Leadership
    • Partnerships
    • Careers
  • Insights
  • Contact
Let's Talk
November 2023 Recap: How is AI and Autonomy doing?
12/04/23

It’s now been a year since OpenAI released an experimental chatbot dubbed “ChatGPT”. Since then, the terms “ChatGPT” and “AI” have become ubiquitous, spawning a movement of “Citizen Developers” and page after page of school reports generated in seconds. But one thing we ALL learned in the past 12 months is that ChatGPT can quickly create something that looks real, without any fact-checking involved. Well, have we ALL really learned?
1. Man killed by a robot in South Korea: As the BBC reported on November 8, 2023, a robot was unable to distinguish a man from the boxes of food it was handling. “The incident occurred when the man, a robotics company employee in his 40s, was inspecting the robot. The robotic arm, confusing the man for a box of vegetables, grabbed him and pushed his body against the conveyer belt, crushing his face and chest, South Korean news agency Yonhap said.” This was a very unfortunate event, and it appears to have been avoidable. The man was performing some form of maintenance on the robot’s sensors when the event occurred. Functional safety standards such as IEC 61508 mandate procedures that ensure safety-related equipment can be maintained safely, for example by requiring the system to be placed in a safe state before maintenance work begins.
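The kind of interlock such procedures call for can be sketched as a simple state machine: motion is permitted only in automatic mode with the cell confirmed clear of people. This is a hypothetical toy model for illustration, not an actual IEC 61508-compliant implementation:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTOMATIC = auto()
    MAINTENANCE = auto()  # must be entered before any human intervention

class RobotCell:
    """Toy model of a maintenance interlock (all names hypothetical):
    motion is permitted only in AUTOMATIC mode with no person detected
    inside the cell."""

    def __init__(self):
        self.mode = Mode.AUTOMATIC
        self.person_in_cell = False

    def enter_maintenance(self):
        # Lockout/tagout analogue: switching modes withdraws the
        # motion permission until automatic operation is restored.
        self.mode = Mode.MAINTENANCE

    def motion_permitted(self) -> bool:
        return self.mode is Mode.AUTOMATIC and not self.person_in_cell
```

In the incident described above, a maintenance mode of this kind would have inhibited the arm while the worker was inspecting the sensors.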
2. Microsoft’s AI-generated poll appeared next to an article about the death of a 21-year-old woman, speculating on the cause of death: Business Insider reported on November 1, 2023 that a newspaper publisher claimed Microsoft damaged its reputation by placing an insensitive, AI-generated poll next to an article about the woman’s death. “An AI-generated poll asking readers to vote on whether they thought the woman had died by murder, suicide or accident appeared next to the article on Microsoft Start.” The poll drew anger from both readers and the news organization. In Responsible AI, one of the important starting points is establishing AI principles and policies. Published on Microsoft’s website and dated March 10, 2023, one of Microsoft’s AI principles is “Inclusive and respectful”. It reads: “The first principle of AI that Microsoft follows is ensuring that AI technology is inclusive and respectful of human rights. AI should be designed and used to enhance human capability, not replace it. This means that AI systems should not discriminate against people based on their race, gender, religion, or any other characteristic. They should be accessible to everyone, including people with disabilities. Moreover, AI should respect human dignity, privacy, and autonomy.” This incident appears to violate Microsoft’s first principle.
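One practical control that follows from such principles is a pre-publication gate: automatically generated content attached to articles on sensitive topics is held for human review instead of being published directly. A minimal sketch, where the topic list and function are hypothetical illustrations rather than Microsoft's actual pipeline:

```python
# Hypothetical list of topics that should never receive
# auto-attached AI content without editorial review.
SENSITIVE_TOPICS = {"death", "suicide", "violence", "crime victim"}

def may_auto_attach(article_topics):
    """Return True only if an AI-generated widget (poll, summary, etc.)
    may be attached to the article without a human editor's sign-off."""
    if SENSITIVE_TOPICS & set(article_topics):
        return False  # route to human review instead
    return True
```

A gate like this does not make the generator smarter; it simply keeps its output away from contexts where a mistake causes real harm.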
3. False AI-generated allegations against four consultancy firms: The Guardian reported on November 2, 2023 that case studies created by Google Bard AI contained incorrect statements, which a group of academics then used in allegations against four consultancy firms. The article goes on to say, “The academics, who specialize in accounting, were urging a parliamentary inquiry into the ethics and professional accountability of the consultancy industry to consider broad regulation changes, including splitting up the big four. Part of the original submission relied on the Google Bard AI tool, which the responsible academic had only begun using that same week. The AI program generated several case studies about misconduct that were highlighted by the submission.” The case studies turned out to be fabricated, damaging the reputations of the firms. This occurred a full year after the release of ChatGPT, by which point hundreds if not thousands of articles had been written on cases where ChatGPT made up information that, at first impression, looked very real. Is the fault with Google Bard AI, or with the academics who used the information given to them without fact-checking?
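The basic discipline that would have caught this can be stated in one line: treat every LLM-generated claim as unverified until a human confirms its source. A minimal sketch, where the `verified_source` field is a hypothetical convention of this example, not part of any Bard or other API:

```python
def vet_case_studies(cases):
    """Keep only case studies whose cited source has been independently
    confirmed by a human reviewer; anything an LLM produced is treated
    as unverified until that check happens."""
    return [c for c in cases if c.get("verified_source", False)]
```

Anything the filter drops goes back for manual fact-checking before it reaches a submission, rather than being quietly included.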
Updates on the Third Edition of the ISO 26262 Standard
11/29/23

SRES Partners with SGS-TÜV Saar GmbH to provide SOTIF Professional and Safety for Artificial Intelligence Training
12/18/23


Insight Categories

  • Autonomous Systems (14)
  • Electric Mobility (3)
  • News (9)
  • Videos (9)
  • Functional Safety (25)
  • Responsible AI (17)
  • Cybersecurity (2)
Most Recent
  • SecuRESafe (SRES) Strengthens Leadership in Autonomous Systems and AI Safety, Appoints Industry Veteran Bill Taylor as Partner
    05/01/25
  • VDA 450: Vehicle Power Distribution and Functional Safety – Part II
    04/28/25
  • SRES Partners on AI & Safety Webinar Series with LHP
    04/16/25
  • Credo AI and SecuRESafe (SRES) Announce Strategic Partnership to Advance Responsible AI Governance and Safety
    04/14/25
  • Demystifying SOTIF Acceptance Criteria and Validation Targets – Part 3
    04/11/25
  • Company
  • Careers
  • Contact Us
  • info@sres.ai
  • 358 Blue River Pkwy, Unit E-274 #2301, Silverthorne, CO 80498

Services
  • Automotive
  • Industrial
  • Responsible AI
  • Training

Resources
  • Insights
  • Video

Legal
  • Privacy Policy
  • Cookie Policy
  • Terms & Conditions
  • Accessibility
  • Consent Preferences

© Copyright 2025 SecuRESafe, LLC. All rights reserved.
