  • Consulting
    • Automotive
      • Functional Safety & Cybersecurity
      • Electric Vehicle (EV) Development
      • Autonomous Product Development
    • Industrial
      • Industrial Functional Safety
      • IACS Cybersecurity
    • Responsible AI
      • Responsible Artificial Intelligence
  • Training
    • Automotive
    • Industrial
    • Responsible AI
  • Company
    • Why SRES Training
    • Leadership
    • Partnerships
    • Careers
  • Insights
  • Contact
October 2024 Recap: How is AI and Autonomy doing?
11/07/24

In August of this year, the EU AI Act entered into force. Enforcement begins in 2025, with non-compliance fines that can reach tens of millions of euros. From February 2, 2025, the Act's prohibitions on certain AI practices apply, including social scoring systems, untargeted scraping of facial images from the internet, and emotion recognition systems in workplaces and schools. We previously reported on the US Federal Trade Commission's (FTC) ban on Rite Aid's use of facial recognition for security or surveillance purposes; that type of AI system also appears to run afoul of the EU AI Act.

On August 2, 2026, the rules for high-risk AI systems come into effect. Examples of high-risk AI systems include certain medical devices, critical infrastructure management systems (e.g., water, gas, electricity), and autonomous vehicles. These systems must meet several requirements, such as implementing a quality management system, maintaining a risk management system, ensuring data quality and governance, and being transparent with users about the AI system's capabilities. ISO/IEC 42001, which defines an auditable AI management system, addresses many of these EU AI Act requirements and also provides a framework for the responsible development and use of AI.
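The obligations listed above can be pictured as a simple compliance checklist. The sketch below is illustrative only: the article numbers come from the EU AI Act's high-risk chapter, but the `Obligation` structure and `open_items` helper are our own assumptions for demonstration, not legal guidance or an official tool.

```python
from dataclasses import dataclass


@dataclass
class Obligation:
    """One high-risk obligation from the EU AI Act (illustrative subset)."""
    article: str
    requirement: str
    satisfied: bool = False  # set True once evidence exists


def open_items(checklist):
    """Return the requirements that still lack supporting evidence."""
    return [o.requirement for o in checklist if not o.satisfied]


# Illustrative subset of the EU AI Act's high-risk obligations.
checklist = [
    Obligation("Art. 9", "Risk management system"),
    Obligation("Art. 10", "Data and data governance"),
    Obligation("Art. 13", "Transparency and provision of information to deployers"),
    Obligation("Art. 17", "Quality management system"),
]

checklist[0].satisfied = True  # e.g., the risk management process has been audited
print(open_items(checklist))
```

In practice an AI management system per ISO/IEC 42001 would track evidence for each such obligation through its normal audit cycle rather than a flat list, but the idea is the same: every high-risk requirement needs a traceable, reviewable status.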

French government asked to stop using AI risk-scoring system

Recently, Amnesty International called out the French government for employing discriminatory algorithms within its social security agency. France's use of AI to assess welfare payments raises potential concerns under the EU AI Act, particularly its high-risk provisions. "This system operates in direct opposition to human rights standards, violating the right to equality and non-discrimination and the right to privacy," said Agnès Callamard, Secretary General of Amnesty International. The AI risk-scoring system has reportedly flagged individuals with disabilities, single parents (mostly mothers), and people living in poverty as higher-risk disproportionately often.

Police seldom disclose use of facial recognition despite false arrests

According to the Washington Post: "Hundreds of Americans have been arrested after being connected to a crime by facial recognition software, a Washington Post investigation has found, but many never know it because police seldom disclose their use of the controversial technology.

"Police departments in 15 states provided The Post with rarely seen records documenting their use of facial recognition in more than 1,000 criminal investigations over the past four years. According to the arrest reports in those cases and interviews with people who were arrested, authorities routinely failed to inform defendants about their use of the software — denying them the opportunity to contest the results of an emerging technology that is prone to error, especially when identifying people of color."

These cases underscore that the risks of irresponsible AI are not confined to the private sector; governments can be culpable too. Follow SRES as we help clients navigate the path to responsible AI systems. Our training on ISO/IEC 42001 explains how the standard relates to the EU AI Act and to responsible AI development.

Related Insights

  • SOTIF and FuSa Coupling (02/03/25)
  • Interplay between ISO 21448 and ISO 8800 for Autonomous Systems (12/03/24)


  • Company
  • Careers
  • Contact Us
  • info@sres.ai
  • 358 Blue River Pkwy Unit E-274 #2301, Silverthorne, CO 80498

Services

  • Automotive
  • Industrial
  • Responsible AI
  • Training

Resources

  • Insights
  • Video

Legal

  • Privacy Policy
  • Cookie Policy
  • Terms & Conditions
  • Accessibility
  • Consent Preferences

© Copyright 2025 SecuRESafe, LLC. All rights reserved.
