On August 1, 2024, the EU AI Act entered into force. Enforcement begins in stages from 2025, with non-compliance fines of up to €35 million or 7% of worldwide annual turnover for the most serious violations. From February 2, 2025, the Act’s prohibited AI practices apply, including social scoring systems, untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases, and emotion recognition systems in workplaces and educational institutions. We’ve previously reported on the US Federal Trade Commission’s (FTC) order banning Rite Aid from using facial recognition for security or surveillance purposes; a system of that type would also appear to violate the EU AI Act.
On August 2, 2026, the rules for high-risk AI systems come into effect. Examples of high-risk AI systems include certain medical devices, critical infrastructure management systems (e.g., water, gas, electricity), and autonomous vehicles. These systems must meet several requirements, such as implementing a quality management system, operating a risk management system, ensuring the quality of training data, and being transparent with users about the AI system’s capabilities and limitations. The ISO/IEC 42001 standard, which specifies an auditable AI management system, addresses many of these EU AI Act requirements and provides a framework for the responsible development and use of AI.
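To make those obligations a bit more concrete, here is a minimal Python sketch of what a machine-readable documentation record for a high-risk system might look like. Everything in it (the HighRiskSystemRecord class, its field names, the fictional grid-forecasting example) is our own illustrative assumption, not terminology or structure prescribed by the EU AI Act or ISO/IEC 42001.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RiskEntry:
    """One identified risk and its mitigation, for the risk management record."""
    description: str
    severity: str    # e.g. "low", "medium", "high"
    mitigation: str


@dataclass
class HighRiskSystemRecord:
    """Hypothetical documentation record for a high-risk AI system.

    Fields loosely mirror the obligations listed above (risk management,
    data quality, transparency); they are illustrative, not terminology
    taken from the EU AI Act or ISO/IEC 42001.
    """
    system_name: str
    intended_purpose: str
    risks: list[RiskEntry] = field(default_factory=list)
    data_quality_checks: list[str] = field(default_factory=list)
    transparency_notice: str = ""  # user-facing summary of capabilities and limits
    last_reviewed: date | None = None

    def is_documentation_complete(self) -> bool:
        """Crude check: every obligation tracked here has at least one entry."""
        return bool(self.risks and self.data_quality_checks
                    and self.transparency_notice and self.last_reviewed)


# Example: a record for a (fictional) grid-load forecasting system.
record = HighRiskSystemRecord(
    system_name="GridLoadForecaster",
    intended_purpose="Short-term electricity demand forecasting for grid operators",
)
record.risks.append(RiskEntry(
    description="Underforecasting demand during heat waves",
    severity="high",
    mitigation="Human operator review of all peak-demand forecasts",
))
record.data_quality_checks.append("Sensor data validated for gaps and outliers")
record.transparency_notice = (
    "Forecasts are statistical estimates; accuracy degrades in extreme weather."
)
record.last_reviewed = date(2024, 11, 1)

print(record.is_documentation_complete())  # True
```

A structured record like this is the kind of artifact an auditable management system, such as one built around ISO/IEC 42001, would expect you to maintain and keep current for each high-risk system.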
French government asked to stop using AI risk-scoring system
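In October 2024, a coalition of human rights organizations, including Amnesty International and La Quadrature du Net, filed a complaint with France’s Conseil d’État asking that the French social security agency (CNAF) stop using a risk-scoring algorithm to flag welfare recipients for fraud investigations. The groups argue that the system discriminates against people with disabilities, single parents, and people on low incomes.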
Police seldom disclose use of facial recognition despite false arrests
According to the Washington Post, “Hundreds of Americans have been arrested after being connected to a crime by facial recognition software, a Washington Post investigation has found, but many never know it because police seldom disclose their use of the controversial technology.
“Police departments in 15 states provided The Post with rarely seen records documenting their use of facial recognition in more than 1,000 criminal investigations over the past four years. According to the arrest reports in those cases and interviews with people who were arrested, authorities routinely failed to inform defendants about their use of the software — denying them the opportunity to contest the results of an emerging technology that is prone to error, especially when identifying people of color.”
These cases underscore that the risks of irresponsible AI are not confined to the private sector; governments and law enforcement agencies can also be culpable. Follow SRES as we help clients develop responsible AI systems. Our training on the ISO/IEC 42001 standard explains how the standard relates to the EU AI Act and supports responsible AI development.