Last year we presented three AI-related incidents: one involving a fatality caused by a robot in South Korea, another involving an ethically questionable AI-generated poll published by Microsoft, and a third concerning false allegations against three major consulting firms. Already in early 2024 we have seen several AI incidents, including deepfakes of celebrities, both living and dead, and a Chevrolet dealership’s chatbot agreeing to sell a Tahoe for $1. In this blog, however, we want to focus on one particular incident and draw on measures identified in functional safety standards that could have helped reduce the systematic failures in Rite Aid’s use of AI for surveillance.
Rite Aid banned by FTC from using facial recognition for security or surveillance purposes:
Since 2012, Rite Aid had been using AI-based facial recognition systems to “drive and keep persons of interest out of Rite Aid’s stores.” However, recent complaints, and an acknowledgement from Rite Aid itself, have shown the system to be biased against Black, Latino, Asian, and female consumers, with inaccurate matches producing false positives. In January, the FTC announced action against Rite Aid for its unfair use of AI, banning the company from using facial recognition systems for security or surveillance purposes for five years. Multiple deficiencies were cited, including:
- Failure to properly vet vendors that had access to consumers’ personal information;
- Failure to periodically reassess service providers’ data security practices;
- Failure to include sufficient security requirements in contracts with service providers;
- Failure to enforce image quality controls;
- Failure to properly train staff;
- Failure to monitor, test and track the accuracy of results.
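The last of these deficiencies is the easiest to make concrete. Monitoring, testing, and tracking the accuracy of a facial recognition system can start with something as simple as routinely computing the false positive rate of “person of interest” matches per demographic group and flagging disparities for review. Below is a minimal Python sketch of that idea; the record format, the data, and the tolerance are illustrative assumptions, not anything from Rite Aid’s actual pipeline:

```python
from collections import defaultdict

# Hypothetical match records: (demographic_group, was_flagged, was_correct).
# In practice these would come from audited, ground-truthed samples.
MATCH_LOG = [
    ("group_a", True, False),
    ("group_a", True, True),
    ("group_b", True, True),
    ("group_b", False, True),
    # ... many more records
]

MAX_FPR_DISPARITY = 0.05  # illustrative tolerance between groups

def false_positive_rates(log):
    """Per-group false positive rate among flagged matches."""
    flagged, wrong = defaultdict(int), defaultdict(int)
    for group, was_flagged, was_correct in log:
        if was_flagged:
            flagged[group] += 1
            if not was_correct:
                wrong[group] += 1
    return {g: wrong[g] / n for g, n in flagged.items() if n > 0}

def disparity_ok(rates, tolerance=MAX_FPR_DISPARITY):
    """True if the spread in FPR across groups stays within tolerance."""
    spread = max(rates.values()) - min(rates.values())
    return spread <= tolerance, spread

rates = false_positive_rates(MATCH_LOG)
ok, spread = disparity_ok(rates)
print(f"Per-group FPR: {rates}")
print("PASS" if ok else f"FAIL: FPR disparity {spread:.2f} exceeds tolerance")
```

Had a check like this run periodically on audited samples, the bias Rite Aid later acknowledged would have surfaced as a tracked metric rather than through consumer complaints.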
Although Rite Aid is the 6th largest pharmacy chain in the US, with revenues around $24B USD, it doesn’t have the AI and software development capabilities of an Amazon or a Microsoft, and therefore depends on external vendors. However, a major aspect of Responsible AI (RAI) is accountability: whether the oversights came from an internal development team or an external supplier, Rite Aid is accountable. Unfortunately, there are not yet standards or regulations that support RAI to the depth required. But we can look to other standards for best practices. One in particular is the automotive functional safety standard ISO 26262:2018, specifically Part 8, Clause 5, “Interfaces within distributed developments”. Clause 5 contains several requirements that, with proper adaptation, could have lowered the risk of the deficiencies noted above:
- Supplier selection criteria: the customer is required to evaluate each supplier and establish a high level of confidence that the supplier is capable of properly developing the requested work, based on evidence from previous projects and an evaluation of the supplier’s work products. The clause also requires the customer to provide the supplier with sufficient requirements so that development can be performed properly (see the first sketch after this list).
- Initiation, planning and execution of distributed development: a tedious but very important element of distributed development is the Development Interface Agreement (DIA). The DIA documents all required responsibilities and verification activities, and establishes the appropriate communication paths between supplier and customer, requiring continuous reviews and discussions of the project’s progress (see the second sketch after this list).
- Assessment activities in distributed development: ultimately, in RAI much as in functional safety, it is necessary to establish assessment targets and carry out an assessment to confirm that those targets are fully and correctly met (also illustrated in the second sketch below).
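To show what the first of these requirements might look like in practice, here is a minimal sketch of a weighted supplier-capability evaluation. The criteria, weights, scores, and acceptance threshold are invented for illustration; ISO 26262 does not prescribe them:

```python
# Hypothetical supplier-capability evaluation, adapted in spirit from
# ISO 26262-8 Clause 5. Criteria and weights are illustrative assumptions.
CRITERIA_WEIGHTS = {
    "prior_projects_in_domain": 0.3,   # evidence of comparable past work
    "quality_of_work_products": 0.3,   # evaluation of sample deliverables
    "data_security_practices": 0.2,    # a deficiency the FTC cited
    "accuracy_testing_process": 0.2,   # a deficiency the FTC cited
}

def capability_score(scores: dict) -> float:
    """Weighted confidence score in [0, 1] from per-criterion assessments in [0, 1]."""
    return sum(w * scores.get(c, 0.0) for c, w in CRITERIA_WEIGHTS.items())

candidate = {
    "prior_projects_in_domain": 0.8,
    "quality_of_work_products": 0.7,
    "data_security_practices": 0.4,
    "accuracy_testing_process": 0.5,
}
score = capability_score(candidate)
print(f"Capability score: {score:.2f}",
      "(acceptable)" if score >= 0.7 else "(insufficient - do not select)")
```

The point is not the particular numbers but that “confidence in the supplier” becomes an explicit, reviewable judgment rather than an implicit one.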
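And to illustrate the DIA and assessment activities, the second sketch captures a DIA fragment as data: who owns each activity, plus measurable assessment targets that a joint customer/supplier review can check against. Again, the structure, metric names, and limits are assumptions for illustration; a real DIA is a negotiated document, not code:

```python
# Hypothetical DIA fragment: who owns each activity, plus measurable
# assessment targets. All names and numbers are illustrative assumptions.
DIA = {
    "responsibilities": {
        "image_quality_controls": "supplier",
        "accuracy_monitoring": "customer",
        "staff_training": "customer",
        "vendor_security_reassessment": "customer",
    },
    "assessment_targets": {
        # metric: (limit, direction)
        "false_positive_rate": (0.01, "max"),
        "per_group_fpr_disparity": (0.05, "max"),
        "match_precision": (0.95, "min"),
    },
}

def assess(measured: dict) -> list:
    """Return the list of failed targets; an empty list means the assessment passes."""
    failures = []
    for metric, (limit, direction) in DIA["assessment_targets"].items():
        value = measured.get(metric)
        if value is None:
            failures.append(f"{metric}: no measurement provided")
        elif direction == "max" and value > limit:
            failures.append(f"{metric}: {value} exceeds limit {limit}")
        elif direction == "min" and value < limit:
            failures.append(f"{metric}: {value} below minimum {limit}")
    return failures

# Example joint customer/supplier review:
measured = {
    "false_positive_rate": 0.02,
    "per_group_fpr_disparity": 0.03,
    "match_precision": 0.96,
}
for line in assess(measured) or ["all assessment targets met"]:
    print(line)
```

Keeping targets in a machine-checkable form like this makes the continuous reviews Clause 5 calls for cheap to repeat at every project milestone.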
So, although there isn’t yet a standard specifically addressing the supplier/customer interface in RAI, there are standards from which we can learn to improve the quality and capabilities of AI-related functions.