Rite Aid Facial Recognition Fiasco: A 5-Year Ban Amidst FTC Scrutiny

Rite Aid faces a five-year prohibition on utilizing facial recognition software following a Federal Trade Commission investigation.

In a significant development, Rite Aid faces a five-year prohibition on utilizing facial recognition software following a Federal Trade Commission (FTC) investigation into what the agency deems the company’s “reckless use of facial surveillance systems.” The FTC asserts that this practice not only subjected Rite Aid’s customers to humiliation but also jeopardized the security of their sensitive information.

The FTC’s order, which still requires approval from the U.S. Bankruptcy Court because of Rite Aid’s Chapter 11 bankruptcy filing in October, mandates the deletion of all images collected during the facial recognition rollout, along with any products derived from those images. Rite Aid is further directed to establish a robust data security program to protect any personal data it collects in the future.


A 2020 Reuters report exposed Rite Aid’s clandestine deployment of facial recognition systems in around 200 U.S. stores over an eight-year period beginning in 2012. Notably, these systems were tested primarily in “largely lower-income, non-white neighborhoods.” The revelation brought Rite Aid under intensified FTC scrutiny over the misuse of biometric surveillance.

Rite Aid, allegedly in collaboration with two contracted companies, established what the FTC terms a “watchlist database.” This database featured images of customers deemed to have engaged in criminal activities within Rite Aid’s stores. The images, often of subpar quality, were sourced from CCTV or employees’ mobile phone cameras.

When a customer entered a store and matched an image in the database, employees received an automatic alert instructing them to take action, most commonly to “approach and identify.” These matches frequently turned out to be false positives, leading employees to wrongly accuse customers of wrongdoing and, per the FTC’s findings, causing embarrassment, harassment, and other harm.
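To make that failure mode concrete, here is a minimal, purely hypothetical sketch (in Python with NumPy) of a generic watchlist match-and-alert flow. The function names, thresholds, and simulated data are invented for illustration and do not describe Rite Aid’s or its contractors’ actual systems; the point is only that low-quality enrollment images and a permissive match threshold make spurious alerts more likely.

```python
# Hypothetical illustration of a generic watchlist match-and-alert flow.
# All names, thresholds, and data are invented; this is not Rite Aid's system.
import numpy as np

rng = np.random.default_rng(0)

def embed(image_vector: np.ndarray) -> np.ndarray:
    """Stand-in for a face-embedding model: normalize a raw feature vector to unit length."""
    return image_vector / np.linalg.norm(image_vector)

# A "watchlist" enrolled from low-quality images (simulated here by adding heavy noise).
clean_faces = rng.normal(size=(5, 128))            # five distinct identities
watchlist = np.array([
    embed(face + rng.normal(scale=2.0, size=128))  # noise stands in for blurry CCTV or phone stills
    for face in clean_faces
])

def check_customer(customer_image: np.ndarray, threshold: float = 0.3) -> bool:
    """Return True (trigger an 'approach and identify' alert) if the probe is close to any watchlist entry."""
    probe = embed(customer_image)
    similarities = watchlist @ probe               # cosine similarity, since all vectors are unit length
    return bool(similarities.max() >= threshold)

# An unrelated shopper: noisy enrollments plus a permissive threshold are how false positives arise.
unrelated_shopper = rng.normal(size=128)
if check_customer(unrelated_shopper, threshold=0.15):
    print("ALERT: possible watchlist match -- approach and identify")  # may fire on the wrong person
else:
    print("no match")
```

Real systems compare learned face embeddings rather than random vectors, but the thresholding logic, and its sensitivity to enrollment image quality, is analogous.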

The FTC further alleges that Rite Aid neglected to inform customers of the use of facial recognition technology, and employees were explicitly instructed not to disclose this information. This lack of transparency raises significant concerns about the ethical deployment of such surveillance technology.

In the broader context, facial recognition software has become a contentious aspect of the AI-powered surveillance era. Cities have enacted bans, and politicians have advocated for stricter regulations. The FTC’s investigation into Rite Aid sheds light on inherent biases within AI systems, citing the company’s failure to mitigate risks to certain consumers based on their race.

In response to the FTC’s findings, Rite Aid said it was pleased to reach an agreement but firmly disputed the core allegations. The company stated that the facial recognition pilot program, deployed in only a limited number of stores, was discontinued more than three years ago. That account leaves open questions about the program’s timeline and how widely its discontinuation was known.

 
