Clearview AI is being investigated by regulators in the UK and Australia over privacy issues related to its data scraping practices, the latest legal challenge facing the controversial startup.
The Office of the Australian Information Commissioner and the UK's Information Commissioner's Office are concerned about how the New York-based facial recognition company builds and protects its internal database, which holds more than 3 billion images of people taken from websites including Facebook and YouTube. Using Clearview's software, law enforcement agencies—and, until recently, private-sector companies reportedly including Walmart and Macy's—can upload an image of a person and attempt to identify them by matching it against the photos in the database.
The watchdogs will likely focus on how Clearview both uses and protects the data it stores. Privacy concerns have been at the forefront since an investigation by The New York Times in January revealed that photos remained in Clearview's database even after users deleted them from social media. The company was also hacked in February and its client list stolen, though it said no photos or search histories were accessed.
Clearview is already facing legal action in states including California, Illinois and Vermont. Earlier this month, the startup announced that it would no longer offer its tool in Canada as a result of the country's investigation, and it officially ended its contract with the Royal Canadian Mounted Police. The European Data Protection Board also announced that Clearview's technology was likely illegal within the EU. Tech giants Google, Facebook, YouTube and Twitter have sent cease-and-desist letters to stop the business from taking photos from their sites.
Clearview had largely flown under the radar until The New York Times investigation. Founded in 2017, the startup raised $1.4 million from angel investors including Palantir co-founder Peter Thiel. Last year, Clearview secured a $7 million Series A from Kirenaga Partners and was valued at an estimated $37 million, according to PitchBook data.
Facial recognition technology is a controversial topic in the tech world and beyond, and companies supplying it have come under intense scrutiny for providing software to law enforcement. Amazon has banned police use of its own facial recognition technology for the next year, and Microsoft announced that it would no longer invest in third-party companies focused on facial recognition.
The US is currently working on legislation to regulate the use of such technology, which could reportedly include protections against its use at political protests and in making arrests. In February, Washington became the first state to introduce its own bill, which would require agencies using facial recognition software to provide a description of how the data will be collected, used and stored.
In the wake of Black Lives Matter protests, nonprofits like Amnesty International are calling for a ban on law enforcement's use of facial recognition for mass surveillance and potential racial profiling. Facial recognition technology has long been criticized for its inaccuracy, especially when identifying people of color. A study from the National Institute of Standards and Technology in the US found that, when it tested several algorithms at a sporting venue, the median accuracy rate was 40%, and its studies have also shown false identification rates for Asian or Black people to be as much as 100 times higher than for white people. In the UK, civil liberties nonprofit Big Brother Watch found that 93% of people stopped during the London Metropolitan Police's facial recognition trials were wrongly identified.