Clearview AI May Be Illegal In EU, Watchdog Says
The European Data Protection Board (EDPB) said that the use of a service like Clearview AI by law enforcement would “likely not be consistent with the EU data protection regime.”
The board added that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”
The statement comes amid growing concerns around potential misuses of facial recognition technology like Clearview, which matches faces to billions of photos the software scrapes from websites, The Next Web reported.
Protests around the world against police brutality and racial inequality have already led Amazon to pause police use of its facial recognition technology for a year, and IBM and Microsoft to stop offering such software entirely until Congress puts a regulatory framework in place, Vox reported. But Clearview has so far refused to follow the trend (likely because facial recognition is its entire business model).
“We are very encouraged that our technology has proven accurate in the field and has helped prevent the wrongful identification of people of color,” Clearview CEO Hoan Ton-That said in a recent interview.
Activist Post recently discussed Clearview AI in an article about the rise of facial recognition technology amid the coronavirus pandemic.
Clearview AI has developed an app that lets anyone snap a picture of a person and compare it against a database of more than 3 billion photos the company has scraped from Facebook, Venmo, YouTube and other sites, then serves up matches along with links to the sites where the database photos originally appeared.
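To make that matching flow concrete, here is a minimal, hypothetical sketch of how such a system generally works: each scraped photo is reduced to a numeric "embedding" vector and stored with its source URL, and a query photo is matched by finding the most similar vectors. The `embed_face` function, the example URLs, and the database contents below are stand-ins invented for illustration, not Clearview's actual model or data.

```python
import numpy as np

def embed_face(image_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a face-embedding model: maps an image to a fixed-length vector.
    Real systems use a trained neural network; here we just seed a random vector
    from the pixel bytes so the example runs end to end."""
    rng = np.random.default_rng(abs(hash(image_pixels.tobytes())) % (2**32))
    return rng.standard_normal(128)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical "database" of scraped photos: embedding vector plus the URL it came from.
scraped_db = [
    (embed_face(np.full((8, 8), i)), f"https://example.com/profile/{i}")
    for i in range(1000)
]

def search(query_image: np.ndarray, top_k: int = 5):
    """Return the top_k database entries most similar to the query face."""
    q = embed_face(query_image)
    scored = [(cosine_similarity(q, emb), url) for emb, url in scraped_db]
    scored.sort(reverse=True)
    return scored[:top_k]

if __name__ == "__main__":
    probe = np.full((8, 8), 42)  # pretend this is the snapped photo
    for score, url in search(probe):
        print(f"{score:+.3f}  {url}")
```

Production systems replace the brute-force scan with an approximate nearest-neighbor index, but the principle is the same: the query never needs to match a photo exactly, only to land close to it in embedding space.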
It turns out we were right: a Twitter feed titled “Minnesota Contact Tracing” recently revealed how police are using contact tracing to identify and arrest protesters. “Minnesota Public Safety Commissioner John Harrington says they’ve begun contact tracing arrestees.”
Minneapolis police and the Minnesota Fusion Center are also using Clearview AI, BriefCam, Ring doorbell cameras, Axon police body cameras, ShotSpotter and license plate readers to create an intimate view of people’s lives.
Recently, 100 human rights groups warned that an Apple/Google contact tracing app could be used as a cover to identify activists and minorities.
An increase in state digital surveillance powers, such as obtaining access to mobile phone location data, threatens privacy, freedom of expression and freedom of association, in ways that could violate rights and degrade trust in public authorities—undermining the effectiveness of any public health response. Such measures also pose a risk of discrimination and may disproportionately harm already marginalized communities.
As NBC News noted, contact tracers also use geofencing to help identify protesters.
“Geofencing” captures the social media posts of people entering a specific area. The technology locates any cellphones that cross into the area by locking onto their geolocation systems, and then records social media posts and sometimes other data from the phones.
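As a rough illustration of the mechanism described above, the sketch below flags geotagged posts whose coordinates fall within a chosen radius of a target location. The fence center, radius, post data, and function names are all hypothetical examples, not any agency's actual system.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Post:
    user: str
    text: str
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def inside_geofence(post: Post, center_lat: float, center_lon: float, radius_km: float) -> bool:
    """True if the post's geotag falls within radius_km of the fence center."""
    return haversine_km(post.lat, post.lon, center_lat, center_lon) <= radius_km

# Hypothetical fence: roughly 1 km around a point in downtown Minneapolis.
FENCE = (44.9778, -93.2650, 1.0)

posts = [
    Post("alice", "Heading downtown", 44.9781, -93.2652),
    Post("bob", "At home watching the news", 44.8600, -93.1000),
]

flagged = [p for p in posts if inside_geofence(p, *FENCE)]
print([p.user for p in flagged])  # -> ['alice']
```

The only inputs needed are a location and a time window; anyone whose phone or post carries a geotag inside the fence is swept into the result set, which is what makes the technique so broad.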
You may remember that Clearview AI had its entire client list stolen earlier this year.
Clearview AI has partnered with law enforcement agencies around the country; however, it was unknown exactly how many there were or who they were. That may not be the case for much longer, after an intruder “gained unauthorized access” to its customer list, along with data on the number of searches its customers have conducted and how many user accounts have been set up, according to the Daily Beast.
The company has raised concerns among privacy advocates since a New York Times article described its work with law enforcement agencies; more than 40 organizations have since signed a letter calling for an independent watchdog to recommend a ban on government use of facial recognition technology.
In total, governments in at least 25 countries are employing vast surveillance programs: mobile data tracking, apps to record personal contact with others, CCTV networks equipped with facial recognition, permission schemes to go outside, and drones to enforce social isolation regimes, according to The Guardian.
Fortunately for us, there is a little hope: Congress recently demanded that Amazon disclose how its Ring service stores data, while other lawmakers have called for a halt to the rollout of facial recognition technology altogether, Activist Post reported.
Last year, legislators called for putting a “time out” on facial recognition technology until regulations are in place. So far, Congress has held two oversight hearings on the topic and there are at least four bills in the works to limit the technology.
On top of that, some U.S. cities, including San Francisco and Oakland, California, and Somerville, Massachusetts, have outright banned the biometric technology, as Activist Post reported.
The rapid growth of this technology has triggered a much-needed debate to slow down the rollout. Activists, politicians, academics and even police forces all over the world are expressing serious concerns over the impact facial recognition could have on our society.
That debate may be rendered moot, however, with the coronavirus pandemic providing a window of opportunity for Orwellian technology to become permanently embedded in our lives. The battle isn’t over yet, though; in fact, the American Civil Liberties Union (ACLU) recently filed a lawsuit against the Department of Homeland Security (DHS) over its use of facial recognition technology in airports.
© Author: Aaron Kesel