It’s been a busy week for Clearview AI, the controversial facial recognition company that uses 3 billion photos scraped from the web to power a search engine for faces. On April 6, Buzzfeed News published a database of over 1,800 entities, including state and local police and other taxpayer-funded agencies such as health-care systems and public schools, that it says have used the company’s controversial products.

Many of those agencies replied to the accusations by saying they had only trialed the technology and had no formal contract with the company. But the day before, the definition of a “trial” with Clearview was detailed when nonprofit news site Muckrock released emails between the New York Police Department and the company.

The documents, obtained through freedom of information requests by the Legal Aid Society and journalist Rachel Richards, track a friendly two-year relationship between the department and the tech company, during which time the NYPD tested the technology many times and used facial recognition in live investigations.

The NYPD has previously downplayed its relationship with Clearview AI and its use of the company’s technology. But the emails show that the relationship was well developed, with a large number of police officers conducting a high volume of searches with the app and using them in real investigations. The NYPD has run over 5,100 searches with Clearview AI.

This is particularly problematic because the department’s stated policies limit it from creating an unsupervised repository of photos that facial recognition systems can reference, and restrict the use of facial recognition technology to a specific team. Both policies seem to have been circumvented with Clearview AI. The emails reveal that the NYPD gave many officers outside the facial recognition team access to the system, which relies on a huge library of public photos from social media. The emails also show how NYPD officers downloaded the app onto their personal devices, in contravention of stated policy, and used the powerful and biased technology in a casual fashion.

The documents show that many individuals at the NYPD had access to Clearview during and after this time, from department leadership to junior officers. Throughout the exchanges, Clearview AI encouraged more use of its services.

“The NYPD has purposefully kept New Yorkers in the dark on the controversial surveillance technologies that the Department deploys citywide,” said Jonathan McCoy, a staff attorney with the digital forensics unit at The Legal Aid Society, in a statement. “Short of this litigation and these disclosures, the public would never know the extent to which NYPD employed Clearview, a controversial tool that other localities have banned outright. We need action from lawmakers in Albany and at City Hall to prohibit the use of facial recognition technologies outright to protect New Yorkers’ privacy and other fundamental rights.”

The documents also showed that officers who had access to Clearview AI even used it on their personal devices and had login and password information sent directly to their email accounts, a major cybersecurity risk with wide-ranging implications. The Legal Aid Society noted that it is still unclear whether courts and lawyers were notified that the technology was used to identify suspects.

State and local efforts are also continuing, with communities in California, Washington, Nebraska, Illinois, Massachusetts, and more still pushing forward local legislation banning the technology. Representative Ayanna Pressley, in a statement last year backing the federal ban on government use of facial recognition software, said “Black and brown people are already over-surveilled and over-policed, and it’s critical that we prevent government agencies from using this faulty technology to surveil communities of color even further.”

“This bill would boldly affirm the civil liberties of every person in this country and protect their right to live free of unjust and discriminatory surveillance by government and law enforcement,” she added. “As the Representative of two of the first cities on the east coast to outlaw the use of this technology, I’m proud to sponsor this bill and make clear that our government has no business spying on its civilians.”