Clearview AI, the company behind a controversial facial recognition system that scrapes social media sites to add pictures of people to its database, is on the cusp of receiving a patent for its technology. The company confirmed Saturday that the US Patent and Trademark Office had sent it a notice of allowance, which means Clearview’s application is set to be granted once the company pays administrative fees.
News of the notice was reported earlier Saturday by Politico, which said critics worry that the granting of the patent could speed the development of similar technologies before lawmakers have had time to come to grips with them.
Clearview AI’s system, which is used by law enforcement agencies including the FBI and the Department of Homeland Security, has been criticized for feeding its database of billions of images by trawling social media sites and harvesting pictures of people without their consent. The company says the pictures it gathers are publicly available and thus should be fair game. But the approach has prompted cease-and-desist letters from Facebook, Twitter and others, and officials in Australia, Britain and Canada have called out the company over data privacy laws.
Clearview CEO Hoan Ton-That has also said that the company’s system is meant to identify criminal suspects, not to serve as a surveillance tool, and that Clearview is “committed to the responsible use” of its technology, including working with policymakers on facial recognition protocols. On Saturday, the company told CNET in a statement that “we do not intend to make a consumer grade version of Clearview AI.” Critics have said that apps or other consumer versions of such a technology could potentially let a passerby capture your image with a smartphone and then uncover personal data about you.
Politico points out that Clearview AI’s patent application contains language that suggests uses beyond police ID’ing suspects.
“In many instances, it may be desirable for an individual to know more about a person that they meet, such as through business, dating, or other relationship,” the patent application says, adding that traditional methods like asking questions, doing internet searches or running background checks can fall short. “Therefore, a strong need exists for an improved method and system to obtain information about a person and selectively provide the information based on predetermined criteria.”
Facial recognition systems in general have been criticized for inaccuracy that’s sometimes led to false arrests. In particular, the systems have had trouble when it comes to recognizing people of color and women. Privacy advocates also worry about the potential for stifling dissent through, for instance, surveilling political demonstrations and protests. On the other hand, law enforcement officers say the systems have been used to solve crimes from shoplifting to child sexual exploitation to murder.
Clearview told Politico that it doesn’t know of any instances where its technology led to a wrongful arrest, and the publication notes that Clearview’s technology was found to be highly accurate in a recent audit by the Commerce Department’s National Institute of Standards and Technology. “As a person of mixed race,” such accuracy “is especially important to me,” Ton-That has said.
Lawmakers are still wrestling with how best to regulate facial recognition. In the US, a handful of states and some cities have rules, but there aren’t yet any federal laws governing the technology. That’s despite the fact that the systems are widespread and a growing number of US agencies use them. In June, the Government Accountability Office said 20 US agencies were using facial recognition systems but that many of those organizations lacked essential information about them.
“Thirteen federal agencies do not have awareness of what non-federal systems with facial recognition technology are used by employees,” the GAO said at the time. “These agencies have therefore not fully assessed the potential risks of using these systems, such as risks related to privacy and accuracy.”