Did the fight for an open web create a facial recognition nightmare?
A recent exposé from the New York Times revealed the startling privacy concerns surrounding Clearview AI, a secretive startup that uses facial recognition to match photos of unknown people to their online images.
The company owes much of its success to the tech industry's fervent defense of web scraping, which has had unintended consequences for security, privacy, and culture.
How does it work? Clearview AI has spent years compiling its database of billions of images scraped from profiles on Facebook, YouTube, Twitter, Instagram, Venmo, and other social websites.
Companies and government agencies using Clearview AI can upload a picture of any face into its system and discover any matching photos of that person across the internet. Matches are displayed alongside links to related social media profiles, making it easy to identify and locate people.
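The Times report doesn't detail Clearview's internals, but systems like this typically convert each face into a numeric "embedding" and then search for the closest stored embedding. A minimal sketch of that matching step, assuming the embeddings have already been computed by some face-recognition model (the vectors, URLs, and threshold below are purely illustrative):

```python
import numpy as np

def best_match(query_vec, db_vecs, db_urls, threshold=0.6):
    """Return the (url, similarity) of the closest face embedding,
    or None if nothing clears the similarity threshold."""
    # Normalize so that a dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    db = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
    sims = db @ q                     # one similarity score per stored face
    idx = int(np.argmax(sims))        # index of the best-scoring face
    if sims[idx] < threshold:
        return None
    return db_urls[idx], float(sims[idx])

# Toy example: two stored "faces" and a query resembling the first.
db_vecs = np.array([[1.0, 0.0], [0.0, 1.0]])
db_urls = ["https://example.com/profile-a", "https://example.com/profile-b"]
result = best_match(np.array([0.9, 0.1]), db_vecs, db_urls)
```

At scale, a linear scan like this would be replaced by an approximate nearest-neighbor index, but the core idea — match a query face against billions of scraped ones and return the linked profiles — is the same.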
Is it legal? Probably. While such rampant data scraping violates the terms of service for most targeted websites, recent court cases have upheld the legality of automated, large-scale web scraping.
In late 2019, the Ninth Circuit Court of Appeals ruled in hiQ Labs v. LinkedIn that accessing publicly available information in an automated way does not violate the Computer Fraud and Abuse Act.
Many saw this as a huge win for developers and a more open web. As we noted:
"While many companies hope to protect the data they have collected on their platforms, others are looking to leverage that data to build new platforms. With greater freedom for web scraping, developers will have access to many new and legal data sources."
Unintended consequences. Web scraping has certainly had a beneficial impact on developers building innovative and powerful apps, platforms, and tools. But combining the increasingly nefarious use of facial recognition with the legality of scraping has unintentionally created a world in which beasts like Clearview AI can thrive.