Tech Companies Are Using Your Face to Build a Nightmarish Dystopia
03.27.2019

Just a few years ago, automated facial recognition was the stuff of far-future science fiction. But as detailed in a frankly terrifying new report from Fortune, it is becoming reality so fast that regulators haven’t kept up. That gap has left software companies almost entirely free to harvest crucial training data from photos uploaded to cloud storage apps and personal archives, including, in some cases, through tools aimed specifically at collecting images of children.

As Jeff John Roberts reports, facial recognition software has evolved rapidly over the past decade-plus, largely because of huge pre-existing archives of photos. In particular, social networks and photo apps have been important sources of this training data.

“We have consumers who tag the same person in thousands of different scenarios,” Doug Aley told Fortune. “Standing in the shadows, with hats on, you name it.” Aley is CEO of Ever AI, a San Francisco facial recognition startup that initially launched as a photo-management app called EverRoll. EverRoll marketed itself aggressively, and photos uploaded by its users more than half a decade ago have been used to train the Ever AI software, without consent from those users. The market companies like Ever AI are building is projected to grow into a $9 billion industry by 2022, thanks to interest from customers including governments, law enforcement agencies, and retailers.


An even more disturbing example is RealNetworks, whose RealTimes app is aimed specifically at users with families. Data from photo albums shared between family members is likewise used to train the company’s facial recognition software. Another shocking example is Waldo, a Texas startup that markets facial recognition software to schools. Hundreds of schools currently use Waldo, letting the company scan official school photos and videos so it can distribute images of each child to that child’s own parents.


All of this raises giant red flags, for several reasons. Only three U.S. states currently have laws requiring prior consent for the gathering and processing of biometric data, meaning millions of individual faces may be in training datasets without those subjects’ agreement or even knowledge. And tech companies and other large businesses have repeatedly proven unreliable custodians of their vast troves of data, meaning your facial data, gathered without your permission, could wind up nearly anywhere.


The more subtle concern is how this data is used. Some of the companies detailed by Fortune provide their services to retailers hoping to detect supposed criminals, as do law enforcement programs such as Detroit’s dystopian Project Greenlight. These applications combine at least two layers of so-called “black box AI,” with fallible and unauditable machines given the task of both recognizing individuals and, increasingly, assessing them as possible threats.

The problem with the first part was clearly demonstrated by an ACLU test in which Amazon’s facial recognition software inaccurately matched 28 members of Congress with mugshots from a law-enforcement database. Those false matches disproportionately affected legislators of color, reflecting the software’s lower accuracy on the faces of racial minorities. If these systems remain unregulated, such false positives could stay on the books indefinitely.
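How does a system produce false matches like that at all? The short answer is that a “match” is just a similarity score crossing an operator-chosen threshold. The minimal sketch below is hypothetical (it is not Amazon’s actual pipeline, and it uses random stand-in vectors rather than real face data), but it shows the basic mechanics: each face is reduced to a numeric embedding, and loosening the threshold manufactures spurious matches even between total strangers.

```python
# Hypothetical sketch of threshold-based face matching.
# Real systems embed each face as a vector; a "match" is declared when the
# similarity between two embeddings clears a threshold set by the operator.
# Random vectors stand in for real face data here.
import numpy as np

rng = np.random.default_rng(seed=42)
probe = rng.normal(size=128)               # embedding of the face being searched
gallery = rng.normal(size=(25_000, 128))   # embeddings of a mugshot database

# Cosine similarity between the probe and every gallery face at once.
sims = (gallery @ probe) / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(probe))

# None of these faces is actually the probe, so every hit is a false positive.
for threshold in (0.40, 0.30, 0.20):
    hits = int((sims >= threshold).sum())
    print(f"threshold {threshold:.2f}: {hits} false matches out of {len(gallery)}")
```

The numbers here are synthetic, but the dynamic is real: a lower threshold means more “matches,” and whoever deploys the system decides where that dial sits.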


And being flagged as suspect can have devastating effects. Current applications of facial recognition software suggest the creeping growth into the United States of something resembling China’s utterly nightmarish Social Credit System. Under that system, infractions ranging from criminal activity to unpaid debts to traffic violations can limit access to resources as basic as transportation. Similar effects may arise from the use of facial recognition software by law enforcement or retailers in the U.S., as individuals’ past mistakes restrict their future options, with no clear limits on how long those sanctions can be enforced.


At least in principle, this violates a fundamental tenet of American justice: the idea that people who make mistakes can pay their debt to society (or wipe out their financial missteps) and set about building new, better lives. That’s not just a moral principle; the ability to start over with a clean slate is often seen as essential to the dynamism and resilience of the U.S. economy as a whole.

What can we do to prevent a future of permanent blacklists and omnipresent monitoring? Thankfully, pushback has begun in earnest. Nonprofits like the ACLU are pushing to restrict the use of facial recognition, and a bipartisan Senate bill introduced this month would require opt-in consent before companies can collect and share facial recognition data. Supporting those efforts may be essential to preserving future freedom for yourself and your children.