This story is part of BREAKER’s Social Good Week, a series looking at ways blockchain technology can engineer progress and help humanity.
On Sept. 7, 2018, 16-year-old Syrian Muhammad Najem tweeted a video message to President Trump asking him to send international observers to Syria. “I know you hate fake news,” the teen says in the clip. “That’s why I recorded this authentic and verifiable video message for you. … Me and hundreds of thousands of civilians are trapped in Idlib Province. Assad and Russian forces are preparing to attack our last refuge in Syria.”
The 57-second video, which received more than 120,000 views after it was shared by Al Jazeera English, was recorded with the TruePic phone app. If you click on the TruePic link of Najem’s video, you’ll also see several green checkmarks indicating the video has passed a number of tests, including a “Metadata Check,” “Image Analysis,” and “Vault Storage,” and, finally, that it has been “Written to Blockchain.” The idea is to preserve four pieces of data from the moment of capture: time, date, location, and pixels. The app runs more than 20 different computer vision tests and checks the device itself to make sure its operating system has not been manipulated, says Mounir Ibrahim, TruePic’s vice president of strategic initiatives. That information is condensed into a cryptographic hash, which is stored on the bitcoin blockchain.
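To make the “written to blockchain” step concrete, here is a minimal sketch of the general technique in Python. It assumes a simple scheme of my own devising, not TruePic’s actual implementation: the pixels and the capture metadata are serialized deterministically and hashed with SHA-256, so altering any one of the four inputs later yields a completely different digest.

```python
import hashlib
import json

def capture_fingerprint(pixel_bytes: bytes, timestamp: str,
                        lat: float, lon: float) -> str:
    """Bind an image's pixels to its capture metadata in one digest.

    Illustrative only, not TruePic's code. Serializing the metadata
    with sorted keys keeps the byte stream deterministic, so the same
    capture always produces the same digest.
    """
    metadata = json.dumps(
        {"timestamp": timestamp, "lat": lat, "lon": lon},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(metadata + pixel_bytes).hexdigest()
```

The resulting 64-character digest is small enough to embed in a blockchain transaction (on bitcoin, an OP_RETURN output can carry it), which gives the capture an immutable, independently checkable timestamp without putting the video itself on-chain.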
Of course, using an app like TruePic doesn’t make the claims in a video true; commenters still questioned the veracity of what Najem was saying. But knowing that the video has passed these tests creates a level of trust that sets it apart from similar footage. Furthermore, if a different version of Najem’s video later made the rounds on social media, it could be re-hashed and compared against the digest of the original footage anchored on the blockchain.
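Verification is the cheap half of the scheme. Continuing the same hypothetical sketch, checking a copy that surfaces later means recomputing the fingerprint over the candidate file and comparing it with the digest anchored at capture time:

```python
import hashlib
import json

def matches_original(candidate_pixels: bytes, timestamp: str,
                     lat: float, lon: float, anchored_digest: str) -> bool:
    """Check a circulating copy against the digest anchored at capture.

    Recomputes the same fingerprint as the capture-side sketch above;
    any alteration, down to a single pixel, produces a different
    SHA-256 digest and a failed match.
    """
    metadata = json.dumps(
        {"timestamp": timestamp, "lat": lat, "lon": lon},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(metadata + candidate_pixels).hexdigest() == anchored_digest
```

One caveat: a harmless re-encode or resize also changes the bytes and fails the check, so a mismatch shows only that the file differs from the original capture, not that it was maliciously doctored.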
There’s more concern over “fake news” than ever, in part because it’s a buzzword favored by certain politicians as convenient shorthand for “news I don’t like,” but also because technology is making it easier to create videos that seem real but aren’t.
Look no further than “deepfakes,” the anonymous Redditor whose name has become synonymous with the fake, AI-assisted videos they popularized. These videos are often pornographic and feature the faces of popular celebs grafted (sometimes seamlessly, sometimes not so seamlessly) onto the “donor bodies” of porn actors. The open-source code and user-friendly programs used to make these clips mean that any reasonably tech-savvy person with a good computer can attempt them. And the trend has troubling implications for non-celeb victims of revenge porn, too. (Reddit banned the r/deepfakes subreddit in February 2018, but as with all things on the internet, that doesn’t mean the content has gone away.)
The same technology that fuses Gal Gadot’s face onto a porn actor’s body can be used to put words in the mouth of a powerful world leader, a move with potentially devastating global consequences. “Celebrities have been completely victimized,” says Ibrahim, who previously worked as a U.S. foreign service officer. “That’s pretty scary from a civil liberties and character standpoint. But when you start hearing hypotheticals of how [deepfakes] can be used to incite violence, that’s the really scary stuff.”
There’s “limited ability” for blockchains to solve the porn problem specifically, says Paul Snow, cofounder and CEO of Factom, a company that uses blockchain technology to prove data provenance. “The viewers are all watching a fantasy in some sense, no matter whether it’s real or it’s fake,” he says. And because the clips are assembled from images and videos that already exist, verifying the provenance of the genuine source material does nothing to flag the fake composite.
But for other sorts of videos, like those created by journalists or individuals, Snow believes blockchain technology can provide an important service. Factom, which raised more than $8 million in its Series A, has built systems for loan origination and worked with the Department of Homeland Security to test a video and image collection system along the border. “The blockchain is going to dramatically impact standards for data in the next five to 10 years,” he says. “And, maybe with a bit of a lag, the public is going to begin to be concerned that when they’re watching a video of the president’s address, the data handlers are all doing their job and are not editing or censoring or modifying it.”
The stakes are high, says Tiana Laurence, cofounder of Factom and author of Blockchain for Dummies. “[With] these new types of fake videos, it’s just an arms race,” says Laurence, who has since left the company and is now an angel investor and adviser. “I see it as a new type of cyber warfare, where social media is used to affect nation states. Every country should be looking at it from a sovereignty point of view.”
With increased concern over fake news following the 2016 U.S. election, there’s plenty of interest in exploring technological solutions to combat misinformation, intentional or otherwise. Popular gif-hosting company Gfycat started using AI to identify fake porn. And just this week, Facebook, the company at the epicenter of the fake news debate, announced that it will set up a “war room” to address political misinformation in the run-up to the E.U. elections in May. It will bring together “dozens of experts from across the company—including from our threat intelligence, data science, engineering, research, community operations, and legal teams,” says communications chief Nick Clegg.
Fake news is an extension of a problem that already exists: a lack of skepticism about where the news we consume comes from, says Rogayeh Tabrizi, CEO of Theory & Practice, a Vancouver economic consulting company that works with big data. “On Facebook and Twitter and Instagram, what we are saying, indirectly and unofficially, is ‘My friends and family are the editors of news and information—they are the curators,’” she says. “Now, you have these bad actors that can come into these environments, pretend to be one of you, and start spreading fake news—because you’re not used to critical thinking anymore.”
Laurence wishes people would look at every piece of information shared with them with a “more critical eye.” “Then, when you do see something that’s totally inflammatory, taking a beat on it and doing the research, versus immediately [sharing it],” she says. “It’s your natural instinct to be like, ‘Oh my God, have you seen this? This is effing insane.’”
Combating deepfakes is not as simple as using technology to solve problems that were, well, caused by technology. It has to be done thoughtfully. With any sort of data-driven work, Tabrizi says, a solid foundation in ethics is a necessity, since algorithms can mirror human biases and have unintended consequences. (Remember Tay, the Microsoft chatbot that quickly started parroting racist, sexist views on Twitter? Or the Facebook “trending” algorithm that began promoting fake news?) “We really have to start teaching people who are training in mathematics, physics, computer science—people who, one way or another, get into data science—courses in game theory and in ethics,” says Tabrizi.
Some think the 2020 presidential campaign will provide a testing ground for deepfakes, whether or not we’re prepared for it. (And, in the worst case, it could lead to the kind of international disaster that would make something like the Russian hack of the DNC or the Wikileaks releases look like child’s play.) In preparation, the Department of Defense is working with researchers to figure out how to spot manipulated footage. Members of the House of Representatives have also expressed fears about the damage a deepfake could do. “By blurring the line between fact and fiction, deep fake [sic] technology could undermine public trust in recorded images and videos as objective depictions of reality,” members of Congress wrote in a letter to Director of National Intelligence Dan Coats in September.
For his part, Snow believes an inciting incident will likely bring deepfakes to the forefront of the public’s mind. “I believe that the concern for deepfakes is going to rise after some sort of monumental event,” he says. “Maybe some stock can be manipulated by putting words into a financial leader’s mouth, like the chairman of the Federal Reserve or a big investor like [Warren Buffett]. Maybe billions of dollars are gained or lost. Maybe that will be the impetus.”