Electronic Frontier Foundation’s Gennie Gebhart: Blockchain Isn’t the Solution to Data Privacy
04.10.2019

Facebook is harvesting our data. Cameras track our movements. In New York, landlords want to replace your keys with facial recognition technology. AI, deepfakes, phishing attacks—every day the world looks more like Black Mirror.

Enter the EFF. The Electronic Frontier Foundation, founded way back in 1990, is a non-profit watchdog for “defending civil liberties in the digital world.” They do a mix of advocacy, policy analysis, legal work, and reporting. Their site includes recent stories like “To Search Through Millions of License Plates, Police Should Get a Warrant,” and “Your Fourth Amendment Rights Should Not be Limited by Terms of Service.”

Gennie Gebhart is EFF’s Associate Director of Research. This wasn’t her plan. In 2014, she was working as a librarian—or, more accurately, a library computer systems expert—in Thailand, trying to improve citizens’ access to cell phones and the internet. Then the coup happened. Civil liberties were squashed. Many of her friends were forced to flee. “It was a huge, massive wake-up call,” she says now. “I was seeing censorship and surveillance firsthand that I hadn’t in the United States, particularly as a middle-class white woman. After that I focused completely on censorship, privacy, and surveillance.”

So how scared should we be? Gebhart explains why surveillance techniques tend to make marginalized communities even more marginalized, gives some practical things we can do to protect ourselves, shares her advice for Mark Zuckerberg, and politely explains why [ahem] blockchain likely isn’t the answer.

What are some privacy violations that most Americans aren’t thinking about?
You don’t have to travel far from home to find egregious, systematic privacy violations. Right now there’s a bill in Oregon that’s a kind of pay-per-privacy thing that would incentivize you to sell your health data. It’s bad. And in the mainstream media, people are finally talking about how the first citizens in the United States who experience surveillance techniques tend to be low-income, marginalized communities.

What do you mean exactly?
Police surveillance. Privacy violations in the course of people receiving welfare, or other government assistance. Middle-class or wealthy Americans don’t see the surveillance techniques that are used first, and then honed, on the poor. This goes back to colonial times, when low-income communities were monitored at a higher, more pervasive level.

Whoa, colonial times, really?
In Colonial America, most towns had a position called the overseer of the poor, who would track poor people. This pattern of surveillance isn’t new. Now it’s just turbocharged with the technology available to law enforcement and the government.

Like facial recognition software.
Right. Facial recognition. For a long time, federal databases of arrest records [combined with facial recognition software] consistently and disproportionately falsely identified people of color, women, and younger people, because the software is optimized for middle-aged white male faces.

The ACLU did a bit of a stunt. They ran facial recognition software—matched against a federal database of arrest photos—on members of Congress, and published which members were misidentified as criminals. And it was mostly members of color. That was one of those activism campaigns I looked at and said, “Ack, I wish I’d done that! That’s so brilliant!” [Laughs.]

That is brilliant. And terrible.
It’s a vicious cycle. As machine learning and AI take more prominence in policing, the tools will learn that those who are surveilled more often are more “likely” to be criminals. That’s the entire problem with predictive policing. If you feed the system bad data, it will spit it right back at you, reinforcing those prejudices and those systemic patterns.
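
To see that vicious cycle in miniature, here is a toy simulation—the numbers are invented, and it is not a model of any real policing system. Two neighborhoods have identical underlying incident rates, but incidents are only recorded where patrols are sent, and next year’s patrols follow this year’s records:

```python
import random

random.seed(0)

TRUE_RATE = 0.1              # identical underlying incident rate in both neighborhoods
patrols = {"A": 9, "B": 1}   # neighborhood A starts out heavily over-patrolled
recorded = {"A": 0, "B": 0}  # cumulative recorded incidents, i.e. the "data"

for year in range(10):
    for hood in patrols:
        # Incidents only enter the data where patrols are looking.
        for _ in range(patrols[hood] * 100):
            if random.random() < TRUE_RATE:
                recorded[hood] += 1
    # "Predictive" step: allocate next year's 10 patrols by past records.
    total = recorded["A"] + recorded["B"]
    patrols["A"] = round(10 * recorded["A"] / total)
    patrols["B"] = 10 - patrols["A"]

print(patrols)   # stays ~{'A': 9, 'B': 1}: the initial skew never self-corrects
print(recorded)  # A's records dwarf B's despite identical true rates
```

Because neighborhood A starts with nine of the ten patrols, its records dominate forever: the system spits back exactly the skew it was fed.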

"So Mark Zuckerberg discovered privacy for the first time a couple of weeks ago. He’s like, 'Oh my gosh, guys, there’s this thing! It’s the future!' People who have been working on it for decades are like, 'Yeah, no shit.'”

This takes different forms in different countries. Chinese AI face recognition is biased towards Chinese faces. For the dominant culture—or the people who are creating the technology—the technology will tend to be friendlier to them. In the U.S., those people tend to look a certain way. In China they predominantly look a different way. The biases of the creators are fed into the systems in a way that consistently marginalizes the already marginalized.

So back to the U.S. How much of your time is spent dealing with Facebook?
For me, it’s a good half of my day, if not all of it, at times. I’m very much Yelling-at-Facebook-in-Chief. So you asked the right person. [Laughs.] So Mark Zuckerberg discovered privacy for the first time a couple of weeks ago. He’s like, “Oh my gosh, guys, there’s this thing! It’s the future!” People who have been working on it for decades are like, “Yeah, no shit.”

What did you think of his privacy plan?
The plan he laid out is great, actually. It’s good to see Facebook embracing privacy and security fundamentals. But announcing a plan is one thing, and actually implementing it is entirely something else. I think his announcement was more a philosophical statement. It was not an implementation guideline.

Can you give an example?
It focused a lot on end-to-end encryption. Now, we [at EFF] love end-to-end encryption. I want to see a world with more end-to-end encryption. But E-to-E is really not the full story. In many ways, it’s the easy part. With messaging in particular, so much of that announcement is about how they’re going to smash together WhatsApp, Instagram, and Messenger—WhatStaMessenger—and it’s going to be end-to-end encrypted, and it’ll be great.

Wait, so what’s your concern, exactly?
There are a lot of things one can screw up around the encryption engine. In practice, the biggest risks to someone’s security and privacy lie very far away from the encryption engine itself. Like backups. Backups are an interesting chink in the armor.

WhatsApp already has a backup option. If you back up to, say, Apple iCloud, now there’s an unencrypted copy of your messages sitting on iCloud. Apple can see it. Law enforcement could request it. It could be exposed in a leak. And that kind of undermines the whole point of end-to-end encryption.

But can’t you choose to turn off backups?
Backups are so interesting. Let’s say that I really care about my privacy, so I’m going to turn off backups. But if anyone I’m talking to has backups turned on, then our conversation is uploaded. It’s very much a team sport. For some people, backups are great—if you drop your phone in a puddle, you can get all your messages back. For some people, that’s security: “My messages are secure, and I’m not going to lose them.”

For others, that’s incredibly destructive. There’s no one-size-fits-all. If Facebook is going to try to build one secure messenger to rule them all, they have an impossible task.
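
Her “team sport” point fits in a few lines—hypothetical types, not WhatsApp’s actual implementation. The thread is only as private as its least careful participant:

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    cloud_backup: bool  # True means a plaintext copy leaves the device

def thread_stays_private(participants: list[Participant]) -> bool:
    # Privacy here is an all-or-nothing group property: a single
    # participant backing up plaintext exposes the whole conversation.
    return all(not p.cloud_backup for p in participants)

chat = [
    Participant("me", cloud_backup=False),     # I turned backups off...
    Participant("friend", cloud_backup=True),  # ...but my friend didn't
]
print(thread_stays_private(chat))  # False: my settings alone can't protect us
```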

[Editor’s note: You can read her more comprehensive thoughts on the Facebook announcement here.]

Okay, so now’s when I have to ask: Do you see blockchain as a way to help solve some of these challenges?
[Awkward pause.] This is where I always feel really bad when I talk to folks from blockchain-focused publications. I’m like…no. Just, no.

Just…no.

That’s okay! I don’t have an agenda here. I just want to know what you actually think.
Okay, here’s the real story from my perspective: There are so many security and privacy fundamentals to get right first. So any time I see something trumpeting the use of blockchain, I take it less seriously. It’s kind of a “buzzword salad” thing. It’s like, Whoa, if you haven’t gotten everything else perfectly right—which, by the way, no one has—then what are you doing messing around with this flavor of the moment?

With data privacy, security, anonymity, and censorship resistance, there’s so much stuff on the ground that we have to get right first. So I don’t see a place for it. Or maybe not a place for it yet. Never say never. Technology evolves. Things could happen.

What’s an example of the stuff on the ground that we have to get right first?
Look at Facebook’s “pivot to privacy.” The fundamentals are end-to-end encryption. The fundamentals are not storing data in countries with bad human rights records. There’s still so much to do there. I’d only look for a newer technology, like blockchain, to fit once we’ve nailed all of those.

Got it. And I appreciate the skepticism. But if there were a super blockchain enthusiast in the room, they’d argue that some of these fundamental problems can be solved with blockchain. Like how data isn’t stored in one centralized server. Wouldn’t this, in theory, tackle some of the core issues? Is the concern that blockchain technology is just too speculative at this point, and too far away from reality?
Decentralization is great, but it’s not always a silver bullet. It can create new engineering challenges. It can create new security challenges. Again, with the Facebook example—the secure messaging community has not reached consensus, and likely never will, because they’re very opinionated about federation, and about interoperability. And is great end-to-end encryption even possible in an ecosystem that you don’t fully control?

I think I follow that, but not 100 percent.
Look at Signal, which is, to a lot of people, the best out there as far as secure messaging. Moxie [Marlinspike, CEO of Signal] is super against this [federation].

[Editor’s note: Federation refers, essentially, to the idea of multiple independent services interoperating through a shared protocol—the way different email providers can all exchange mail with one another.]

Moxie’s argument is: we’ve worked really hard to make the best encryption out there. If we “federate” with a lot of other messengers, or a lot of other apps, we’re going to lose that guarantee. Some people think that [Moxie’s argument] is really reasonable from a technical perspective. Others see it as anti-competitive.

Mark Zuckerberg used the privacy excuse in his manifesto. He said, “We’re going to keep it to our services, so we can provide security guarantees and keep people safe.” Some people will say, “Yeah, that’s just the limits of how to provide great end-to-end encryption.” Other people will say, “That’s an anti-competitive move.” I think neither of them is wrong. But it’s going to be a tradeoff.

Facebook does have an interest in providing end-to-end encryption in a safe way. They also have an interest in entrenching their market position. Conveniently, they make arguments that let those two interests fit together.

That is convenient.
This is one that’s really hard. So when we talk about decentralization as a great thing, it’s like…yeah, but it can create different problems, depending on what you’re valuing. For a secure and private system, it comes back to this: there cannot be one to rule them all. It just cannot exist.

So what is it, ideally, that you want to see?
Systems and companies being explicit and transparent about those choices. And when possible, giving users control.

"People want really cool, high-speed tips that make them feel like Edward Snowden, but my No.1 thing, really, is to update your software. Update, update, update. Whenever it says 'Update,' for the love of god, do it. You have time."

The best we can aim for, realistically, is something that can work for most people most of the time. That’s where I think that competition is also a privacy issue. If there is one messenger in the world, and it’s the only one, then a lot of people are going to be screwed. You need an ecosystem where there are choices. If WhatsApp really isn’t going to work for me, because I cannot risk anyone having backups turned on, then I need to have Signal. And in a cyclical, almost Kafka-esque way, now we come back to: What about decentralization and federation? And then we get back to those security concerns, and we can go in circles forever, as many engineers do.

Totally Kafka-esque.
Yeah, it’s not a perfectly satisfying, Rah Rah! answer. [Laughs.] There are no easy answers.

Okay, onto some of your bread and butter. What should people do to protect their digital privacy?
We have a good resource at EFF called Surveillance Self-Defense, or SSD. It has our best tips and how-tos.

Cool. What’s your absolute No. 1 go-to tip?
People want really cool, high-speed tips that make them feel like Edward Snowden, but my No. 1 thing, really, is to update your software. Update, update, update. Whenever it says “Update,” for the love of god, do it. You have time. Restart your computer. Update your software. Most things in this world to worry about—unless you are Edward Snowden—are not targeting you; they’re targeting people with out-of-date software.

Oh, man. I really need to update my software.
This is how the update ecosystem works: A company—let’s say Apple—will learn about a problem. Maybe a good security researcher notified them and said, “Hey, I found a bug.” Even if Apple found out about the problem secretly—no one else knows—the second the update goes out, the bug can be reverse-engineered from that update.

So once the update goes out, the clock is ticking, and you are vulnerable. The longer you put off updates, the more vulnerable you’re going to be. So by pressing “Update,” you can now benefit from all the work that Apple’s engineers did to patch your system. It’s the absolute No. 1 thing. It’s really boring.
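
A rough sketch of that ticking clock—the software names, version numbers, and dates below are invented for illustration:

```python
from datetime import date

# Hypothetical inventory: name -> (installed, latest, date the patch shipped).
SOFTWARE = {
    "os":      ("12.1", "12.3", date(2019, 3, 25)),
    "browser": ("66.0", "66.0", date(2019, 3, 20)),
}

def exposure_report(software, today):
    for name, (installed, latest, patched) in software.items():
        if installed == latest:
            print(f"{name}: up to date")
        else:
            # Every day since the patch shipped is a day attackers have had
            # to reverse-engineer the fix and target unpatched machines.
            days = (today - patched).days
            print(f"{name}: {installed} -> {latest}; exposed ~{days} days. Update now.")

exposure_report(SOFTWARE, today=date(2019, 4, 10))
```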

You’re giving me chills. For months I’ve been avoiding this. Months.
Oh my god. Go do that.

What risks are out there, lurking, that most people should be more worried about?
I think “fake news,” to some extent, is a privacy problem. I’m getting a little philosophical, but hear me out. Privacy means control over my information, and who knows it, and when. It is also very much the right to be left alone. The scariest thing on the internet is when things are pretending to be something they’re not. Phishing links. Secret malware attachments. Fake news that’s going to lead you to a phishing link.

Then combine that with the second thing people should be more worried about: information about you that’s freely available on the internet. I often suggest googling yourself.

Open up Firefox’s private mode, or Chrome’s incognito mode, and just see what’s there. Google your phone number. Google your home address. They can show up in places you don’t expect. Once you know where they are, you can take steps to remove the stuff you don’t want.
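
For the searches themselves, something as simple as this works—the personal details below are placeholders; substitute your own:

```python
from urllib.parse import quote_plus

# Hypothetical details to audit; exact-match quotes narrow the results.
details = [
    '"Jane Q. Public"',
    '"555-867-5309"',              # phone number
    '"123 Main St, Springfield"',  # home address
]

for d in details:
    # Paste each URL into a private/incognito window so results
    # aren't shaped by your logged-in accounts.
    print("https://www.google.com/search?q=" + quote_plus(d))
```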

Is there anything out there that gives you optimism? Or is it all grim and bleak?
It’s not all doom and gloom, otherwise I would have trouble even making it to Tuesday. One thing that very much gives me hope is that, since Cambridge Analytica, there’s a lot more awareness.

I very much think it’s not the average consumer’s job to go get a law degree and a computer science degree just to read the Terms of Service and understand the settings. That should not be anyone’s job, just as it’s not your job to make sure your water is safe. It’s not your job to make sure your building won’t fall down.

But at the same time, that awareness is putting privacy on lawmakers’ plates. It’s making tech giants like Facebook suddenly discover privacy. Consumer uproar has an effect. These voices have an impact, and it’s been fantastic to see that happen.

The thing that gives me cautious hope is: OK, now what? We learned about an egregious thing, and we’re all really upset, but now what’s going to happen? Regulation and law are definitely coming. There’s a lot to hope for, but the devil is in the details. Congress could make a great data privacy law, but there are even more ways they could make a very bad one. That’s a place where we’ll be fighting for a while, but there’s reason for hope.