Olaf Groth is a professor of digital futures at Hult International Business School. His new book Solomon’s Code, written with AI researcher Mark Nitzberg, is an exploration of how artificial intelligence and data-gathering will shape human life in the future. The following interview has been condensed and edited.
You compare AI and public blockchains in that both are double-edged swords. Does progress always have a cost?
Most innovations are double-edged. Even a car can be abused and weaponized. Even vaccination technologies have a dark side: viruses mutate faster. AI, like blockchain, has the potential for misuse because it is opaque to most people. Most of us don’t want to talk math, much less computer code. The technological impact is cloaked. It needs to be monitored better.
AI has a tendency to increase social herding, grouping people together. Blockchain, at least in its rhetoric, is pushing back against Facebook-style monitoring. Do innovations balance each other out?
The real world is more complex than that. Blockchain has potential for decentralization and data anonymization, but there are risks there. With AI and blockchain, we don’t know who’s behind the steering wheel. Blockchain can be used to get around undue state interference. In AI, you have remote influencing, which can obscure who is doing what.
Technology evolves in clusters. The internet came together with mobile, e-commerce and video technology, giving us the content-rich web experience we have today. I think blockchain and AI are coming together in another cluster, with quantum computing, gene editing through CRISPR, 3D printing, and nanotechnologies, creating a new Cognitive Era. Increasingly, we will have capabilities to alter the physical state of humans and their environment.
We need to think of technology’s broader impact on society. Silicon Valley is used to saying ‘we put this stuff out there, it’s going to disrupt an industry, and you’re either with us or against us.’ That’s fine, as long as we’re talking about outdated industrial structures. But when you are talking about social fabric, the fun stops. We need to recognize the feedback loops. What happens when Russia starts to manipulate my machine learning and I see only targeted news on my Facebook feed? We need more of an ethical screen. These questions can’t be left to philosophers.
In the book, you have a fictional scenario where an AI chooses a restaurant for a character based on her mood. If people are defined by free will, is something like that a loss of humanity?
We have to make sure that people constantly know how and why decisions are made for them. So if they disagree, or need to know more, they can press a button, and find out. We need opt-out buttons. Without that, we are losing our human agency, which is what defines us.
I would make a case for the value of random chance, as well.
Right. If these machines make decisions for me, and guide me through life in a way that’s always self-reinforcing, I lack the ability to be randomly and serendipitously impacted by my environment, and to really engage in human discovery.
The first line of your book’s first chapter talks about how quickly innovations come to seem normal. How can we be more attuned to the risks as they’re developing?
We are becoming smarter. We used to just worry about whether people were wasting too much time on Facebook and becoming unhappy. Now we’re actually realizing, ‘wow, this thing is changing the fabric of our society.’ I think awareness will increase based on these negative experiences.
But to get ahead of things, we need panels that look into what algorithms do. We don’t let people just develop an airplane and then say, ‘please get on board, we’ll try this thing.’ No, you have FAA engineers certifying the parts and the planes, and they’re sworn to secrecy. We need to have similar mechanisms where experts are auditing code under strict nondisclosures. They will have to say, ‘look, this code is clean, this code has algorithms in it that negate biases proliferated in data, it’s good, it’s best effort.’
Do you feel like AI is going to simplify the way people interact with the world? Or is it going to increase the level of complexity that we deal with?
It’ll simplify certain transactions. My computer, my handset, my watch, my car, my house will understand my basic preferences and help make certain transactions easier. But I also think the interplay of these algorithms will accelerate change. Because so many things are becoming smarter, we’re dealing with not only an accelerated, but a more complex environment. We’re losing the ability to understand how different parts of a system come together and interplay.
In the book, we talk about how [the Defense Advanced Research Projects Agency] is developing AI systems to help understand and mitigate climate change and food crises before they happen. That will be tremendously helpful for many people around the world. It’s good for humanity, good for society. But we are also at a point where even if you are a super-bright computer scientist, you can no longer understand how the machine thinks about everything it thinks about. We’re at this point where you accept that it can do some marvelous things, but you also accept that you cannot scrutinize everything.
Presumably, some people will benefit more from this changing environment than others?
The techno-affluent class will benefit. It has always benefited. Capital will benefit, because technology amplifies the need for and the value of capital. We will have to see if labor benefits.
And that’s the risk if we don’t democratize some of this stuff—whether it’s open data pools, or whether it is open source software. We are bound to see the smart, techno-affluent class becoming ever smarter and ever richer.
Successful tech CEOs, like Mark Zuckerberg, weren’t able to think about the second-order consequences of what they were building. How do we motivate people in those positions to do that sort of thinking?
There’s an organic way, and there’s an inorganic way. You hope Mr. Zuckerberg is a smart guy and that he’ll learn from his failures. The inorganic way is for the U.S. Congress and the European Parliament to get on top of this thing. They need to hire more staff assistants and advisors to get smarter about technology, and start speaking the language. The performance of the European Parliament and the U.S. Congress in taking Mr. Zuckerberg to task recently was abysmal.
Because they don’t know how to press him.
They didn’t know the difference between the application layer and middleware. They can’t exercise power unless they understand where it resides in the technology.
Is it going to be possible for consumers to opt out in the future?
What Europe has been doing with its General Data Protection Regulation potentially creates a more equitable model for smart technologies. But even if it doesn’t, when a market of 500 million people shields itself and says, ‘If you want to come here, you have to respect the rules,’ it sends a signal to the big internet companies to become smarter. And with China increasingly closing itself off, I think that’s going to hurt the Googles and Facebooks of the world.
We’ve touched on a lot of dark stuff. What are the biggest positives potentially for these technologies?
AI will help us understand the complexity of the human body and allow us to perform system-of-systems analyses. The sharing, routing, and management of smart transportation technologies will be a winner for AI. It will help teachers to teach and students to learn. And there’s tremendous potential to optimize crime fighting and food distribution in a way that takes destructive human political quarrels out of it.
But it’s a very tricky balance, because we need political discourse to shape mindsets, to form political will. If you preempt all of that with smart machines, then people are not going to be able to make up their own minds anymore. It’s another double-edged sword, even if I see broad transformational potential for society over the next 20 to 25 years.