James Scott Thinks Robots Will Help Us Stop Being So Dumb
10.11.2018

James Scott, co-author of the recent book AIQ, an exploration of how artificial intelligence can help humans make decisions, had the same math teacher as I did in college. I have complicated feelings about this.

Scott now teaches statistics at the University of Texas, and his relatively upbeat take on AI (written with Nick Polson) is a worthy counterpoint to rampant anxiety about the imminent robot takeover. But it was hard to focus on that after realizing that, nearly 20 years ago, Scott and I were both undergrads at UT. And that we both studied under Michael Starbird, who has written extensively on math and real-world thinking.

In the intervening years, I have built on Starbird’s award-winning teaching to produce precisely zero books about how humankind can interface with computers to accomplish great things. But I managed to suppress a storm of unwelcome self-reflection just long enough to have a fascinating conversation with Scott. We discussed NBA scouting and big data on the blockchain, Isaac Newton and e-commerce fraud—and what all of it can tell us about thinking with a little help from machines.

We’re beginning to realize just how bad people are at thinking about probability, and numbers in general. What kinds of problems does that cause?
Well, you get systematic mis-estimation of risks. A good example is risk in a medical context, for instance a labor ward. My sister-in-law is an obstetrician, and she tells me that the folks on the team bring very different sets of biases to every pregnancy.

The midwives tend to see more cases where there aren’t extreme medical complications, so they might bring a bias to the table that risks in pregnancy are pretty rare and everything’s going to be fine. Whereas my sister-in-law said, “When I come to a pregnancy, my default expectation is that it’s all going to hit the fan immediately, because those are the only cases I see.”

And the reality is somewhere in between those sets of cognitive biases. I think that there’s a real role for algorithms to step into the breach and provide that cognitive boost, and actually use data optimally to estimate risks and probabilities, rather than putting your finger up in the wind to see which way it’s blowing.
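To make that concrete, here is a minimal Bayesian sketch of what “using data optimally” can look like: two clinicians start from opposite priors about the complication rate, and the same evidence pulls both toward the observed rate. All the numbers are invented for illustration; nothing here comes from real obstetric data.

```python
# Minimal Beta-Binomial sketch: two clinicians with opposite priors
# about the complication rate, updated with the same (made-up) data.
# All numbers here are illustrative, not real obstetric statistics.

def posterior_mean(prior_a, prior_b, complications, total):
    """Posterior mean of a Beta(prior_a, prior_b) prior after
    observing `complications` out of `total` pregnancies."""
    return (prior_a + complications) / (prior_a + prior_b + total)

# Optimistic prior (midwife: complications are rare) vs.
# pessimistic prior (obstetrician: complications are the norm).
midwife_prior      = (1, 9)   # expects ~10% complication rate
obstetrician_prior = (9, 1)   # expects ~90% complication rate

# Hypothetical data: 30 complications in 100 pregnancies.
k, n = 30, 100

for name, (a, b) in [("midwife", midwife_prior),
                     ("obstetrician", obstetrician_prior)]:
    print(f"{name:>12}: prior {a / (a + b):.0%} -> posterior "
          f"{posterior_mean(a, b, k, n):.0%}")

# Both posteriors land near the 30% observed rate: with enough data,
# the estimate is driven by evidence rather than either bias.
```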

They think AI is just software, like Microsoft PowerPoint. It’ll be expensive, but it’s going to work out of the box, and it’ll just start giving you insights like magic.

Are we already seeing some of the systems in place to help people work together with machines in that way? What do you envision them looking like down the road, in general?
You’re seeing different models in many different walks of life. There are some areas where machines have already taken over entirely. Amazon’s pricing algorithms are a perfect example. In mom-and-pop retail stores, the shopkeeper was making a decision about how to price all the stuff on the shelf. Now you go to Amazon and there’s no human in the loop. It’s all sophisticated algorithms.

Most things are somewhere in between, though, where you have a machine off to one side, helping you make decisions. One area where we’re starting to see that first is healthcare, just because the decisions are so complex, and there’s room both for hard number crunching from data sets and for the softer clinical intuition doctors bring to the table.

But I can’t point to a single institution that’s done that at scale, and done it well. There’s not a giant system in place where it’s routine for doctors to ask a data science platform questions, the way that Captain Kirk is asking the computer for advice on the bridge of the Starship Enterprise. That’s a cartoon, but I think given the amount of healthcare data out there, it is realistic that we could expect that kind of interaction someday. I can’t tell you who’s building that right now.

You made a comparison between Amazon and the corner shopkeeper. What is the implication when these big pools of data that we need for machine learning are owned by large private corporations? Do you see negative implications of that?
That’s absolutely the way it works now, and I think that there are both positive and negative implications to that. I get a tremendous amount of value out of Google, for example. I’m not an economist, but my impression is that Google as a corporation has created radically more value for humanity than they have realized in profits. There are immense positive externalities to the kind of information exchange that they have provided.

But of course, there are downsides as well. We trade away our data and all of a sudden it gets used against us in ways that we never anticipated and never consented to. Cambridge Analytica, Russian election hacking, the list goes on and on and on. And I think it’s possible that blockchain does offer a different model for that. As far as the specifics of what that model looks like, I know there’s a lot of hype out there, and I’m honestly not qualified to sift through that hype.

I’ve spent an immense amount of time thinking about it, and even I would have to go look at some old notes. But theoretically, the idea is that you can build a public database on the blockchain for certain kinds of data.
I do know that part of the design of a blockchain system is that computing the proof of work and actually checking the data stored in each block is hard computational work. When we’re doing data analysis, we need low latency, random access, and the ability to stream through billions of data points a second. So maybe I’m just not expert enough to be able to see how to reconcile those two demands, but I’ve never had anybody come back and describe to me a credible way of squaring that circle.
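A toy example makes the tension visible. The proof-of-work sketch below (with made-up difficulty settings, not any real blockchain’s parameters) shows writes getting deliberately expensive, while the kind of cheap in-memory scan analytics depends on stays fast.

```python
# Toy proof-of-work, to make the point concrete: appending to a
# proof-of-work chain is deliberately expensive, while analytics
# wants cheap, fast scans. Difficulty and data sizes are made up.

import hashlib
import time

def mine(block_data: bytes, difficulty: int) -> int:
    """Find a nonce so that sha256(data + nonce) starts with
    `difficulty` zero hex digits. Cost grows ~16x per extra digit."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

data = b"one block of transaction records"

for difficulty in (3, 4, 5):
    start = time.perf_counter()
    nonce = mine(data, difficulty)
    elapsed = time.perf_counter() - start
    print(f"difficulty {difficulty}: nonce {nonce} in {elapsed:.2f}s")

# Compare with the analytics workload: scanning 10 million values
# already in memory takes a fraction of a second.
values = list(range(10_000_000))
start = time.perf_counter()
total = sum(values)
print(f"scan of {len(values):,} points: {time.perf_counter() - start:.2f}s")
```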

All this stuff is surrounded by an incredible forcefield of technical tedium.

There’s been a huge crest and then maybe a slight receding in the hype around “big data” and machine learning. Have you seen a shift in the perception of how these systems can be applied?
People fundamentally don’t get this about the data requirements. They think AI is just software, like Microsoft PowerPoint. It’ll be expensive, but it’s going to work out of the box, and it’ll just start giving you insights like magic. They don’t get that the thing has to be really, really specifically tuned to huge amounts of data that you’ve got in-house.

You have to have a huge amount of high-quality data about things that are relevant to your business practices. And the data that you have has to be either exactly what you care about, or a very, very close proxy for the thing that you care about. It works for Google because they have enormous amounts of click data, and they get paid per click. It works for Amazon, because they have enormous amounts of buying and purchasing behavior, and they get paid when somebody buys something.

What’s an example of someone with the wrong idea of what works?
I was approached by a representative from an NBA team, and they emphasized: “We have terabytes of data on player performance, and we want to find hidden signals that are going to predict who’s going to have a successful career.” They kept emphasizing the sheer volume of data. But what it boiled down to was they had terabytes of data . . . on 329 players. The terabytes were in the form of like, audio recordings of interviews, and psychological profiles, and motion capture data on practices and college games. [Laughs] So they didn’t actually have a lot of data, and the data they did have was a really poor proxy for ultimate career success.
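The statistical problem here is having far more candidate features than players. A small simulation (random numbers standing in for the scouting data; all sizes invented) shows why that is dangerous: with 2,000 features and 329 players, pure noise can “predict” career success perfectly in-sample and not at all out of sample.

```python
# Why "terabytes on 329 players" isn't big data: when you can
# extract far more features than you have players, pure noise will
# fit the outcome perfectly. All numbers are illustrative.

import numpy as np

rng = np.random.default_rng(0)

n_players, n_features = 329, 2000               # features >> players
X = rng.normal(size=(n_players, n_features))    # random "signals"
y = rng.normal(size=n_players)                  # random "career success"

# Least-squares fit: with more features than players there is always
# an exact solution, so training error is ~zero despite zero signal.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
train_error = np.mean((X @ beta - y) ** 2)

# On fresh players, the same coefficients are useless.
X_new = rng.normal(size=(n_players, n_features))
y_new = rng.normal(size=n_players)
test_error = np.mean((X_new @ beta - y_new) ** 2)

print(f"in-sample error:  {train_error:.4f}")   # essentially 0
print(f"out-of-sample:    {test_error:.4f}")    # ~variance of y
```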

I mean, all this stuff is surrounded by an incredible forcefield of technical tedium. So it’s not people’s fault that they don’t get how it works. But I do think that there’s a lot of fault on the vendors and software companies who are actively purveying this notion that it’ll just work out of the box. The one that comes to mind is the IBM commercial during the Super Bowl advertising how Watson is already saving lives. Within the machine learning community there’s an incredible amount of skepticism, to the point of cynicism, about that. If Watson helps save lives, that gets advertised in the Journal of the American Medical Association, not during the Super Bowl. Where’s the peer review? Where’s the actual evidence?

There were no economists, because it wasn’t even recognized as a discipline back then.

One section of your book is about Isaac Newton. He has this nice tenured position at Cambridge, and in 1696 decides instead to work at the Royal Mint. Why would somebody with the intellectual capacity of an Isaac Newton decide to spend his time working on the problem of money? What is it about money that held that intellectual pull?
The job did turn out to be very well suited for his abilities, because he was pretty darn good at figures, and a big issue was the very complicated accounting system at the Mint. But it wasn’t the problem of money per se that motivated him.

We know Cambridge as one of the best universities in the world, but in the 1690s, it was a backwater, especially compared to some of the universities on the continent, or to London itself. He traveled in the top social circles of London, because of the fame he gained from writing the Principia Mathematica, and the position at the Mint was given to him by the Chancellor of the Exchequer. Everybody in Europe recognized Newton as one of the greatest geniuses of the century, and he wanted to be in the limelight—there was a lot of ego involved.

I think of that era as this time when, if you were really, really good at one thing, you could still also be really, really good at several other things. Whereas now we live in this age where you kind of have to be a specialist.
That’s right. On this issue of coinage, and I think in early discussions of the idea of fiat money, Newton weighed in in ways that were well ahead of his time. The other people weighing in were folks like John Locke, who was not an economist, but just a broadly smart guy. There were no economists, because it wasn’t even recognized as a discipline back then. It was just all the smart people in London weighing in on these topics. And Newton wanted to be in on the action.

Wow, that’s amazing—John Locke and Isaac Newton talking about money. Why is it important to fight fraud, which was one thing Isaac Newton was trying to do at the mint?
Well, it’s just rampant. Something like 1% of all electronic transactions are fraudulent, which is shocking—for every dollar you spend online, one penny is going into the pockets of criminals. And I think it has gotten harder—it was a lot easier to pass fake checks back in the day than it is for a retail criminal now. There’s a reason criminal empires are behind most of the fraud these days: we’ve made the barriers to entry much higher.

Do you know Ken Rogoff? He has a book about cash, and the role of cash in money laundering and fraud is stunning. The U.S. Treasury prints these blocks of $100 bills that pretty much immediately go to Colombian cocaine cartels. Meanwhile, something like 3 to 5% of global GDP is dirty money that gets laundered every year.
Wow, that’s crazy. And regulatory agencies are turning a lot of the same mathematical techniques that credit card companies use to mine transaction records for fraud toward the scrutiny of banks for money laundering. It’s the HSBCs of the world that facilitate this.

Human behavior on the internet is already being very closely tracked. But we’re going to have a lot more data about other parts of the world, very soon. What do you think is going to change about the way we interact with the world?
I wish I had an answer. I have some speculation.

We welcome speculation.
One of the areas in which that kind of change is overdue is the decisions that we make day to day in our workplaces and organizations, and in the criminal justice system. Think about the way that people have made hiring decisions, or promotion decisions in HR, or the way that judges and juries have made decisions about guilt and innocence, or about the number of years in prison that somebody should receive. They’ve always tried to do some loose quantification—you know, how good or how productive an employee this person is going to be, or how much of a threat to society this person is going to be.

If we’re going to put machine learning algorithms under the microscope, I think it’s going to be impossible for human decisions to escape that same microscope.

But now there’s going to be data on it, as more and more of our actions in general are recorded, and you can correlate past to future. There are a lot of issues to confront there, particularly about bias in machine decision making, and the problems of biased data sets. You can’t just turn loose machine learning algorithms and expect them to get things right.

But that backlash is going to produce more scrutiny over human decision making. I was chatting with a judge here in Texas just the other day at lunch, and it’s not as though he ever gets a report card: here are the decisions that you made, here are the defendants that you classified as high risk, here’s what their actual recidivism records look like. Here’s how you did. I’m sure he’d probably bristle if he got one.
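For illustration, the report card Scott describes is trivial to compute once outcomes are recorded. The sketch below uses invented records, pairing a judge’s risk calls against observed recidivism.

```python
# A sketch of the "report card" the judge never gets: compare past
# risk calls with observed recidivism. All records below are invented.

from collections import Counter

# (judge's call, did the defendant actually reoffend?)
records = [
    ("high", True), ("high", False), ("low", False), ("high", True),
    ("low", True),  ("low", False),  ("high", False), ("low", False),
]

counts = Counter(records)

for call in ("high", "low"):
    total = counts[(call, True)] + counts[(call, False)]
    reoffended = counts[(call, True)]
    print(f"classified {call:>4} risk: {reoffended}/{total} "
          f"({reoffended / total:.0%}) actually reoffended")
```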

I’m sure you’re familiar with the study showing that courtroom sentences are harsher right before lunch.
Absolutely. And if we’re going to put machine learning algorithms under the microscope, I think it’s going to be impossible for human decisions to escape that same microscope. And given the record of bias, and frankly buffoonery, in human decisions these days, I think it’s a very healthy microscope to turn on.

How do you see people adjusting, on an emotional level, to letting AI have a role in their own decision making—whether it’s automated sentencing, or automated driving? How quickly or slowly are we going to come to trust the system?
I don’t know, probably one funeral at a time. This is just a question of generational change as technology advances. As people grow up with different expectations about what technology can and should be involved in, there are going to be different cultural norms and different ideas about morality and the injustices that surround these things, just as you see with any major technological revolution of the past.

Well, I think that’s a nice note to end on—or, I don’t know, a complicated note to end on. Would you like the last word?
I’ll come back to this theme with healthcare. I hope that there is some combination of technical expertise and venture capital and demand from the healthcare industry to really bring these systems to market, where people can actually benefit from the kind of machine learning that is now routine at Google and Amazon and Facebook.

[Healthcare is] 20% of our economy, and there is some incredibly low-hanging fruit out there that simply is not being picked due to the lack of correspondence between shareholder value and patient value in the American healthcare system. This is something that’s so important, such a large chunk of our economy, that I really, really hope the right combination of people comes together that makes sense.