Joi Ito, Scott Dadich, and President Barack Obama photographed in the Roosevelt Room of the White House on August 24, 2016.

IT’S HARD TO think of a single technology that will shape our world more in the next 50 years than artificial intelligence. As machine learning enables our computers to teach themselves, a wealth of breakthroughs emerges, ranging from medical diagnostics to cars that drive themselves. A whole lot of worry emerges as well. Who controls this technology? Will it take over our jobs? Is it dangerous? President Obama was eager to address these concerns. The person he wanted to talk to most about them? Entrepreneur and MIT Media Lab director Joi Ito. So I sat down with them in the White House to sort through the hope, the hype, and the fear around AI. That and maybe just one quick question about Star Trek. Scott Dadich

Scott Dadich: Thank you both for being here. How’s your day been so far, Mr. President?

Barack Obama: Busy. Productive. You know, a couple of international crises here and there.

Dadich: I want to center our conversation on artificial intelligence, which has gone from science fiction to a reality that’s changing our lives. When was the moment you knew that the age of real AI was upon us?

Obama: My general observation is that it has been seeping into our lives in all sorts of ways, and we just don’t notice; and part of the reason is because the way we think about AI is colored by popular culture. There’s a distinction, which is probably familiar to a lot of your readers, between generalized AI and specialized AI. In science fiction, what you hear about is generalized AI, right? Computers start getting smarter than we are and eventually conclude that we’re not all that useful, and then either they’re drugging us to keep us fat and happy or we’re in the Matrix. My impression, based on talking to my top science advisers, is that we’re still a reasonably long way away from that. It’s worth thinking about because it stretches our imaginations and gets us thinking about the issues of choice and free will that actually do have some significant applications for specialized AI, which is about using algorithms and computers to figure out increasingly complex tasks. We’ve been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed, and it promises to create a vastly more productive and efficient economy. If properly harnessed, it can generate enormous prosperity and opportunity. But it also has some downsides that we’re gonna have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages.

Joi Ito: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.

Obama: Right.

Ito: But they underestimate the difficulties, and I feel like this is the year that artificial intelligence becomes more than just a computer science problem. Everybody needs to understand that how AI behaves is important. In the Media Lab we use the term extended intelligence [1], because the question is, how do we build societal values into AI?

Obama: When we had lunch a while back, Joi used the example of self-driving cars. The technology is essentially here. We have machines that can make a bunch of quick decisions that could drastically reduce traffic fatalities, drastically improve the efficiency of our transportation grid, and help solve things like carbon emissions that are causing the warming of the planet. But Joi made a very elegant point, which is, what are the values that we’re going to embed in the cars? There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It’s a moral decision, and who’s setting up those rules?

Ito: When we did the car trolley problem [2], we found that most people liked the idea that the driver and the passengers could be sacrificed to save many people. They also said they would never buy a self-driving car. [Laughs.]
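
The value choice Ito describes can be made concrete in a few lines of code. What follows is a minimal, hypothetical sketch, not the Media Lab’s actual study code: the car’s lose-lose decision reduces to a cost function, and the weights in that function are the values being embedded.

```python
# Hypothetical sketch of a utilitarian decision rule for a driverless car
# facing only lose-lose options. The weights are the "values we embed":
# an impartial rule weighs every life equally; a self-protective rule
# weighs the passengers more heavily.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    passenger_deaths: int
    pedestrian_deaths: int

def choose(outcomes, passenger_weight=1.0, pedestrian_weight=1.0):
    # Pick the action with the lowest weighted death toll.
    # Ties are broken by list order -- itself a value judgment.
    return min(outcomes, key=lambda o: passenger_weight * o.passenger_deaths
                                       + pedestrian_weight * o.pedestrian_deaths)

dilemma = [
    Outcome("swerve into barrier", passenger_deaths=5, pedestrian_deaths=0),
    Outcome("stay on course",      passenger_deaths=0, pedestrian_deaths=5),
]

print(choose(dilemma))                        # impartial weights: a tie; first option wins
print(choose(dilemma, passenger_weight=2.0))  # buyer-protective weights: stays on course
```

The survey tension Ito cites lives in those two parameter settings: people endorse the impartial weights in the abstract but would shop for the buyer-protective ones.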

Dadich: As we start to get into these ethical questions, what is the role of government?

Obama: The way I’ve been thinking about the regulatory structure as AI emerges is that, early in a technology, a thousand flowers should bloom. And the government should add a relatively light touch, investing heavily in research and making sure there’s a conversation between basic research and applied research. As technologies emerge and mature, then figuring out how they get incorporated into existing regulatory structures becomes a tougher problem, and the government needs to be involved a little bit more. Not always to force the new technology into the square peg that exists but to make sure the regulations reflect a broad-based set of values. Otherwise, we may find that it’s disadvantaging certain people or certain groups.

Ito: I don’t know if you’ve heard of the neurodiversity movement, but Temple Grandin [3] talks about this a lot. She says that Mozart and Einstein and Tesla would all be considered autistic if they were alive today.

Obama: They might be on the spectrum.

Ito: Right, on the spectrum. And if we were able to eliminate autism and make everyone neuro-normal, I bet a whole slew of MIT kids would not be the way they are. One of the problems, whether we’re talking about autism or just diversity broadly, is when we allow the market to decide. Even though you probably wouldn’t want Einstein as your kid, saying “OK, I just want a normal kid” is not gonna lead to maximum societal benefit.

Obama: That goes to the larger issue that we wrestle with all the time around AI. Part of what makes us human are the kinks. They’re the mutations, the outliers, the flaws that create art or the new invention, right? We have to assume that if a system is perfect, then it’s static. And part of what makes us who we are, and part of what makes us alive, is that we’re dynamic and we’re surprised. One of the challenges that we’ll have to think about is, where and when is it appropriate for us to have things work exactly the way they’re supposed to, without surprises?

Dadich: When we’re talking about that extended intelligence as it applies to government, private industry, and academia, where should the center of that research live, if there even is a center?

Ito: I think MIT would argue that it should be at MIT. [Laughs.] Historically it probably would have been a group of academics with help from a government. But right now, most of the billion-dollar labs are in business.

Obama: We know the guys who are funding them, and if you talk to Larry Page or others, their general attitude, understandably, is, “The last thing we want is a bunch of bureaucrats slowing us down as we chase the unicorn out there.”

Part of the problem that we’ve seen is that our general commitment as a society to basic research has diminished. Our confidence in collective action has been chipped away, partly because of ideology and rhetoric.

The analogy that we still use when it comes to a great technology achievement, even 50 years later, is a moon shot. And somebody reminded me that the space program was half a percent of GDP. That doesn’t sound like a lot, but in today’s dollars that would be $80 billion that we would be spending annually … on AI. Right now we’re spending probably less than a billion. That undoubtedly will accelerate, but part of what we’re gonna have to understand is that if we want the values of a diverse community represented in these breakthrough technologies, then government funding has to be a part of it. And if government is not part of financing it, then all these issues that Joi has raised about the values embedded in these technologies end up being potentially lost or at least not properly debated.
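
The arithmetic behind the moon-shot comparison is easy to check. A back-of-the-envelope sketch, assuming a US GDP of roughly $16 trillion (an assumption consistent with the $80 billion quoted; actual 2016 GDP ran somewhat higher):

```python
# Back-of-the-envelope check of the moon-shot comparison.
# Assumption (not from the interview): US GDP of about $16 trillion.
gdp = 16e12            # dollars, assumed
apollo_share = 0.005   # "half a percent of GDP"

moonshot_equivalent = gdp * apollo_share
print(f"${moonshot_equivalent / 1e9:.0f} billion per year")  # -> $80 billion

# Versus current AI research spending cited as "less than a billion":
print(f"{moonshot_equivalent / 1e9:.0f}x the cited ~$1 billion")  # roughly an 80x gap
```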

Dadich: You bring up a really interesting tension that Joi has written about: the difference between innovation that happens in the margins and the innovation that happens in something like the space program. How do we make sure the transmission of all these ideas can happen?


Obama: I’ve tried to emphasize that just because the government is financing it and helping to collect the data doesn’t mean that we hoard it or only the military has it. To give a very concrete example: Part of our project in precision medicine is to gather a big enough database of human genomes from a diverse enough set of Americans. But instead of giving money to Stanford or Harvard, where they’re hoarding their samples, we now have this entire genetic database that everybody has access to. There is a common set of values, a common architecture, to ensure that the research is shared and not monetized by one group.

Dadich: But there are certainly some risks. We’ve heard from folks like Elon Musk and Nick Bostrom [4] who are concerned about AI’s potential to outpace our ability to understand it. As we move forward, how do we think about those concerns as we try to protect not only ourselves but humanity at scale?

Obama: Let me start with what I think is the more immediate concern—it’s a solvable problem in this category of specialized AI, and we have to be mindful of it. If you’ve got a computer that can play Go, a pretty complicated game with a lot of variations, then developing an algorithm that lets you maximize profits on the New York Stock Exchange is probably within sight. And if one person or organization got there first, they could bring down the stock market pretty quickly, or at least they could raise questions about the integrity of the financial markets.

Then there could be an algorithm that said, “Go penetrate the nuclear codes and figure out how to launch some missiles.” If that’s its only job, if it’s self-teaching and it’s just a really effective algorithm, then you’ve got problems. I think my directive to my national security team is, don’t worry as much yet about machines taking over the world. Worry about the capacity of either nonstate actors or hostile actors to penetrate systems, and in that sense it is not conceptually different than a lot of the cybersecurity work we’re doing. It just means that we’re gonna have to be better, because those who might deploy these systems are going to be a lot better now.

Ito: I generally agree. The only caveat is that there are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we’re going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen.

Obama: And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man.

Ito: What’s important is to find the people who want to use AI for good—communities and leaders—and figure out how to help them use it.

Obama: Traditionally, when we think about security and protecting ourselves, we think in terms of armor or walls. Increasingly, I find myself looking to medicine and thinking about viruses, antibodies. Part of the reason why cybersecurity continues to be so hard is because the threat is not a bunch of tanks rolling at you but a whole bunch of systems that may be vulnerable to a worm getting in there. It means that we’ve got to think differently about our security, make different investments that may not be as sexy but may actually end up being as important as anything.

What I spend a lot of time worrying about are things like pandemics. You can’t build walls in order to prevent the next airborne lethal flu from landing on our shores. Instead, what we need to be able to do is set up systems to create public health systems in all parts of the world, quick triggers that tell us when we see something emerging, and make sure we’ve got quick protocols and systems that allow us to make vaccines a lot smarter. So if you take a public health model, and you think about how we can deal with, you know, the problems of cybersecurity, a lot may end up being really helpful in thinking about the AI threats.

Ito: And just one thing that I think is interesting is when we start to look at the microbiome. There’s a lot of evidence to show that introducing good bacteria to fight bad bacteria—to not sterilize—is a strategy.

Obama: Absolutely. I still don’t let Sunny and Bo [5] lick me, because when I walk them on the side lawn, some of the things I see them picking up and chewing on, I don’t want that, man. [Laughs.]

Ito: We have to rethink what clean means, and it’s similar whether you’re talking about cybersecurity or national security. I think that the notion that you can make strict borders or that you can eliminate every possible pathogen is difficult.

Dadich: Is there also a risk that this creates a new kind of arms race?

Obama: I think there’s no doubt that developing international norms, protocols, and verification mechanisms around cybersecurity generally, and AI in particular, is in its infancy. Part of what makes this an interesting problem is that the line between offense and defense is pretty blurred. And at a time when there’s been a lot of mistrust built up about government, that makes it difficult. When you have countries around the world who see America as the preeminent cyberpower, now is the time for us to say, “We’re willing to restrain ourselves if you are willing to restrain yourselves.” The challenge is the most sophisticated state actors—Russia, China, Iran—don’t always embody the same values and norms that we do. But we’re gonna have to surface this as an international issue in order for us to be effective.

Ito: I think we’re in a golden period where people want to talk to each other. If we can make sure that the funding and the energy goes to support open sharing, there is a lot of upside. You can’t really get that good at it in a vacuum, and it’s still an international community for now.

Obama: I think Joi is exactly right, and that’s why we’ve been convening a series of meetings with everybody who’s interested in this. One thing that we haven’t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren’t spending a lot of time right now worrying about singularity—they are worrying about “Well, is my job going to be replaced by a machine?”

I tend to be on the optimistic side—historically we’ve absorbed new technologies, and people find that new jobs are created, they migrate, and our standards of living generally go up. I do think that we may be in a slightly different period now, simply because of the pervasive applicability of AI and other technologies. High-skill folks do very well in these systems. They can leverage their talents, they can interface with machines to extend their reach, their sales, their products and services.

Low-wage, low-skill individuals become more and more redundant, and their jobs may not be replaced, but wages are suppressed. And if we are going to successfully manage this transition, we are going to have to have a societal conversation about how we manage this. How are we training and ensuring the economy is inclusive if, in fact, we are producing more than ever, but more and more of it is going to a small group at the top? How do we make sure that folks have a living income? And what does this mean in terms of us supporting things like the arts or culture or making sure our veterans are getting cared for? The social compact has to accommodate these new technologies, and our economic models have to accommodate them.


Ito: It’s actually nonintuitive which jobs get displaced, because I would bet if you had a computer that understood the medical system, was very good at diagnostics and such, the nurse or the pharmacist is less likely than the doctor to be replaced—they are less expensive. There are actually very high-level jobs, things like lawyers or auditors, that might disappear. Whereas a lot of the service businesses, the arts, and occupations that computers aren’t well suited for won’t be replaced. I don’t know what you think about universal basic income [6], but as we start to see people getting displaced there’s also this idea that we can look at other models—like academia or the arts, where people have a purpose that isn’t tied directly to money. I think one of the problems is that there’s this general notion of, how can you be smart if you don’t have any money? In academia, I see a lot of smart people without money.

Obama: You’re exactly right, and that’s what I mean by redesigning the social compact. Now, whether a universal income is the right model—is it gonna be accepted by a broad base of people?—that’s a debate that we’ll be having over the next 10 or 20 years. You’re also right that the jobs that are going to be displaced by AI are not just low-skill service jobs; they might be high-skill jobs but ones that are repeatable and that computers can do. What is indisputable, though, is that as AI gets further incorporated, and the society potentially gets wealthier, the link between production and distribution, how much you work and how much you make, gets further and further attenuated—the computers are doing a lot of the work. As a consequence, we have to make some tougher decisions. We underpay teachers, despite the fact that it’s a really hard job and a really hard thing for a computer to do well. So for us to reexamine what we value, what we are collectively willing to pay for—whether it’s teachers, nurses, caregivers, moms or dads who stay at home, artists, all the things that are incredibly valuable to us right now but don’t rank high on the pay totem pole—that’s a conversation we need to begin to have.

Dadich: Mr. President, what technology are you looking at to solve the biggest challenges that you see in government?

Obama: There is a whole bunch of work we have to do around getting government to be more customer friendly and making it at least as easy to file your taxes as it is to order a pizza or buy an airline ticket. Whether it’s encouraging people to vote or dislodging Big Data so that people can use it more easily or getting their forms processed online more simply—there’s a huge amount of work to drag the federal government and state governments and local governments into the 21st century. The gap between the talent in the federal government and the private sector is actually not wide at all. The technology gap, though, is massive. When I first got here I always imagined the Situation Room would be this supercool thing, like Tom Cruise in Minority Report, where he’d be moving around stuff. It’s not like that, at all. [Laughs.] Particularly when it comes to hunting down terrorists on the other side of the globe, the movies display this omniscience that we possess somehow, and it’s—it’s just not there yet, and it has been drastically underfunded and not properly designed.

In terms of the broader questions around technology, I am a firm believer that if we get climate change right, if we’re able to tap the brakes and figure out how we avoid a 6-foot rise in the oceans, that humanity is gonna figure stuff out. I’m pretty optimistic. And we’ve done a lot of good work, but we’ve got a long way to go.

Figuring out how we regulate connectivity on the Internet in a way that is accountable, transparent, and safe, that allows us to get at the bad guys but ensures that the government does not possess so much power in all of our lives that it becomes a tool for oppression—we’re still working on that. Some of this is a technological problem, with encryption being a good example. I’ve met with civil libertarians and national security people, over and over and over again. And it’s actually a nutty problem, because no one can give me a really good answer in terms of how we reconcile some of these issues.

Since this is a frontiers issue, the last thing I should mention is that I’m still a big space guy, and figuring out how to move into the next generation of space travel is something that we’re significantly underfunding. There’s some good work being done by the private sector, because increasingly it has displaced government funding on some of the “What the heck, why not?” ventures, the crazy ideas. When we think about spaceflight, we’re still thinking about basically the same chemical reactions we were using back in the Apollo flights. Fifty years later and it seems like we should—I don’t know if dilithium crystals [7] are out there—but, you know, we should be getting some breakthroughs.

Dadich: I understand you’re a Star Trek fan. That was a show inspired by a utopian view of technology—what about it shaped your vision of the future?

Obama: I was a sucker for Star Trek when I was a kid. They were always fun to watch. What made the show lasting was that it wasn’t actually about technology. It was about values and relationships. Which is why it didn’t matter that the special effects were kind of cheesy and bad, right? They’d land on a planet and there are all these papier-mâché boulders. [Laughs.] But it didn’t matter because it was really talking about a notion of a common humanity and a confidence in our ability to solve problems.

A recent movie captured the same spirit—The Martian. Not because it had a hugely complicated plot, but because it showed a bunch of different people trying to solve a problem. And employing creativity and grit and hard work, and having confidence that if it’s out there, we can figure it out. That is what I love most about America and why it continues to attract people from all around the world for all of the challenges that we face, that spirit of “Oh, we can figure this out.” And what I value most about science is this notion that we can figure this out. Well, we’re gonna try this—if it doesn’t work, we’re gonna figure out why it didn’t work and then we’re gonna try something else. And we will revel in our mistakes, because that is gonna teach us how to ultimately crack the code on the thing that we’re trying to solve. And if we ever lose that spirit, then we’re gonna lose what is essential about America and what I think is essential about being human.

Ito: I totally agree—I love the optimism of Star Trek. But I also think the Federation is amazingly diverse, the crew is diverse, and the bad guys aren’t usually evil—they’re just misguided.

Obama: Star Trek, like any good story, says that we’re all complicated, and we’ve all got a little bit of Spock and a little bit of Kirk [laughs] and a little bit of Scotty, maybe some Klingon in us, right? But that is what I mean about figuring it out. Part of figuring it out is being able to work across barriers and differences. There’s a certain faith in rationality, tempered by some humility. Which is true of the best art and true of the best science. The sense that we possess these incredible minds that we should use, and we’re still just scratching the surface, but we shouldn’t get too cocky. We should remind ourselves that there’s a lot of stuff we don’t know.

[1] Extended intelligence is using machine learning to extend the abilities of human intelligence.

[2] The car trolley problem is a 2016 MIT Media Lab study in which respondents weighed certain lose-lose situations facing a driverless car. E.g., is it better for five passengers to die so that five pedestrians can live, or is it better for the passengers to live while the pedestrians die?

[3] Temple Grandin is a professor at Colorado State University who is autistic and often speaks on the subject.

[4] Nick Bostrom is a renowned philosopher at the University of Oxford who has warned of the potential dangers of AI.

[5] The first pets. Portuguese water dogs. Very cute.

[6] Universal basic income is a concept where all citizens receive at least a living wage, provided by the government as a form of social security.

[7] Dilithium crystals are the material powering faster-than-light warp drives in almost all Federation starships.

Scott Dadich (@sdadich) is the editor in chief of WIRED.

This article appears in the November 2016 issue.

This interview has been edited and condensed.

ILLUSTRATIONS BY JOE MCKENDRY. GROOMING BY JACKIE WALKER.