Curious about design developments in artificial intelligence (AI) and human-machine interaction, I asked Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence (AI2), to contemplate a world where empathy and algorithms come together to achieve a higher social consciousness.
Bogbot, Thomas Eykemans / Monocol
Brian Boram: Going back to the early days of computer science, advanced AI was always the Holy Grail. In the years since, we’ve created machines that are better than humans at chess and many other tasks. Why is artificial intelligence important now? Are we at a tipping point in AI science?
Oren Etzioni: I think artificial intelligence has always been important. Today we have tools and resources we did not have before, but only because we stand on the shoulders of giants and can see farther. However, it is not the case that AI will finally emerge as ubiquitous in the next five years. The problems are still extremely difficult, and the days when robots will keep humans as pets are not in the foreseeable future. I have a four-year-old who is far more sophisticated and a far better learner than any program we can write today or will be able to in the near future. I have more and more respect for the human mind and human creativity—we call it “wetware.”
BB: Because of Hollywood and the fantastical visions of certain artists, are mainstream notions of human and machine interaction arguably ahead of the science?
OE: I think there have been some significant misunderstandings of AI; I do not believe that robots and intelligent machines will be taking our jobs. Nor do I think design work is at risk, as aesthetics is one of the most difficult things to formalize and teach a computer. This is the last bastion—even if in some distant future robots take up half of our jobs, designers will be the last ones standing.
BB: Have you seen the movie Her?
OE: Yes, we made it into a field trip. We felt obliged to see it, so off we went.
BB: What was your takeaway?
OE: I really enjoyed it. Like much of science fiction, the movie is really about relationships, and it uses science as a tool to explore them. From an AI point of view, it is way overblown, but there is this crazy gleam to the ideas in the film. I got into this field when I was young and naïve, and now that I’m older and still naïve, I think this notion of a computer reaching a tipping point at which it starts to learn on its own, then accelerates and starts to collaborate with other computers, is really exciting. The result of that will be better science, better medicine and a better chance to deal with huge problems like climate change. In the movie, things evolved to a point where they were proceeding at the speed of light, and the computer mind separated from us. What I see as a much more likely scenario is a situation in which these challenges we’ve been grappling with as humans, like cancer, will benefit from collaborative entities working together at mind-boggling speed to study the problem. A doctor might say to an AI medical assistant, “Look, you read at ridiculous speed, you can keep up with the literature; I am seeing patients all day; let’s talk about the side effects of this drug.” I see huge collaborative and social potential but not the way Hollywood described it in Her.
BB: The similarity between humans and robots depends not only on anthropomorphic appearances or sophisticated algorithms but also on the capability for empathetic interaction. How is robotic interaction an integral part of your AI research?
OE: I personally do not work on that issue, but your question is dead-on. One of the things that emerged as robots became more commonplace is the field of human-robot interaction. Whether it’s in the factory, at the office or in elder care, we see that robots can do more to help us, but we have to work out the interactions. At Microsoft Research they are exploring the question, “When can a robot interrupt?” To enable a robot to embody our basic social conventions and nuances is the challenge. If you brought a person from the jungles of Borneo into the modern office environment, they wouldn’t quite know to knock before entering. Why would a robot know that? This is something we’re working on fixing.
BB: What opportunities for social innovation do you believe will come from your research?
OE: Well, take politics as an example: Our algorithms for making statistical predictions have become increasingly sophisticated as computers have gotten more powerful. Those algorithms are very closely tied to artificial intelligence—a prediction based on historical observations is one of the hallmarks of intelligence. That means that with the use of computer models, somebody like Nate Silver is able to predict who is going to win an election with very high accuracy. It means that because these algorithms are now sitting in the Democratic and Republican National Committee headquarters, these groups are doing very targeted political outreach and advertising based on very sophisticated models of who you are and what will influence you. And those models are built—for better or for worse—using the kind of technology we developed in the field of AI. So that’s politics for you.
BB: Now that Paul Allen has an institute for both brain science and artificial intelligence, what should we expect from this adjacency? Could a human brain be transferred to a computer?
OE: Paul Allen is an amazing individual. Here is a guy who has given a huge amount to Seattle. He provides tremendous passion, resources and intellectual input to the institutes. I do think down the road there will be more and more collaboration; Allan Jones, CEO of the Allen Institute for Brain Science (AIBS), sits on the AI2 board, and I have given talks at the AIBS conference. As far as downloading human minds to the computer—which, of course, is a chance at a kind of immortality—I think it is a fascinating notion, but I am not counting on it happening in my lifetime, or even my children’s. It’s beyond the horizon.
BB: What are the big questions for AI going forward? What do designers need to know?
OE: The big questions in AI center on general intelligence. We’ve built savants who can win at Jeopardy and chess, but what remains elusive is the general intelligence of a three-year-old. The big question we are all attacking in different ways and at different paces is how to build general intelligence and common sense.
For designers, I believe even the most artsy person should know how to write a simple computer program. This digital literacy would help to demystify computers for the better.
BB: In what way can AI transform our lives with smart design in social settings?
OE: A great example is Nest, which took elements from Apple and algorithms from AI and coupled them with insights about interaction to design an elegant thermostat: “What’s the most annoying thing in your house?”—the smoke alarm—“Let’s fix that with a sprinkle of intelligence.” I like to talk about the “raisin bread” model of AI, in which the intelligence is the raisin, and all that other stuff is the bread. If you do not have raisins, it is not raisin bread, and if you do not have bread, it is just a bunch of raisins. So you really need these things to come together. I believe an exciting future lies in these intelligence interactions in the home and office.
BB: When we first spoke, you mentioned that these are hectic times. Have you considered designing a super-intelligent proxy that can act on your behalf?
OE: Hey, I am trying to design a four-year-old. So I have absolutely considered that, but it is still a ways out.
BB: What inspired you to study AI?
OE: Douglas Hofstadter and his book Gödel, Escher, Bach: An Eternal Golden Braid, which speaks interestingly about the connection between art, mathematics and computers. When I was in high school, it was a big bestseller and a very inspiring book for me.
BB: What is the most common question you get asked at dinner parties? What is a favorite question?
OE: “Can you fix my Mac?” People think that because I am an expert in computers, I can actually be a helpful support technician.
I love the questions you are asking. We’re all craftsmen and practitioners spending our time with our heads down working on our trades, and these big questions force me to take a step back and think about AI’s social implications and our grand future. I wish I had more time to spend on them.
BB: How do your vast talents in computer science translate to your hobbies or other interests?
OE: My favorite hobbies involve being with people, whether in a social setting or playing basketball or chess. I enjoy people as an antidote for my work.