Forget the Robot Apocalypse, Focus on Building More Useful AI: A Q&A with Gary Marcus

Just last week, a robotic hand solved a Rubik’s Cube on its own. It’s buzzy snapshots like these that often inform our perceptions of artificial intelligence. But unpacking the implications of AI’s latest advances can be difficult. How quickly will that robot be able to write memos and respond to emails? Does the technology behind the Rubik’s Cube robot bring us any closer to self-driving cars? Questions like these are important, especially now, as we gear up for an election in which candidates are split on how AI’s advances will impact our economy.

I reached out to Gary Marcus, co-author of Rebooting AI: Building Artificial Intelligence We Can Trust, to learn more about what the current capabilities of AI truly are and what to watch for as AI continues to shape the products and policies of our future.

Antonia Violante: What’s your definition of “artificial intelligence”? How does it relate to other ideas people may have heard mentioned, like “deep learning” or “machine learning”?

Gary Marcus: Intelligence, biological or otherwise, has many facets, ranging from verbal to mathematical to social; there is no single sharp definition. Machines thus far have been very good at narrow intelligence (e.g., for board games with clearly defined rules) but have struggled with the kind of flexible intelligence we need for language and reasoning.

Machine learning—a set of techniques for getting computers to learn things from data—is a subset of artificial intelligence, and deep learning is a subset of machine learning. We have a nice diagram of this in chapter 3 of Rebooting AI, making clear that there are many other machine learning techniques, and many other aspects of artificial intelligence, such as search, planning, and knowledge representation. 

The relationship between artificial intelligence, machine learning, deep learning, and other forms of learning. Source: Rebooting AI.

You state in the book that we’re not on course to build AI that we can trust to handle the world’s toughest problems. What do you think the next trend in AI advancement needs to be in order to build systems we can trust to help us solve issues like road safety or climate change?

The most important challenge, in my view, is to figure out how to represent human knowledge in machines. Road safety, for example, relies on us not only recognizing common categories (like cars and bicycles) but also recognizing unusual things (say, a unicycle on a city street) and interpreting them, knowing what they might be used for. Ultimately, machines need to understand what “harm” is in order to figure out the risks and benefits associated with their actions, and this requires a rich understanding of the world. Current systems literally have no way to do this, which is why we need some real breakthroughs. In our book, we emphasize the need for research in core domains like space, time, and causality: getting machines to understand those sorts of things as well as a five-year-old child does would be a huge advance.

We have one conversation for a few minutes with a machine, and we assume (wrongly) that it is intelligent.

As someone who works in applied behavioral science, I was intrigued to learn about behavioral biases that lead people to overestimate what AI is capable of—for example, discerning real news from fake news. What are these biases, and how do they impact our judgment of AI?

The biggest is what we call the gullibility gap: the tendency to ascribe intelligence based on tiny samples of information that don’t genuinely establish robust intelligence—thinking a chatbot is smarter than it really is, or that a driverless car is safe because it works on highways when it might be unsafe on residential roads in poor weather. We have one conversation for a few minutes with a machine, and we assume (wrongly) that it is intelligent, or we see a car work on “autopilot” for a few minutes and ascribe more intelligence to it than is really proper.

I really enjoyed your use of the “nature vs. nurture” dichotomy to unpack how techniques for building AI have changed in recent years. Can you explain more?

In the early days of AI, people tried to hand-wire everything, using human knowledge to carefully formulate solutions that were built in rather than learned; nowadays people in AI try to learn everything from scratch, beginning with a nearly blank slate. As I described in an earlier book, The Birth of the Mind, biology does something in between: nature builds a rich rough draft that nurture builds upon. AI ought to consider doing the same.

Nature builds a rich rough draft that nurture builds upon. AI ought to consider doing the same.

Are you concerned about a Blade Runner situation where we’re not able to tell humans and nonhumans apart?

Not in the next 20 years; you can fool a person for a few minutes, but robots for now just don’t understand language or the world well enough to fool a competent expert. But it shouldn’t be about fooling humans; it should be about building systems we can rely on.

In the book, you cite a number of risks or problems with the way we’re building AI. Which one keeps you up at night the most?

The fact that people are putting so much emphasis on worrying about unlikely scenarios (like robots taking over the world) rather than thinking about how to make AI smart enough that we can trust it for basics like driverless cars and eldercare robots we could really rely on.

This interview has been edited for length and clarity.