AI Meets Natural Stupidity Revisited
By
Steven Shwartz
In the early 1980s, AI was a hot topic — just like it is now. Back then, nearly every software product was re-branded as containing some form of AI, and the hype was out of control. This re-branding is happening again in 2020.
In 1976, my Yale colleague, Drew McDermott, chastised our AI colleagues in an article entitled “Artificial Intelligence Meets Natural Stupidity.” In that article, McDermott took issue with the names his colleagues were using for their AI systems. For example, an AI system named General Problem Solver was developed in 1959 and was a pioneering technological achievement, but its performance fell far short of its grandiose name. He implored his colleagues to use more “humble, technical” names that do not anthropomorphize these systems.
Professor McDermott was concerned that the use of grandiose terms would increase the hype around AI, create overly high expectations, and end badly. And that’s in fact what happened. By the end of the 1980s, AI had fallen out of favor because the reality did not live up to the hype. Many AI companies went out of business. Stuart Russell has noted that his AI course at the University of California at Berkeley had 900 students in the mid-1980s but had shrunk to only 25 students by 1990 (Russell, 2019).
McDermott’s criticism is as applicable to today’s AI systems as it was forty years ago. Let’s look at some of the terms in everyday use in the AI community today:
Learning: Researchers apply the term “learning” (as in “machine learning”) to AI systems. When a child figures out that “1+1=2”, we call it learning. So, when an AI system learns to add two numbers, shouldn’t we call that learning also? Absolutely not! The problem is that adding two numbers is the only task that AI system will ever learn. For a child, learning to add two numbers is part of a lifelong process of learning that the child can apply to many different tasks and contexts. It is misleading to equate what machines and people do by using the term “learning” for both.
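To make the contrast concrete, here is a minimal sketch in Python (using scikit-learn purely for illustration; the library choice and the toy numbers are my own, not anything from a particular system) of a model that “learns” to add two numbers from examples:

import numpy as np
from sklearn.linear_model import LinearRegression

# Training examples: pairs of numbers labeled with their sums.
X_train = np.array([[1, 1], [2, 3], [4, 5], [7, 2], [0, 9], [6, 6]])
y_train = X_train.sum(axis=1)

# The model "learns" addition by fitting weights to these examples.
model = LinearRegression().fit(X_train, y_train)

# It handles pairs of numbers it has never seen...
print(model.predict(np.array([[8, 3]])))  # approximately [11.]

# ...but adding two numbers is the only thing it will ever do.
# It cannot carry this "learning" over to subtraction, language,
# or any other task, the way a child can.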
Planning and Imagination: People use their imaginations from the minute they wake until the minute they go to sleep. They imagine what will happen if they let the dog out and a cat is in the yard. They imagine what will happen if they go out in the rain with and without an umbrella. They imagine what their black shirt will look like with their tan pants. When they pick up their clothes or make the bed, they imagine the resulting improvement in the appearance of their living quarters. If you think about it, you will find that you use your imagination in many different ways. A self-driving car is said to have an “imagination” because it can “learn” to project where all the other vehicles and pedestrians will be a few seconds into the future. It “imagines” that future state. However, projecting that future state is the only task the machine learning system can perform. It cannot imagine anything else. So, machine imagination is really nothing like human imagination and should not be given a label that suggests it is.
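To show how narrow that machine “imagination” is, here is a deliberately oversimplified sketch (real systems use learned trajectory models; the constant-velocity assumption and the numbers here are mine, purely for illustration) of projecting where a tracked object will be a few seconds from now:

def project_position(position, velocity, seconds):
    # Roll a tracked object's position forward in time, assuming it
    # keeps moving at its current velocity.
    x, y = position
    vx, vy = velocity
    return (x + vx * seconds, y + vy * seconds)

# A pedestrian 3 meters to the side, walking toward the car's lane
# at 1.5 m/s: where will they be in 2 seconds?
print(project_position((3.0, 0.0), (-1.5, 0.0), seconds=2.0))  # (0.0, 0.0)

# Projecting this one kind of future state is all the system
# "imagines"; it cannot picture an umbrella, an outfit, or a
# tidier room.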
Inference: In machine learning, the term “inference” refers to taking a trained model (e.g., a logistic regression or a deep neural network) and applying it to previously unseen instances. However, machine learning systems can only perform the “inference” step for a single, well-defined task. In people, inference refers to the result of commonsense reasoning, which is a generic capability that humans apply across many different tasks and environments. Computers don’t have generic inference capabilities like people do.
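As a concrete example, here is a minimal sketch (again using scikit-learn for illustration; the toy task and data are my own) showing that the machine learning “inference” step is nothing more than running a fitted model on new inputs for its one narrow task:

import numpy as np
from sklearn.linear_model import LogisticRegression

# A single, narrow task: classify a point by whether its first
# coordinate is larger than its second.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 0] > X_train[:, 1]).astype(int)

clf = LogisticRegression().fit(X_train, y_train)

# "Inference" in the machine learning sense: apply the fitted model
# to previously unseen instances of that same task.
X_new = np.array([[2.0, -1.0], [-0.5, 1.5]])
print(clf.predict(X_new))  # [1 0]

# There is no commonsense reasoning here, and the model cannot draw
# conclusions about anything outside this one classification task.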
Forty years ago, when McDermott made his comments, I had the impression that most people in the AI community agreed with him, and yet they still opted not to change their terminology. If researchers and vendors had taken McDermott’s recommendations to heart, the world would have far more realistic expectations of AI and far less fear of evil robots and killer computers.
Steve Shwartz is a successful serial software entrepreneur and investor. He uses his unique perspective as an early AI researcher and statistician to explain how AI works in simple terms, why people shouldn’t worry about intelligent robots taking over the world, and what steps we need to take as a society to minimize the negative impacts of AI and maximize the positive ones.
He received his PhD in Cognitive Science from Johns Hopkins University, where he began his AI research, and he also taught statistics at Towson State University. After Steve received his PhD in 1979, AI luminary Roger Schank invited him to join the Yale University faculty as a postdoctoral researcher in Computer Science. In 1981, Roger asked Steve to help him start one of the first AI companies, Cognitive Systems, which went public in 1986. Learn more about Steve Shwartz at AIPerspectives.com and connect with him on Twitter, Facebook, and LinkedIn.
References
McDermott, D. (1976). Artificial intelligence meets natural stupidity. SIGART Newsletter, (57), 4–9.
Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Viking.