12

1.

In March 1975, I began a series of interviews for my proposed history of AI with pioneers around Cambridge, Massachusetts. Among the first I interviewed was Marvin Minsky, one of AI’s four founding fathers, along with John McCarthy, Allen Newell, and Herbert Simon. Minsky was welcoming and deeply generous with his time. By then, Minsky had already won the Turing Award and would go on to win many more honors, including the Japan Prize in 1990, the International Joint Conferences on Artificial Intelligence’s Award for Research Excellence in 1991, and the Benjamin Franklin Medal from The Franklin Institute in 2001. He even consulted on Stanley Kubrick’s groundbreaking movie, 2001: A Space Odyssey, and received explicit credit for it.

Everybody agreed that Minsky was one of the smartest people on the planet, but what few mentioned was his appealing generosity of spirit. This might be why the list of his students is an impressive roster of scientists who’ve made their own dazzling contributions to AI and other computing areas. “I don’t think of myself as a teacher,” Minsky once said to me. “I’m more like a gardener. I let the plants grow, I nourish them, and I weed the garden.” By that he meant that he encouraged creativity and gently (or maybe not so gently) guided his students along paths that would allow their creativity to flower.

Another example of Minsky’s generosity of spirit. We were chatting about an early worker in AI who’d had one great success and then failed to do more. “Ah,” he said quietly, “we don’t know what circumstances in people’s lives might bottle them up. It isn’t necessarily failure of intelligence. Things just happen.” It was a reminder not to judge so quickly.

2.

Marvin Minsky was born in New York City and attended Ethical Culture Fieldston School and The Bronx High School of Science. He came from an established New York family, and when, in the mid-1980s, he told his elderly mother that he was about to publish The Society of Mind with Simon and Schuster, she murmured thoughtfully, “Liked Simon. Never liked Schuster.”

I reported in Machines Who Think how the young Minsky, as an undergraduate at Harvard, fed his curiosity by going to all the teas before or after talks on topics of every description. (He fed his youthful appetite too, scarfing up the cookies.) He understood what most shy undergraduates do not: generally people are delighted to talk about their research with anyone, even an undergraduate, who shows a polite interest. Although he was nominally doing mathematics (he overlapped at Princeton in graduate mathematical studies with McCarthy and Newell, but none of them knew each other well then), he was interested above all in the questions surrounding intelligence. He’d been deeply influenced by Warren McCulloch at MIT, who did early studies on neurons, and Minsky’s PhD dissertation was a mathematical model of certain neural functions in the brain. He visited Bell Labs in the summer of 1955, where, with Claude Shannon’s blessing—Shannon the father of information theory—he and McCarthy dreamed up the whole idea of a conference the following summer at Dartmouth of people who suspected these new machines called computers could be made to think.

After the Dartmouth conference, Minsky was still formulating how AI might be achieved and wrote the first of many versions of what would come to be called “Steps Toward Artificial Intelligence.” He admired what Newell and Simon had done, but found he was no longer interested in their approach, because they were constructing models of human intelligence. Instead, he wanted to achieve machine intelligence in any way possible.

With Seymour Papert, Minsky wrote an influential, if difficult, book, Perceptrons, published in 1969 and subsequently expanded twice. (“We didn’t leave enough easy problems for graduate students to tackle,” he laughed—although much later, when computing power was up to the task, the book would be seen as a forebear to deep learning.) He also continued to cultivate his graduate students, whose achievements were signal, and invented new theoretical approaches to achieving machine intelligence. Beyond Perceptrons, he wrote on frames, a computational structure for laying out facts about objects and events—in other words, knowledge representation—that significantly influenced AI program design.

Meanwhile, Minsky invented some important instruments, such as a precursor to the laser-scanning microscope, and an early graphical display. One of his most famous instruments was the Logo turtle, developed with Seymour Papert. This was a robot that executed the instructions of children who were learning the simple but powerful programming language Logo, Papert’s creation. In the mid-1970s, I spent several hours watching Boston–area eight- and nine-year-olds at computer keyboards, their faces radiant with mastery, as they instructed the turtle to move around the floor.

But gradually, Minsky came back to examining human intelligence because that had been his original impetus. Even in the 1970s, he laughed with me that it had only seemed as if Carnegie Mellon and MIT had gone different ways. In fact, they were both interested in understanding and modeling human intelligence as the best proof of concept, as engineers like to say. His later books, The Society of Mind and The Emotion Machine, testify to exactly that. They also testify to Marvin Minsky’s significant contributions as a theoretician of intelligence, human or machine.

Yet a theory wasn’t enough, computer scientists like Newell and Simon would grumble. You needed experimental evidence to prove or disprove it, to refine or expand it. This was a friendly but persistent difference between the two schools of thought.

3.

Although Minsky and I taped numerous interviews at MIT,[1] in my journal I mentioned a couple of visits to his Brookline home.

February 7, 1977:

Spent most of the day with Marvin at his house, and if I hadn’t been ready to turn into a block of ice by the time it was all done, I’d have been better company. The house deserves description. A large house, many rooms, each one lined, stacked, stuffed with memorabilia, objets, such as: a large harmonium (which looks like an organ to me), a jukebox, dolls, piñatas, odd chairs and sofas. In the family room is an impressive amount of sound equipment, a piano, games, records, a human arm attached to the wall with significant bones painted red, white and blue, a trapeze suspended from the ceiling beams, various mirrors, including a searchlight mirror, two mirrors from a telescope, and a couple of concave mirrors fit on top of one another so they look like a large wok. Their function is to reflect a little metal frog, who sits on the lowest mirror, up into a hole in the upper one, thus giving you the impression you have a solid metal frog suspended in the interior, which you can put your finger through quite easily if you’ve a mind to.

Here, after we’d talked AI for a while, Marvin gave me a sample of his new love, which is composing music. Now of all the kinds of music there are, I wouldn’t have expected Marvin Minsky to compose this, but out it comes, one beautiful, fluid Bach-like fugue after another. I was enchanted. And told him so. The melodies were lovely, lyrical, beautifully realized and then counterpointed. If he’d told me old J.S. himself composed them, I’d have believed it. Then he played some Prokofiev-like music, and finally some music for children, all of it, it seemed to me, exceptionally fine. I was surprised that I liked it so much at once, but it had a natural grace that spoke to me directly. We talked about composing and he told me he simply put down the music he heard in his head—the relationships weren’t (necessarily) mathematical but were discovered after the fact.

A brief lunch, and we spoke some more. Then Gloria Rudisch, Marvin’s wife, who is both a pediatrician and the health officer for the City of Brookline, came home “with a robot for Marvin to fix.” She’s a small, stocky woman, black hair in a neat pageboy, and she was almost overwhelmed by the suitcase she was toting. When she opened it, a hand and sneakered foot fell out. She extracted a very lifelike woman, dressed in a blue jogging suit, rigged up in such a way you could measure on a meter whether you’d “restarted her heart,” or “restarted her breathing,” by mouth-to-mouth. I’m hard put to describe the picture of Marvin and Gloria working furiously over this mannequin to get her prepared for a class Gloria was about to teach, this life-sized and so lifelike stiff lying on the couch, the dog and me riveted by the whole affair. Gloria skittered out at last with the suitcase, and Marvin carried the dummy under his arm to the car. Not a sight I’ll soon forget.

Marvin is very serious about his composing, wonders if he should just make the big break and change his life altogether. If I hadn’t been so cold, I could’ve gone on for a long time. I don’t know whether the Minskys keep the heat down because they’re good citizens, think it’s good for our health, or they’re just indifferent. An hour in the semi-tropical heat of “the fine old fellows” [Joe and I were staying at Boston’s Harvard Club, and the house manager used that phrase to describe our elderly fellow guests, to explain why the heat was so high] and I’m still not thawed out, but grateful indeed for the old fellows’ terrible circulation which keeps everything nearly molten.

A few days later, the Minskys invited Joe and me to dinner with a large, congenial group. February 10, 1977:

Dinner tonight chez Minsky, cooked by Gloria and also by Seymour Papert. I had a long talk with Seymour’s friend, Sherry Turkle, [later, for a while, his wife, and to become a celebrated investigator of human behavior with computers] who’s doing a sociological study of why computer scientists do what they do, having just completed a study of French psychoanalysts, called French Freud.[2] Also at table were Felix, Marvin’s friend since grammar school; Albert Mayer, an MIT professor acting as our social secretary for the week; and Marvin’s son, Henry, perhaps fourteen, who complained to me about having to read Jane Austen in school when he’d rather be reading Kurt Vonnegut. “They’re doing the same thing,” I said, “social satire.” I’m not sure he was convinced.

4.

For Minsky and his students, robotics raised fundamental issues. How did a dumb video camera, connected to a dumb contraption that served as an arm, connected to a computer, produce intelligent behavior? How did the arm understand that it was being asked to pick up building blocks and move them from one place to another? This stood for one of the central questions about intelligence: How does intelligent behavior emerge from dumb tissue, or dumb components of any kind? (It would be nearly half a century before we began to get answers to those questions—an elaborate set of reciprocal signals between brain and limb.)

In the early 1970s, Minsky and Papert began formulating what would become Minsky’s 1988 book, The Society of Mind, at the time a somewhat speculative but persuasive, and finally influential, set of theories proposing that all minds, natural and artificial, were made up of small, unintelligent components. Yet acting in concert, sometimes using well-tested algorithms, sometimes using rules of thumb, these components produced what we call intelligence. This is a common assumption now, an early exploration of the phenomenon of emergence, but the book caused a tremendous stir among brain scientists, psychologists, and philosophers, who were laboring toward something more elegant in the way of a grand unified theory of human intelligence.

Almost twenty years after The Society of Mind, Minsky turned to what we call emotions. Could he account for the role that emotions play in intelligence? Given the distinction Western culture has always made between reason and passion, did emotions play any role at all in intelligence? He came to believe that this distinction, asserted since the classical Greeks, was simply wrong.

In a 2006 book called The Emotion Machine, Minsky proposed that emotion plays a vital role in intelligence. In The Society of Mind, he’d argued that agents in the mind worked together toward goals. Now he changed the concept of agents to resources, because the word agent misled readers into thinking that a person-like thing—a homunculus, so to speak—existed in the brain and could operate independently or cooperate with other agents, in much the same ways people do in the real world. On the contrary, he said, most resources in the brain are specialized to certain kinds of jobs and cannot directly communicate with most of the brain’s other resources.

In The Emotion Machine, he argues that our longtime distinction between passion and reason rests on misunderstanding both terms. Passion and reason each are probably a hundred different things at least, the consequences of the behavior of tens of thousands of inherited genes, their expressions raw and uncontrolled, until we mature and learn to control them. Many of these resources are inaccessible to deliberate scrutiny, for we’ve overlaid other processes on them as we’ve matured.

For convenience, or from laziness, we use what Minsky calls “suitcase words,” like love, hunger, anger, suffering, and pleasure, as if they had precise meanings. Instead, he argues, each suitcase word has many different items stuffed into it as we attempt to describe large networks of processes inside our brains. Consciousness, for example, refers to more than twenty such processes. “Each of our major ‘emotional states’ results from turning certain resources on while turning certain others off—and thus changing some ways that our brains behave” (Minsky, 2006).

As a rule, emotions are ways to think that increase our resourcefulness. This is vital. If a program worked only one way, it would get stuck when that one method failed. “The resourcefulness of the human mind comes from having multiple ways to deal with things—no matter that, from time to time, this causes bad things to happen to us” (Minsky, 2006). Even our sense of self is impermanent: we have multiple models of the self and switch between them as we learn when it’s useful to do so (Simon, 1991).

Although The Emotion Machine presents a different way of thinking about the role of emotion in intelligence,[3] it grows out of what Minsky had learned over a lifetime’s research in AI. Both The Society of Mind and The Emotion Machine are lucid expositions of ideas that are current in—or at least not alien to—both brain and AI research. He also called on findings from psychology, animal behavior, cognitive science, and genetics (a substantial part of our behavior is endowed by our genes).

Several AI researchers conceded that Minsky might be right, but where were the computer programs that instantiated these ideas, that separated science from mere conjecture?

A partial answer comes from Minsky’s MIT colleague, Rosalind Picard, who had already coined the term affective computing. She too argued that reasoning and emotion were inseparable, and that emotions were necessary for true machine intelligence. Picard, along with her graduate student, Rana el Kaliouby, began testing software that could read emotions on the human face. They formed a company called Affectiva to sell the systems, but their customers, instead of being clinical researchers in autism, say, were overwhelmingly market researchers who wanted to use the software to refine products and advertisements. Picard stepped away from this as too distant from her original medical goals, but el Kaliouby stayed with Affectiva, now a thriving business of reading human emotions for its international clients. To train the software, Affectiva began with a handful of actors and now has massive amounts of data. This has refined the program’s skills to the point where it’s more sensitive at reading emotions than most humans.

Meanwhile, Picard has pursued the brain-mind-body connection along multiple fronts. One wrist-worn device she helped develop reads brain and body electrical signals, allowing epileptics to anticipate a seizure twenty minutes before it takes place. “We want to give individuals something to help them do better, rather than just focusing on AI that only people in powerful positions have access to.” She now studies healthy people, to see how they maintain their wellbeing. “In the world of AI, some of us are stepping back and asking what are we doing to human health. What leads to true human flourishing and wellbeing? Are we enabling the kind of AI that gives wealth and power to a smaller and smaller number of people? Or are we enabling AI that helps people?” (Wapner, 2019).

Many people, Minsky writes, have come to accept that the human brain is an electrochemical organ, but they still believe that a mystery will always remain about how a living thing could ever result from nothing more than material stuff, whether synapses or electrons. “That once was a popular belief, but today it is widely recognized that behavior of a complex machine depends only on how its parts interact, but not on the ‘stuff’ of which they are made (except for matters of speed and strength). In other words, all that matters is the manner in which each part reacts to the other parts to which it is connected” (Minsky, 2006).

In machine or human brains, these resources are proving to be hierarchical networks of processes (again, the central idea of Allen Newell’s Soar model), many of the lowest systems not even available to the higher systems. Your conscious mind can’t access the processes that keep you steadily breathing or standing upright, for example, though they’re basic to your existence. In humans, mapping just where these processes reside in the brain is one of the great goals of present-day brain science.

Minsky (2006) observes: “Exploring, explaining, and learning must be among a child’s most obstinate drives—and never again in those children’s lives will anything push them to work so hard.”

Minsky proposed a group of hypotheses still to be fully validated. Neuroscientists had already begun such exploration as he wrote The Emotion Machine, and they continue. Even now, no one knows whether Minsky’s ideas are correct in general or in particular. We do know that emotions are finely nuanced and contain a wide variety of fleeting, sometimes contradictory aspects. Machines can read human emotions and respond to them, whether they’re evaluating audience responses to TV pilots, guiding autistic individuals through a world of affect that puzzles them, or assisting a digital nurse to evaluate a patient (Stone & Lavine, 2014).[4]

That emotions are a fundamental resource already integrated into intelligence, not merely to be ignored, suppressed, or overcome, has an appealing economy. Individual maturation involves learning how to control these potent fundamental resources. Oxford philosopher and cognitive scientist Nick Bostrom (2016) argues that such maturation must take place with AIs too, and perhaps this is so.

5.

In the fall of 2013, I was lucky to sit in on the first weekly meetings of the Center for Brains, Minds, and Machines at MIT and Harvard, meetings that continue. I listened to scientists in each of those fields offer one another a brief description of their work. One afternoon began with what we know about how humans understand scenes. Another scientist described how humans recognize scenes (slightly different from understanding a scene). A third scientist presented findings from experiments with a brain imaging technique, in which she showed her subjects an image and then decoded their brain waves. A fourth scientist offered a means of teaching machines common sense via storytelling.

At the end of their presentations, each scientist added: if my models, questions, or answers are useful to you, use them. If you think I can help you, get in touch. Get in touch anyway.

During these openhanded afternoons, scientists across disciplines tried to help each other understand what intelligence is. Right now, this kind of exchange is taking place all over the country and the world. The challenge is enormous, and the investigative instruments are barely up to it, though they’ll surely continue to improve. Minsky made no apologies from the outset. Years ago he said to me, “Look how long physicists have been studying physics. Do we think the brain and mind are less complicated?” E. O. Wilson says decisively: “The human brain is the most complex system known in the Universe, either organic or inorganic” (Wilson, 2014).

The brain’s energy efficiency is one complexity that scientists have yet to understand. David Cox, a professor of molecular and cellular biology and computer science at Harvard’s Center for Brain Science, points out that the human brain has the capacity for tens of petaflops yet consumes only 20 watts of power. (A petaflop is a measure of computing speed: one petaflop is a quadrillion floating-point operations per second, or flops.) Current supercomputers have arrived at the tens of petaflops, but their appetite for power is gargantuan—just getting rid of the heat they generate is a challenge.

The brain can solve problems we don’t know how to program computers to solve, regardless of the power those computers can muster. That doesn’t mean we won’t ever know. But we don’t know now. I asked Tomaso Poggio, the head of MIT’s Center for Brains, Minds, and Machines, which set of researchers (neuroscientists, cognitive psychologists, or computer scientists) was likely to develop—or discover—the mechanisms of intelligence first. “It’s a race,” he replied, smiling.

6.

On one of my early visits to Cambridge in the 1970s, I interviewed Ray Solomonoff, one of the original attendees at the Dartmouth conference. Ray’s fan-like beard was already gray, and he’d lost much of the hair atop his head. Behind his glasses, his eloquent eyes seemed spiritual in their intensity. He was very much a free spirit, still doing mathematical modeling of mind, but attached to no institution. After we talked, he and his girlfriend offered to take me out to forage for salad greens in Harvard Yard.

After the Dartmouth Conference, Solomonoff’s work fell into eclipse for several decades, but in the mid-2000s it was revived in a subfield called artificial general intelligence, where researchers sought a universal way of learning and acting in any environment. This pattern of eclipse and revival has happened several times in AI (recall Newell and Simon’s General Problem Solver), where good original ideas, impossible to implement with the technology of the time, suddenly become possible and, even better, useful. Deep learning is a grand example.[5]

Oliver Selfridge, officially at MIT Lincoln Laboratory (also known as Lincoln Labs) but with a post as associate director of MIT’s Project MAC[6] in the early 1960s, was another early advocate of an integrated approach to AI. He’d been working on pattern recognition and machine learning—a presentation he made had electrified Allen Newell at RAND in the mid-1950s—and Selfridge’s 1959 paper, “Pandemonium,” a proposal for machine learning, is considered a classic in the AI literature. Selfridge coined the term intelligent agents for autonomous software capable of sensing and responding to changes in their environments, an idea that would develop more fully in later years (Feigenbaum & Feldman, 1963). In the mid-1970s he was also seeking an approach to general intelligence and was disappointed, he said, that pattern recognition had been pushed off to be its own subfield, unrelated to mainstream AI. This too was to be slowly reversed, but only after some decades.

For Machines Who Think, I also visited the elusive Claude Shannon, best known for his work on information theory, the theoretical foundation of the digital revolution.[7] He’d allowed the Dartmouth conference to take place under his aegis with the understanding that John McCarthy and Marvin Minsky would do the work. In his seventies, Shannon was a prepossessing man, his features finely modeled, courtly and soft-spoken, happy to talk about early times at both Bell Labs and MIT. He’d retired from MIT, so no longer rode his unicycle around the academic halls, but he was still full of playful and intellectual verve.

Shannon then lived in a grand old Victorian house in Somerville, with sweeping views of the Boston skyline. After our interview, he took me into another room to see the remains of a legendary maze that a mechanical mouse called Theseus had run through in 1950, part of a very early experiment in machine learning. Years after I interviewed Shannon, Joe was stunned to see him as a new inductee into the National Academy of Engineering. Shannon should have been a member for decades, having already won the National Medal of Science, among many other honors. Sadly, he eventually suffered from Alzheimer’s and died in a Massachusetts nursing home in 2001, oblivious, his widow said, to the wonders he’d helped bring about.


  1. These and all other interviews I conducted for Machines Who Think are available in the archives of Carnegie Mellon University.
  2. What a great title! When I rediscovered this in my journal, I asked Sherry Turkle why she’d changed it to the bland Psychoanalytic Politics. “The publisher,” she replied. “They thought it might be misunderstood, or confused with another book. I was very young and didn’t know better.” Weren’t we all, I agreed.
  3. An answer to that question was only to come more than half a century later, in a collaboration between Caltech neuroscientists and roboticists. They devised a robotic arm, a prosthesis, equipped with a brain-machine interface that can read and respond to the intentions of its human patient, a man otherwise unable to move his arm owing to an old gunshot wound. The scientific team showed that an elaborate set of messages travels from the brain (in this patient’s case, implanted with sensitive electrodes) to the appendage, and back, in a rich feedback system. Richard Andersen, “The Intention Machine,” Scientific American, April 2019. A similar system, under joint construction by the University of California, San Francisco and the University of California, Berkeley, aims to produce speech from brain signals. Benedict Carey, “Scientists Create Speech from Brain Signals,” The New York Times, April 24, 2019. Much psychological literature, especially popular reading, had treated emotion as distinct from intelligence, sometimes a separate kind of intelligence in its own right. That view has been hotly contested and is different from Minsky’s more integrated role for emotions in intelligence. The March 2014 issue of Global Advances in Health and Medicine includes a long paper, “Emotion: The Self-regulatory Sense,” by K. Peil, who argues that emotion, broadly construed, plays a fundamental self-regulatory role in any organism. In the April 2015 issue of Scientific American, the article “Conquer Yourself, Conquer the World” by Roy F. Baumeister discusses the complicated role self-control plays in human behavior. For a focus on hatred specifically, see “The Point of Hate” by Anna Fels in The New York Times, April 14, 2017. Brain scientists generally agree that emotions play a key role in individual decision-making, but the current model suggests that networks in the brain compete for supremacy, with emotions often winning over reasoning, because emotions are a fast, economical way of deciding that helps lighten the daily cognitive load.
  4. A special issue of Science called The Social Life of Robots has many articles that cover robots as coworkers, neuromorphic robots, the challenge of robot sensors, giving robots the big picture of the world, the psychological implications of robots that look human, robots and the law, and robots in biological research. Yes, I find emotion-reading robots creepy. But that’s a personal reaction, which may or may not be germane to AI’s future research. If there’s one thing I’ve learned, it’s that a thinking-fast reaction needs much more thinking slow to properly examine it. Suppose, for example, an emotion-reading robot becomes a pedagogical tool that teaches humans how to understand and respond better to the emotions of people around them.
  5. Computer science on the whole is regrettably ahistorical. An eager researcher will gladly reinvent the wheel before he’ll take the time to search the literature and see if anyone else has tried what he has in mind. Acknowledging this, Manuela Veloso, an eminent roboticist then at Carnegie Mellon, exploded to me, “Such a waste!” But William A. Wulf, for eleven years the president of the National Academy of Engineering and a computer scientist himself, says this allergy to history reflects the way funding is appropriated and papers are selected for publication; only the new matters, whether or not it’s actually new. Unlike mathematics, with its longstanding cultural tradition of citing precedent, computer science in general has no such pressure. Raj Reddy, maybe to tease me, said dismissively, “Oh, it’s just easier to reinvent than try and track down some original idea.” To finger these reinventions requires a canny practitioner-turned-historian with breadth and depth, such as Nils Nilsson in his The Quest for Artificial Intelligence (Cambridge University Press, 2010). However, Professor Mary Shaw has informed me that at Carnegie Mellon, the introductory course for new PhD students in software engineering begins with about two dozen classic papers that every software engineer should know, and each unit of the course bridges from some of the fundamental papers to how the ideas have evolved. Those early papers account for about a third of the course reading. “We introduced this in a curriculum revision a few years ago because we were frustrated about exactly this problem.” (private communication).
  6. The acronym stood for a number of phrases, including Mathematics and Computation, Man and Computers, and so on.
  7. Shannon would tell Joe and me at a 1984 conference in Brighton, England, that he’d tried to get people to call what he did communications theory and not information theory, but the name stuck. “Let’s start a campaign to rename it,” Shannon joked to us, knowing how impossible it now was. Joe soon found an early paper of Shannon’s where he’d used the term information theory himself, setting the precedent.
