29

1.

All flesh is as grass, and many of my teachers and mentors in artificial intelligence have died. The voices of the dead are said to be what we forget about them first, but I hear the distinct timbres, the laughter, the rhythms.

Allen Newell’s death was too early—he died of cancer in his sixties, and although Joe and I were able to say goodbye to him, our loss was deeply personal, enduring, and made all the more melancholy by the work that remained undone.

His work in the last years of his life was enviably ambitious. From the 1950s Logic Theorist, designed to simulate only “a small but significant subset of human thinking,” he eventually proposed and brought forth a program called Soar. Soar adapted the model Newell had first presented in 1980 of levels of intelligence in the computer, from the zero-one level all the way up to the highest, what he called the knowledge level. Now he proposed Soar as a unified theory of human cognition, whose details are in his 1987 William James Lectures (Newell, 1990). A multilayered, asynchronous model of human cognition seized scientific imaginations, and the late 1980s and early ’90s saw a spate of such programs, along with stunning breakthroughs in a subset of machine learning known as deep learning. Deep learning, as I’ve said, was invented by Geoffrey Hinton and his colleagues Yann LeCun and Yoshua Bengio about the time Newell was giving his James lectures, but it needed nearly another three decades until computing technology became powerful enough to implement it so fruitfully. Hinton, along with LeCun and Bengio, received the Turing Award in 2019 for their development of deep learning.

Newell died on July 19, 1992, aged 65. The days of Allen’s dying and death must have been unendurable for Noël. Yet endure she did. Soon I began to hear that Noël had started traveling. For those of us who’d known how fragile she was in Allen’s lifetime, this seemed inconceivable: we swapped the stories, slightly disbelieving. But there were the facts—to Europe, to Asia. I saw her briefly at a Pittsburgh dinner sometime in the late 1990s. She’d just returned from Vietnam, where she’d crawled through the tunnels used by the Vietcong during that tragic war.

Allen’s devotion to science, his exacting standards for himself, and his disdain for those who didn’t measure up might have been more of a burden to live with than we’d realized. Certainly the facts were that, at Newell’s death, Noël was released and took wing. For all the lugubrious yearnings she and I shared in the 1970s, she didn’t go back to San Francisco. If anyone in her adoptive family remained, what did they mean to her? Allen’s parents were gone. She’d made her life in Pittsburgh after all. Unlike the stoic Dorothea Simon, who fled to California after Herb’s death to spend her remaining days with her sister there, Noël stayed, tended the flame of Allen’s legacy, and lived a life of her own for the first time. She was admired for her courage—in her early eighties, she slipped on winter ice and broke a hip, but a few months later she came to a talk I gave at CMU’s School of Computer Science, stepping smartly if slowly with her cane, alert and amused, fully self-possessed. I saw her again in 2015 at the fiftieth anniversary celebration of the Carnegie Mellon computer science department, and she was lively and mobile, despite some eye surgery; she calculated for me that her great travels had gone on for seventeen years after Allen’s death. I saw her once more in 2018, at age 90: “Don’t tell me how good I look!” she hissed. But she did look good.

2.

After I left Pittsburgh, Herb Simon and I met from time to time, always warmly. When Columbia University dedicated its new computer science building in 1983, he received an honorary degree on the occasion. We sat together through the opening talk by Columbia’s provost, who delivered, nearly verbatim, remarks I’d prepared for him. Simon leaned over and whispered to me: “Do you know whose ideas those are?” I laughed. Sure, Simon, recycled by McCorduck. Simon corrected me: “Alan Perlis, Allen Newell, and Herb Simon.” The following day, The New York Times ran a classic picture of Joe putting the doctoral hood on Simon’s shoulders, with Columbia’s president standing by. The cutesy story that accompanied it (by a new reporter called Maureen Dowd) seemed to me yet another effort to put those unsettling machines in their place (1983).

A few years later, Simon was my dinner partner at a New York Academy of Sciences affair. Also at the table were several others, including Donald Knuth, a giant in the history of algorithms, whose volumes are known simply as “Knuth.” He had loosened up with dinner wine and said pointedly to me, “I suppose I shouldn’t ask this, but are you ever going to do Herb’s biography?” Simon said: “She’s waiting till I’ve done enough to fill a book.”

Eventually, Simon wrote his own memoir and sent me a draft. It was lengthy and candid. When I was thinking of doing his biography, he’d warned me away from writing anything personal, but to my surprise, he confessed that many more of his relationships with women might not have been platonic if he hadn’t been fearful of rejection. The manuscript described a mind-affair he had not too many years after he was married, deftly handled and honest, just the kind of thing I couldn’t have put into a biography, but it filled out the portrait. Although I worried that the manuscript would have to be shortened, I loved its luminous good cheer.

Perhaps the last time I saw Herb Simon was at the 25th anniversary celebration of Carnegie Mellon’s computer science department in 1990. It was a warm and lovely few days, celebrating the past, yet mortality was haunting us, with Allen Newell dying in his Squirrel Hill home. Ed Feigenbaum and Penny Nii were there, and we all savored each other’s friendship, took snapshots “like Japanese tourists,” I wrote in my journal after a lunch together. Simon’s work now, with close colleagues like Pat Langley, was about simulating the process of scientific discovery. Their Bacon program had rediscovered Kepler’s third law and Ohm’s law.

Bacon would be a precursor to programs that work not in the history of science but in its future. Descendants of that program are now at the frontiers of science. Yolanda Gil and her colleagues at the University of Southern California write that these programs can

radically transform the practice of scientific discovery. Such systems are showing an increasing ability to automate scientific data analysis and discovery processes, can search systematically and correctly through hypothesis spaces to ensure best results, can autonomously discover complex patterns in data, and can reliably apply small-scale scientific processes consistently and transparently so that they can be easily reproduced. (Gil, Greaves, Hendler, & Hirsh, 2014)

Not bad. Moreover, Gil and her colleagues write, “AI-based systems that can represent hypotheses, reason with models of the data, and design hypothesis-driven data collection techniques can reduce the error-prone human bottleneck in scientific discovery.” Even better.

These new techniques aren’t limited to text: they analyze nontextual sources, such as online images, videos, and numerical data. “The world faces deep problems that challenge traditional methodologies and ideologies,” Gil and her colleagues continue. “These challenges will require the best brains on our planet. In the modern world, the best brains are a combination of humans and intelligent computers, able to surpass the capabilities of either one alone.” The Defense Advanced Research Projects Agency (DARPA), one of the original sponsors of AI research, has begun automating some research this way, still adhering to its mission to invent revolutionary technology.

Joe and I often tried to coax Simon to visit the Santa Fe Institute. The Institute’s core research focuses on the sciences of complexity, where complexity arises from simplicity. All of the Institute’s original scientists understood their debt to Simon’s ideas about complexity; everyone quoted The Sciences of the Artificial, where the ideas had been laid out. Simon would’ve been warmly welcomed. Maybe he was already feeling too old to travel just for lionization, and, as he once said about China, he could learn more in the University of Pittsburgh library than he could by visiting (although he loved traveling to China and often did). We failed to bring him to Santa Fe.

My journal is oddly silent about Simon’s death in February 2001, aged 84. I didn’t go to his memorial service, although Joe did. Did I need to meet a class I was teaching at Columbia on writing about science? I regret missing the memorial service, but more, I miss Herb Simon. He’s still alive to me in his intellectual acuity and his capacious ability to synthesize and make connections between deeply different fields, dissolving the outerwear of disciplines to find their commonalities. He’s still alive to me in his joyous laughter.

In November 2013, Carnegie Mellon announced the launch of the Simon Initiative. Named in Herbert Simon’s honor, it is a cross-disciplinary effort in which learning science impacts engineering education and vice versa. Drawing on the world’s largest database on student learning, it examines the uses of technology in the classroom, identifying best practices, helping teachers to teach, accelerating innovation and scaling through start-up companies (a specialty at Carnegie Mellon), and improving the student educational experience. It follows from Simon’s long interest in the cognitive sciences involved in teaching and learning. Its scope is now international; anyone can contribute to or use it. Dan Siewiorek says the Initiative has two aspects: the deeper science underneath teaching and learning, and a higher vision of teaching and learning that goes well beyond the much-hyped MOOCs (massive open online courses).

Also on the Carnegie Mellon campus, Newell-Simon Hall honors them both. Housed there is an extraordinary if gawky-looking robot called Herb, a prototype household assistive robot that can find and manipulate objects in the visually confusing environment of an ordinary home.

One of the most maddening things about Simon’s legacy is how fundamental his ideas became in so many fields that they began to be counted as derived from God. Ed Feigenbaum and I sent outraged messages to each other each time Daniel Kahneman was described as “the first behavioral economist to win a Nobel Prize.” No, Herb Simon, who destroyed the myth of rational man in economics, was the first.

3.

Over the years, John McCarthy and I ran into each other from time to time at meetings and other events, and he would entertain me, as always, with new ideas and stories to illustrate how knowing the science and doing the calculations mattered. My favorite do-the-calculation example is his Magic Doctor. A young doctor is inexplicably gifted with the ability to heal anybody he or she touches. McCarthy proposed various outcomes—that the poor doctor soon drops from fatigue, is sequestered by the wicked and economically threatened medical establishment, or is assassinated by a religious lunatic who believes suffering is the proper fate of sinful humankind. But McCarthy showed how, by doing the arithmetic, the young doctor can in fact heal everybody on earth afflicted with disease in a few hours each day. (The details are in my book, The Universal Machine.)

“It’s what I call the literary problem,” McCarthy said to me more than once. “You can’t make stories out of things working well. You need conflict, failure, drama, to tell a story. That’s why most science fiction is dystopian. It wouldn’t work as narrative if everyone lived happily ever after.” [1]

Ed Feigenbaum threw himself a sixtieth birthday party at a Silicon Valley funhouse, whose main entertainment was for the guests to climb into a flight simulator and pilot whatever kind of aircraft (or maybe spacecraft) they wanted to try. So woozy in the demo that I never made it to the simulator, I staggered out to look for other diversions. With great good luck, I found John McCarthy leaning against a wall, uninterested in the electronic entertainment, I guessed, because he was already a licensed pilot and had done the real thing.

We fell to talking, and this night McCarthy was especially animated. Did I know his daughter Susan’s work? She’d had a best seller as the coauthor of When Elephants Weep. I was astonished. “Sumac is your daughter?” I knew Susan McCarthy from the online forum, The WELL; we belonged to a small group of women who chatted online with each other almost daily. John’s daughter? No, I certainly hadn’t known. I’d followed the adventures of Susan’s children as they graduated from high school, moved on to college, moved out into the world, never knowing I was also hearing about John McCarthy’s grandchildren.

It was touching to see his pride in her, not only in how well she was doing as a writer, but in how she’d dedicated her life to watching and caring for animals in the wild. Susan would continue to write sharp-eyed, witty pieces about all sorts of wild animals (from bugs to crustaceans to blue whales) on her blog, and she would publish another book, Becoming a Tiger, about how baby animals learn to be grownups of their species. Later, she’d coauthor an acidic but funny blog called SorryWatch, which calls out public figures for their sleazy non-apologies.

When Susan McCarthy told me she’d be coming with her father to New York City for The New Yorker Festival in September 2002, I insisted they be my guests for lunch. It was wonderful to see them both together, teasing each other, taking pleasure in each other’s company. It might have been at this lunch that John McCarthy expanded on his quarrel with literature:

When stories take up the theme of technology, especially AI, it’s always dystopian. I suppose in a story you need conflict, you need an us the reader can identify with, and a them the reader can root against. The them to root against is always some technology. In stories, AI is always out to get us, and we must outwit it. Given the conventions of stories, I don’t see how that can be fixed. But in real life, it’s simply not so. Technology is mixed, but on the whole, it’s been a tremendous benefit to the human race. There’s no reason to think AI will be any different.

After lunch, I took them out to hail a taxi. I insisted that John McCarthy give me a goodbye hug. Awkwardly, he did. It was the last time I saw him before he died in 2011. Susan McCarthy swears that somewhere in the house are her father’s notes for how technology came to, and improved, Tolkien’s Shire.[2]

4.

In the fall of 2013, many years after my original interviews with Marvin Minsky, I sat in the same room where I’d once heard him play his music and watched him try to fix his wife’s CPR dummy. Gloria Rudisch Minsky was with us, and I reminded them both of that moment. Gloria remembered at once and began to laugh merrily. “That dummy never did work very well!” she exclaimed. “But the dummies are much better now.” As she’d get up to go toward the kitchen, moving from table to table to support herself (she walked with difficulty now), sometimes her hands would reach for the couch where her husband was sitting, and their fingers would touch for a moment, reassuring each other silently, lovingly.

At age 87, Minsky looked astonishingly unchanged. He was trim, upright, his face unlined, his entire cranium radiating intelligence, as it always had.[3] He told me he was still composing music. Serious health problems had slowed him, but with luck (and once, thanks to his wife’s fast diagnosis), he’d weathered them well. We sat in the same crowded room, every surface, table, and floor covered with odds and ends: a little Christmas theater, toy trucks still in their boxes, sheet music, a giant wrench and screw over the fireplace (each item scrupulously dusted, a signal that this was very much an intentional collection), the harmonium and the piano still in place along with assorted other keyboards, books stuffed into shelves and lining the staircase. We drank tea, ate cookies, and found much to laugh about.

Finally, I asked Marvin where he’d like to see artificial intelligence go in the future. He didn’t answer immediately. At last he said, “I’d like to see it step in where humans fail.”

Minsky was still active at MIT when I saw him then, planning to teach the AI course the following semester. “What will you say?” I asked casually. He responded, “Oh, I’ll probably just say, ‘Any questions?’”

He died on January 24, 2016, of a cerebral hemorrhage. Susan McCarthy, John’s daughter, doing research then in Antarctica, had called him and Gloria a few weeks earlier. They were thrilled to be called from the Antarctic, she said, Marvin his usual self, but sounding very weak. He was the last survivor of the four founding fathers of artificial intelligence, and everyone who knew him knew a giant intellect, an inspiring teacher and mentor.

5.

Why did none of these four share the fevered fears of later scientists, like Stephen Hawking, or entrepreneurs, like Elon Musk? One answer is ars longa, vita brevis, and success seemed very far off. Better to let the problems be met by the people who would actually need to grapple with them than to lay down hypothetical rules that would be overtaken by reality and time.

But in addition, the founding fathers were all realists. As John McCarthy had often said, technology is mixed, but on the whole, it’s been an enormous benefit to the human race. Why should AI be any different? Yes, it would require many adjustments, some of them major—imagine not having to work at disagreeable, boring jobs just to keep body and soul together—and prudent governments would eventually understand how economic security for citizens was not only needed, but easy to supply. People could then take on tasks that might give them satisfaction, for humans do not like to be idle or without purpose.[4]

Current researchers already aim to build systems that extend, amplify, and provide functional substitutes for human cognitive abilities. “A principal goal of applied AI is and should be to create cognitive orthoses that can amplify and extend our cognitive abilities. That is now and near; a computational Golem is not” (Ford, Hayes, Glymour, & Allen, 2015).[5]

These orthoses will assist the normally aging, or others with small cognitive disabilities. AI already helps operate exoskeletons, devices that allow disabled people to stand upright, walk, and use their arms in easy, intuitive ways. Rehabilitation robots can physically support and guide patients’ limbs during motor therapy, but to do that successfully requires sophisticated AI. In partnership with other disciplines, AI is poised to transform the experience of learning; in both formal and informal settings, classrooms of the future will be places “to achieve challenges together rather than . . . places where teachers teach and students listen and do problem sets,” says Janet Kolodner (2015). Her view is far more capacious than the old vision of the teaching machine, and it especially emphasizes the need for collaboration across disciplines.

6.

My best teacher in science, as in so many things, was my husband, Joseph F. Traub, who died suddenly in the late summer of 2015. I’d been a humanities student, but Joe continued the education Ed Feigenbaum had begun and taught me to think like a scientist, too. Why? Where’s the evidence? Is that what the evidence really says, or is there a different way of looking at it? Is it sufficient evidence? Validated evidence? What theory can we abstract from this? Suppose instead, we. . . What if? I wonder. . . He set a great example—and I was open to it—of working hard and playing hard.

He’d overseen the transformation of the Carnegie Mellon computer science department in the 1970s from ten faculty to fifty professors and researchers before he left for Columbia, and his intellectual legacy there was commemorated in 2015 by Carnegie Mellon with a chair named in his honor. His New York Times obituary mentioned how he’d been recruited to his alma mater, Columbia University, to bring computing to one of the great Ivy League universities. In his oral history, in the archives of the Computer History Museum in Silicon Valley, he said that his challenge was “to convince one of the great arts and sciences universities in the United States that computer science was really central” (Raghavan, 2011). You’ve seen from these pages that this wasn’t simple.

In some ways we faced parallel tasks, Joe’s with a great university that was decidedly backward when it came to computing, and mine with the literary leaders of my generation. But he began the task and lived to see it carried out by an energetic young computer science faculty. At the Columbia University memorial service in his honor, a common theme emerged: he was a sensitive mentor to young people. Some of them were middle aged and in mid-career now, but each spoke of how Joe had guided, advised, and nurtured them and, in one case, beat on the Columbia bureaucracy to give Kathy McKeown, a woman with two little babies, some slack leading up to tenure so she could have both babies and a career, a revolutionary idea in the early 1980s.

As he had at Carnegie Mellon, he pursued his own scientific career in parallel with building a computer science department at Columbia, doing research and publishing until the end of his life.

Joe was equally active in public service: he was the founding chair of the Computer Science and Telecommunications Board of the National Academies of Sciences, Engineering, and Medicine, the country’s leading advisory group on science and technology, and he later chaired the board a second time. After that, he moved on to serve on a board of the National Research Council.

Although his own research was distant from it, Joe was an ardent supporter of AI research. Allen Newell and Herbert Simon introduced him to AI, and he figured that the smartest people he’d ever known must be on to something. When he began hiring at Columbia, he looked first not for specialists in his own field, but for AI people—he believed they’d be the intellectual leaders in a new department. He propped me up when I wanted to collapse from the frustration and difficulties of writing Machines Who Think; he shared my enthusiasm for later work; he was very glad I was writing the human side of the story in this book.

In the meantime—I take a breath to say it—he pursued other passions: he loved international travel; he took cooking lessons in Pittsburgh from the man who would eventually head the Culinary Institute of America. When we moved to New York, he took several courses at Juilliard—and then convinced me I should join him—to learn more about the music we both loved. He signed us up for courses at the Museum of Modern Art.

He assembled a wonderful collection of early computational instruments. We went to European flea markets at dawn and to the annual auction of “office machines” in Cologne, or Joe asked friends, then friends of friends, who might know how he could get his hands on a particular instrument. The collection eventually included two Enigma machines, a three-rotor and a four-rotor (all of it now on exhibit at Carnegie Mellon’s Hunt Library).

In the summer of 2012, we traveled to Alsace, mainly to eat and drink. At the town of Colmar, we expected to see a celebratory exhibit of early arithmometers, one of the first widely distributed digital calculating machines, manufactured by Thomas de Colmar in the early 19th century. We owned a couple of these handsome instruments. But curators at the municipal museum stared at us blankly. Somebody allowed as how a few arithmometers might be stashed in a warehouse outside town, but on exhibit? Why would that be interesting? The beginning of the digital age, Joe explained patiently. You should honor this distinguished son of Colmar.

In the last ten years of his life, Joe returned to his first love, physics, partly because he thought his research might have applications to quantum computing and partly because he loved to know. He was thrilled by all the new physics discovered since he’d been a graduate student at Columbia so many decades earlier.

He loved the outdoors, especially the mountains, which he climbed and skied in when he was younger and hiked in until three days before his death.

He loved me. I used to tease him that I was the pampered darling of a doting husband, and he agreed unashamedly. It was an intimate marriage of nearly half a century, emotionally and intellectually, that gave us deep pleasure, joy, and strength.


  1. Alex Garland’s 2015 film, Ex Machina, is an entertaining example. Saturated in literary precedent (from Pygmalion to Frankenstein to R.U.R., with a nod to Bluebeard’s Castle), it has a well-marked hero, villain, and—well, just what is the robotic woman? You can find precedent for her character in each of those works.
  2. When I heard some dozen years later that The New Yorker was experimenting with an AI to deal with the volume of entries it received for its cartoon caption competition at the end of each issue, I wished John alive so I could laugh about this with him.
  3. Patrick Winston likes to tell this story: Danny Hillis once asked him if he ever had the experience of telling somebody a new idea, only to have his listener misunderstand, and in his misunderstanding, make it a better idea. “Almost every time I talk to Marvin,” Winston replied.
  4. Given the prevailing attitudes of our time, say sages of the second decade of the 21st century, a basic minimum income is a nonstarter. But prevailing public attitudes can change very quickly: examples include attitudes toward gay marriage or sexual harassment.
  5. The entire Winter 2015 issue of AI Magazine (in which this quote appears) is devoted to AI for rather than instead of people and is a bracing corrective to the fevered fears of the last few years in the popular media.
