13

1.

Edward Feigenbaum, a prominent member of the second generation of AI researchers and an academic son of Herb Simon, took AI research in the opposite direction from his forefathers. This was the Tristan chord I’d been deaf to when I worked for him. Because I didn’t know what had come before, I couldn’t know how radical his departure was.

The second generation of AI researchers departed from the first by being less interested in modeling precisely how human intelligence works than in devising ways to help humans accomplish things—as you’ll see with Feigenbaum and, in the next chapter, Raj Reddy.

Feigenbaum was born in Weehawken, New Jersey, on January 20, 1936, in the heart of the Great Depression. While he was still a young boy, his father died, and his mother remarried, to Fred Rachman, an accountant for a baked goods firm. The boy and his stepfather developed a warm relationship, and his stepfather would take him faithfully each month across the Hudson to New York City to see the show at the Hayden Planetarium. (“They did new shows once a month in those days,” Feigenbaum recalls.) Then they’d add a visit to one or more rooms of the American Museum of Natural History. These visits got him started as a scientist.

Fred Rachman often brought work home, and a mechanical (soon, an electromechanical) calculator to do it. The boy loved these Marchants and Fridens, and learned to work them skillfully. “I didn’t have a letter on my sweater, but I could lug these calculators onto the bus to school, and show all my friends what I could do with them.”[1]

From Weehawken High, Feigenbaum went on a scholarship to Carnegie Institute of Technology (now Carnegie Mellon) to study electrical engineering. Money was tight: he often had to work outside school to help support himself. One of those jobs was teaching science in a Lubavitcher elementary school in Pittsburgh’s Squirrel Hill. “I couldn’t mention sex, I couldn’t mention evolution, I couldn’t mention a whole bunch of things that the rabbi forbade,” he laughed once. “Teaching science under those circumstances was a challenge.”

As a sophomore in electrical engineering, Feigenbaum felt “something was missing.” He found a graduate-level course called Ideas and Social Change, taught by the behavioral scientist James March. March allowed Feigenbaum into the course, where he learned about John von Neumann and Oskar Morgenstern’s Theory of Games and Economic Behavior. Feigenbaum loved it. Soon the course took up the modeling of behavior, which the undergraduate found even more fascinating. That summer, March gave Feigenbaum a job doing experiments in social psychology, which led to his first published paper with March, on decision-making in small groups. March also introduced Feigenbaum to the senior colleague with whom he was writing a book on organizations, Herbert Simon. Simon took an interest in the youngster and helped him get a summer student fellowship the following year. Feigenbaum subsequently enrolled in Simon’s course called Mathematical Models in the Social Sciences. This was the course where Simon announced, “Over the Christmas holidays, Al Newell and I invented a thinking machine.”

Feigenbaum would later call that a born-again experience. He took the IBM 701 manual home and, by dawn, was hooked on computers. In graduate school, his PhD dissertation, written under Simon’s supervision, was a computational model of some aspects of human memory, Simon’s great preoccupation. “Here’s the data,” Simon had said, showing him what the psychology literature had carefully accumulated by experiments. “Let’s make sense of it.”

Feigenbaum remembered later, “Never, ever was the brain brought up. This was altogether a model of the mind, of human information processing with symbols at the lowest levels.” (McCorduck, 1979)

Psychologists had collected much data on how people memorized lists of nonsense syllables. Could Feigenbaum write a computer program that remembered and forgot the same way that people did, and thus explain the behavior? He could. In memorizing lists of nonsense syllables, he realized, people didn’t memorize whole syllables. Instead, they memorized tokens that stood for the syllable, tokens that then called up the entire memory. He incorporated this and other memorizing and forgetting patterns in a groundbreaking program called Epam, named for Elementary Perceiver and Memorizer, but also as a nod to the Theban general and statesman Epaminondas, whom Simon was studying at the time. Simon would eventually take this work further with psychology colleagues, but by then, Feigenbaum was in pursuit of something more interesting.
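Epam’s mechanism is usually described as a discrimination net: a tree of simple tests that sorts a stimulus down to a stored token, the partial image that stands for the whole syllable. The Python sketch below is only an illustration of that idea under my own simplifying assumption (one letter position tested per level), not Epam’s actual code.

```python
class DiscriminationNet:
    """A toy discrimination net in the spirit of Epam (illustration only)."""

    def __init__(self):
        # Each node tests one letter position and stores a partial image.
        self.root = {"pos": 0, "branches": {}, "image": None}

    def learn(self, syllable):
        node = self.root
        while True:
            letter = syllable[node["pos"]] if node["pos"] < len(syllable) else ""
            if letter not in node["branches"]:
                # Store just enough to discriminate: a token, not the syllable.
                node["branches"][letter] = {
                    "pos": node["pos"] + 1,
                    "branches": {},
                    "image": syllable[: node["pos"] + 1],
                }
                return
            node = node["branches"][letter]

    def recall(self, syllable):
        """Sort the stimulus down the net and return whatever token is found."""
        node = self.root
        while True:
            letter = syllable[node["pos"]] if node["pos"] < len(syllable) else ""
            if letter not in node["branches"]:
                return node["image"]
            node = node["branches"][letter]

net = DiscriminationNet()
net.learn("DAX")
net.learn("DAK")          # forces a finer discrimination one letter deeper
print(net.recall("DAX"))  # prints 'DA': only the partial image comes back
```

Because DAX and DAK sort down to the same partial image until further tests are grown, the net confuses them, which is roughly how human-like forgetting fell out of the model.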

Between his PhD and his first academic post, Feigenbaum took a year to visit the National Physical Laboratory in Teddington, England, and then came to Berkeley, where he and his friend from Carnegie Institute of Technology, Julian Feldman, taught organization theory and artificial intelligence, and where Feigenbaum and I first met. When he and Feldman saw how eager students were to know more about AI and its growing importance, they knew a textbook was needed, and thus was born Computers and Thought, the first collection of readings in the field.

And so was our friendship. To write of friendship is to consider the sweep of a lifetime’s respect and affection. Such a friendship, Montaigne observes, has no model but itself and can only be compared to itself. In 1960, Ed Feigenbaum had detected in a young Berkeley co-ed something out of the ordinary (or so it felt to me, that young co-ed). He and Julian Feldman invited me to work on Computers and Thought, my introduction to the field. When I left the field for other interests, I often returned to Ed to hear what was new in AI. But the friendship endured, with great depths that transcended anything professional. For that, I’ve always been grateful.

Years later, I’d reflect on how much Ed Feigenbaum is a man who loves women. He has two beloved daughters from his first marriage. His second marriage to Penny Nii, a Japanese-born woman who became his scientific colleague, brought him two beloved stepdaughters. He’s been drawn to strong, imaginative women, and made sure the women around him, in his family, in his research groups, flourished magnificently. All of those women went on to singularly successful careers. To me, he was teacher, mentor, big brother, and finally, beloved friend.

So Feigenbaum and I got along smoothly and happily with each other from the outset. Once during the Computers and Thought days, Feldman walked into a small office where Feigenbaum and I were chatting, listened to us for a moment, and shook his head. “Oy, such yentas!”

I shrugged. Yes, Feigenbaum and I loved to talk to each other about everything under the sun. Feldman nodded. It was, he said, bashert. That sent me to a Yiddish dictionary: foreordained, fated. So it was.

2.

After Computers and Thought was delivered to the publisher, I moved on. Five years later, when Feigenbaum went from Berkeley to Stanford, he called me to come and join him as his assistant, which would change my life.

I learned. I watched. I absorbed. I asked questions—always patiently and fully answered. I didn’t know that at this moment, with his hands plenty full running the Stanford Computation Center, not to mention the serious sailing he was doing on San Francisco Bay and beyond the Golden Gate, he yearned for something more ambitious for AI. It was coming to him that nothing was bigger than induction.

“Induction is what we’re doing almost every moment, almost all the time,” Feigenbaum said. We continually make guesses and form hypotheses about events. Brain scientists believe that induction, at the level humans practice it, is a uniquely human specialty, but in the 1960s, Feigenbaum was asking only how induction works in scientific thinking. Here was a significant challenge for AI, more ambitious, certainly more important, than how people memorized lists of nonsense syllables. Was the field ready to tackle something so sophisticated? Was he?

By chance, Feigenbaum encountered Joshua Lederberg, a Nobel laureate in genetics at Stanford, and told the geneticist what kind of problem he was seeking. “I have just the thing for you,” Lederberg said. “We’re doing it in our lab.” It was the interpretation of mass spectra of amino acids, a task for highly trained experts. Lederberg was heading a Mars probe project to determine whether life existed there, but knew he couldn’t ship human experts to operate mass spectrometers on the Red Planet.

In 1965, Feigenbaum and Lederberg gathered a superb team, including philosopher Bruce Buchanan and later Carl Djerassi (one of the “fathers” of the contraceptive pill), plus some brilliant graduate students who would go on to make their own marks in AI. The team began to investigate how scientists interpreted the output of mass spectrometers. To identify a chemical compound, how did an organic chemist decide which of several possible paths was likelier than the others? The key, they realized, is knowledge—what the organic chemist already knows about chemistry. Their research would produce the Dendral program (for dendritic algorithm: tree-like, exhibiting spreading roots and branches), with fundamental assumptions and techniques that would completely change the direction of AI research.

As Richard Wagner’s celebrated Tristan chord changed all subsequent musical composition, Dendral changed all subsequent AI. Until Dendral, the most important feature of AI programs was their capacity to reason. Yes, the earliest programs knew some things (the rules of chess, the allowable rules of logic), but the emphasis had always been on reasoning: refining and elaborating the way the program moved toward its goal. Hadn’t the great Aristotle called humans the reasoning animal? Wasn’t this confirmed by nearly every philosopher who ever thought about thinking? This unquestioned assumption led Allen Newell and Herb Simon to design the General Problem Solver program, which tried (but mostly failed) to solve problems generally.

More than two thousand years of philosophy was wrong. Knowledge, not so much reasoning, was essential. You can almost hear the protests from the shades in the agora.

Although Dendral’s reasoning power, what would come to be called its inference engine, was strong, Dendral’s real power and success came from its detailed knowledge of organic chemistry. Knowledge allowed the program to plan, put constraints on possible hypotheses, and test them. As a stand-alone program, Dendral became essential to working organic chemists. Its heuristics were based on judgment and specific chemical knowledge, what in humans we call experience and intuition. Joel Moses at MIT would say to me later, “It’s insane to think you can do brain surgery without knowing anything about the brain—just reason your way through it.”

The knowledge principle, as Feigenbaum came to call it, asserts that specific knowledge is the major source of machine and human intelligence. With the right knowledge, even a simple inference method will suffice. Knowledge can be refined, edited, and generalized to solve new problems, while the code to interpret and use the knowledge—the reasoning, the inference engine—remains the same. This is one reason why, in the last few years, AI has become noticeably smarter. The amount of knowledge on the Internet available to Watson, Google Brain, or language-understanding programs (or scores of startups) has grown dramatically. Big data and better algorithms implemented at multiple processing levels have vastly improved performance. But even the field of machine learning, dependent as it is on algorithms, acknowledges that domain knowledge is essential to intelligent behavior.
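A minimal sketch can make that separation concrete. In the toy Python below, the inference engine is a dozen generic lines of forward chaining, and everything the system “knows” sits in a separate, editable rule list. The rules are invented stand-ins, and real systems such as Mycin actually chained backward through hundreds of rules with certainty factors; this is only a gesture at the architecture, not anyone’s actual code.

```python
def forward_chain(facts, rules):
    """A deliberately simple, domain-free inference engine.

    Each rule is a (conditions, conclusion) pair. Keep applying rules
    until no new fact can be derived, recording which rules fired.
    """
    facts = set(facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                trace.append((conditions, conclusion))
                changed = True
    return facts, trace

# All the competence lives here, in rules a specialist could read and edit.
# These are hypothetical, Mycin-flavored examples, not real medicine.
RULES = [
    (["gram-negative", "rod-shaped"], "organism may be enterobacteriaceae"),
    (["organism may be enterobacteriaceae", "hospital-acquired"],
     "consider an aminoglycoside"),
]

facts, trace = forward_chain(
    ["gram-negative", "rod-shaped", "hospital-acquired"], RULES)
```

Refining such a system means editing RULES; forward_chain never changes. That, in miniature, is the separation the knowledge principle describes.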

Dendral, Buchanan said, was the first program to attempt to automate scientific inference. It was the first program to rely on textbook knowledge and the knowledge of human experts in a scientific domain. Dendral was the first program to represent such knowledge in an explicit and modular fashion. “We were learning how to represent the knowledge in a nice, clear, high-level symbolic way—you could actually see what the knowledge was,” Feigenbaum added. (This idea was to be significant in the future digital humanities.)

It didn’t matter that it was knowledge already known: patterns of remembering and forgetting nonsense syllables had also been well documented when Feigenbaum sat down to write Epam. Those empirical experiments verified that the program successfully imitated human learning and forgetting in one small domain. Now he and his colleagues had set out to model the process of spectra interpretation well enough that a computer program would match or exceed what a human expert could do. But Buchanan, the trained philosopher on the team, whose interests were in scientific discovery and hypothesis formation, was eager for Dendral to go further and make discoveries on its own, not just help humans make them. In decades to come, this would happen, but by then, other scientists had taken up the challenge of a program that makes scientific discoveries on its own, as we’ll see later.

The team discovered that the more you trained the system, the better it got. Because it embodied the expertise of human specialists, this kind of program came to be known as an expert system. In short order, Dendral was followed by Mycin, a program to help a physician identify infectious diseases and recommend antibiotics for them. If asked, Mycin could also explain its line of reasoning. A later program, Molgen, generated and interpreted molecular structures.
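Continuing the sketch above, the trace of fired rules is already the raw material for that kind of explanation. Mycin’s real facility answered “why” questions by quoting the rule it was currently trying to establish; this few-line version, which simply replays the fired rules, only gestures at it.

```python
def explain(trace):
    """Replay the fired rules as a line of reasoning (a sketch only)."""
    for conditions, conclusion in trace:
        print(f"I concluded '{conclusion}' because: {' and '.join(conditions)}.")

explain(trace)
# I concluded 'organism may be enterobacteriaceae' because: gram-negative
# and rod-shaped.
# I concluded 'consider an aminoglycoside' because: organism may be
# enterobacteriaceae and hospital-acquired.
```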

If Dendral and later Mycin came to outperform human experts, Molgen had a different challenge. Not much was known about generating and interpreting molecular structures, and that modest knowledge was stored in the heads of human experts around the world. To store and draw on that geographically distributed knowledge, the Molgen program ran on the only non-ARPA-funded machine allowed on the ARPAnet, the precursor to today’s Internet. Users could dial in from all over the country—university biology departments, pharmaceutical companies—to access the Stanford sequence manipulation routines and add their own knowledge. Before long, some 300 users were coming in over the ARPAnet.[2] But it was another twenty years until computer graphics and networks were up to the task of wide-scale automatic generation of molecular structures. Molgen thrives worldwide now.

The first major step in constructing an expert system was to interview human experts and gather their specialized knowledge. Next, that knowledge had to be cast in executable computer code. Both jobs were pioneered by Penny Nii, the first knowledge engineer and Feigenbaum’s wife. Extracting knowledge often required several cycles: experts didn’t always know exactly what they knew, nor could they always articulate it. Seeing their expertise laid out in code, or seeing the results of an executed program, they might realize they’d forgotten to mention an important step, had mischaracterized its importance, or had made any number of other slips that became apparent only after the program was run.

But once knowledge was successfully extracted and coded, the resulting system was an extremely powerful way of solving real-world problems. Dendral’s success also came because it solved a relatively narrow and well-defined problem with clear solutions. Although Mycin, the infectious disease-detecting program, often outperformed the Stanford specialists at that task, it too was ahead of its time. Because it couldn’t easily be integrated into local area networks, it wasn’t useful for a physician on the job.

Knowledge-based systems, as they came to be called, would permeate AI, whether humans jump-started the program’s knowledge, or the machine collected and interpreted the knowledge autonomously, as would happen in the early 21st century in machine learning and data science.

Developments in computer technology contributed immensely to AI’s successes in the late 1960s and through the 1970s. Solid-state hardware, telecommunications interfaced with computers, better time-sharing, and more sophisticated software generally all made expert systems possible, practical, and then commonplace.

3.

I vividly remember Ed Feigenbaum visiting Carnegie Mellon in the early 1970s and addressing his colleagues about his expert systems research. “Guys, you need to stop fooling around with toy problems,” he declared to researchers engaged in chess and speech understanding. It was a nervy challenge to his two great mentors, Newell and Simon, and to Raj Reddy, who, after all, had been hard at work on making computers understand continuous human speech, hardly a toy problem. Yet if Feigenbaum’s comment bent noses out of shape, I didn’t hear about it.

Feigenbaum was convinced that the scale of AI itself needed expansion. AI was being practiced by a handful of people, and there was no source book. Thus was born The Handbook of Artificial Intelligence, an encyclopedia of all that was then known in AI. It was important to the field’s growth, and its royalties went to the Heuristic Programming Project at Stanford to support yet more graduate students. After the book made these principles public, researchers all over the world, especially the Japanese, would seize and develop them.

4.

After I left Stanford in 1967, Ed Feigenbaum and I remained good friends, phoning each other (too early for personal email in those days) or dropping in on each other on either coast. A comfortable harmony existed between us, because each of us was working, especially in the late 1960s and early 1970s, on how to reshape the roles of man and woman, husband and wife, inherited from our culture. How could we live a life that was fulfilling, yet considerate of those we loved?

The path wasn’t smooth or obvious. Ed saw me not long after Joe and I first moved to Pittsburgh and later confided that he’d been worried about me. He could see I was already resentful of the long hours Joe spent at Carnegie, but I seemed to have no life of my own. That was the winter I explored Pittsburgh and western Pennsylvania by myself, knowing no one, marooned in an alien landscape. What degree of autonomy could I allow myself? I’d begun writing a novel about TV news. Was it okay not to be home to fix dinner because I was sitting in a TV studio watching a news program being produced? Tradition said my husband was free to do his job at any hours he chose, while I must wait passively to have my time programmed by his schedule. It was puzzling to work out.

In mid-August of 1972, Ed and I met at Stanford and spoke frankly of our friendship and how important it was to each of us. My journal records our very personal exchange. I told him I loved him because he’d known me in bad times and good, and I always felt like he was on my side. Above all, I said, he knew how to listen.

Ed protested, “I’m not an indiscriminate listener. I listen to you because our thought processes are so much alike, and I feel like we have a special understanding because of that. I can talk to you in turn because I never feel as if you’re judging me. You understand, you accept, period. I’m always scared before we meet that somehow it won’t work, that I won’t be able to convey to you that I’m—”

I interrupted. “Me too, because every once in a while it doesn’t click, and I feel sad, and empty, and frustrated.”

In my journal, I wrote:

A magical afternoon, the sun as tangy as club soda, the blue sky and green of Stanford’s trees vivid. I didn’t want it to end. When will we see each other again? We know that the friendship would not be as intense if we saw each other regularly, yet we also know that we did see each other daily for years, and our affection and respect were steadfast. I’ve never felt as warm and affectionate toward him as I do after today.

When I thought I might write a history of AI, but had doubts whether I could tackle the scientific complexities of the field, Ed stepped in firmly to shore up my self-confidence. Yes, you can, he said; we, your friends, will help you. They did, he chief among them. During the time I wrote that book, I was at my most eager to get out of Pittsburgh, so Ed began inventing jobs for me. The Stanford computer science department might publish a journal, and I could be executive editor. Expert systems research was being commercialized, and Ed was involved with two startups. If I came to Silicon Valley, there’d be a high-level job for me at one of those places. What kept me from saying yes was the conviction that I was meant to put my name on the spines of books, not edit other people’s words or make the wheels of commerce turn.

So we made do with phone calls. We both loved music; we’d often tell each other about new music we’d heard, wanted to share. Ed was singing in the Stanford Chorus. Years later, he heard I too was singing (though the American Songbook, around a piano in midtown Manhattan) and teased me: what took you so long? We had ambitions to read novels together, exchanging reactions across the continent, and I think we did read One Hundred Years of Solitude together. In any case, we’d send each other titles of books that we thought the other might like.

Our friendship has been one of the great blessings of my life.


  1. Feigenbaum would return the favor of those planetarium visits and calculator loans. Years later, in the mid-1960s, when Fred’s job looked precarious because industry was shrinking in New York City, Ed brought his stepfather to Stanford to learn how to be a computer operator, switching the tapes on tape drives and watching the console lights that signaled to operators the steps they needed to take.
  2. Molecular biologist Larry Hunter has argued persuasively that molecular biology simply cannot be done without AI techniques that verify knowledge, trace lines of reasoning, and keep ontologies (agreed-upon knowledge) straight and consistent.

