23

1.

I’d given up the idea of writing a biography of Herb Simon and returned the advance to the publisher. I was just too close to write anything but hagiography, and he deserved better. I wasn’t idle: commissions appeared from the dozens of magazines that suddenly blossomed in the early 1980s to present science to an apparently insatiable lay readership. I was teaching science writing at Columbia University and also worked for women’s magazines, Cosmopolitan and Redbook. They knew what they wanted, they gave expert editorial guidance, we all had fun, and I made money. But journalism, with the exception of Wired, was basically frustrating.

I wrote another book, The Universal Machine, a series of connected essays about the worldwide impact of the computer. Although both my agent and my editors thought the book was good, it fell into the hands of a self-declared humanist, who reviewed it for The New York Times, hated computers (and by extension me), and shellacked it. The review was one of the few times that this kind of ignorance didn’t make me laugh: the book made deep points worth making. But the reviewer was part of the First Culture that remained deliberately ignorant of what was already possible and impervious to what lay ahead.

September 13, 1984:

Lunch at the invitation of the chairman of the English department at Columbia. In his opinion, computers might possibly be useful, but his colleagues are hostile. The usual reasons for this, including “another passing fad” which stuns me into silence. What he really wants to talk about is that he feels his PhDs are at a comparative disadvantage for not knowing word processing. Can I suggest anything? Lobotomy, I think, smiling politely all the while.[1]

About this time I was at a dinner party with the Nobel Laureate physicist I. I. Rabi, who leaned over to me with a good-natured chuckle and said, “You can learn a lot from the humanities.” Pause. “But not from the humanists.” I often walked Rabi up the hill from Riverside Drive to the Columbia campus (he had a well-calculated route to avoid the ferocious wintertime winds that blew along 116th Street) and I once asked him how physicists reacted when he brought back from Germany in the 1930s all these new-fangled ideas on quantum physics. How long did it take them to accept the new? He laughed merrily. “Never! I had to wait for them to die.”

2.

In The Fifth Generation, I told a story recounted to me by numerical analyst Beresford Parlett and worth repeating:

It was early in July 1953, a rare hot day at the end of the summer term at Oxford. Two punts were being languidly poled down the river Cherwell, filled with high-spirited young men who were on their way to a twenty-first birthday picnic for Beresford Parlett. Parlett, who would later become a professor of computer science at the University of California, Berkeley, was an Englishman with an affinity for American friends, and it happened that his punt carried the college’s American contingent of Rhodes Scholars, men who were studying economics and mathematics. Among them was Alain Enthoven, later Assistant Secretary of Defense for Systems Analysis and still later, a professor of economics at Stanford University. Enthoven stared meditatively at the punt ahead of them, which contained, by everyone’s estimate, the brainiest young men in the college. They were all “reading greats”—studying the Greek and Latin classics. “There,” said Enthoven, fixed on the punt ahead of them, “there is England’s tragedy.” (Feigenbaum & McCorduck, 1983)

I reported this then because it fit neatly into the saga of British efforts in AI. But I missed its fundamental significance. Accidentally or by design, by the early 19th century, universities had become a ghostly simulacrum of the British class system—belles lettres at the top, the study of music and painting, history and philosophy just below, and so on, all the way down to the contemptibly practical, like science and engineering, considered no better than intellectual shopkeeping.

Here’s another way British education preserved the class system: My father, a clever boy, had left school at fourteen, as most working class children did in 1926. A few years later, when the Great Depression had already arrived in Europe, he sat for a university scholarship and, to his deep joy, came in first in the competition. The authorities congratulated him warmly but then told him that of course the scholarship must go to Lord So-and-So’s son, who could actually use it.

For me to suggest a half century later that artificial intelligence—whose very name put people on edge—might have something to do with the mind, might profitably be attended to by people whose brief was the human mind, was hopeless. AI was about machines and engineering. One might as well suggest a wedding to the dustman. Although the gods of the First Culture continued to reign in Valhalla, their unwillingness to consider the digital world had already put the castle to the torch.

In mid-April 1985, my agent sent me to talk to various editors about ideas for books. One of them, occupying one of the loftiest thrones in Valhalla, spent a while with me, assuring me he was an agnostic on the subject of the information revolution (though he lamented the millions his firm had spent on teaching programs instead of textbooks). He asked me—justifiably—are things changing? Isn’t it really that expert systems will only replace people who weren’t really experts? I replied with the cost effectiveness argument: at that time, no firm would build an expert system except to replace a costly expert—the undertaking was just too expensive. “But really,” I wrote in my journal, “it’s just the old ‘if it’s intelligence, it can’t be automated’ argument.”

The culture clash was acute. As I’d entered, he showed me proudly that Joe Weizenbaum had contributed a blurb for one of his authors, and I paused, amazed that anyone took this seriously. Yet because I was so awed by this editor’s name and splendid eminence in the publishing world, I was tongue-tied trying to explain myself. Afterwards I understood it was also because the paradigm had shifted for me. It hadn’t for him. It was useless to say that.

Although I was intimidated by this editor’s First Culture renown, I could also see how pompous he was (pontificating at length and so softly that I could barely hear him over the air conditioning, deliberately causing me to lean forward over his desk), how oblivious he was to the intellectual excitement surrounding the computer. He fit perfectly C. P. Snow’s old description of the First Culture: he was an “intellectual” and excluded from that category anything that didn’t interest him. Culture meant only what he said it meant. An atavistic reverence from my youth had overcome me, that younger part of me that honored and wanted to be accepted in the First Culture. Part of me mocked myself; part of me wondered why I longed to be welcomed.

For this major editor was like the minister of culture of a tiny, once important country, who hasn’t yet had the news that power has shifted. God only knows what he thought of me, gasping and burbling, totally undone by the border crossing. He thought: here’s another techie who can’t put together a declarative English sentence. I thought: once more I question the worth of language, heresy for a writer. Weizenbaum as an endorsement to be proud of? World literature teems with the pious hypocrite, from Tartuffe to the Reverend Arthur Dimmesdale to Uriah Heep: what’s the point of being the Grand Vizier of Literature if you can’t detect one on the hoof? “The mustard gas of sinister intelligent editors,” Allen Ginsberg had written in Howl.

Said my husband consolingly afterwards: yes, you’ll have to wait for this generation to die. The next generation will wonder what all the fuss was about.

I, Sisyphus. But as Camus argued—long story—Sisyphus was happy. Me too.

Over the next thirty years, the heated (or pleading) arguments for studying the humanities arrived as surely as night follows day. The first point was nearly always that the humanities teach critical thinking. Did anyone doubt that studying to be a scientist or an engineer requires sharp critical thinking? Clarity of expression, then? No: that’s the purpose of composition courses, left routinely to teaching assistants, adjuncts, and specialized remediators. Reading novels and poems trains your empathy? More like it. The humanities enrich your life? You bet. Profoundly. But the world had changed: students needed to know they’d graduate with some promise of gainful employment to pay off the staggering debts they were now incurring. For that, the humanities seemed unpromising.

3.

December 21, 1985:

My survey of contemporary American literature tells me one could read a sizable chunk of it and be innocent that any technology besides the telephone and the internal combustion engine affects modern life. Since both of these are a hundred or more years old, this doesn’t seem especially brave on literature’s part. Norman Mailer, writing to invite PEN members to the coming international meeting, jokes that no one pays attention to writers. But why should they? Meanwhile, the writer’s imagination, the imagination of a religious fanatic, believing divinity on its side, is fat and megalomaniac, dreaming it has answers. Preposterous: it doesn’t even know the questions.

The following month, Joe and I were in California. I’d been invited to join a panel at Santa Clara University in Silicon Valley to respond to the remarks of the main speaker, Ashley Montagu. A celebrity pop anthropologist, forgotten now, he’d made his name with twenty popular books and was a fixture on late-night TV talk shows. Hundreds were turned away from the large lecture hall.

January 11, 1986:

My worst fears are realized when I hear, first, that he always talks by “spontaneous combustion,” as he puts it, and second, that he long ago stopped reading other people’s books. He artfully quotes Hobbes, that if he were to read the work of others, he would be as ignorant as they. I judge he’s a man living on his intellectual capital, and I’m right.

He tells us the topic—the impact of the computer—is so important, however, that he’s going to read his speech, which he doesn’t. When we’re fifteen minutes along, and still on The Fall (with Cain and Abel thrown in) it looks to be a long night. I’m fascinated by his technique—many irrelevant parentheses, mainly jokes at the expense of the professoriate and other professions, snatches of poetry, storytelling—and fascinated too by how he evokes sheer adulation from the audience, as if their critical faculties were simply nonexistent. I have a pretty dilemma. To tell the truth, thereby making enemies of the seven hundred who adulate, but also permitting me to wake up and face myself tomorrow morning; or to be a well-mannered guest. Eventually I choose to praise him for his truths—though platitudes most of them are—and merely “raise questions to which I have no answers” about some other topics he’s raised.

For instance, if we have evidence that people have been dehumanized since the agricultural revolution by their technology, maybe, after ten thousand or more years, we need to redefine what it means to be human. I add that since (as Dr. A.M. has correctly pointed out) tools are human thought made manifest, then it can’t be dehumanizing to come face-to-face with another aspect of our humanity in the computer.

And so on. I put in a plug for computer science, which A.M. obviously doesn’t understand, but feels free to criticize. I say more, including raising doubts that happiness is the same as the simple, untechnological life. (A.M., meet Raj Reddy.) But all the time I was immensely polite with my Nice Girl smile, and kept congratulating him on his insights. I said not a word about how shocked, and then contemptuous, I felt that the audience found this string of platitudes, half-truths, and outright fabrications so inspiring. Truly the triumph of style over substance, and frankly, I could take a lesson.

November 8, 1987:

At the Art Institute of Chicago. A perfectly awful panel led by some young woman at the Art Institute School, who misunderstands computers, art, and, God knows, physics. She repeats from time to time: “Here’s how I FEEL about physics…” The artist Harold Cohen beside me is snorting in rage. I’m laughing. Yes, dear, please tell us how you FEEL about physics. I was put in mind of 19th-century Margaret Fuller’s apocryphal declaration: “I accept the universe!” and Thomas Carlyle’s reply: “Egad, she’d better.”

4.

Yet all the while I kept asking myself: was I so high on what promised to be one of the grandest intellectual accomplishments of humanity that I was cruelly impervious to the deep, perhaps unconscious, fears of the humanists? Did I fail to see how frantic they were that the earth was shifting under their feet—or the smoke was drifting upward from Valhalla’s cellar? Couldn’t I moderate my enthusiasm, extend empathy, compassion?

No. They were neither fearful nor frantic. They didn’t need my compassion and would have refused if I’d offered it. They were the aristocracy, sublimely self-assured in their faith.

That then raised another question: did they not actually assimilate the texts they claimed to honor? The texts that, over the centuries, counseled open-mindedness and humility in the face of the new; counseled caution in the arrogance of faith; texts that mocked the complacency of the status quo, that ridiculed the zealously pious (who always had a seamy underside—celestial thoughts and subterranean conduct, as Montaigne put it)? Did they draw no lessons? Learn to tell the ersatz from the genuine? Did they not for a moment think that this could be important?

Yet some humanists were intrigued. I described a holiday party we’d attended our first semester at Columbia:

December 24, 1979.

The humanities professors—French, history, philosophy—showed how the first-rate are so different from the second-rate. A) They’re fascinating to talk to on their own subjects, having a wide and relaxed view, and B) they’re eager to know about other things, and welcome news about whether artificial intelligence will have an impact on their own field; very little in the way of derisive laughter, or the just plain indifference I was used to getting from the English department in Pittsburgh. Or maybe it’s what my mother would’ve called breeding—no matter how ridiculous you consider your conversation to be, you politely dissemble. Either way, more than a few cuts above what I’m used to from the humanities.

November 11, 1986.

Two days at Kenyon. A feeling of letdown. I think: at last, recognition, but it’s the techies who’ve invited me. The English department—largest by far on the campus—doesn’t know what to make of me (nothing, in the end: the chairman shifts uneasily as we’re introduced. “Oh yes, I’d heard, uh, maybe we could get together tomorrow?” this vaguely, and doesn’t come to my lecture so I think to hell with him too). The students are marvelous—sharp without being smart alecks (though the women still keep silent; I have to draw them out). Very much like the Chautauqua experience [I’d given a talk there the previous summer]. I both admire and am appalled by such hermetic decency. I’m too hard—the artists came to my talk and were enchanted. The techies were grateful I’d come to validate their worth. When I give Joe a précis tonight, he understands, says once more: yes, and when the dust settles you’ll get no credit. . . .Reading Furbank’s Forster. E. M. Forster understood he was “important” by his mid-20s. I spend my days reconciling myself to my unimportance, hoping against hope it isn’t so.

September 6, 1988.

I accept an invitation to speak on AI and the Humanities at Pitt. The enthusiastic organizer tells me he’s going to get right off the phone and “tell everyone you’re coming.” I laugh, do not tell him how the book he’s praised most, MWT, was the cause of my banishment from Pitt. Ah, the wheel always turns. . .

As I recall, no one from the Pitt English department came to that talk either.

5.

History loves irony, and so the 1980s were exactly the decade when AI research moved brusquely and impatiently into territory long claimed by the humanities, particularly philosophy. What was mind? Could it be, as Marvin Minsky proposed, a “society” of competing, relatively independent agents inside your head, each one jockeying for dominance? If you moved from representing problems to representing knowledge in a computer system, just how was this knowledge to be represented? Was a general-purpose representation possible, or did different kinds of knowledge require different representations? If you chose a general-purpose representation, how did you organize and connect knowledge in several domains? After you chose a suitable representation, how was the ontology, the agreed-upon knowledge, to be kept consistent and valid? How were beliefs, or even truth, to be revised, validated, and maintained in the light of new knowledge?
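
One influential 1980s answer to the representation question was Minsky’s frame: a structured bundle of slots and default values, from which more specific frames inherit and override. The toy below is only a minimal sketch in Python, with names of my own invention rather than any particular system’s, meant to give the flavor of the idea:

    # A toy frame system: slots with defaults, inherited by more specific
    # frames, which may override them. Purely illustrative.
    class Frame:
        def __init__(self, name, parent=None, **slots):
            self.name, self.parent, self.slots = name, parent, slots

        def get(self, slot):
            # Look up a slot locally, then fall back to inherited defaults.
            if slot in self.slots:
                return self.slots[slot]
            if self.parent is not None:
                return self.parent.get(slot)
            raise KeyError(f"{self.name} has no value for slot '{slot}'")

    bird = Frame("bird", can_fly=True, covering="feathers")
    penguin = Frame("penguin", parent=bird, can_fly=False)  # exception overrides default

    print(penguin.get("can_fly"))   # False: the local slot wins
    print(penguin.get("covering"))  # "feathers": inherited from the parent frame

Even a toy like this makes the dilemma concrete: the general-purpose mechanism is the easy part; deciding which slots, defaults, and exceptions faithfully capture a domain is the hard part.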

Philosophers from at least Aristotle on—including, more recently, Charles S. Peirce and Ludwig Wittgenstein—had wrestled with these issues with little success. AI was quietly breaking and entering into a lordly old mansion owned by the philosophers for centuries. Unfortunately for AI, most of the rooms in that mansion were vacant.

For decades, those few philosophers who considered AI even worth their attention treated it like a great game of poker—grave visage, dazzling plays, strategies, bluffs, and quick adaptation as the game changed. Every once in a while, a fellow philosopher named Daniel Dennett would stop by this game for a few hands, clean out the pot, and depart.[2] The other players hardly noticed. After all, what mattered were bravura playing and clever rhetoric for each other and especially for the partisan spectators (“I knew machines could never think and now you’ve proved it”).[3] No. They’d invented parables, but hadn’t proved anything.

In its intellectual contributions, the philosophers’ game was finally inconsequential and embodied the Arab proverb: the dogs bark and bark, and still the caravan moves on.

AI had a problem that philosophers had never faced: its researchers needed to write programs that demonstrably worked. Forced to make vague concepts precise enough to turn them into executable computer programs, researchers of the 1980s were absorbed in figuring out more of what constituted thinking: programs that planned ahead and took into account limited resources, such as time and memory. Programs began to learn from explanations and to function in environments where multiple agents, often in conflict with each other, needed to act.[4] During this decade, foundational work in applied ontology emerged because truth maintenance was suddenly a necessary goal, a way of keeping beliefs and their dependencies consistent. Systems cleverly increased the speed of inference and exhibited a much better understanding of the interaction between complexity and expressiveness in reasoning systems. Artificial agents began to use psychological reasoning about themselves and other agents.
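
To make “truth maintenance” concrete: the idea is that every belief carries a record of the beliefs that justify it, so that when a premise is withdrawn, everything resting on it can be re-examined rather than silently left standing. What follows is only a minimal sketch, in Python, of that bookkeeping, with invented names; it is not the machinery of any actual system of the period:

    # A miniature justification-based truth maintenance scheme: beliefs hold
    # if they are premises or are supported by beliefs that themselves hold.
    class TMS:
        def __init__(self):
            self.justifications = {}  # belief -> list of supporting belief sets
            self.premises = set()     # beliefs accepted without support

        def assert_premise(self, belief):
            self.premises.add(belief)

        def retract_premise(self, belief):
            self.premises.discard(belief)

        def justify(self, belief, supports):
            self.justifications.setdefault(belief, []).append(set(supports))

        def holds(self, belief, _seen=frozenset()):
            # A belief holds if it is a premise, or if every support in some
            # justification currently holds (cycles count as unsupported).
            if belief in self.premises:
                return True
            if belief in _seen:
                return False
            seen = _seen | {belief}
            return any(all(self.holds(s, seen) for s in just)
                       for just in self.justifications.get(belief, []))

    tms = TMS()
    tms.assert_premise("battery_ok")
    tms.assert_premise("fuel_present")
    tms.justify("engine_starts", ["battery_ok", "fuel_present"])
    print(tms.holds("engine_starts"))  # True
    tms.retract_premise("fuel_present")
    print(tms.holds("engine_starts"))  # False: a dependency was withdrawn

The point is not the few lines of code but the discipline they enforce: consistency is never assumed; it is maintained.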

All these sound dauntingly technical. They are. Neither then nor later did they lend themselves to sexy journalism or inspire Dionysian passions. Indeed, those years were sometimes described as “the AI winter,” largely because no one could figure out how to monetize such research. But the work is the anatomizing of what, for centuries, was casually known as intelligence—along with all its synonyms: cogitating, reasoning, considering, planning, keeping consistency, inferring, leaping to conclusions, drawing parallels, imagining, mulling, analyzing. Intelligence is a suitcase word, in Marvin Minsky’s phrase, a word that needs careful unpacking to reveal all it contains. Moreover, revelation isn’t sufficient. Each part of this deeply complicated process must be understood and then described in explicit detail so that a computer can carry it out.

I’ve said AI was doing normal science, in the Kuhnian sense of normal, as distinct from revolutionary science. It was dynamic and abundant nevertheless. Those advances would raise further challenges: as data sets grew larger and computation faster and deeper (but more costly in both time and computational resources), how could searches that could never be exhaustive instead be automatically guided? How could goals be reached in a timely way? This was exactly the quarrel Herbert Simon had earlier with classical economists and their impossibly idealized Rational Man, who could never explore all alternatives to arrive at a rational economic decision. Searches needed to be guided, and tradeoffs made between computation costs and timeliness. Meta-level reasoning, over and above the busy lower-level searches, had to find those balances and make those tradeoffs in real time. These were tremendous, exhilarating challenges for AI researchers then, and they remain so now.
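
As a rough illustration of what “guided” means here: a best-first search expands whatever a heuristic judges most promising and simply stops when its budget of expansions is spent, settling, in Simon’s terms, for a good-enough answer within the resources available. The sketch below is mine, in Python, and only gestures at the idea; the meta-level reasoning of real systems was, and is, far more elaborate:

    import heapq, itertools

    def guided_search(start, goal, neighbors, heuristic, budget=1000):
        # Best-first search under a resource bound: expand the node the
        # heuristic likes best, and give up when the budget is exhausted.
        counter = itertools.count()  # tie-breaker so the heap never compares paths
        frontier = [(heuristic(start), next(counter), start, [start])]
        visited = set()
        expansions = 0
        while frontier and expansions < budget:
            _, _, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            expansions += 1
            for nxt in neighbors(node):
                if nxt not in visited:
                    heapq.heappush(frontier,
                                   (heuristic(nxt), next(counter), nxt, path + [nxt]))
        return None  # budget spent: a timely "don't know" rather than an endless search

The tradeoff Simon insisted on is visible in the single budget parameter: raise it and the answer may improve, at a cost in time and computation.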

Earlier, Marvin Minsky had quietly said to me, look how long it’s taken physicists to get where they are. Surely intelligence is as difficult as physics. Martin Perl, the Nobel laureate in physics, reminds us: “The time scale for physics progress is a century, not a decade. There are no decade-scale solutions to worries about the rate of progress of fundamental physics knowledge” (Overbye, 2014). Intelligence is at least as hard, at least as exhilarating.

Indifferent at best, usually hostile, the First Culture disdained it all.


  1. In 2012, Harvard University, never an institution to rush precipitously into change, released a report on revitalizing the humanities at Harvard. The report’s pervasive theme was the imaginative use of the computer. Granted, this occurred some thirty years after my lunch with this particular English department chairman. At Harvard, plunging enrollments in the humanities had helped inspire this reevaluation.
  2. Bruce Buchanan, a principal of the Dendral program and other pioneering AI work, had certainly earned his PhD in philosophy, but he’d gone over to the dark side so early that people outside the field hardly considered him a philosopher. “I wanted to do something important,” he once told me. And so he did.
  3. It bears repeating that Daniel Dennett and I nearly always end up in the same place, but he does the heavy lifting of thinking us through to that end, while I arrive by shortcut. See especially his 2017 book From Bacteria to Bach and Back: The Evolution of Minds (W. W. Norton).
  4. One early cooperative multiagent program was “boids,” a program that simulates the emergent behavior of flocking. Its creator, Craig Reynolds, eventually won a special award from the Academy of Motion Picture Arts and Sciences for the program’s application in such movies as Batman Returns.
