17

1.

The phone rang before I was up. It was Arno Penzias, the physics Nobel Laureate from Bell Labs. I’d met him when I was sketching a book on computer graphics (the book went nowhere). At some point, Arno and Lillian Schwartz, the computer animation pioneer who used Bell Labs software for her art, and I had a grand time over lunch. Arno and I met socially several times again, and I found him good-natured and likable, if amusingly sure of himself. This morning he wasn’t happy. Without so much as taking a breath, he told me for thirty minutes why machines would never, ever think and how deluded and misled I’d been to spend a chunk of my life on such a project. (It may have been on this occasion that he told me he thought Herb Simon was arrogant.)

I tried to get a word in edgewise, but the loquacious Arno was not to be gainsaid. His arguments were pseudotechnical or not technical at all, so in my enforced silence, I wondered if his religious beliefs were firing his sermon. He’d once told me that, as a child, he’d been on a train to Poland, part of a massive relocation of Jews of Polish extraction out of Germany. The train was halted by Germany’s invasion of Poland. That fortunate stop had eventually saved him from Auschwitz. Since then, he’d had a strong feeling that God had saved him for something special, and thus he was a deeply observant Conservative Jew.

Finally, I pleaded. “Arno, I know you’re a married man, so you’ll understand. You woke me up, and I haven’t even been to the bathroom yet.” He roared with laughter and let me go.

2.

The phone rang again on the morning of November 9, 1979. “This is Joe Weizenbaum. I’m calling from Berlin.” For forty minutes he picked nits in the newly published Machines. This was wrong; that was wrong; I’d misquoted him, misunderstood him. But beyond the nits, he charged, I’d represented myself as neutral, even distant, and that was fraudulent: I’d emerged as a partisan, which I admitted in print.

I’d begun as neutral, I replied, but found myself excited by the audaciousness of such a human project. He objected: I’d thanked three people who’d read the final manuscript, Newell, Simon, and Minsky, which told him whose side I was on. They’d read it for technical content only, I replied, and didn’t add that each of them complained about different things. I’d had to resist writing the book each wanted.

Finally, we came to the nub of it all. Who had told me that he, Weizenbaum, said he so admired a piece of AI work that he’d have given his right arm to do it? Who? I refused to say: it had been said to me in confidence, offered to confirm my guess that Weizenbaum had been unable to do science and had thus turned to moralizing. Since I’d witnessed the evolution of Weizenbaum’s quarrels with AI, I’d written in Machines Who Think that there might be a correlation between his professional detumescence and his rise as the field’s ethical critic. It was plausible and widely believed among his colleagues. I’d disclosed it with regret, I told him, but I believed it. Was it untrue? He didn’t reply.

I might not have mentioned this sad little backstory in my book, except his book, Computer Power and Human Reason (1976), received remarkable attention, especially from people in the First Culture, who were finally stirring uneasily about computers. Look! Here was one of the Second Culture people, arguing that bad things might happen with the infernal machines.

Yet Computer Power and Human Reason seemed to me poorly argued, impressionistic, full of exaggerations and late-age Romanticism, and just plain wrong. It contained long paragraphs, maybe chapters, about the pathetic narrowness of people who imagined they could make computers think.

Who were these spiritually and culturally stunted creatures Weizenbaum was lamenting and lambasting simultaneously? Polyglot Herb Simon, who delighted in music, painting, languages, and literature? Allen Newell, reaching eagerly across, and contributing to, one field after another, but always passionately dedicated to understanding the human mind? Marvin Minsky, fast friends with leading science fiction writers, widely read, thinking more broadly about the brain than most brain specialists, and now composing serious music? John McCarthy, exploring the counter-culture, taking risky political stands, and full of provocative and amusing scenarios he dreamed up, each a parable to illustrate why technology was human salvation, not human menace? Raj Reddy, out to illuminate the most benighted villages of the developing world? Ed Feigenbaum, fearless sailor, avid chorister, and such a lover of literature that, years later, he’d lead long public discussions about the future of the book? Weizenbaum had no right to pass off his fictional stereotypes as authentic portraits.

I was especially offended by his facile arguments that AI could bring about another Holocaust. The most repressive societies then going, I countered, were controlled by ballpoint pens and the gun: China and the Soviet Union. Weizenbaum would certainly prove to be prophetic about AI techniques that corporations and governments use greedily to track us massively, closely, and perhaps unconstitutionally. But this, I’d say now, is a human failure, enabled by, but not the fault of, AI. No science or technology of any significance comes to us unambivalently. Humans must (and we are beginning to) take responsibility in that regard. About the agricultural, industrial, and then the scientific revolutions, nearly everyone agreed that each had its costs, but the benefits outweighed those costs. I believe that about the information revolution and AI, too.

3.

What I didn’t realize was that Weizenbaum’s book was an early example of AI’s Dionysian side, passionate eruptions aimed at stopping the whole enterprise at the same time they soothed and reassured humans in their deep need to be number one. Philosophers, mathematicians, scientists, social critics, literary critics, even public intellectuals would all have a fling. Flawed, neo-Romantic reasoning might repel me personally, but Computer Power and Human Reason certainly found an audience—it won an award from Computer Professionals for Social Responsibility and launched Joe Weizenbaum on a lifelong career of cheerless lectures about the coming apocalypse. The book’s arguments allowed readers, and later Weizenbaum’s listeners, to feel righteous and comforted, without the inconvenience of examining the facts too deeply.

After some further conversation, I said to Weizenbaum that we basically had two different worldviews. I didn’t see why plausible arguments couldn’t be made for either—my view that life was getting slowly better, or his, that it was getting worse. You could take your choice.

Finally, I asked him what he was doing in the small mill town of Berlin, New Hampshire. Given the staggering costs of overseas calls in those days, it never occurred to me that, as he informed me, he was calling from West Berlin, Germany. (We’d talked for forty minutes on what to me was mostly third-order stuff, and he was spending three dollars a minute to do it.) This led to some discussions about the ambivalence of being a former German Jew in the new Germany. I told him that my husband had escaped Germany by the skin of his teeth in 1939, two months after Kristallnacht, and the Holocaust was vivid for us because my husband’s parents had lost every member of their immediate families. I’d been born in a rain of bombs that was indifferent to my religion, so long as I was dead. Or at least terrified.

To me, the conversation ended cordially, the two of us agreeing to disagree on whether the world was improving or degenerating. But Joe Weizenbaum was to take revenge.

4.

I’m not sure Hubert Dreyfus’s book, What Computers Can’t Do (1972),[1] was even the first in this series of feel-superior-dear-human screeds—an example of what, in my own book, I’d called the Wicked Queen syndrome: Mirror, mirror on the wall, who’s the smartest of them all? Dreyfus’s intellectual contributions to the AI debate were finally inconsequential. But he was publicly on the warpath against AI while I was writing Machines Who Think, a path along which he’d brandished his hatchet since 1962, almost fifteen years earlier. So I felt obliged to interview him.

We met at a panel discussion on the Berkeley campus on May 26, 1976, organized by Lotfi Zadeh, who’d overcome my strong resistance to such spectacles by assuring me I’d have fun. To prepare, I marshaled notes I’d made about the many 19th-century physicians and philosophers who’d averred with pomp and certainty that women could never think nor be permitted to try (grievously ruining the lives of so many of them). In my opening remarks, I drew a little parallel between that and a philosopher who, these days, might be tempted to say that machines could never think. It was meant to make the audience laugh, and it did.

It made Dreyfus furious. His face flushed; he bounced on his chair like the marionette of a demented puppeteer. I noted in my journal that he was vicious, denied statements he’d made, and denied others that no one had made (“I never said women couldn’t think!” Who said you had?). I lost count of the number of times he began a sentence with, “That’s not what I said; I said…” If I rose to the provocation, I knew I couldn’t win. These were the tricks of the rhetorician, and as a philosophy professor at Berkeley, he was a master of rhetoric.

Rhetoric is a shadow weapon in science, no matter how convincing it might seem in debate. Results, not rhetoric, are what really count. In this too, I’d moved away from humanists, who’d disagree. But afterwards all the panelists had dinner together cordially, and he agreed to be interviewed for the book I had underway.

That anyone would make an anti-AI stance into a busy cottage industry might have puzzled me—Dreyfus had been at it since 1962, and his defeat in chess by a computer and other primeval AI tales are in Machines Who Think. But to have persisted so long, he must have found the field fascinating. Maybe part of his otherwise unfathomable anger with AI was disappointment that it hadn’t succeeded better. Only that, I thought naïvely, could account for his eagerness to attack so passionately, undeterred by any successes the field might have.

Making notes as I went, I drove myself through Dreyfus’s book, What Computers Can’t Do, somewhat outdated by then because computers were now doing some of the things that they were supposed never to do. I wasn’t sure I understood it all, but I wasn’t sure it was all that clear in his mind, either. Hyphenated phrases, like “being-in-a-situation,” presumably adaptations from the German, always make me reach for my peashooter.

After I interviewed Dreyfus on July 21, 1976, I found him likable, “though I surely wouldn’t want him jumping all over me with both feet,” I noted in my journal, which continued:

A dreadfully nervous man—afterwards we walked across the campus to a film, and as he walked and talked, he clutched at his breast regularly, rhythmically, every twenty seconds or so. He was surprised when I asked him why he was so mad at all these AI types. It had never occurred to him to ask himself! After five years of analysis! He hypothesized that it might be that he attacked in them what he most disliked in himself, an excessive rationality. Can’t say I noticed any excess myself. What I did notice was that I’d come to grips with his objections, that I understood them, raised questions about them, which, to my astonishment, he couldn’t answer: “Yes, that’s weak,” “No, I don’t have an answer for that.”

His responses to my questions reassured me that I was coping okay with an intellectual field distant from my own. Dreyfus was screening a film for a class and I went along. That evening, I wrote in my journal:

Turned out to be Carl Dreyer’s Day of Wrath, which I found so riveting I was nearly late to see the people who want to rent our Berkeley apartment. A stunning study in the power of evil, but where does evil lie? In female sexuality? In men’s weakness in the face of it? It raised many questions.

In Machines Who Think, I treated Dreyfus respectfully but also told the truth, which often made him look foolish. When the 25th anniversary edition of the book was to be published, I emailed him, asking permission again to use the quotes from his book that I’d used in the original edition. He insisted I call him. On the phone he told me I decidedly did not have permission, and furthermore, now that he was retired, he’d been talking it over with his friends, and was seriously considering suing me for defaming his character.

“That book’s twenty-five years old,” I said, starting to laugh. “Nevertheless!” he cried, shimmering with such indignation he couldn’t finish the sentence. “I’ll wait to hear from your lawyer,” I said, and hung up before I was convulsed. The new edition went to press without those quotes.

Dreyfus wasn’t done. As I was promoting the new edition on a morning call-in show in San Francisco, he was first on the phone. Gleefully, he told me and the radio audience that the recent DARPA self-driving car competition had ended in a rout; on the 142-mile course, the best car had gone only 7.4 miles. This was proof that machines could never…etc. I explained patiently on the air that science was often incremental, and that maybe next year the best car would go ten miles, and later fifteen, and so on. In fact, the following year, 2005, several cars completed the course, with Sebastian Thrun’s self-driving car in the lead. Nowadays, nearly all automobile companies have prototypes (and Dubai has even announced plans for flying “drone taxis that skip drivers and roads” using a Chinese-made vehicle [Goldman, 2017]). Legislators worldwide ponder what the rules of the road should be for autonomous vehicles. But my phone didn’t ring with an apology, because Dreyfus could never, ever utter the words, “I was wrong.”

In its Winter 2013 issue, California, the University of California alumni magazine, ran a brief sidebar about Dreyfus’s quixotic fight against AI and quoted a statement he made in 2007: “I figure I won and it’s over—they’ve given up.”

Over all these years, I’ve suspected Dreyfus of many things, but a sense of humor?

Dreyfus died in Berkeley on April 22, 2017.

5.

In the mid-1980s, I met Vartan Gregorian, then the head of the New York Public Library, at an international PEN meeting in New York City. I introduced myself to this distinguished-looking, gray-haired, neatly bearded presence, his deep dark eyes containing all the sorrows of the Armenian diaspora. “I know who you are,” he cried with surprising glee. “I’m giving a party for Bert Dreyfus at the Library next week. I’ve read all the literature. I insist you call my office with your address so we can send you an invitation.”

I was stunned. Why a party? “Because I’m a Romantic, and I like Bert’s idea that machines will never be interchangeable with humans.” Is that what he thought I was about? Humans interchangeable with machines? Me reduced to a facile formula that bore no resemblance to what I’d thought or written? I didn’t even think one human being was interchangeable with another. Mark Harris’s dictum rushed to mind: “A writer should not run just for local office.” I didn’t call Gregorian.

John Searle came to speak at a Columbia University convocation in 1981 and presented the Chinese Room Argument. A computer (or the philosopher himself) is isolated in a room with slips of paper, on which are written Chinese characters. He must translate Chinese into English, matching character to word without having the least “understanding” of Chinese. All he does is match a symbol in one language to a symbol in another. He produces a translation, but if he doesn’t “understand” what he’s doing, then the act doesn’t qualify as intelligence in any sense.

From the rostrum we strolled along College Walk together, and I told him how disappointed I was that a challenge as substantial as the Chinese Room Argument didn’t exist earlier. But he’d only begun thinking about AI the year after Machines Who Think was published.

Philosopher Daniel Dennett at Tufts University and computer scientist Doug Hofstadter at Indiana University made the first plausible attack on the Chinese Room Argument in their book, The Mind’s I (1981), and you can read the history of post and riposte over the past decades in Dennett’s delightful Intuition Pumps and Other Tools for Thinking (2013).

To summarize, the isolated computer, or, for that matter, human philosopher, cannot translate Chinese character for English word, one-to-one, after all. Fundamental to language translation is real-world knowledge, just as it is to most linguistic transactions. However, thanks to the vast amount of data on the Internet, machines can now acquire considerable real-world knowledge, as the program Watson showed when it triumphed over the best human Jeopardy! players in 2011. Watson’s win required not only real-world knowledge, but also the ability to catch puns, jokes, and other subtle linguistic properties.[2] Did Watson really “understand” what it was doing? Or was the machine only an example of “weak”—albeit pretty dazzling—artificial intelligence (which Searle was okay with)?[3] The Chinese Room Argument was constructed on a venerable but misleading philosophical tradition which held that, for intelligent behavior, reasoning was far more important than knowledge.
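If it helps to see just how thin that kind of reasoning-without-knowledge is, here is a minimal, purely illustrative sketch, in Python, of what the man in the room actually does. The three-entry lookup table is invented for the example and stands in for Searle’s slips of paper; it is not meant as anyone’s actual system.

```python
# A toy Chinese Room: pure symbol-for-symbol lookup, with no real-world
# knowledge behind it. The three-entry table is invented for illustration.
SYMBOL_TABLE = {
    "马": "horse",
    "上": "up",
    "好": "good",
}

def room_translate(characters: str) -> str:
    """Match each character to an English word, as the man in the room does."""
    return " ".join(SYMBOL_TABLE.get(ch, "?") for ch in characters)

# "马上" as a phrase means "immediately," but character-by-character
# matching has no way to know that:
print(room_translate("马上"))  # prints "horse up"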

To make matters worse for the Chinese Room Argument, in October 2012, Rick Rashid, then head of Microsoft Research, gave a lecture in China to demonstrate software that transcribed his spoken English words into English text with an error rate of about seven percent. Then the system translated them into Chinese-language text (error rate “not bad,” Rashid would tell me in late 2015) and followed that with a simulation of Rashid’s own voice uttering them in Mandarin. A real Chinese Room: you can see it on YouTube.[4] It wasn’t perfect, but as Rashid said to me later, with all the examples to learn from on the Internet, it’s much better now and improves daily.[5]

What do we mean by understanding (yet again)? Only humans can really understand, Searle has argued, because they exhibit “strong,” not weak, intelligence. Thus, Searle says, “strong” artificial intelligence is a contradiction in terms. Only humans can have strong, or real, intelligence, because only humans understand. Whatever that is. It must be the wonder tissue in our heads, says philosopher Daniel Dennett with a wicked grin. (But then, tens of petaflops of processing on 20 watts of energy, as the human brain exhibits, is pretty wonderful.)

As I write, machines have all but closed down the Chinese Room Argument and similar hypothetical problems in text and are whizzes in facial recognition, better than most humans at reading the emotions of other humans, better than any humans in molecular recognition and generation (for molecular biology) and in image recognition and generation.[6] They’re beginning to read human brain messages and transmute them into physical action, an answer to the question that puzzled early AI researchers: how does intelligent behavior emerge from dumb tissue, or dumb components of any kind? They’re capable of many other useful applications, employing what is known as deep, or multilevel, learning. But they’re still machines, woefully deficient in wonder tissue.

6.

Back in the early 1980s, my husband Joseph Traub was also provoking people. As the founding head of the new computer science department, he’d been invited to address Columbia College alumni and spoke to a packed hall on the topic of whether computer science was a liberal art. He argued yes and stunned—perhaps insulted—the deep core of humanities professors and former and present-day students, who, not surprisingly, thought of computer science only as Coding 101 and computers themselves as nothing but big, dumb machines (as ads from IBM kept reassuring them). You can sympathize with their disbelief. The Ivy League had been late coming to computer science, and Columbia was one of the last of all.

New York City might have been the cultural capital of the free world, but computationally speaking, I’d taken Joe from an advanced civilization that existed in maybe three places on the planet and brought him to a windowless sod hut on a desolate prairie. To transform that sod hut, Joe faced a mighty task. For this, I felt deeply sorry. He never reproached me for taking him away from the bright lights of his own field to the bright lights of mine.

But time passes. In February 2014, Columbia University’s The Record celebrated the university’s Digital Storytelling Lab, which brings together statisticians, English professors, filmmakers, and social scientists “to tell stories in unexpected and, sometimes, never-before-imagined ways” (“Humanities cross,” 2014). That same issue of The Record also profiled Alex Gil, Digital Scholarship Coordinator, Humanities and History Division, a part of the Columbia Libraries. He helps Columbia faculty to use digital technology in humanities scholarship and teaching (Shapiro, 2014).

A second profile in that issue of The Record was of Dennis Tenen, assistant professor of English and Comparative Literature, whose brief is digital humanities and whom you met in Chapter 4 when he addressed a group of Harvard scholars and suggested that intelligence might reside in the system as much as in human heads—and didn’t promptly get booted out of the seminar room. Tenen told The Record he was at work on a book about algorithmic creativity (think the sonnet form) and was devoted to understanding culture through a computational lens and computation as a cultural experience (Glasberg, 2014).[7] As we’ll see later, nearly all major American universities and many in Europe now have equivalent centers and similar scholars. Professional organizations and journals flourish.

7.

Computer scientists themselves didn’t always appreciate how intellectually rich computers would prove to be. Almost thirty-five years would pass after Joe’s talk before we’d read anything like what Leslie Valiant, of Harvard’s computer science department, writes in Probably Approximately Correct:

Contrary to common perception, computer science has always been more about humans than about machines. The many things that computers can do, such as search the Web, correct our spelling, solve mathematical equations, play chess, or translate from one language to another, all emulate capabilities that humans possess and have some interest in exercising. . . . The variety of applications of computation to domains of human interest is a totally unexpected discovery of the last century. There is no trace of anyone a hundred years ago having anticipated it. It is a truly awesome phenomenon (Valiant, 2014).

As these examples show, dissenters fell into several categories. Many scientists in distant fields felt moved—threatened?—enough to show why, by their lights, AI couldn’t be done. In the case of Arno Penzias and others, the empirical evidence that might contradict their beliefs wasn’t even worth examining. Nor did most philosophers respond to empirical evidence: in their hearts they knew it couldn’t be done, so they constructed parables to prove it. In the case of Vartan Gregorian, he simply misunderstood—Pamela McCorduck, at least, did not think machines and humans were interchangeable—and went with fast thinking, his Romantic impulses. Someone like Joe Weizenbaum believed it could be done, but no amount of good AI might do would compensate for its potential evil.

In July 1999, my husband and I went to Oxford for an international meeting on the foundations of mathematics. Knowing nothing about the topic, I planned to be a carefree Oxford tourist, admiring the greens and Gothic spires. But to my surprise, this exceptionally abstruse meeting featured a panel on “Computation, Complexity Theory, and AI.”

I joined Joe in the plenary audience wondering why the panel had no expert in AI. Two exalted mathematicians sat onstage: Richard Brent, an eminent theoretician in complexity, who’d stepped in for Tony Hoare, who’d mixed up the date; and Stephen Smale, a Fields Medalist and specialist in some of the more arcane parts of mathematics. With them was one physicist, Roger Penrose, who had recently published a second book arguing that, for reasons of quantum physics, AI was hopeless.

I’d read the first of Penrose’s books attacking AI—or tried to. The parts about quantum physics seemed right, at least as far as I could judge, but the parts about AI seemed shockingly ignorant. Maybe, I thought, he knows something about the other topics the panel means to address, computation and complexity theory.

From my journal, July 27, 1999:

It’s the usual physicist-twit’s view that he can come in and clean up the problems in any field whatsoever, but alas, knowledge counts, and Roger P. knows zilch about any of this. As Richard Brent says privately later, it’s as if his knowledge of all three topics began and ended with Alan Turing. Richard also suspects Penrose has religious reasons for his antipathy to AI, but this we don’t know. On the whole, Penrose is a slightly more interesting adversary than Bert Dreyfus, but not more convincing. In fact, less. For he keeps proposing experiments that “can’t yet be done but in the future…” or “I’m assured could be done…” or “experiments might be performed…” etc., all this to prove/disprove what he calls “my position,” which turns out to be the most ridiculous sort of phrase-dropping and general obfuscation. Big emphasis on “consciousness” as essential to intelligence, by which I take him to mean self-consciousness. He’s so innocent of intellectual history that he doesn’t realize “consciousness” in the sense he means is a cultural construction, missing in great parts of the human world to this day (so I guess they aren’t “intelligent”) and only making its first appearance in Renaissance Europe. Blech.

So WTF is this all about? I think of standing up and informing the audience that I’ve never been to an AI meeting (and I’ve been to plenty) where panels were convened on “Partial Differential Equations: Fact or Fiction?” or “Why Don’t These People Understand That Reynolds Numbers Don’t Help Navier-Stokes Calculations of Turbulence?” The whole performance is bizarre, another example of AI-envy disguised as sermon, cold shower, neener-neener. The house is packed, of course. My old argument that rhetoric is beside the point in science occurs to me, but then I think of the Lighthill Report, that more or less killed AI funding (hence research) in Britain, and can possibly be held responsible for the dismal state of computing here. The Brits were there first, and now they’re simply not players. What a price people in the UK paid for that piece of rhetoric. Not that I blame Sir James, especially. He was the author of the blunt instrument, but many hands wielded it, and many more refused to rise up and stay that blunt instrument, all complicit.

Joe actually took on Penrose for his misstatements about complexity—computational complexity, since he’s obviously completely ignorant of other forms of the genre—but I thought it wasted breath; this man is not open to contradiction or even learning. So much for “intelligence.”

The upshot of the panel was that AI is, as usual, barking up the wrong tree, premature, blah blah. I could only laugh.

As a former subject of the U.K., and sentimentally attached, it gives me pleasure to report now that the London firm of DeepMind is in the avant garde of AI. Twenty years ago, it wouldn’t have seemed plausible.


  1. It would be more accurately titled What First-Order Logical Rule-Based Systems Without Learning Can’t Do, write Stuart Russell and Peter Norvig in the third edition of their monumental textbook, Artificial Intelligence: A Modern Approach (2010), although they add wryly that such a title might not have had the same impact.
  2. In January 2013, reports circulated that Watson had been hooked into the Urban Dictionary, a crowd-sourced, online, up-to-the-nanosecond dictionary of slang, used by teenage boys and certain elderly connoisseurs of living language like me. One of the Watson team developers thought that Watson should be more informal, conversational, hip. But when Watson answered a query with “bullshit,” team members decided to purge the Urban Dictionary from Watson’s memory. I haven’t checked this story. I can only hope it’s true.
  3. In The Quest for Artificial Intelligence, Nils Nilsson (2010) observes astutely that when Herbert Simon’s children enacted the Logic Theorist’s moves and proved a theorem, the children’s understanding was in doubt and yet the theorem was indeed proved. Philosophers might cry out on behalf of intentionality, but if you’re going to ascribe intentionality to every cell, it gets mighty complicated.
  4. See https://youtu.be/Nu-nlQqFCKg
  5. Rashid told me about this at a symposium to honor the fiftieth anniversary of Carnegie Mellon’s computer science department. He also told the symposium’s audience that, at that lecture, some members of the Chinese audience had wept with joy to hear such a momentous thing. Google’s word error rate dropped from 23% in 2013 to 8% in 2015. Similar improvements were apparent in image recognition and machine translation from one natural language to another. See Dietterich, Thomas G. (2017, Fall). Steps toward robust artificial intelligence. AI Magazine, 38(3), 3–24. doi: https://doi.org/10.1609/aimag.v38i3.2756. A current challenge is to understand spoken words that mix several languages. When President Donald Trump visited China in 2017 and gave a public speech, the translation was handled by software from the Chinese company iFlytek, a demonstration of how quickly China was climbing the AI achievement ladder.
  6. Image recognition and generation is a razor-edged sword: so useful for so many applications but so good at generating fake images that soon you won’t be able to believe your own eyes. The same week “Afterimage,” a long article on this topic by Joshua Rothman, appeared in The New Yorker (November 12, 2018), the White House itself was accused of using a doctored video to justify suspending the press credentials of an aggressive reporter. The doctoring was merely a sped-up clip, needing no AI, but the forensic implications were deeply disturbing.
  7. Dennis Tenen’s 2017 book about algorithmic creativity is Plain Text: The Poetics of Computation (Stanford University Press). Forgive a certain unseemly triumphalism here. For thirty years, one dean in that early Columbia audience of Joe’s, considering me the friend of his enemy, mustered the sourest, angriest look he could whenever we encountered each other in Morningside Heights. In 2014, it must have been bitter news to read that the president of Columbia, Lee Bollinger, said in an interview in the Spring 2014 issue of Columbia, the university’s alumni magazine: “Ten years ago, our engineering school was at the periphery of the University, and its faculty members, I’m told, felt unappreciated. Now they are at the center of intellectual life on this campus.” Bollinger hastened to add that so too were the business, journalism, and public health faculties, so I suppose the computer science faculty shouldn’t get a collective big head. “But data science is certainly a dominating force of our time, one that is having a transformative effect on many fields” (“The evolving university,” 2014, p.31).
