21

1.

In The Fifth Generation, Ed Feigenbaum and I were hard on IBM, then the premier U.S. computing company, for its longstanding and very loud aversion to artificial intelligence. The usual story (I heard it first from Arthur Samuel, an IBMer who’d developed the first checkers program in the 1950s) was that T. J. Watson, Sr. worried that loose talk about intelligent computers would scare away customers.

At a Manhattan cocktail party in April 1983, just after The Fifth Generation’s publication, my husband introduced me to Ralph Gomory, IBM’s senior vice-president for science and research, and stepped back to watch the fun. Gomory and I made some polite small talk, and then, also very politely, Gomory said that IBM did not have a corporate anti-AI bias, although it was true they hadn’t seen much potential in symbolic reasoning programs until a few years earlier. However, they’d done “excellent work” on robotics, speech recognition (which he claimed was enough like understanding to be understanding), and more. IBM viewed the Japanese challenge across a wide spectrum, from devices and packaging to software, including symbolic reasoning programs. In Gomory’s words, it was a life-and-death struggle. He urged me to come up to Yorktown Heights, the site of IBM’s major research laboratory, and see for myself, an invitation I soon accepted.[1]

To Gomory, I replied that I was glad for any new information, and it would go into the paperback version of The Fifth Generation. But my perceptions about IBM’s corporate dogma came from many sources, I said, staying silent about what I’d heard just weeks earlier in Washington, D.C., from a legislative assistant: that IBM had given its corporate approval to big government outlays for new-generation computers on the condition that they weren’t called AI machines. Nor did I bring up (because I forgot) IBM’s full-page ads, as recently as two years earlier, in news magazines and the Times, reassuring an uneasy public that computers were only big dumb machines that could never think.

As I stared down into my wine glass, somewhat chastened, a member of the research staff at IBM’s Watson Labs in Yorktown Heights named Alan Hoffman barged up to us. He ignored me but said to my husband without preface: “What’s all this about expert systems? They haven’t done anything so far as I can see. What have they accomplished?” Joe pointed silently to me. “Oh? You’re in AI? I don’t see any progress in expert systems and their accomplishments are puny.” The problems are very difficult, I murmured. Let IBM persuade him, I thought, he’s theirs. What his truculence showed was the deep ambivalence among IBM scientists, not surprising. Even the consensual Japanese weren’t in total consensus.

A year later, in the spring of 1984, a vice president for systems in IBM’s research division—nameless here out of courtesy—gave a major talk at Columbia, and Joe and I gave a party in his honor. He seemed torn, I noted in my journal. Happy about the party—but peeved at me, which showed itself first in a mean-spirited attack on Harold Cohen’s computer-generated art on our walls. Our guest was a noted art collector, no philistine.

Having unburdened himself, he asked innocently: “You’re not taking this personally?” No, I replied, smiling on the outside, laughing on the inside. “Tell Cohen what I said,” he went on. “Oh, I will,” I lied politely. Time and again he dug at me about the Japanese: “Pamela thinks the Japanese are going to take it all,” he explained to the group around us. “Pamela believes cooperation, Japanese-style, is better than competition. . . .” “To everything its season,” I replied with a smile. Yet he seemed pleased to be honored by this party and thus was a man in minor torment. I was sorry, for I hadn’t intended anything personal in the book, yet I saw how wounding it was to him.

I much preferred Ralph Gomory’s low-key confrontation: it cleared the air, he offered something concrete as a remedy—a visit to the research laboratories in Yorktown Heights—and if I was wrong, there was something I could do about it later, in the paperback version of The Fifth Generation, which is what I did. With the other man, I could only shrug.

A month after the art-attack party, Frank Cary, chairman of the board of IBM, was to be awarded an honorary degree at Columbia’s commencement. Joe, as head of the computer science department, was asked to escort him; I was to escort Mrs. Cary. With the imminent publication of The Fifth Generation in paperback, we thought we’d better send along both Joe’s and my CVs, so Frank Cary wouldn’t be under the impression that Mrs. Cary was to be shepherded by some blameless faculty wife. I imagined Cary aghast, refusing to accept an honorary degree unless someone more obliging (or less insulting to IBM) was rounded up. In fact, he was a model of graciousness, aware but not aggrieved that we’d worked IBM over in our book (an interesting contrast to Gomory, who wanted to take on the issues directly, and to the other vice-president, who was peevish without being straightforward). At the morning reception, Joe and I enjoyed talking to Cary so much that we worried we were monopolizing him.

During commencement, Flora Lewis, the first woman with her own column on the op-ed page of The New York Times, gave a splendid talk. But afterward, I was surprised by the passion it inspired in Cary, who complained, with some legitimacy: “You spend years building an organization piece by piece and one of these people comes along and destroys it carelessly…they have their biases, and that’s understandable, but they have such power…” I suddenly wondered if he meant me, not Lewis. “Ah,” I thought, “though we aren’t always right, some of us think very hard about what we write, sensitive to that power.” But as Lewis had said, we don’t always write what people would have us write about them.

2.

Later in the day, I felt comfortable enough with Cary to speak to him about an issue at the Museum of Modern Art that involved IBM, and my friend, Lillian Schwartz, a celebrated pioneer in computer art and animation. She’d been commissioned to design a poster for the opening of a new wing of MoMA, and it was to be—ta da!—computer art, the museum’s acknowledgment that, by 1984, computers as a medium for art might actually be legitimate. IBM was underwriting the effort and allowing Schwartz to use their advanced graphics systems, especially their large-scale color printers.

The project had been difficult. For months, the curators rejected everything Schwartz did. First, it “didn’t look like computer art”—no jaggies, those stepped borders around images typical of early computer art. She explained that jaggies were being smoothed away by brilliant programming and technology. They complained that the dots were too small to be seen by the naked eye. She explained that the dot-matrix look was also disappearing, thanks also to programming and advanced printing technology.
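
(An aside for the technically curious: the “brilliant programming” that erased the jaggies is, at bottom, anti-aliasing, most simply done by supersampling: compute the image on a much finer grid than will be displayed, then average each block of fine samples down to one pixel, so edges shade gradually instead of stepping. The sketch below, in Python with NumPy, is only an illustration of that general idea; render_circle and its parameters are invented for the example and have nothing to do with Schwartz’s or IBM’s actual software. At high enough sample and printer resolutions, neither the steps nor the dots are visible to the naked eye.)

    import numpy as np

    # Illustrative sketch only: supersampling anti-aliasing, the generic way to
    # "smooth away the jaggies." render_circle is invented for this example; it
    # is not Schwartz's or IBM's software.
    def render_circle(size=64, factor=4):
        """Return a size-by-size grayscale image of a filled circle, anti-aliased
        by sampling a grid `factor` times finer and averaging each block."""
        hi = size * factor                         # supersampled resolution
        ys, xs = np.mgrid[0:hi, 0:hi]              # coordinates of every fine sample
        cx = cy = hi / 2.0
        radius = 0.4 * hi
        inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2  # True inside the circle
        # Averaging each factor-by-factor block turns edge pixels into fractional
        # grays, so the boundary shades smoothly instead of stair-stepping.
        return inside.reshape(size, factor, size, factor).astype(float).mean(axis=(1, 3))

    image = render_circle()
    print(image.shape, float(image.min()), float(image.max()))  # (64, 64) 0.0 1.0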

But as Schwartz digitized and distorted well-known paintings (one of her interim ideas displayed the interior of the museum, a god-like view of the galleries), the curators were horrified that she was deforming sacred art. They eventually picked a New York City scene—she could deform New York all she wanted—but as Schwartz was the first to point out, a straightedge and an airbrush could have achieved the same result. What a missed opportunity this was turning out to be, I thought sadly.

I loved the things Schwartz didn’t even submit. For example, she did “homages to” using the palettes of various artists, changing their designs subtly. A grand piece called Big MoMA was a six-foot image of Gaston Lachaise’s Standing Woman, a bold female sculptural figure that often stands in MoMA’s garden. Wittily wrapped around Big MoMA’s contours were many of the great MoMA holdings: Jasper Johns’s Target at her kneecaps, Andy Warhol’s Marilyn at her crotch, Salvador Dali’s melting eggs at her breasts, Henry Moore’s Family in her womb.[2]

But all this paled beside the biggest problem. Somebody at IBM stood stubbornly in her way. He was Benoît Mandelbrot, the brilliant mathematician and “father of fractals.” Although his reputation was surely secure by now, he was obsessed with the idea that anyone who came near IBM’s high-end graphics systems could only be there to steal his fractals without giving him credit.

“Isn’t it enough that they adore him at Lucasfilm?” I asked. Schwartz shook her head. Mandelbrot was furious at Loren Carpenter, one of the great deployers of fractals at Lucasfilm, for what the scientist considered theft without credit. I protested that this was nonsense: I’d heard Carpenter rapturous in his praise of Mandelbrot.

Moreover, Mandelbrot was soon leaving IBM for Harvard, and Schwartz suspected that, to persuade IBM to let him take various machines with him, especially an advanced high-resolution printer, he was keeping the graphics systems from being used very much, thereby proving that nobody was using them and thus no one would miss them.

True or not, Schwartz had much material stored on IBM’s advanced graphics machine that she could neither print nor otherwise access. Her collaborator at IBM, a protégé of Mandelbrot’s, got frantic every time she tried using the machine’s editor. She wondered if he’d picked up Mandelbrot’s paranoia that she was trying to steal from Mandelbrot.

Meetings at IBM with the curators (who were having their independent misgivings about this project) always began with Mandelbrot and his assistant already on the machine, tying it up with fractals. Determined to secure the art establishment’s blessing for fractals, Mandelbrot, in his hybrid Polish-French-British accent, would launch into an explanation that was well beyond the technical grasp of anyone else in the room. It was irrelevant to the goal of these meetings, which was to view the progress of Schwartz’s commissioned work. By the time Mandelbrot was finished, the curators were exhausted and confused, and Schwartz now had to deal with them in their muzziness. This happened again and again, she told me. No matter how early she got to IBM on the day the curators came to visit, Mandelbrot and his assistant already had fractals running on the screens. She didn’t know what to do. She reminded me of myself: having this commission was so important to her career that she was ready to be very, very accommodating. As a consequence, she was being run over by both sides. When she’d call me to vent, we always ended up asking each other: would this happen to a man?

On this Columbia University commencement afternoon, I gave Cary the briefest possible précis of the situation. At his Parnassian level, he hadn’t known that IBM was underwriting the poster commission, but, as luck would have it, he was not only IBM’s chairman of the board but also on MoMA’s board of trustees. He nodded, took names and numbers, and sliced through the problem in two days. A grateful Lillian called to tell me so.

3.

In short, Ralph Gomory was right. Despite years of advertising to the contrary, IBM took AI very seriously. The firm’s later successes were delightfully public and decisive—Deep Blue’s 1997 defeat of the human world chess champion, Garry Kasparov, and Watson’s 2011 triumph on Jeopardy! In early 2014, IBM announced that it was investing a billion dollars in machine intelligence, and that October the IBM Watson Group moved to the East Village in New York City. Watson was already at work on real-world problems in medicine, scientific research, management and sales guidance for large and small businesses, even on teaching devices disguised as toys.

For example, in partnerships with medical centers, including the Mayo Clinic, the Cleveland Clinic, Sloan-Kettering, Baylor College of Medicine, and Columbia University Medical Center, Watson scrutinized patient data to guide better cancer outcomes, whether by teasing out genomic implications or by matching patients faster to appropriate clinical trials. Watson has also been identifying proteins associated with cancer, a search that in the old days yielded about one protein a year; Watson was finding them at a rate of seven a year. These proteins then suggest new targets for chemotherapy. “Watson has truly become a colleague to clinicians in making treatment decisions,” said Lauri Saft, director of IBM Watson Ecosystem (Morais, 2015).

With what IBM was calling cognitive computing, Watson was learning to think the way humans think. Rob High, chief technology officer of IBM Watson Solutions, wrote:

These machines won’t be our adversaries. Instead, they’ll augment our knowledge and creativity with skills that they’re really good at, including computation, memory, speed-reading, and the ability to find insightful patterns in huge quantities of data. Computers will be our ever-present intelligent assistants. Thanks to cloud computing, a wide variety of software programs called cognitive advisors will be at our beck and call whenever and wherever we need them…. Cognitive machines will democratize expertise. (2013)

Distribute it, anyway.

At the Tribeca Film Festival in New York City in 2015, Watson illustrated this kind of help. IBM’s Lauri Saft told the audience:

Film and artists and creative people and narratives—that is the essence of what Watson handles best. Words and language and sentiment and ideas, right? That’s what Watson does for a living. . . . It’s man with machine, not man versus machine. There are things that we do really well as humans, but there are also things that systems do really well. (Morais, 2015)

Watson could be a colleague to help screenwriters with ideas, with plots, with combinations of traits for characters. Saft told Betsy Morais of The New Yorker that “He’s constantly saying, ‘What about this? We could do that,’ perpetually feeding you with ideas.” Watson will not unseat Steven Spielberg: “You need that combination: people and machines are more powerful together.” But to Saft, Watson was already “he.”

In Chapter 14, I described IBM’s Project Debater, meant to be another form of personal assistant. Meanwhile, along with being a lab research partner, clinical colleague, financial advisor, and collaborator in the arts, Watson published “his” first cookbook, Cognitive Cooking with Chef Watson. If you can tolerate twenty-five-step recipes, these concoctions will wow your dinner guests.

Analysts complain that Watson is losing money and isn’t that good anyway. Scientific tides ebb and flow. But I see Watson as an amazing about-face from the days when old T. J. Watson worried that even the mention of intelligent computers would scare away the customers. Now, expert systems are for every one of us. We’ve come a long way since Grandpa Dendral.


  1. At the same party, I learned (though not from Ralph Gomory himself) that Gomory had given a talk ten years earlier, declaring that AI was the wave of the future. So he meant it. And the program(s) called Watson show his prescience.
  2. Big MoMA had a run of six. One is in the possession of the artist, two are in the possession of two of MoMA’s curators, and one was mine, which I recently gave to Carnegie Mellon University. Where are the other two? The artist, casual about recordkeeping, doesn’t know.
