Painter Harold Cohen’s work thrust him into the center of one of the 20th century’s most contentious conflicts—it endures yet—the war of authenticity. You’ve heard it before. “Is it really thinking?” For him, the question is also “Is it really art?” In time, there’d be guerrilla actions around creativity, learning, the new role of the artist, and the appropriate role of the computer. Writing Aaron’s Code (1990), a book about Cohen and the ways he used AI to create art, would bring me face to face with the same problems I’d met writing Machines Who Think.
Cohen’s work fits into the traditions of Western art in two major ways: The first is self-portraiture. A long tradition, reaching back at least to the early Renaissance, has honored artists who offer deep and provocative self-portraits. The difference in Cohen’s work is that the self-portrait is dynamic (that is, it changes over time) and it’s a portrait not of the artist’s physiognomy, but of his cognitive processes as he works. The essential work of art, one might argue, is the program called Aaron, not necessarily the images that Aaron produces—though they are the physical evidence that code has captured cognitive processes to a significant degree.
Self-portraits allow us to imagine that we can detect the artist’s emotional state, not his cognitive state. Contemporary psychology gently corrects us: the cognitive and affective cannot reliably be separated. In any case, surely Aaron’s actual code is the result of a consuming passion: from its first lines of code, Cohen spent more time with Aaron than with any human being, and that accounting held for the rest of his life.
A self-portrait that captures the artist’s cognitive processes to a significant degree and in a dynamic fashion is surely a new thing under the artistic sun, which allows Cohen another major place in Western art, namely as the begetter of profound, even revolutionary, innovation.
Philosopher Alva Noë (2015) argues that our lives are structured by organization. Art is a practice for bringing our organization into view; in doing this, art reorganizes us. If so, Cohen’s work fits the grand artistic tradition this way, too.
What Cohen accomplished seems very difficult for most of the art world to grasp. Since the publication of Aaron’s Code in 1990, digitally manipulated images have become more familiar and have been admitted in some degree to the canon. Art produced by machine learning has also created a modest stir. In October 2018, a machine learning–generated image printed on canvas, called Edmond de Belamy from La Famille de Belamy and created by a Parisian group, sold for $432,500 at Christie’s (Cohn, 2018). But the depth of Cohen’s achievement is still unfathomable to most curators and collectors.
In the 1960s, Cohen’s reputation as a painter in his native London was soaring. By 1966, he was one of five artists who represented Great Britain in the 33rd Venice Biennale, and his work could be seen in important galleries in England and the Continent. Although he played a central role in the London art world of that era, 1968 found him restless, ready for some kind of major change. That fall, he arrived in San Diego, California, with three young children (his first marriage had ended, and he retained custody). He settled down to paint and teach in the newly established visual arts department at the barely decade-old University of California, San Diego, beautifully situated on the coast just north of San Diego in La Jolla.
Cohen was a stocky man of medium height, with a rich rabbinical black beard and graying hair pulled back in a ponytail. Behind his glasses, his dark intelligent eyes seemed portals to an unusually complicated soul. Without hesitation, he could speak on nearly any topic, his language impressively Mayfair (unless he lost his temper, when it slipped into the East End, where he’d grown up). He was also sharp-tongued and dismissive of many of his fellow artists, although he once said to me: “I value less and less in art these days, but what I do value, I value deeply.” He meant Cézanne; he meant Duchamp.
Jef Raskin, later to have a hand in designing the first Apple Macintosh, was a colleague on the visual arts faculty at San Diego. Early in Cohen’s stay, Raskin said almost truculently: “I can teach even you how to program.” Cohen took it on, thinking it might be as interesting as doing crossword puzzles, one way he passed the time as he mulled a painting.
Cohen had first seen computers in action in a 1968 London show called Cybernetic Serendipity. It was the heyday of “computer art,” when anything that could be digitized, processed, and printed with a plotter ended up on gallery walls. Either computers were very stupid, or people were doing very stupid things with them, he thought.
But by learning to program, he slowly (and in his recollection, independently) arrived at the same insight that AI researchers had from the beginning: the computer is a general-purpose manipulator of symbols, and thus can be viewed as functionally equivalent to the brain.
Cohen conjectured that AI might be a means to test some of his theories about making art. With a program, he could model a theory, watch the output, and then revise the program (or the theory) until the output was right. What did right mean? He believed it to mean the evocation, not the communication, of meaning between the image and viewer. Art was a meaning generator, not a meaning communicator.
With the program called Aaron (his own Hebrew name), Cohen was beginning to externalize knowledge that, until then, he’d held internally, often unconsciously. Aaron knew and followed some general rules about making art on a two-dimensional surface. For example, the program knew how to represent occlusion (one object hidden behind another), and how smaller objects at the top of the picture plane appear to the human eye to recede behind objects in the foreground. Aaron decided where to begin a drawing, which shapes to include and how many, and when the drawing was finished. Once a drawing was begun, human intervention was forbidden. Owing to chance elements in the process, each drawing was different from every other; each drawing was an original.
Aaron was autonomous, not in the trivial sense that it could control the movements of a pen, but in the sense that it could invent those movements. It generated images instead of merely transforming them. For Cohen, then, the computer was another artist’s tool, but of a different order from ordinary tools.
Human artmaking is a fluent set of decisions based on the artist’s awareness of the work in progress. A program to model that behavior needed a similar awareness. But in those days computers had no eyes to gaze at a work underway. Cohen wrestled with that problem in various ways, not as a psychologist proposing a model of human perceptual mechanisms, but as an artist, trying to fashion a model of art-making that would prove its plausibility by—what else?—making art. As Alex Estorick (2017) puts it, “Aaron had to learn to see in the dark.” If it had no eyes to see, Cohen would give it the functional equivalent of eyes, an imagination so powerful it could envision a drawing, constantly referring to the drawing’s totality in order to make the next mark on it.
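The loop described here—consult the totality of the drawing so far, make the next mark, repeat, stop when done—can be sketched in miniature. What follows is my own hypothetical toy, not Cohen’s code: the circle shapes, the depth rule, and all the numbers are invented for illustration.

```python
import random

# A toy illustration, emphatically not Cohen's Aaron: a rule-following
# "artist" that places circles on a 2-D picture plane. Like Aaron, it
# consults the totality of the drawing so far before each new mark,
# uses chance so that no two drawings repeat, and decides for itself
# when a drawing is finished.

def place_shape(drawing, rng, width=100, height=100):
    """Propose one new circle, checking it against every shape placed so far."""
    for _ in range(50):                      # give up after 50 rejected proposals
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        r = rng.uniform(3, 12)
        # rule: shapes higher on the picture plane read as farther away,
        # so shrink them (a crude stand-in for Aaron's depth heuristics)
        r *= 1 - 0.5 * (y / height)
        # rule: avoid overlapping any shape already on the canvas
        if all((x - sx) ** 2 + (y - sy) ** 2 > (r + sr) ** 2
               for sx, sy, sr in drawing):
            return (x, y, r)
    return None                              # canvas too crowded: stop

def make_drawing(seed=None):
    """Produce one original: a list of (x, y, radius) circles."""
    rng = random.Random(seed)
    drawing = []
    target = rng.randint(5, 15)              # the program chooses how many shapes
    while len(drawing) < target:
        shape = place_shape(drawing, rng)
        if shape is None:
            break                            # the program declares itself finished
        drawing.append(shape)
    return drawing

d1, d2 = make_drawing(seed=1), make_drawing(seed=2)
```

Run it twice with different seeds and the two “drawings” differ, which is the point: the rules are fixed, but the path through them never repeats.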
What emerged was an arrangement of nested Russian dolls, Chinese boxes: a hierarchy of levels of conception. At the highest level was the human artist, Harold Cohen, who’d conceived the whole scheme, benignly hovering over the next conceptual level, his computer program, Aaron. Aaron was an entity with some general knowledge about artmaking and the capacity to make artifacts based on that knowledge. Finally at the bottom of the hierarchy (although, paradoxically, always the most visible feature) were the drawings themselves, each unique, unseen before, and not to be repeated. Cohen had vaulted to the plane of meta-artist, having created a work of art—the program Aaron—that itself made art. This was conceptual art of an unprecedented degree: for sheer nerve, Cohen was the equal of his spiritual forebear, Daedalus. Over the years, Aaron would grow to some 14,000 lines of code and be recast in different programming languages.
In my early AI days, Cohen and I often ran into each other at AI conferences, the only nontechies there, though Cohen’s technical knowledge far exceeded mine, and he picked the brains of the AI people cheerfully to help him write his art-making program. By the early 1980s, Aaron was already making abstract drawings of recognized aesthetic value. The artist was unquestionably Aaron—it had learned how to draw from Harold Cohen and drew all the time.
With all its art-making knowledge, Aaron was a kind of expert system, but also what Cohen called “an expert’s system,” the instantiation of everything Cohen knew about art and knew how to tell the computer. The program was becoming a singular expression of the artistic processes of a particular artist’s mind, laid out in executable computer code. Aaron was contingent. It followed general rules, but even knowing those rules, an observer couldn’t predict what the program would do: it moved through such a rich decision tree in the course of making a drawing that, again, no two were ever alike.
In 1983, Harold Cohen was invited to mount a show at the Brooklyn Museum, where Aaron’s drawings were exhibited, and viewers could watch the program make drawings in real time. Aaron’s work was abstract then, with primitives like angles, combs, closed forms, and so on. Part of the excitement about Aaron was that it was a computer program, something just coming to public attention with the popularization of personal computers. This one was making drawings! Most viewers hardly grasped the intellectual claims Aaron could make—or would’ve believed them.
Joe and I went to see that Brooklyn show, thronged with curious viewers, and bought a couple of hand-colored drawings. At the time, Aaron could not color, and Cohen doubted it ever could. (Thirty or so years later, he solved that problem sumptuously.) We invited Cohen home for supper. He was inspiringly articulate about what he was up to, and it was a pleasure to see a New York Times art critic, Grace Glueck, take Aaron seriously and write a sensitive review of the Brooklyn show.
Three years after the Brooklyn show, when I was writing a book about expert systems with Ed Feigenbaum and Penny Nii, Cohen suggested I should next write a book about him and his work.
September 30, 1986:
Harold here for dinner tonight, and I surprised myself a little by saying yes to doing a book about him. But his ideas are fascinating to me, and I don’t think the effort will be great, considering the payoff: my high road to learning all about art.
Unfortunately, by the time I began research for the Cohen book, both the artist and I were in trying circumstances. Cohen’s second marriage had broken up, distressing him deeply. Joe and I had moved from New York City to Princeton, New Jersey, where Joe joined the computer science faculty at the university, but his main job was to run one of the National Science Foundation–sponsored supercomputer centers.
I conceded in my journal that I’d had six grand years in New York, and it was Joe’s turn to do what he wanted. But Princeton was difficult—my life, social and professional, was in New York: I was perpetually taking the hour and a half train ride to the city.
February 17, 1987:
The Cohen project fills my mind. I think Harold has brought me back to my own art. In a sense, I’m using him to learn from. He has truly, importantly—and in a less important but literal sense—taken art where it has never before been.
February 23, 1987:
Ed tells me Ray Kurzweil has made a film with a segment about Harold, gorgeous to look at, but neglecting to mention that the colors were supplied by the gifted hand of Harold Cohen. Ed said this publicly after the film was shown. Kurzweil’s deputy went into earnest conversation with Harold, which Harold later told Ed amounted to: how can we get Feigenbaum to shut up?
February 28, 1987:
Re-reading Telling Lives [an anthology of work by biographers on the art of biography] I have a sudden insight as to why I couldn’t do the Simon biography. Right at the beginning Herb laid down a rule: nothing personal. This was, do not mention my family. I agreed, thinking it could be a book of ideas. But suddenly, ten years later, as I face the problem again, I realize Herb cut away from me what I not only knew how to do best, but also a vital part of the life. That limitation made the task impossible in any real sense. Odd that I never recognized this until now, and publicly and privately blamed myself alone.
Nervously I gave a presentation on Cohen’s work at John Brockman’s New York City Reality Club on March 5, 1987. Afterwards, I wrote in my journal:
As it turned out, the Reality Club presentation was fun, though my own agenda was pushed aside in the uproar over IS COHEN DOING ART? To my astonishment, Joe and Freeman showed up. (John Brockman on the phone this morning: “Who else but McCorduck would have her own private claque consisting of Joe Traub and Freeman Dyson?”) We’d met Freeman on a walk in Princeton a week earlier, and discussed the Reality Club, me saying later to Joe, I hope he doesn’t come and I don’t want you there either. Red rag: Joe cannot resist. Well, they did rough me up, but I gave as good as I got, and found myself enjoying it to a high degree. When John called this morning, it was to say they’d voted me best presentation of the year—an exaggeration, no doubt, but sweet to hear. Dumbfounded to see Benoît Mandelbrot there, but he behaved himself nicely; Hugh Downs next to me, scribbling notes furiously, though probably not for his TV show. Don Straus told me he’d forsaken I. I. Rabi for my talk—uh-oh, I thought.
November 8, 1987:
Heard Larry Smarr give a marvelous talk at the University of Illinois. He’s hired several artists, among them Donna Cox, to turn the rush of info from the Illinois supercomputer into visually accessible forms. Just wonderful, though curmudgeon Harold isn’t impressed. Smarr’s group is exciting, and whether their work is art, heaven knows it’s important science. Harold argues art isn’t in the service of science, but the artists feel, I think, they’re getting a fair return by having access to the supercomputer. The images all that number-crunching produces are theirs to carry forward, and they do. Meanwhile, they’re permitting scientists to see things never before seen—the collision of supernovas, for example. Great stuff.
Joe and I went to London for Christmas that year and were able to see some of Cohen’s work at the Tate. A 1963 painting he’d just sold to them, Before the Event, seemed to be doing then what, more than twenty years later, was suddenly so fashionable in New York art circles: quoting ideas and icons from science and transmuting them; in this case, replication, signaled by the central image, which was, to my eyes, the primal copulation, surrounded by DNA chains and what looked to Joe like state space diagrams. Ribbony images foreshadowed Aaron (unsurprisingly), and the bold glorious colors were unmistakably Cohen.
Writing about Cohen’s work wouldn’t be easy. I had to educate myself well beyond my college art survey course and the naïve pleasure I took in museums and galleries. Work enough. But I also had to learn exactly what Cohen was doing. At the time, Aaron had turned from abstract to representational art, something the human artist never did. Each picture contained people, shrubs, trees, flowers, and rocks, although how many of each, what kind of each, and where they were placed, Aaron decided as it went along.
Cohen himself seemed moody and often unreachable, in great despair over the breakup of his marriage, over his advancing age, over his lack of recognition for this breakthrough effort, over any number of things. Thanks to the Princeton move, I was hardly my serene self. On March 30, 1988, I wrote in my journal:
The worst moment is when John Brockman yells at me for even considering doing the Cohen book. His reasoning: publishers want books that “jump off the shelves,” Cohen is unknown in NYC and the art world, so only nerds would be interested, and nerds don’t buy art books. It’ll be poison for my future, since I’ll go from being an author who makes money for publishers to an author who doesn’t…yelling all my worst fears, full volume in my ear. I hold my ground, countering that Cohen is ahead of his time, a place I’ve also been; that this is to bring attention to Cohen in the art world (if that matters so much); that my life isn’t dedicated to making money for publishers. Most of all, I need desperately to grapple with ideas again. John is no philistine and has pushed more than his share of cutting-edge ideas in the face of establishment skepticism, even scorn. He admitted later he was only doing his job as an agent. Push it as far as it can go, make it big and important, and it’ll work, he said finally. Which answers the question of whether I focus narrowly or widely. But I was really down. The idea of doing another Machines Who Think—trying to convince editors that the topic is important—shrivels me.
Cohen would swing through the New York area from time to time, and on a ramble through the Institute for Advanced Study woods in Princeton, we agreed that the book should embrace the history of ideas, as wide-ranging as possible. I didn’t tell him what Brockman had said.
In mid-May 1988, Joe and I were having dinner with artist Lillian Schwartz and her physician husband, Jack, and got to musing about why computer art was so relatively stagnant. Schwartz agreed. “It’s the software packages,” she said finally. “They give easy access to artists, but not mastery of the medium. So most think they should go on doing what they’re already doing, only faster and easier, and they’re surprised it isn’t altogether like that. Moreover, they don’t imagine doing new things, locked in as they are to doing the old things ‘faster and easier.’”
A year or so later, when I saw Lillian Schwartz in Utrecht, Netherlands, at an electronic arts conference, she added this insight: the blank canvas presents a fierce challenge to overcome, whereas the computer always has an easy way of beginning: a menu, a mouse, a program that begins and prompts your participation. So not only is the initial challenge lessened, but the continuing process is eased.
Harold Cohen spoke at that same conference, saying user-friendliness is an alienation from the tool. He charged that, by using packaged programs instead of writing their own, artists were evading the use of their own tools. Later, Harold added privately that “us old guys” already knew how to make art before the computer came along, but for youngsters who were just feeling their way, the machine overwhelmed them before they had a chance to find out what art is.
Maybe. Word processing offered some of this same ease to writers, but I didn’t notice that the essential part of writing therefore became easier. Of course I was one of the “old guys”: I’d learned to write with pen and paper, a typewriter, carbon paper, erasers; only in midcareer did I switch to a computer.
In June 1988, Joe and I went back to London, where we met Timothy Cohen, one of Harold’s sons, an artisanal jewelry-maker. I wrote in my journal:
He arrives all dark, handsomely Byronic, with what turns out to be an incisive mind, willing to talk about his father’s work in loving and perceptive detail: the fallowness of the early California years, the necessity of relating the earlier paintings of the Sixties to the work now. Thinks a color machine will be a disaster for Harold financially, in the sense that it will mass-produce the last hand-done thing, an event the art world wouldn’t countenance—the rich will do everything to protect their investments. I agreed, but if you’re on the correct side of history, then all that is a rearguard action.
Timothy Cohen was talking about art as positional goods, a term economists use for objects that are valuable not because they’re one-of-a-kind or inimitable, but largely because other people can’t have them. The art world had been about positional goods for a long time. Aaron, in its sly way, exposed this yet again.
We talked about technology changing the way art is done—oils permitted painting on canvas, which, hand in hand with other historical forces, brought about humanism. The question is what computing will bring about with art. I said I honestly didn’t know.
Joe and I went on to Paris, where we brought the topic up with friends over long Parisian dinners.
What pushes an artist out of doing the usual very well, and into doing the new, the difficult, sometimes revolutionary? Yes, our culture is a bit odd in valuing the new the way we do—you could scarcely imagine conducting a puberty ceremony in a non-Western culture with a whole new take on the masks, say; and the Chinese valued sticking to the old forms. Economic issues: Could Aaron be pirated? Timothy worried about the glut of Aaron drawings: people wanted a signed drawing, not just a drawing. But that could easily be faked—Harold’s changeable signature, or a specific one for Aaron. Then the collectibles might be “early Aaron,” “middle Aaron,” etc. And suppose Harold could endow Aaron with more intelligence than it has now, and it began to develop autonomously, even posthumously? Would each version of Aaron develop differently, given a few statistical differences in the actual employment of the program?
A posthumous Aaron would have its own problems. We don’t desire an eternal late-Verdi-opera composing machine, or a few more Otello-like operas. If we want operas at all, we want those that seem to connect with issues and styles that are now. So art is a conversation among the so-called human verities (themselves ever subject to change), the Zeitgeist, and the expression of an individual artist—all three are necessary. Finally, so much is chance. If you’re lucky, like Bach or Donne, some Mendelssohn or T. S. Eliot exhumes you and champions your work. Or, you stay more or less continuously valuable, as Beethoven and Rembrandt have. Or, you enjoy a flurry of posthumous fame, and then disappear. All very capricious.
July 25, 1988:
Saturday night to dinner at Cathleen and Peter Schwartz’s, where his business partner, Jay Ogilvie, brings Doris Saatchi. We muse on why for the most part computer art hasn’t moved on since the ’60s. Doris, deep in the art world, has several conjectures: that no theory has developed…that much of the market [art buyers] is fundamentally nouveau, uncertain of its tastes, and like the 19th-century Pittsburgh nabobs who built replicas of known architectural masterpieces, the new buyers want the conventional paint-on-canvas, preferably certified by this “new mid-life-crisis career of the wealthy, especially women, called art consultants. Art by the yard.” Also the problems of the poor materials contemporary artists use. She uncrated an Anselm Kiefer and the pile of sand at the bottom of the crate was so large the cat headed straight for it. Dishes keep falling off her Julian Schnabels. What do you do? I asked. Glue them back on, she said.
Aaron raised questions about originality, authenticity, intelligence, the meaning of art, its evaluation, but I began to think of it as also within another of the great traditions of Western art, the representation of knowledge—in this case, the representation of what Harold Cohen knew about artmaking. But Aaron went well beyond that.
Along with the stimulating questions, difficulties arose. What Cohen was telling me in our long interviews (“the tale Harold has created for himself” I called it) was orderly and rational, fair and high-minded, but it also suggested the well rehearsed (no sin, necessarily) and eventually raised more questions than it answered. Over dinner one night I questioned that smoothness. He agreed; felt he was gliding over the same material. He was extremely self-protective, I said, even evasive. “You want to fade out of the book entirely, but that will turn it into a PhD dissertation.” Becky Cohen, his estranged wife, had used a simile: ideas are like parasites, they need a host.
After a few days of testiness between us, the artist said he was ready to try harder. It was, he agreed, one part Brit stiff upper lip, one part not answering the implicit question, only the explicit. “You must ask: wasn’t the isolation awful? And I’ll say yes, I hadn’t remembered, but it was. I’d cut myself off from everything, and at one point thought I’d gambled my entire career and lost. There were years when nothing seemed to be happening: UCSD thought it had hired a big-time painter, when all they got was somebody who’d disappeared into computing.” Becky Cohen had compared it to Jacob wrestling with the angel in the desert, and typical of Cohen: very private, nobody really knew. Except it went on for twelve years.
And what was I trying to do here? Harold was offended by drafts I sent him and couldn’t understand why I’d detected not only paternalism in his relationship to Aaron, but a firm streak of misogyny, which I thought figured into the art. (When the book was finished, Becky Cohen wrote me: “Yes, yes, how did you know? He repelled two wives and a daughter with it!”)
On August 19, 1989, I wrote a letter to Cohen, recorded in my journal:
I aim at grasping the life and the art as a series of intertwined, mutually nourishing patterns. My job is to find those patterns, particularly when they wouldn’t be apparent, and illuminate them, pointing out how the life informs the art, the art informs the life. The task doesn’t involve censure, it doesn’t involve much praise (though this I lapse into from time to time; can’t suppress it). It involves delineation and explication. Period.
First Cohen’s estranged wife Becky Cohen, and then Harold himself had asked me why I wasn’t consulting other experts. I replied in the letter:
The only mechanism I’ve had confidence in is my own observations, coupled with my own interpretations. I assembled the data. I tried very hard to understand it from your point of view; I studied the discrepancies I saw between your point of view and mine. I’ve stepped back again and again to understand it all against the larger culture of which we’re both a part. I have confidence in such a way of working because that’s how I wrote MWT. A casual reader might think I used all those interviews in MWT to check and counter-check. In fact, nothing of the sort. Everybody had his own version of the story, some more intelligent than others, but none of them was particularly satisfactory alone. So I did it myself. In other words, the aggregate of interviews for that book played the same role as my many interviews with one person here for this book. In the end I have to trust my own intelligence.
And then I added by hand: “And be prepared to fail.”
The letter went on:
Meanwhile as I very self-consciously understand it, I am busy fashioning a linguistic construct of your art and life myself. If the maker’s hand is apparent, I am doing it as honestly and dispassionately as I know how. The dispassion doesn’t entirely preclude partiality; I couldn’t imagine spending two or more years of my life on a subject I didn’t really admire: I admire it/you, you know that. I say it once again in case it got by you. It’s a different personality that spends its life on a topic it ultimately wants to trash (though such biographers exist—curious). That the book isn’t unalloyed valentine—well, my gift is to love profoundly, not blindly.
I was glad to put it all into words at last.
Joe had decided Princeton was unwise for him after all, and Columbia welcomed him back. We began the process of gutting and remodeling a dilapidated apartment half a block away from where we’d first lived on Riverside Drive. For some months, Joe lived in and worked out of a hotel room near Columbia, while I stayed on in the Princeton house. I was deeply grateful to have in Princeton my oldest and dearest friend, Judith Gorog. I spent many happy dinners surrounded by her children, and then, once they were tucked in, further into the night with Judith and her Hungarian husband, István. They both loved good talk. They eased what would otherwise have been months of deep loneliness.
When the New York apartment was finished in late 1988, a grand wall beckoned for a Cohen painting, which we bought. Cohen stopped on his way to Europe to uncrate and stretch it, plus another for my study. The colors were astonishing, even for Harold Cohen.
Two Men on Edge stretched across the wall and dominated the room. One of my neighbors, herself a painter, came up for tea soon after we moved in. A likable woman, she lived quietly and poured all her considerable passions into her paintings. Before this massive picture, she murmured that she felt disquieted by it. Had she formed an opinion ahead of time? After all, it was “by a machine.” She finally offered that it was “not quite felt,” one of those weasely phrases that say nothing. Too intellectual? Too perfect? Nothing else to say, so I’ll fall back on “not quite felt”? I remembered the woman I’d heard at the Art Institute of Chicago, telling us how she felt about physics.
Over the years, others would gaze at the painting admiringly, until we told them a computer made it. You could watch them reconsidering on the spot. It wasn’t quite done by computer. It was Harold Cohen, the meta-artist, who had done it indirectly. Aaron the program was responsible for the actual image. At that point, Aaron couldn’t do color, and so the image had been colored in oils by Cohen’s gifted hand.
Writing Aaron’s Code was difficult; selling it was harder. I pitched editors one by one. They loved the questions the book raised; they quailed at the expense of an art book by someone unknown to them. Although the issues seemed enormous—What is art? What is thinking? What if a machine really makes art?—they resisted. Machines Who Think all over again. A point made later by Arthur I. Miller in his 2014 book, Colliding Worlds, never entered my mind: that the art establishment in 1988 was as anti-science as the humanities.
And then the publisher of Machines Who Think, W. H. Freeman, made a decent offer, and my heart was lifted.
The manuscript was in press by July 1990, and I wrote in my journal:
The adult in me expects attack from the people who hate what Harold is doing but see me as convenient scapegoat, and don’t mind including me (one can hardly imagine a review saying “A lovely book about a subject unworthy of it”); by people who are open to or even like what Harold is doing, but hate an intruder on their art/crit turf. I can’t win, really, so the pleasure is in the process, and we get on with the next project.
The book came out in September 1990, and as for the many ways the critics might bash me, I needn’t have fretted. The book was barely noticed. Herb Simon sent me a thoughtful, generous, and detailed review of Aaron’s Code that would appear in some distant future in Computers and Philosophy. New Scientist informed me it was going to review the book in April, although I never saw that review. Art & Antiques asked me to do something for them based on the book. Jon Carroll, who for many years wrote an amusing, perceptive column in the San Francisco Chronicle, wrote a kind and appreciative review for the online forum The WELL, which Stewart Brand ported over to a private conference that he knew I was more likely to read. I was deeply grateful.
I withdrew from the experience of writing Aaron’s Code depleted, sad, and above all, deeply worried about my own instincts. Had John Brockman been right? The book’s release certainly felt like a dead loss on every level—personal, professional, emotional, intellectual. I wondered how long it would take for me to feel whole again. Yes, I’d learned about art, but what I’d learned I’d mostly taught myself.
When I saw Cohen at a book signing in the late fall of 1991 at the University of California, San Diego, I pursued something new with him: Was Aaron a complex adaptive system? This was a term—a whole set of terms and concepts—I’d learned in September and October that year during a deeply nourishing stay at the Santa Fe Institute, an independent think tank devoted to the sciences of complexity. The Institute was intimate enough that you’d puzzle over a concept and walk out to find an open door where someone—above all, Stuart Kauffman (the theoretical biologist), but also Chris Langton (the originator of artificial life, A-Life), Brian Arthur (the economist), and many others, certainly including the physicist Murray Gell-Mann—would drop everything and patiently explain your puzzle to you, and keep talking over lunch if you still didn’t get it, or if you just wanted to keep talking.
Much of my problem with the Aaron program, I’d begun to see, was the struggle to create a vocabulary for what Aaron was and did. But at the Institute, those terms and concepts already existed: they were precise, descriptive, and in daily use in the sciences of complexity and nonlinear systems. A complex adaptive system—the phrase I wish I’d known—was a system that began with simple rules, out of which layers of more complex behavior emerged, yet it had no central control or leader. Such systems communicated internally, both between layers and between the elements of layers. Such systems changed their behavior—adapted to improve their chances of success—through learning or evolutionary processes. Aaron, blithely making its drawings, could claim countless kissin’ cousins all over—in economics, physics, biology (the human brain, for one), meteorology, and many other fields.
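The definition can even be sketched in a few lines of Python. What follows is a hypothetical toy, with invented agents and rules, drawn from nothing in Aaron itself: each agent follows a simple local rule, communicates only with its neighbors, and adapts by imitating whichever neighbor is currently doing best. Out of these purely local interactions a global order emerges that no single agent dictated.

```python
import random

# A toy complex adaptive system (illustrative only, not Cohen's code).
# Many simple agents on a ring: no central controller, purely local rules.
# Each agent nudges its state toward its neighbors' average (communication
# between elements) and copies the rule of its fittest neighbor (adaptation).

random.seed(0)
N = 10
TARGET = 0.5  # the "environment": states near this value score higher

states = [random.random() for _ in range(N)]           # each agent's state
steps = [random.uniform(0.05, 0.3) for _ in range(N)]  # each agent's rule

initial_spread = max(states) - min(states)

def fitness(i):
    # an agent knows only its own score and its neighbors' scores
    return -abs(states[i] - TARGET)

for _ in range(200):
    new_states = states[:]
    for i in range(N):
        left, right = (i - 1) % N, (i + 1) % N
        local_avg = (states[left] + states[right]) / 2
        # local rule: move a little toward the neighbors' average
        new_states[i] += steps[i] * (local_avg - states[i])
        # adaptation: imitate the rule of the fittest local agent
        best = max((left, i, right), key=fitness)
        steps[i] = steps[best]
    states = new_states

spread = max(states) - min(states)
# the agents drift toward consensus, though no agent commanded it
```

Run it and the spread between agents shrinks toward consensus; delete the imitation step and the system still self-organizes, but it no longer adapts. Emergence without a leader, in miniature.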
I had lunch with Murray Gell-Mann, the Nobel Laureate in physics, who knew complex adaptive systems down to his toes. During that sabbatical year of Joe’s, we’d gone from a few months at the Santa Fe Institute to three months at Caltech, where Gell-Mann was on the faculty. He listened to me and nodded. Yes, Aaron was exactly a complex adaptive system, at least as it executed each drawing. Its status at the system level was dicier, but Gell-Mann cautioned me, “It’s very much a matter of degree in these things.”
I exhaled. I continued preparing a talk on Aaron as being “in the spirit of” complex adaptive systems. Gell-Mann had told Joe that complex adaptive systems were far more important than the quark, a subatomic particle he’d hypothesized, whose existence was only confirmed much later and for which he’d won the Nobel Prize.
Joe and I left Pasadena after New Year’s 1992 and moved on to Munich, Germany, where Joe was now a recipient of a Distinguished Senior Scientist Award from the Alexander von Humboldt Foundation. That sabbatical year, first in Santa Fe, then in Pasadena, at last in Munich, restored me to myself.
In the late 1990s, Cohen cracked the color problem—Aaron now chose its own colors, and they were dazzling. Aaron put colors side by side that the human meta-artist wouldn’t have dared, yet the results are deeply satisfying. Cohen wrestled instead with issues of intentionality, responding to the demand we humans make of art that it not only exhibit a human touch, but that its meaning can be found in its intentionality. Thus Harold Cohen went to work on a new painting Aaron had made, perhaps changing some of the shapes, more often changing some of the colors and textures. He wrote: “It has not merely re-opened my dialog with the program, it has redefined the relationship upon which that dialog has been based.” He elaborated on that in 2011: “The whole of my history in relation to computing really has had to do with a change from the notion of the computer as an imitation human being to the recognition of the computer as an independent entity that has its own capacities which are fundamentally different from the ones we have” (Estorick, 2017).
Maybe we’ll all find ourselves there one day, when the world is full of intelligent artifacts. We’ll begin our dialogue. And listen carefully to hear the artifacts reveal their intentions and ours. Once more, Harold Cohen was an early arrival at a place where the rest of us will eventually follow.
When I walked into a spacious and serene laboratory at the MIT Media Lab in Fall 2013, nearly twenty-five years after the publication of Aaron’s Code, I saw a stretched canvas leaning against a table, its face hidden. Because art adorns the halls of MIT (and outdoor spaces between them), I assumed the canvas was something waiting to be hung. But after a while, Kim Smith came back from lunch and got to work on another canvas on the wall. A trained artist, she was working in collaboration with Sep Kamvar, himself trained as an artist, but also an MIT computer scientist, who’d coded the art-making program. Artifacts here are a collaboration between program and humans—an artist, in Smith’s case, or museum visitors, in the case of Kamvar’s exhibit at Skissernas Museum in Lund, Sweden, a few years earlier. The program’s instructions are both constraining and flexible, so that the finished piece has a clear structure, yet at the same time expresses the individual aesthetic preferences of the participants who contribute. “Since each step depends on previous steps,” says the museum’s exhibit catalog, “the result is a dynamic, collaborative piece, authored collectively by the artist [the program] and the museum visitors” (Kamvar, 2012-2013).
After a quarter of a century, people were ready to consider the computer as at least a partner in artmaking, if not an artist in its own right. Start-ups sell screen art. Some artists predict that screens will be the dominant medium “like canvas was for centuries,” says Yugo Nakamura, a founder of one of those start-ups (Wortham, 2014). Aaron’s work appears first on a screen, so it would need no adaptation to this new world, to this new kind of viewer, accustomed to screens instead of canvas.
Robbie Barrat, a Stanford researcher, took the machine-learning approach to generating paintings by AI. He fed a few thousand examples of images of landscapes into his machine-learning software until it learned how to create landscape paintings (Muskus, 2018). You might think Barrat’s approach is a kind of high-level copying. However, because the software is exposed to thousands of images, it’s really synthesizing, not copying. Similarly, human artists assiduously expose themselves to thousands of pictures as they’re learning to make art. (Once in the 1980s I wrote an unpublished essay “Why do artists go to art museums but scientists don’t go to science museums?”)
Harold Cohen died quietly at work in his studio on April 27, 2016, aged 87. By then, he’d lived to see digital arts programs spring up at most major universities and art schools. Google had even established an Artists and Machine Intelligence Program, which led to an AI-based (deep learning) artwork by artist Refik Anadol to inaugurate the centennial season of the Los Angeles Philharmonic. Described as “a collage” of artifacts from the Philharmonic’s history, the data on which the AI artwork is based is “millions of photographs, printed programs and audio and video recordings, each one digitized, microcrunched and algorithmically activated to play in abstract form across the building’s dynamic metal surface” (Rose, 2018).
The big questions that Harold Cohen’s Aaron first raised in the 1970s linger, not yet fully answered, if they ever can be.
The experience with Cohen’s work changed me. In the mid-1990s, I played around with something I called “swarm stories,” self-organizing stories, stories that told themselves, never twice in the same way. I tried hypertext stories, but the software was so buggy it crashed my computer again and again. Technicians took six months to discover the cause of the problem. While other writers such as Michael Joyce stuck with it, I stopped, too frustrated. But the ideas behind this software foreshadowed video games as we know them now.
- Noë further argues that “technologies are organized ways of doing things. But this equivalence has a startling upshot, one that no one has noticed before. Technologies carry a deep cognitive load. Technologies enable us to do things we couldn’t do without them—fly, work in a modern office place—but they also enable us to think thoughts and understand ideas that we couldn’t think or understand without them.” In that sense, AI is a technology as well as a science. ↵
- Recall early expert systems, where a knowledge engineer elicited knowledge from the heads of experts and turned it into executable computer code. ↵
- The miniaturization of computer components has dramatically changed the ambience of computer laboratories over the last fifty years. These days the rooms can honestly be described as serene—although the intellectual excitement in them is anything but. ↵