2

1.

In my lifetime, the foundations have been laid for a capacious and elegant structure whose completion will take many generations. Like a medieval cathedral, or better, the great Hagia Sophia, it will be a new temple of holy wisdom. The structure will subsume the many instances of intelligence wherever it’s found—in brains, minds, or machines; in cells, trees, or ecosystems—under general principles, perhaps even laws. Already this structure has begun to shelter and illuminate new definitions of intelligence as it’s slowly and meticulously formed from observation, experimentation, modeling, and example. It may even finally produce an authentic metric for intelligence, which, more than a century after intelligence testing began, still eludes us.

This ambitious new effort aims to discover the laws of intelligence the way Newton discovered the laws of motion. Before Newton, no one quite saw the commonalities among a stroll in the park, the turbulence of a river, the winds, the tides, the circulation of blood, the rolling of a carriage wheel, the trajectory of a cannonball, or the paths of the planets. Then Newton found the underlying generalities that, at a fundamental level, explained and connected them (and so much more). Varieties of intelligence may be even more abundant than varieties of motion, but the fundamental laws that underlie them, should they be found, will be simple and will elegantly subsume that infinite variety.

Computer scientists have begun to call this edifice computational rationality, a converging paradigm for every kind of intelligence (Gershman et al., 2015). The structure is inspired by the general agreement that intelligence arises not from the medium that embodies it—whether biological or electronic—but from the way interactions among elements in the system are arranged. Intelligence begins when a system identifies a goal (I want to go to the movies; I need to learn analytic geometry), learns (from a teacher, a training set, its own experience or that of others), and then moves on autonomously, adapting to a complex, changing environment.[1] Or you might imagine intelligent entities as networks, often arranged as hierarchies of intelligent systems—humans certainly among the most complex, but congeries of humans even more so.

Three core ideas characterize intelligence. First, intelligent agents have goals, form beliefs, and plan the actions that will best reach those goals. Second, calculating the ideal choice may be intractable for real-world problems, but rational algorithms can come close enough (to satisfice, in Herbert Simon’s term) while keeping the costs of computation in check. Third, these algorithms can be rationally adapted to the organism’s specific needs, either offline through engineering or evolutionary design, or online through metareasoning mechanisms that select the best strategy on the spot for a given situation (Gershman et al., 2015).
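
For readers who like to see an idea in miniature, here is a small sketch in Python of the second and third ideas, with strategy names and numbers invented purely for illustration (they come from me, not from Gershman et al.): an agent that weighs the quality of an answer against the cost of computing it will often settle on the cheap heuristic rather than the ideal calculation. That is Simon’s satisficing in a dozen lines.

```python
# A minimal sketch of metareasoning under bounded rationality.
# The strategies, quality estimates, and costs below are invented for illustration.

strategies = {
    # name: (expected quality of the answer, cost of computing it)
    "quick heuristic": (0.70, 0.05),
    "careful search":  (0.95, 0.60),
    "exhaustive plan": (0.99, 5.00),   # the ideal answer, rarely worth its price
}

def choose_strategy(options):
    """Select the strategy with the best quality-minus-cost trade-off."""
    return max(options, key=lambda name: options[name][0] - options[name][1])

print(choose_strategy(strategies))   # -> quick heuristic: it satisfices
```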

Our unfinished—our barely begun—grand structure of computational rationality is already large and embraces multitudes. For example, biologists now talk easily about cognition, from the cellular to the symbolic level. Neuroscientists can identify computational strategies shared by both humans and animals. Dendrologists can show that trees communicate with each other to warn of nearby enemies, like bark beetles (“Activate the toxins, neighbor!”), or to admonish the children (“Not so fast, sapling”).

The humanities are comfortably at home in this structure, too, although it’s taken many years for most of us to see that. And of course here belongs artificial intelligence, a key illuminator, inspiration, and provocateur.

To grasp this fully, we must begin by abandoning old beliefs. One held that only humans could embody real intelligence (strong AI, in the phrase of philosopher John Searle). Artificial intelligence, no matter what it achieved, was different, and therefore lesser—weak AI.[2]

We must also let go of the old belief that intelligence resides solely in an individual cranium. This is a hard re-set for anyone who’s grown up in Western culture, which has traditionally emphasized individual intelligence over its collective nature.

Not surprisingly, some people object to the cognitive, computer, and neurosciences exploring and replicating the mind, a patch thought to be uniquely human. To these objectors, philosophers are the mind’s sole interpreters. In the Columbia Daily Spectator, an undergraduate complains that he’s fine with reading Kant and Hume on the nature of mind, as the Core Curriculum requires, but why is the science of mind off in some science ghetto? Why can’t he read the new findings about the mind alongside the speculations of Kant and Hume?

Why not indeed?

Fanciful maybe, but think of intelligence as a continuum. At one end of the continuum are simple cells figuring out what they need to survive and to avoid self-destruction. At the other end are humans, exhibiting wide-ranging if not entirely general-purpose intelligence across many different kinds of situations and manipulating symbols, through storytelling and image-making, in ways no other organisms seem to do.

A bacterium doesn’t deliberate, in any thoughtful, logical way, about seeking energy from whatever source powers its metabolism. It just goes for those specialized energy bars in its surroundings. A cheetah doesn’t think, “Yum, would that critter make a nice dinner? Should I invite those tedious people next door?” The cheetah automatically identifies (extracts the features of) prey and pursues it as fast as possible. We’ve called this mere instinct, but in fact it’s a kind of intelligence at work: perceiving, recognizing, and quickly acting on those perceptions and recognitions.

Similarly, machine learning (ML) quickly perceives and recognizes patterns in large amounts of data. Specifically, machine learning is an array of algorithmic, statistical, and mathematical techniques that improve automatically through experience. ML relies on enormous data sets to find patterns and explore nuances. Among its varieties are supervised learning, unsupervised learning, reinforcement learning, deep learning, and neural nets—these last two brain-like, but not brains. Thus ML could be said to correspond to the intelligence of certain organisms, from simple cells to whole animals, paralleling what we call instinct in such organisms.
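
As a toy illustration of what “improve automatically through experience” means, here is a small sketch in Python, my own invention rather than any particular system, with made-up data: a learner that memorizes the average of each labeled class generally labels unseen cases more accurately as it sees more examples.

```python
import random

random.seed(0)

def sample(n):
    """Toy labeled data: class 0 clusters near 0.0, class 1 near 1.0."""
    return [(random.gauss(label, 0.4), label)
            for label in (0, 1) for _ in range(n)]

def train(data):
    """'Learn' one number per class: the average feature value of its examples."""
    means = {}
    for label in (0, 1):
        values = [x for x, y in data if y == label]
        means[label] = sum(values) / len(values)
    return means

def accuracy(means, data):
    """Classify each point by the nearer class mean; report the fraction correct."""
    hits = sum(1 for x, y in data
               if min(means, key=lambda c: abs(x - means[c])) == y)
    return hits / len(data)

test = sample(500)
for n in (2, 20, 200):    # more experience per class...
    print(n, round(accuracy(train(sample(n)), test), 3))   # ...generally better accuracy
```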

As remarkable as they are, ML applications are narrow. They cannot move from domain to domain, and they fail if initial conditions change even slightly. Humans are still needed to label the examples from which the algorithms learn (cat, melanoma, paper shredder, road obstacle). We now acknowledge that the algorithms and statistical methods ML uses are not neutral: human beings with cultural biases, conscious and unconscious, construct them. Thus commonplace assumptions (sometimes false) and deep human prejudices of the moment are baked into the novel patterns ML teases out of big data.

Yet in some simplified way, ML mimics some of the functions of the organic brain. MIT’s Tomaso Poggio, an eminent researcher across neuroscience and computation, reminds us that the recently successful algorithms behind AlphaZero, now the strongest Go player in the world, and Mobileye, a vision-based collision-avoidance system for drivers, rest on two ideas originally suggested by discoveries in neuroscience: deep learning, with its associated techniques of representation and transfer learning, and reinforcement learning.

At the other end of the intelligence continuum is the kind of symbolic cognition that humans exhibit: slow and analytical (unfolding over seconds, minutes, even hours and days), abstract, logical, and heuristic (based on rules of thumb and on knowledge provided by other humans, such as teachers, or by texts, or by experience). The part of AI that corresponds to that kind of human intelligence is sparsely populated with applications. Andrew Moore, former dean of the School of Computer Science at Carnegie Mellon, calls AI “the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.” “Until recently” changes over time.

Yes, humans exhibit both kinds of intelligence, what psychologist Daniel Kahneman calls thinking fast and thinking slow, because humans evolved from the end of the intelligence continuum we share with all organisms.[3] At the moment, ML applications (thinking fast), however narrow, abound and will proliferate, it seems, forever. Symbolic cognition applications (thinking slow) are few and far between, although artificial intelligence had its birth in those applications.

More than one AI researcher has recently told me that ML has pretty much run its course as research. The “breakthroughs” celebrated almost daily in the media are really new applications of ML that, however brilliant and useful, cannot move between domains, and, it bears repeating, fail if the initial conditions are even slightly changed. ML also elides the embarrassing and deeply consequential fact that researchers often cannot explain the inner workings of their mathematical models. Ali Rahimi, a well-known machine-learning researcher, has said that because deep-learning researchers lack a rigorous theoretical understanding of their tools, they are working like alchemists instead of scientists (Naughton, 2018).[4] This is not to slight the significant effects these narrow applications can still have (see especially Chapter 30, on China and the United States).

On the other hand, MIT’s Patrick Winston has said that symbolic cognition has a particular characteristic: it can merge two expressions into a larger expression without disturbing the two merged expressions. This aspect of symbolic cognition allows humans to build complex, highly nested symbolic descriptions of classes, properties, relations, actions, and events. Winston and his colleague Dylan Holmes (2018) write: “With that ability we can record that a hawk is a kind of bird, that hawks are fast, that a particular hawk is above a field, that the hawk is hunting, that a squirrel appears, and that John thinks the hawk will try to catch the squirrel.” Although other animals might have internal representations of some aspects of the world, they seem to lack complex, highly nested symbolic descriptions.
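
To make the merging concrete, here is a small sketch in Python, in my own improvised notation rather than Winston and Holmes’s: each fact is a little structure, and a larger belief simply nests the smaller structures inside itself, leaving them undisturbed.

```python
# A sketch of nested symbolic descriptions (improvised notation, not Winston and Holmes's).
# Smaller expressions are merged into larger ones without being disturbed.

hawk_is_bird = ("is-a", "hawk", "bird")                    # a hawk is a kind of bird
hawk_above   = ("above", "hawk-1", "field-1")              # a particular hawk is above a field
hawk_hunting = ("doing", "hawk-1", "hunting")              # the hawk is hunting
chase        = ("will-try", "hawk-1", ("catch", "hawk-1", "squirrel-1"))

# A belief about a belief: John thinks the hawk will try to catch the squirrel.
johns_belief = ("believes", "John", chase)

for expression in (hawk_is_bird, hawk_above, hawk_hunting, johns_belief):
    print(expression)
# The last line prints ('believes', 'John', ('will-try', 'hawk-1', ('catch', 'hawk-1', 'squirrel-1'))):
# the inner expressions survive intact inside the larger one.
```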

So symbolic cognition may be the next research frontier, perhaps a return to Good Old Fashioned AI, or GOFAI as it’s known, but brought up to date with improved technology (Somers, 2017). Or that new research frontier may be something altogether different.

The totality of intelligence is variegated, collective, distributed, even emergent. Understanding and knowledge are enacted only within a larger system. Nothing resides or is born solely in a single human’s head.

I got my first inkling of this from a young scientist (his name lost to me) sometime in the early 1970s. We were strolling beneath the eucalyptus trees on the Mills College campus in Oakland, and he was trying to explain the systems approach to intelligent behavior. He didn’t use that phrase; he may not have known it. You and I think of ourselves as intelligent, he said, but we didn’t invent the language we speak. No matter how brilliant we are, we didn’t invent much of anything, compared to how much we rely on the inventions and innovations of countless others in the past and present.

I stopped. I knew at once he was right. For centuries, Western thought has been strongly biased toward celebrating the individual, his consciousness, creativity, insight, or brilliance without mentioning the milieu this consciousness, creativity, insight, and brilliance finds itself in, draws upon, and recombines to create novelty.

The assumption that intelligence is the property of the individual alone is so foundational in Western thought that hardly anyone thinks to question it, at least in the First Culture, whose literary tradition I learned. The Second Culture, science and mathematics, does a better job of balancing the credit between those who came before and the work of the individual by referring explicitly to a chain of precedents in the form of citations. Yes, some gifted individuals move it all forward, sometimes brilliantly. But they do so inside, relying upon a system that they alone didn’t invent.[5]

As AI research fills out the nearly empty, slow-thinking end of the intelligence continuum, the symbolic part, I believe that the structure of computational rationality—the principles, the laws of intelligence—will at last be revealed.[6]

2.

A history exists of all this, a human story about the invention of artificial intelligence by a handful of brilliant scientists who understood that computers could exhibit what we call intelligence, if only they—scientists and machines—worked at it. At the time, the idea of artificial intelligence was audacious, a bit loony, and the stuff of science fiction, not science.

The earliest researchers were not all men. Margaret Masterman, a former student of Ludwig Wittgenstein, established the Cambridge Language Research Unit at Cambridge University in 1955 (though not officially a part of the university). The unit pursued automatic translation, computational linguistics, and even early quantum physics. Thus her efforts were contemporary with those of Allen Newell and Herbert Simon, who are generally credited with creating the first working AI program. Masterman’s work and that of her associates had pioneering importance in machine translation, but linguistics and machine translation were soon parted from core AI. (Why language wasn’t considered symbolic baffles me.) Until someone writes a seriously revised history of AI, correcting in some ways my own Machines Who Think, Masterman won’t get the credit she deserves.[7]

So AI as first imagined, a field that operated at the level of symbolic human intelligence, counts as its founding fathers all American men. This was largely the result of post-World War II United States prosperity. Alan Turing, a brilliant Englishman, had certainly foreseen the possibilities of computer intelligence and even designed, though didn’t program, a primitive chess-playing machine. He proposed “the imitation game,” which famously came to be called the Turing test. A set of human judges must conduct a freewheeling conversation (in text), the kind of viva voce beloved by Oxford and Cambridge, with respondents who might or might not be computers concealed from the judges. By the human qualities these conversations exhibited, the judges were to decide whether the respondents were computers or humans.[8]

Turing was prevented from realizing what he was sure computers could do not only by British post-World War II national austerity, but also by British peevishness and factionalism. (Manchester? You can hear the London boffins say to each other, as they clutch scarce postwar British research funds. Manchester? Really?) Finally, Turing was hounded to a premature death by British laws that criminalized his homosexuality and drove him to suicide.[9] No amount of subsequent pardons and regrets can change this or compensate for the loss.

Perhaps even before Turing, certainly contemporary with him, the German engineer Konrad Zuse had seen the possibilities of computational intelligence in the late 1930s. But the Nazi government disregarded his lovingly hand-built constructions—a series of working electro-mechanical computers set up in his indulgent parents’ Berlin living room. The apparatus was moved to Bavaria during the war and eventually carted off as war booty to Switzerland. After World War II, Germany was forbidden to dabble in electronics for at least a decade.

For a long time, the Soviets were bound both fiscally and ideologically. Ed Fredkin, then at MIT, once explained to me how computer programming was taught in the USSR. “It was like their swimming mandate,” he said. “Everyone must know how to swim. Unfortunately, a desperate shortage of swimming pools made this impossible. So people were taught ‘dry swimming.’” I see Fredkin leaning against a dark granite wall on West 116th Street in Manhattan, miming how to swim on dry land: he stands on one leg, kicking out the other, arms waving, a lampoon of the breaststroke. “Same with Soviet programming. Not enough computers for people really to learn. Dry programming.” These circumstances provided precious little room for innovation, never mind the development of artificial intelligence. (Or, as a Soviet scientist once put it to Ed Feigenbaum, then at Stanford: “Who allows you to do this?”)

These days AI is a thoroughly international endeavor and has been for decades. The Chinese, for example, intend to be world leaders, and the Japanese already are. Not incidentally, some of AI’s most prominent scientists are women, making hash of an early accusation that the men creating AI were victims of womb envy.

To return to the possibly skewed AI foundation myth: the field’s four founding fathers, John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon, stand as the four apostles (or horsemen of the apocalypse, depending on your point of view) of a reality we can all now see, a reality we all now inhabit. They saw this reality from the beginning. Yes, they were all Americans, but they would’ve been geniuses anywhere. The United States was wealthy enough, and its government leaders sufficiently visionary then, to allow their genius to flourish.

Thus one kind of AI was born through the brains and hands of a small brotherhood of scientists, all of them acquainted with each other, custom-crafting every program, laboring to make it all work on the primitive machines of their time. This story is partly about those people, most of them only spirits in my memory, who conceived that grand dream, that inevitability. They labored in what was then scientific isolation—often, scientific derision. AI was their way to understand human intelligence, possibly other intelligences, and they pursued it with glorious joie de vivre.

I won’t have much to say here about the technical aspects of AI, which I wrote about at the field’s dawn, in Machines Who Think, and which are ably described in several later excellent histories, textbooks, and survey articles. But from time to time, I’ll look in on a line of research today, partly because one application or another intrigues me, and partly to bring the story up to date.

For each example I cite, please bear in mind that many similar research efforts are underway around the world. A full survey of AI today would need a study of encyclopedic proportions. To repeat Larry Smarr: this is no longer just a few programmers cobbling together Lisp programs. The whole world, every one of us, is at work on AI. We’re all contributing, all doing our part, each time we go online; use our smartphones, credit cards, or social media; pass through automated toll booths; stream a movie; watch TV; you name it. We’re all—if you worry—complicit.

Inevitably too, this story is about the people who felt threatened and were angry with me for being not only beguiled, but also sanguine. A pattern of what I call Dionysian eruption (after Nietzsche’s distinction between the Apollonian and the Dionysian) characterizes the generally Apollonian history of AI. These Dionysian outbursts have often been ferociously passionate against AI, but sometimes, with equal passion, for it.

And this is about me, and my own journey through it all, as fascinated spectator, as accidental emissary between the Two Cultures of the humanities and the sciences. I’ll say how it looked along the way and what I’ve learned over the years about thinking machines. I’ll say why, as a humanist, I was drawn to AI, and where my intuition led. I’ll say a little bit about myself so you can assess your narrator.

One thing I’ve learned is that we humans can be serenely triumphal about extending our natural faculties of vision (eyeglasses, microscopes, telescopes), or locomotion (horseback, automobiles, everyday jet travel, space probes), or communication (writing, publishing, telephones, Skype), without ever being accused of tempting fate for our ambitions.

But extend our natural faculties of thinking? Illicit, sinister, blasphemous, hubristic. Anyway, impossible. (It’s difficult to entertain that last notion any longer, but it obsessed many otherwise intelligent people for decades.) Reasons both obvious and subtle exist for all this, as you’ll see.

Living in the exponential of AI, I’ve had the great good fortune to watch most people evolve from jokey scorn to loud frustration that their computers and phones aren’t smarter. I’m with that. Though I was its historian, kept its baby book, after some years I turned away from AI to other interests. For a while, I didn’t pay much attention. When I turned back, intrigued by new programs, the moment was gravid.

AI’s many subfields, such as machine learning, pattern recognition, vision, robotics, and natural language processing, once hived off like Protestant sects (and with some of the same moral indignation), but they may now begin to pull together to complement, interpenetrate, and amplify each other’s purposes.

In these new ecumenical creations, human-level intelligence, or something even better, is thinkable.

3.

But let me be clear. An encounter with the Other has always brought with it very great uneasiness, especially when it concerns intelligence outside the human cranium. Western literature is full of this disquiet, from the Ten Commandments (“You shall not make for yourself a graven image, or any likeness of anything that is in heaven above, or that is on the earth beneath, or that is in the water under the earth…”) to Frankenstein to Neuromancer to the daily news. With this disquiet I sympathize. Every grownup knows that technology giveth and technology taketh away. With AI, we aren’t even far enough along on this path to be able to weigh the balance. Although I have misgivings, I take the long view and like to imagine what might be a better world if humans provide themselves with intelligent help.

All right, even intelligent computer overlords, in the words of Ken Jennings, the champion human Jeopardy! player who lost decisively but honorably to Watson: “I, for one, welcome our new computer overlords.” Deadpan, he alluded to an episode of The Simpsons, which probably borrowed the line from Arthur C. Clarke’s Childhood’s End. (Watson would’ve got all that; I didn’t.) We might all welcome intelligent overlords who—that—might save us from so much human folly.

At the very least, they’ll bring us a fresh point of view.

But the strangest part of this sixty-year story is that for decades, I couldn’t make otherwise intelligent and well-educated people believe that this could be important.


  1. Cognitive, computer, and neuroscience work closely with each other, but AI, a branch of computer science, is the only field that attempts to build machines that will function autonomously in complex, changing environments. As a consequence, AI has made rigorous the study of intelligence wherever it appears. In the earliest AI research, this overarching paradigm of the nature of intelligence was implicit, but not conspicuously self-evident.
  2. At The AI Summit conference in 2014, leading AI researchers used these two phrases (with the same precision as the philosopher, which is to say, not much at all). At first I took it as irony. Then I thought they’d seized these terms the way that gay people reclaimed “queer” as in-your-face defiance of critics. No, my ahistoric friends had no idea where the phrases had come from, found them useful, and employed them innocently. When I told this to philosopher Daniel Dennett, who’s had some spirited public exchanges with John Searle, he just groaned. But apparently the phrases are here to stay until they’re better defined or revealed as nonsense. (I adopted the once derisive term “artificial intelligentsia” for the title of this book because it amused me.)
  3. We now know human thinking is strangely and strongly affected by the composition of human gut flora and fauna, so my gastroenterologist and I are in lively discussions about whether machines, lacking guts, will ever be able to think like humans. Perhaps supplying machines with guts and the appropriate biome is the missing link to human-like thinking in machines. Perhaps that’s a terrible idea. Thanks to Jonelle Patrick for raising the question to me in the first place.
  4. I’m taken aback to hear AI related to alchemy once again, as it was in the 1960s. I’ll spare my readers the essay I could write about how science evolves. But Rahimi is correct that these mysterious applications are being used in real life right now without deep understanding of how they work. So was aspirin mysterious for years after it was deployed, but somehow AI does seem more momentous than aspirin.
  5. This is now explicit in books like Sloman, Steven, & Fernbach, Philip. (2017). The Knowledge Illusion: Why We Never Think Alone. New York: Riverhead.
  6. I’m grateful for discussions with Edward Feigenbaum to clarify my own intuitions about the intelligence continuum.
  7. Thanks again to Edward Feigenbaum for bringing Masterman to my attention.
  8. The Turing test lacks refinements, but contests are held annually, with the rule that thirty percent or more of the judges must agree on the “humanness” of a respondent. In the summer of 2014, for the first time, a third of the judges agreed that Eugene Goostman was a charming, maybe typical, 13-year-old Ukrainian boy who liked hamburgers and candy and whose father was a gynecologist. In fact, Eugene was a program put together by a team led by the Russian Vladimir Veselov and the Ukrainian Eugene Demchenko. As professionals scoffed, two of the scientists who conducted the experiment wrote a long clarification for the Communications of the ACM (April 2015) regarding the number of judges and the judges’ knowledge, and quoted Turing: “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s?” Moshe Y. Vardi, the journal’s editor-in-chief, responded tartly: “The details of this 2014 Turing test experiment only reinforces my judgment that the Turing test says little about machine intelligence. The ability to generate a human-like dialogue is at best an extremely narrow slice of intelligence.” Not so negligible, I’d say. As we’ll see, in the next few years human/machine conversation became much more sophisticated.
  9. Lively questions have grown up around the official finding of suicide. Turing had completed his humiliating “chemical castration” sentence. Although he’d lost his security clearance, he was engaged in important, non-secret research and, to his friends, seemed happy. He was known to be careless with the cyanide he was using in experiments. Thus his mother believed his death was an accident. Others have suggested he might have been murdered: he sat on some of the biggest secrets of World War II and, because of his homosexuality, was vulnerable to blackmail and other pressures. Lest he succumb to blackmail, removing him might have seemed convenient. This seems farfetched, because his homosexuality was no longer a secret.
