But suppose AI’s future is something else? Kevin Kelly, the founding editor of Wired magazine and a perceptive observer of technology for more than four decades, wrote in Wired:
The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. This is a big deal, and now it’s here. (Kelly, 2014)
Five years after Kelly’s predictions, this is about how AI seems.
Cheap parallel computation, big data, and better algorithms have brought us here, says Kelly. Google, for example, uses our daily searches to train its computers. The neural network model of computing suddenly has specialized chips (originally invented for games) that can do in a day what traditional processors needed several weeks to compute. Big data provides what’s needed for computers to train themselves (although we’ve already seen the built-in problems with big data). Better algorithms have been developed over the last few decades to take perceptions from the lowest to the highest and most abstract levels of machine cognition—deep learning. But we must remember that present machine learning works only in a single domain, and only where an objective answer exists. It cannot cross domains; it cannot work at all if the initial conditions change even slightly.
Kelly goes on to envision these AIs as “nerdily autistic,” dedicated exclusively to the single job at hand, whether that’s driving a car or diagnosing and curing disease: focused, measurable, specific. “Nonhuman intelligence is not a bug, it’s a feature.” A new form of intelligence will think differently about manufacturing, food, science, finance, clothing, or anything else. “The alienness of artificial intelligence will become more valuable to us than its speed or power.” Kelly’s observation recalls my journal entry of November 3, 1974: “I’ve come a long way from the time when I took offense at the idea of computers writing novels. Now I think I’d welcome a new form of intelligence to live in parallel with us.”
Kelly’s skepticism about a general-purpose machine intelligence is shared by William Regli, who in 2017 was the acting director of the Defense Advanced Research Projects Agency:
The fact is, despite enormous individual engineering advances in recent years, we remain woefully inadequate when it comes to the art of design—the enigmatic and still largely unautomated process of synthesizing multiple elements into final products. (Regli, 2017)
In late 2018, Ed Finn, the founding director of the Center for Science and the Imagination at Arizona State University, repeated—perhaps unwittingly—what John McCarthy once called “the literary problem”: our stories about future AI conform to literary conventions, complete with heroes and villains, and the villain is nearly always AI. This prevents us from thinking seriously about a collaborative AI future, which is already here. Why a zero-sum competition? Finn asks. He wants to see holistic thinking about AI, bringing together science fiction writers, technologists, and policy makers.
This book has been about humans, not machines. Humans were always my main interest. As it happens, AI’s coming of age, if not yet its full maturity, has paralleled my own life. It gives me pause to think I’ve been acquainted with AI from the time it was a cozy fraternity of a few to now, when AI is in nearly every corner of our lives. So this book is not only a quest saga but a coming-of-age story, of both a scientific field and a naïve young woman, now slightly wiser, decidedly older. I was an undergraduate in the humanities who bumped into AI early in its life and mine, had long conversations with its begetters, and warmed to their enthusiasm and optimism. I’d spend much of my life pulling on the sleeves of serious thinkers, trying to tell them that this—artificial intelligence—could be important.
I’ve offered a personal story here because, as I said at the outset, it’s the particulars that illuminate: personalities, friendships, enmities, chance, context. To grasp these early times, abstractions wouldn’t do. The scientists who created AI, the scientists who push it forward, drew me to write about them, to stand as witness: they were and are brave, intellectually daring women and men. The early ones were attacked and derided but, unfazed, went about changing the world. They deserve to be remembered as more than names on awards or carvings on buildings.
You’ve seen that the future of AI is sometimes conceived as a wise Jeeves to our mentally negligible Bertie Wooster selves. “Jeeves, you’re a wonder.” “Thank you, sir, we do our best.” Watson, the Guardian Angel, Maslow, and their helpful brethren want to be our car drivers, our financial and medical advisors, our teachers, our long-range planners, our colleagues—not our masters. This is an appealing picture, the human race riding effortlessly into the future in the slipstream of its own intelligent machines.
As one task after another falls to machines, we’ll ask ourselves what human beings are, Kelly says. “The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.” (2014)
No, this is the continuing but newly refreshed task of the humanities, and it has already begun. As a teenager, I didn’t ask who I was. I knew. I just didn’t understand why the world didn’t like or accept that. That’s how I see any new definitions of us: accommodation to and illumination of our infinite variety.
We can’t now say what living beside other, in some ways superior, intelligences will mean to us. Will it widen and raise our own individual and collective intelligence? In significant ways, it already has. Find solutions to problems we could never solve? Probably. Find solutions to problems we lack the wit even to propose? Maybe. Cause problems? Surely. AI has already shattered some of our fondest myths about ourselves and has shone unwelcome light on others. This will continue.
The future. It’s been easy to resist writing breathless scenarios. Nothing ages faster, nor makes the prophet seem so time-bound. As Jack Ma, the co-founder of the Chinese online service Alibaba, says, “There are no experts for the future. Only experts for yesterday.”
When people ask me my greatest worry about AI, I say: what we aren’t smart enough even to imagine.
You might also recognize in all this ferment the two customary opposing views about AI—a catastrophe or a welcome blessing—an early theme from my own Machines Who Think: what I’ve called the Hebraic and the Hellenistic views of intelligence outside the human cranium. The Hebraic tradition is encoded in the Second Commandment: “You shall not make for yourself a graven image, or any likeness of any thing that is in heaven above, or that is on the earth beneath, or that is in the water under the earth.” We fear entertaining god-like aspirations and calling down divine wrath for our overweening, illicit ambition. The Hellenistic view, on the contrary, welcomes (with cheer and optimism) outside help, the creations of our own hands—not that the dwellers in Olympus and their progeny didn’t have problems.
We already have a bitter taste of the dark side of AI. Russian bots and other software simulated human influencers and interfered with the U.S. national elections in 2016; our telecommunications and social media apps know our lives in granular, even embarrassing detail. The Chinese government, along with the Chinese army, runs deep learning algorithms over the search engine data collected about the users of Baidu, the Chinese equivalent of Google. Every Chinese citizen receives a Citizen Score, to determine whether they can get loans, jobs, or travel abroad (Helbing et al., 2017). China is selling these systems to other countries. With all of us under surveillance, whether by our government or by firms, whether by manipulative individuals or scheming terrorists, how the economy and society are organized must change fundamentally. Kai-Fu Lee says we need to rewrite the social contract (2018). We do. Certainly we need to talk.
Let us talk too about the grand ideas in the Western tradition. What is thought? What is memory? What is self? What is beauty? What is love? What are ethics? Answers to these questions have up to now been assertions or hand-waving. With AI, the questions must be specified precisely, realized in executable computer code. Thus eternal questions are being examined and tested anew.
From the beginning, pioneering researchers in the field expected the machines would eventually be smarter than humans (whatever that meant), but they saw this as a great benefit. More intelligence was like more virtue. These early researchers were firmly in the Hellenistic tradition. They believed—and I do too, if you haven’t guessed—that if we’re lucky and diligent, we can create a civilization bright with the best of human qualities: enhanced intelligence, which is wisdom; with dignity, compassion, generosity, abundance for all, creativity, and joy, an opportunity for a great synthesis of the humanities and the sciences, by the people who specialize in each. Herb Simon liked to say that we aren’t spectators of the future; we create it. A better culture, generously life-centered, ethically based yet accommodating infinite human variety, is a synthesizing project worthy of the best minds, human and machine.
We long to save ourselves as a species. For all the imaginary deities throughout history who’ve failed to save and protect us from nature, from each other, from ourselves, we’re finally ready to substitute the work of our own enhanced, augmented minds. Some worry it will all end in catastrophe. “We are as gods,” Stewart Brand famously said, “and might as well get good at it” (1968). We’re trying. We could fail.
Win or lose, we’re impelled to pursue this altogether human quest. Some mysterious but profound yearning has led us here from the beginning. This is the deep truth of our legends, our myths, our stories. (It wants some explanation. This isn’t exactly the joy of sex.) The search for AI parallels our innate wish to fly, to roam over and beneath the seas, to see beyond our natural eyesight. The quest takes us out of the commonplace, along a dark and perilous way, beset with tasks and trials, a collective hero’s journey that all humans must undertake.
The tasks and trials we already see include the destruction of whole business models, the transformation of work (and thus, for many, life’s meaning), and faster-than-thought applications with unforeseen consequences. We face a possible, if unlikely, subjugation to the machines; a possible, if unlikely, destruction of the human race by AI. These seem to me remote, but trials we can’t yet foresee will surely emerge. We hardly know how to meet the trials we can see. I quoted Herb Simon above: “We aren’t spectators of the future; we create it.” But he also often slightly misquoted Proverbs: “If the leaders have no vision, the people will perish.”
For years I had these calligraphed words framed above my desk, a gift from my husband: “And wherefore was it glorious?”
I knew the rest of the passage by heart:
Not because the way was smooth and placid as a southern sea, but because it was full of dangers and terror, because at every new incident your fortitude was to be called forth and your courage exhibited, because danger and death surrounded it, and these you were to brave and overcome. For this was it a glorious, for this was it an honourable undertaking. You were hereafter to be hailed as the benefactors of your species, your names adored as belonging to brave men who encountered death for honour and the benefit of mankind.
These are the words of the dying Dr. Victor Frankenstein, near the end of Mary Shelley’s essential novel, Frankenstein. He cries out to a ship’s crew that, during a hunt for the Northwest Passage, has been paralyzed with terror by the menacing ice. Yes, the words reflect ironically on his repudiation of his own creation of an extra-human intelligence. The deeper urgency, I believe, is his, and our, struggle to be brave, as we go where we must.
When the calligraphy and the rest of the passage it stood for hung above my desk, I meant it for my own writing life, for my struggle to tell the world honestly, without exaggeration, about artificial intelligence. It can stand now for the human race’s struggle to get the best from AI while curbing its dangers.
AI challenges and melds both art and science, and every other human resource. We’ve created something in our own image that might eventually surpass us, possibly destroy us as a species. With our grand, conspicuous, and shameful failures, maybe we deserve no better. But I’m still an optimist. Digital, yes; but humanities. We’ve never quite fallen out of love with ourselves, and it’s been a great advantage. We might learn to collaborate with our smarter selves.
When I asked Marvin Minsky what his hopes were for AI, he replied: “That it step in where humans fail.” Fair enough. I’d like AI, this once-in-human-history phenomenon, to enlarge our aspirations. The opportunity so far has often been squandered on relative trivialities, at least in the commercial sector. I long for us all to treat AI as the sacred trust it really is.
Wherefore was it glorious?
We’ve begun. Let us continue.
- Kelly elaborates on these points in a later essay, “The AI cargo cult: The myth of a superhuman AI” (Retrieved from https://backchannel.com/the-myth-of-a-superhuman-ai-59282b686c62). Its main points are that intelligence is not a single dimension, and thus “smarter than humans” is meaningless, although the dimensions of intelligence are not infinite. Humans do not have general-purpose minds, and neither will AIs. Emulation of human thinking in media other than wetware will be constrained by cost. Finally, intelligence is only one factor in progress. ↵
- From Exodus 20:4, King James Version. ↵
- The same division is evident in biological enhancement of human faculties. Some fear this very much; others think it would be a benefit. The combination of much smarter humans and much smarter machines is something to think about. ↵
- Helbing, et al. should certainly be one of the texts we talk about. ↵