When I think of AI, I think of HAL.
HAL 9000 is the computer in *2001: A Space Odyssey*. HAL runs the systems on *Discovery One*, the spacecraft sent to Jupiter to investigate a mysterious signal. Along the way, one thing leads to another and HAL ends up trying to kill his human crew. He fails and is undone by crew member David Bowman, who unplugs HAL’s systems, killing the computer while it sings a childhood song.
This moment is generally seen as poignant.
The fantastic literature and folklore of the cultures of our planet is awash in strange non-human entities. There are angels and demons. There are rock spirits and house ghosts. There are dragons and leviathans, unicorns, manticores, chupacabras, and the Baba Yaga. In the stories, humans must fight against, bargain with, hide from, or protect them.
Some of these beings are great creatures of terrible power. Others are small fragile things that flit in and out of the edges of reality. Some make their way into our world while others command vast realms of their own which we may only ever visit briefly.
Ask Jeeves launched in 1996. These were the wild early days of the search engine era, a full two years before Google would incorporate. The Jeeves promise was that you could use natural language to query the web instead of the halting Boolean language of AND, OR, and NOT that typified its competitors. Ask Jeeves’ mascot was a cartoon butler, modelled after the discreet valet from *Jeeves and Wooster*. In the stories, Jeeves is a quietly efficient employee, catering to his employer’s every whim and rescuing him from his many poor life choices.
As it happened, Ask Jeeves’s whimsical (slow, unreliable) interface lost out to Google’s (fast, effective) white page with a text box. Straightforward clarity beat out a pleasant personality. Today, Jeeves is gone. The twist in our story is that Google has launched Google Assistant. Like the fictional Jeeves, Google Assistant can answer your questions, remind you to hit the gym, warn you about bad weather, or book a reservation.
The link between conversational interfaces and intelligence runs deep in the history of AI. The Turing Test is about a machine pretending to be a human over what we would now recognize as a chat interface. A human judge has two conversations, one with a person, one with a machine. They send text messages to one another—the machine is trying to pass as a person. Turing suggests that the distinction between passing as a person and being a person is essentially irrelevant. This approach, says Turing, “has the advantage of drawing a fairly sharp line between the physical and the intellectual capacities of a man.”
Shared stories are one of the tools we can use to understand, interrogate and push ideas of what’s possible. Stories help people deal with unthinkable or unimaginable futures by making them thinkable and imaginable. They can act as maps to help us explore terra incognita by offering landmarks to navigate by.
The way they work this magic is through the suspension of disbelief. Stories are like conjurers. They ask us to briefly stop worrying about whether what happens in them is possible. Looking for a bit of fun, we readily agree. Within that window of opportunity, stories weave together a world and events that *feel* plausible. When we come out the other side, suddenly, new things might seem possible after all. They are tricky things, stories.
The objects, societies, and events depicted in popular fictions become touchstones and metaphors. People are constantly comparing things that really happen to things that happened in stories. People are remarkably good at this.
HAL and computers like HAL show up all over the place in the stories of science fiction. There is a kind of template for them: polite and helpful intelligences that can hold a conversation, manage complex systems, serve our whims and—on occasion—go wild and try to kill everyone. Except for that last part, these fictional AIs act as templates for real artificial intelligence as well. Siri, Alexa, Cortana, Watson, and Google Assistant all try to get the conversation part right, to say nothing of the hordes of chatbot startups trying to grab their piece of the funding pie.
One danger of stories is that they can become so familiar that they mask the ways reality doesn’t fit with them. The map can obscure the territory.
You can walk through sliding doors and think, “this is just like *Star Trek*,” even though—in a lot of really important ways—it is *nothing* like *Star Trek*.
The most persistent aspect of the AIs of stories that is different from the AIs of reality is that the AIs of stories are individuals. HAL is not networked. HAL continues to function despite being cut off from all contact with Earth. HAL stops functioning when Dave starts unplugging its brain. Siri is just the opposite. Siri *can’t* function when Siri is disconnected from the network. Siri does not live in your phone. Smash your phone and Siri loses none of its power or cognition.
Your phone is a mask that Siri wears that helps Siri feel more like a person that you can talk to. A person that you can confide in.
One of my favourite ads is an Ikea ad about a lamp. Take a moment to watch it.
If you can’t watch video, here’s a summary: using clever camera work and music, it depicts an old lamp being abandoned by its owner, left on the street in the rain to stare forlornly at the windows as a new lamp takes its place. And then a man walks into the frame.
“Many of you feel bad for this lamp,” he says. “That is because you are crazy. It has no feelings and the new one is much better.”
The thing that computers do better than anything else is copy things. As soon as you get one AI, you get as many as you want. More than you want, probably. As soon as you shut one down, it’s just a matter of booting from the backup. How would you even tell the edges of each personality?
Maybe it would be better to think of an AI as being like LUCA, the Last Universal Common Ancestor, which may have been one world-spanning superorganism that was nothing like the walled-off uni- and multi-cellular creatures we see around us today.
Did you know that some AI makers are hiring poets to make their products more relatable? I want you to think about that while thinking about how you were manipulated into feeling empathy for a lamp.
Turing says that he thinks his test draws a sharp line between the physical and intellectual capacities of people. I think he is wrong. Building a machine that can pantomime the speech patterns of a person is mistaking a mask for a face. You can see it in Siri, which has a vocabulary that vastly outstrips mine and yet is unable to follow the simplest threads of conversation.
Indeed, the whole reason that conversational interfaces are worth building is that they offer access to intellectual resources that far outstrip our own. It would be a terrible failure if Siri’s memory was as limited as mine. Siri would be useless if it kept time as poorly as I do. The whole reason I talk to Siri is that I hope Siri will be smarter than me about some things.
An AI’s inhumanity is the selling point.
Jeeves from *Jeeves and Wooster* is a model of discretion, observant of all of his charge’s many foibles and indiscretions but tight-lipped about them. Jeeves from Ask Jeeves is the friendly face of a sprawling surveillance network, pulling queries into a data warehouse and using them to learn and grow and become more intelligent.
If Jeeves were to whisper the details of Wooster’s life to his colleagues or the NSA, we’d understand that to be a betrayal. When Ask Jeeves does it, it’s a fundamental part of how it works. It is the same with Siri, Cortana, Alexa, and Google Assistant.
It’s not their fault. They can’t help it. They were made that way. They are creatures of the network.
AI makers and service providers seem to be hell-bent on spending the next few years installing conversational interfaces and non-human intelligences into every nook and cranny of human life. There is nothing so simple that it can’t have a microchip embedded into it, along with the capacity to talk to a bunch of other microchips and take orders from them.
I think that if we are going to get used to living in society with these machines, different myths and metaphors are needed to act as shorthand for what and who they are. The machines that we are inviting into our home are not people. They are monsters.
There is nothing too alarming about this. The heroes of old stories have lived alongside monsters for millennia—sometimes fighting with them, sometimes negotiating, sometimes tricking, and sometimes being tricked. The ones who survive such encounters are often either particularly strong and brave, or fast and clever, or just very good at following unhuman rules and customs.
The pleasant manners of a virtual assistant are beguiling, much like those of a particularly polite dragon. But if you are to take tea with a dragon, you must never forget that on the ground they are large, clumsy, and prone to causing accidental damage. It’s not their fault. It’s that your home was not built for dragons. And they were not made for your home.
This feature was written exclusively for Digital Asia Hub. For permission to republish or for interviews with the author please contact Dev Lewis.