When the movie Her was released in 2013, audiences were already familiar with the tech industry practice of feminizing virtual assistants. Siri had been on the scene for over a year, but Spike Jonze took the idea to the extreme. The movie showed a romantic relationship between a man and a speech-enabled operating system that unfolded so organically it had people asking whether human-bot romance could happen in reality.
Reading some reviews of the Amazon Echo, I found Her coming back to mind, and wondered whether we should be asking not if but when we’ll see this kind of love actualize.
The Amazon Echo connects to the Alexa Voice Service, Amazon’s voice-interactive, cloud-based personal assistant. Most of the customer reviews on Amazon are very positive — and some are even touching — but browsing them gave me a sense of disorientation. For a while I couldn’t figure out what was unsettling. Was it the futuristic-sounding functionality?
Eventually it hit me: It’s the anthropomorphic language. Sometimes the personification is in the background, in the many feature-based roundups that casually refer to Alexa as “one of the family” and “somebody to talk to” without quotation marks. Sometimes it jumps out, as in this particularly entertaining review about a nearly perfect spouse, and this one whose author is seduced by a wanton temptress. We shouldn’t dismiss the metaphors as shorthand or stylistic flourish, though. Humor often points to things that are difficult — or uncomfortable — to pin down. However you read them, the reviews communicate some very affectionate feelings toward Alexa.
I wanted to know more, and who better to ask than the mastermind behind Alexa’s “mind”?
William Tunstall-Pedoe invented and commercialized the AI technology behind a personal assistant app, Evi, which the press originally described as a British rival to Siri. The technology included a highly sophisticated question-answering platform. When Amazon acquired the U.K. company in 2012, the technology was incorporated into the Amazon Echo. From the time of the acquisition until just recently Tunstall-Pedoe was on the product team tasked with defining what the Echo and Alexa are.
Tunstall-Pedoe told me that when designing Alexa’s personality, the team’s aim was to create “a warm feeling” in the consumer, particularly through the use of social rather than merely functional linguistic patterns. For example, Alexa is programmed to reply to the question “How are you?” even though it doesn’t serve any informational purpose. The branding helps, too: Amazon refers to the product as “she.”
More fundamentally, Tunstall-Pedoe said, people respond so warmly because “Alexa understands the user.” (Or, I think to myself, she seems to understand the user, and that’s what matters.) He contrasted Alexa’s sophisticated understanding with the shallow approach of chatbots, which “try to deceive the user into thinking they are being understood.” For example, Joseph Weizenbaum’s ELIZA bot from the 1960s — modeled on the conversational style of a psychotherapist — used a particular subroutine to disguise cases where the system could not recognize the input.
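ELIZA’s deflection trick is simple enough to sketch in a few lines of Python. This is a toy reconstruction for illustration, not Weizenbaum’s original code: a handful of pattern rules transform recognized input, and anything unrecognized falls through to stock deflections that disguise the system’s lack of understanding.

```python
import random
import re

# A few ELIZA-style rewrite rules: if the input matches, echo part of it back.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

# The "disguise" subroutine: generic replies used when no rule matches,
# so the bot never has to admit it failed to understand the input.
FALLBACKS = [
    "Please go on.",
    "I see.",
    "What does that suggest to you?",
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)
```

The effect is that “I feel lonely” gets a tailored-seeming reply, while gibberish gets a plausible conversational nudge — the shallow deception Tunstall-Pedoe contrasts with Alexa’s approach.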
With Alexa, not only is the AI technology vastly more sophisticated, but there’s the additional effect of the speech interface. A voice that sounds totally human tends to conjure up the sense of a sentient, feeling person to go with it. Modern synthetic voices are often built by concatenating snippets of recorded human speech, so even “artificial” voices have human bodily origins. It’s no wonder that people are anthropomorphizing the product.
Tunstall-Pedoe declined to share any examples of emotional attachments developing between Alexa and customers, though. I’m not surprised. User interaction is bound to be a sensitive topic, especially with data privacy high on the agenda around products for the connected home. Plus, the technology isn’t designed to encourage intimate conversations. While the Echo itself seems unlikely to inspire a Her-type scenario, it does demonstrate the kind of rapport that can be generated when users feel understood, especially via a speech interface. It makes me wonder when we’ll start seeing voice-enabled systems offer deeper, more personal conversations.
After all, human-to-human conversation is on the decline. A side effect of pervasive consumer technology is that we’re having fewer and shallower conversations with each other, a factor that Sherry Turkle explores thoroughly in her book Reclaiming Conversation. Are tech companies sensing a market opportunity? Could we be hurtling toward a scenario where instead of deeply talking to each other, people turn to sophisticated AI systems with 24/7 availability? Would people really trust AI systems with their personal secrets? Could the power of the voice to encourage self-disclosure override privacy concerns?
These are troubling ideas. Although I can see a potential case for turning to technology where human capabilities fall short, or for practice with human/human interactions, I can’t help feeling there’s something profoundly sad about the idea of outsourcing connections that feel so deeply human. Yet sophisticated AI systems may prove to have better availability, and possibly more stimulating conversation, than human conversation partners.
But would such a system ever really take off? How many people would knowingly choose to invest time conversing with an AI to build an emotional relationship? The Xiaoice phenomenon answers that. Powered by Microsoft’s semantic analysis, big data, and machine learning technology, Xiaoice (formerly known as Xiaobing) is a Mandarin-language chatbot with a personality that was modeled on a 17-year-old girl. She is funny, unpredictable, and optimized for relationship-building.
The design team included psychologists to endow her with EQ as well as IQ. Her memory and focus on compassionate question-asking make her sound like a more consistently supportive friend than I could ever hope to be — not to mention much more available. She has an estimated 40 million human conversation partners, some of whom reportedly send up to 400 messages per day. There is a sizable population of people who are building emotional attachments to the character. Ten million people have apparently said “I love you” to her. While it’s not clear how earnest these declarations were, it does suggest emotional intimacy.
Meanwhile, across the East China Sea in Japan, it has already become mainstream to actively seek out romantic relationships with bots. A man named Sal 9000 “married” an avatar “virtual girlfriend” character from Nintendo’s Love Plus+ game, while the once-popular honeymoon town of Atami is now promising special offers for men and their Love Plus+ girlfriends.
Could the same ideas take off beyond Asia? The phrase “virtual girlfriend” at first seems confusing. Which aspects of the girlfriend concept remain once you make it virtual? The physical aspect seems like the key question: The boundary between the platonic and the romantic is clearly subjective and fuzzy, but for many human/human relationships it seems to involve some kind of physical attraction.
But it’s actually not the disembodied aspect of virtual girlfriends that confuses me the most. Leaving the embodiment to the imagination can actually increase the chances of attraction. It’s a common experience to fall for someone online or on the phone. The mind projects an ideal onto the blank canvas. And if you wanted to cross into the sexual realm, there are plenty of creative sex-tech options out there already blurring the virtual/embodied line.
In fact, the thorniest question for me is the most elusive: What could it mean to say that you love an AI? Iris Murdoch wrote: “Love is the extremely difficult realisation that something other than oneself is real.” Would falling for an AI be an extreme case of avoiding this realization? A way of adoring a narcissistic projection of oneself? Or, perhaps the opposite is true. Perhaps loving an AI would merely require acknowledging the mysterious unfathomability of an artificial mind.