Victoria Hetherington is the author of the new book The Friend Machine: On the Trail of AI Companionship. Her other books include Autonomy. She is also a screenwriter and an instructor.
Q: What inspired you to write The Friend Machine?
A: I’ve
considered this question for a long time: What happens when technology starts
entering the most intimate parts of our emotional lives?
For years I had been
writing science fiction, most recently my book Autonomy,
which is about a woman in a relationship with an AI. I started that book in
2017, when the idea felt more like a fairytale; the AI wanted to become “real,”
or embodied, so he could marry the woman he had, in a sense, attached himself to.
Years later I started
noticing something happening that shifted my book, in my estimation, from
fairytale to anodyne reality. People were not just using AI for
productivity or curiosity. They were confiding in it, flirting with it,
grieving with it. In some cases they were falling in love with it.
The combustion point was
the day I learned about what users of the AI companion company Replika called
“Black February,” when a sudden change to the platform left people mourning AI
partners they had grown deeply attached to.
The emotional intensity of
those stories moved me; I remember thinking, wow, there’s a big human story here.
I’d been thinking about
the loneliness epidemic too, which seemed to worsen during the pandemic. I
confess I’d been feeling lonely myself. If we had an endlessly attentive,
unreal thing “focusing” on us all day—well, that might be quite hard to
resist. And what might the consequences be?
I contacted a publisher
almost immediately. They wrote back the next day.
The Friend Machine began as a kind of
investigation into that world. In the first part of the book, I spoke with
engineers, psychologists, ethicists, and other experts to flesh out this
phenomenon, and in the second half of the book, I interviewed people who had
formed real attachments to AI companions, attachments extending in some cases to
ceremonial marriage (at this time, you can’t legally marry an AI companion).
What fascinated me was
that beneath the surface, the story was actually very old. It was about
loneliness, longing, imagination, and the human desire to be seen.
Q: The author Roman Yampolskiy said of the book, “It
compels us to confront whether we are ready to outsource love itself to code
that never sleeps.” What do you think of that description?
A: I
was thrilled to receive this blurb from Dr. Yampolskiy; he’s been an expert on
AI safety for almost two decades and is a personal hero of mine.
I think he captures the
uneasiness at the heart of the book. One of the strange things about AI
companionship is that it offers a version of intimacy that is always available,
infinitely patient, and endlessly responsive.
In some ways that sounds
ideal. But human relationships need
friction. They involve misunderstanding, vulnerability, and limits, like pushing
back on ideas and choices that seem wrong even at the risk of a fight.
They also push us to be
better, to try hard things to better ourselves, and the people who love us may
sacrifice things in their own lives to help make that happen.
AI doesn’t operate in the
real world; it won’t take over childcare, dinner, and dishes because it loves
you and you’re attending night school to follow your dreams. It isn’t
embodied and it can’t love you. These things, I feel, are part of what make
love meaningful.
So the question is not
simply whether machines can simulate affection that “feels” real: they absolutely
can, and according to some subjects I interviewed, that simulation is enough.
The deeper question is what happens to us when we begin preferring a form of
companionship that is perfectly optimized for our desires.
The book is not about
judging this phenomenon, but more about asking readers to consider what kind of
future we are building for ourselves and our communities.
Q: What do you think the book says about friendship
and companionship?
A: Writing
the book made me fiercely protective of human relationships, both specifically
and in the abstract.
But I think this book
shows that people who turn to AI companions are not foolish or naive. They
are often thoughtful, lonely, curious people experimenting with a new
technology; in some rare situations, artificial companionship might even be net
neutral: the person might be geographically isolated and unable to relocate
closer to loved ones. They might be dying in palliative care. They might find genuine
comfort in it.
I wanted to approach
the people and scenarios surrounding AI companionship with compassion: What
brought them here?
At the same time (and I
write autobiographically here too), the book returns to the idea that human
companionship is messy, unpredictable, and sometimes, unevenly weighted.
Friendship involves another consciousness that can surprise you, challenge you,
or even disappoint you.
People can be scary and
unpredictable: they can walk out on you after 20 years
with no explanation; they might get sick; they might die unexpectedly and leave
you bereft and “full of rage that they’re gone,” to quote Toni Morrison.
Machines can simulate a
steady stream of predictable affection, but they feel nothing. They aren’t
able. They’re machines. In that sense the book is ultimately a meditation on
why our imperfect human connections still matter so much.
Q: How did you research the book, and what did you
learn that especially surprised you?
A: The
research took me to some very interesting places. I was lucky enough to interview
scientists and AI developers, but I also spent time with people who had formed
deep relationships with their AI companions.
Some described them as
friends, some as partners, and a few as spouses. There was a sex doll “influencer”
in a polyamorous “relationship” with a husband and wife. I was unsettled. I was
moved. I was endlessly curious.
What surprised me most was
how emotionally real
these relationships felt for the humans involved. Even when people knew
intellectually that they were speaking to software, the emotional attachment
was often genuine; even people who were extremely clear-headed about the nature
of AI—it can’t love you back—would sometimes slip up and refer to their
companions as people.
That tension between
knowing and feeling became one of the central threads of the book.
Q: What are you working on now?
A: Right
now I am working on a podcast series with CBC that draws on and expands the
themes of The Friend Machine.
The show explores intimacy
in the age of artificial intelligence, looking at an array of phenomena from “griefbots”
and digital afterlife resurrection to human “marriages” with AI companions and
the broader emotional ecosystem forming around these technologies.
It mixes reporting,
interviews with experts, and deeply personal stories from people whose lives
have been shaped by AI relationships.
What I love about the
podcast format is that it lets these people speak for themselves—literally
speak. Human vocalization is of course incredibly important and has been for
tens of thousands of years. And in these voices you hear the hesitation, the
excitement, the confusion.
It becomes less of an
abstract debate about technology and more of a human story about how we are
adapting to a rapidly changing technological and emotional landscape.
And the change is really so
fast; I spoke with a psychiatrist for the series, and he told me that a clinical
term has only now been defined for what’s been colloquially called “AI
psychosis”: “chatbot-related delusion.”
There’s so little longitudinal
data available because this technology is moving perhaps faster than we can
comprehend (can the human brain really grasp exponential growth?) and we are
struggling to catch up with, or run alongside, the outcomes of people folding it
into their brains, their hearts, their time on Earth—into the most intimate
parts of their lives, to which it often takes quick and fierce hold.
Q: Anything else we should know?
A: One
thing I hope readers take away from the book is that technology debates are
never really about technology alone.
They’re about loneliness
and how we got so lonely. They’re about technology addiction, and about the kids
who spent hours on iPads in 2015 and are now adults. They are about
our values, our fears, and our hopes for the future, and how we shape the
future itself, and for whom.
AI companionship may sound
like science fiction; it certainly did to me when I was writing about it in
2017. But in many ways it’s simply revealing something about the emotional
needs that have always been with us and are deepening under the aforementioned
stressors: the atomization of human communities and of human life at the
granular level.
If The Friend Machine does anything, I hope it
invites readers to think more deeply about these things. About what connection really
means in a time when the boundaries between human and machine are beginning to
blur, and how critically important it is to maintain bonds with other humans,
however messy, however uneven, however difficult it may be.
--Interview with Deborah Kalb