Text-to-speech voices as human remains
Alice Ross, CTMF PhD Fellow
Positionality
I am a researcher approaching speech technology from a feminist/anarchist angle. I value autonomy, accessibility, and sustainability, and I am interested in exploring what we stand to lose or gain in adopting new technologies: Who is empowered? Who is disenfranchised? I’m also someone whose development was shaped by grief.
Voice as data
At the Centre for Speech Technology Research here in Edinburgh, and in labs like it worldwide, my colleagues work on improving the smoothness and fidelity of text-to-speech (TTS) towards the goal of accessible communication for all: enabling people with motor neurone disease (MND), paralysis, or injuries to express themselves with the full range of their own voices, and people with visual or reading impairments to access text-based resources, presented engagingly in their preferred language. But elsewhere, TTS models are used to produce misinformation, deepfake pornography, fraudulent phone calls that terrify and scam relatives by impersonating their loved ones, and ‘generative’ video that undermines the creative work of human voice actors, filmmakers, animators, and video game developers. They can also become part of a kind of contemporary necromancy.
Voice as relic
In 2021, the documentary Roadrunner: A Film About Anthony Bourdain attracted widespread controversy over its brief use of a synthetic voice trained using publicly available speech data from the film’s late subject. Critiques focused on the filmmakers’ failure to disclose, or consult Bourdain’s widow about, their use of AI [Yang, 2021].
Commercial speech synthesis platform ElevenLabs, founded in 2022, recreates famous speakers’ voices for use in its ElevenReader app. “We’re partnering with the world’s most iconic voices,” claims a webpage offering the vocal likenesses of Dr. Maya Angelou (1928–2014), Prof. Richard Feynman (1918–1988), and Judy Garland (1922–1969), alongside other voices presented as trademarked products [ElevenLabs, 2025].
And in the private sphere, ‘griefbots’ – models trained on a deceased person’s text and speech data so that loved ones can interact with them – are an active topic of research and development. Proponents frame them as a novel therapeutic tool, able to support the mourning process and emotional regulation through continuing ‘habits of intimacy’ [Krueger & Osler, 2022].
Voice as individual
When we consider the prospect of using a person’s voice data after their death, one obvious objection concerns their consent. Best practice when collecting any data is for consent to be informed (participants must know, specifically, how their data will be used and who will have access) and revocable (they have the right to opt out and withdraw their data from future work); neither condition holds in the cases above. What ‘partnership’ can ElevenLabs conceivably have with Judy Garland, who died in 1969, decades before this technology was even imagined?
On this point, the case of celebrities – whose voices were broadcast and well known during their lives – may be seen as distinct from that of ‘ordinary’ individuals. Indeed, consent is frequently discussed in the scholarship on griefbots, which are more likely to be modelled on non-celebrities, deployed on a small scale with few users, and (hopefully) not for profit. But dismissing any individual’s rights over their identifiable data sets a problematic precedent. From Scarlett Johansson’s high-profile legal battle to prevent OpenAI using a ‘sound-alike’ of her voice for its chatbot [Jones, 2024], to the spread of abusive deepfake imagery exploiting popular streamers’ likenesses [Twomey et al., 2025], normalising the attitude that celebrities are public property risks limiting the autonomy of women and minoritised people: when the risks of exploitation are demonstrably high, the most vulnerable withdraw their personal data from internet discourse, and thereby from modern public life.
Mourn the dead, fight like hell for the living
There is nothing new about memorialisation: across cultures and throughout history, humans have treasured reminders of loved ones and created tangible sites – grave markers, reliquaries and shrines – in their honour. Enthusiasts will declare that the griefbot is simply a new, interactive memorial, but this fails to consider the importance of somatic and communal mourning practices. Cindy Milstein argues that the ‘collective work of grief’ builds empathy and interdependence, essential for the struggle against capitalist alienation [Milstein, 2017]. We are motivated to improve quality of life for all when we appreciate life’s fragility and death’s finality, including when we come together to share and witness the pain of loss. As Judith Butler writes: “Open grieving is bound up with outrage, and outrage in the face of injustice or indeed of unbearable loss has enormous political potential” [Butler, 2015].
Memorialisation is also a long-established site of inequality, in terms of who is remembered and how: wealthy capitalists were buried in ornate marble mausoleums on land that could have housed the living. Language modelling technology, notorious for its hunger for compute and resources, will perpetuate that injustice rather than democratise grief: fine-tuning a personalised model demands a large corpus, the kind only someone with a hefty digital footprint leaves behind – the more training data, the more convincing the output.
Livable lives and grievable deaths
‘Griefbots’ and other AI-enabled reanimations risk doing a disservice to both the living and the dead. A death that does not preclude returning as a chatbot ghost – a puppet that turns our unique human self-expression into statistically probable sequences – is not a good death. And a life where our (work and) leisure time is spent gazing into small screens, ‘interacting’ with artificially generated characters to the point of neglecting the complicated, living creatures around us, is not a good life.
For millennia we have held on to what we can of the people we love after they die, but this new development is calculatedly addictive, as well as ecologically and socially destructive. It’s hubristic and subtly dehumanising: if we feel entitled to put words in another person’s mouth and make them talk tirelessly, with the chatbot’s documented tendency towards sycophancy, how much does their death matter? Whether the purpose is grief support, entertainment, or profit, that is too much power over another person’s likeness. We can learn from loss, and from realising what we stand to lose.
About the contributor:
Alice Ross is a PhD student at the UKRI Centre for Doctoral Training in Natural Language Processing, working on speech technology and human-computer interaction. Supervised by Dr Catherine Lai and Prof Lauren Hall-Lew, her project explores the experiences, attitudes, and concerns of people with speech difficulties as they encounter new contexts such as voice interfaces and videotelephony. Alice’s background is in Linguistics, and she worked in video game development at Rockstar Games for almost ten years before returning to academia. She considers speech technology and its applications from a feminist/anarchist angle. Autonomy, accessibility, and sustainability are key values, and she is interested in what we stand to gain and lose in adopting new technologies: Who is empowered? Who is disenfranchised?
References
Butler, J. (2015). ‘Judith Butler: Precariousness and Grievability—When Is Life Grievable?’. Verso Books blog. Accessed: 07/08/25.
ElevenLabs (2025). ‘Iconic Voices’. Accessed: 07/08/25.
Jones, N. (2024). ‘Who owns your voice? Scarlett Johansson OpenAI complaint raises questions’. Nature.
Krueger, J., & Osler, L. (2022). ‘Communing with the dead online: chatbots, grief, and continuing bonds’. Journal of Consciousness Studies, 29(9–10).
Milstein, C. (ed.) (2017). Rebellious mourning: The collective work of grief. AK Press.
Twomey, J., Foley, S., Robinson, S., Quayle, M., Aylett, M. P., Linehan, C., & Murphy, G. (2025). ‘“What do you expect? You’re part of the internet”: Analyzing Celebrities’ Experiences as Usees of Deepfake Technology’. arXiv preprint arXiv:2507.13065.
Yang, M. (2021). ‘Anthony Bourdain documentary sparks backlash for using AI to fake voice’. The Guardian.