In tears, after listening to their deceased loved ones, several women admitted their astonishment on the television show El Hormiguero. "It felt so real and I needed it, I really needed it," sobbed a young woman in front of the cameras. "The voice is amazing; I'm very happy with the experience," added another, wiping away tears. Pablo Motos' program used artificial intelligence to recreate, from real audio recordings, the voices of the dead. Cloning a voice in this way is technically simple, and it has already caused misinformation problems, as with the deepfakes made of the voices of Joe Biden and the leader of the British Labour Party, Keir Starmer. The generated audio asked the participants leading questions ("Shall we keep talking?") in this "real experience," as the program calls it, which took prime-time television into an emerging market: recreating the deceased with artificial intelligence (AI). Psychologists warn that this can interfere with the natural process of grief and make its most painful phases chronic.
The death of a loved one is like losing a part of yourself. It is the source of many emotional difficulties, and many people would do anything to alleviate that overwhelming sense of loss, even talking face to face with their loved one if it were possible. It sounds like science fiction, but companies like HereAfter, StoryFile and Replika do just that, and there is nothing supernatural about it. Using interviews and other content, they create digital versions of deceased people that can interact with the living through chat, voice or video. In China this business is already growing, with several companies claiming to have created thousands of these digital personalities, or ghost bots. Some even claim they can do it with only 30 seconds of audiovisual recording of the deceased.
The American company StoryFile interviews people during their lifetime on video, asking a series of questions about key experiences such as their childhood, marriage or greatest challenge, plus any others the interviewee decides to add. From the responses, and using artificial intelligence, it generates a conversational video that children, parents, friends and relatives can interact with in the future. According to the company, around 5,000 people have already created profiles on the platform. The cost of the service ranges from 40 to 450 euros, depending on the number of questions included. A free trial is also offered.
Stephen Smith, co-founder of StoryFile, explains that the company was born ten years ago with the aim of preserving the memory of Holocaust survivors. But it was at the end of 2021 that the platform became what it is today, allowing anyone to record videos from home with a webcam or in a studio.
The co-founder emphasizes that the platform does not invent content, but rather "recovers something pre-recorded" that already exists. It is possible, however, to go further and add information from other formats. "We did this using our conversational archive methodology. That means using content from the person's life, such as a video from which we can clone the voice and then have them say things they said during their life. For example, you can take an email and have it read aloud. If someone wants this to happen, it's possible," he told EL PAÍS by video conference.
The danger of becoming addicted
Perhaps the most worrying element is that some people could become dependent on, even addicted to, speaking with virtual avatars, because these generate a false sense of closeness to the dead, as the Antena 3 program showed. The volunteers spoke directly to the voice ("I'll tell you...", "I miss you") as if the synthetic recreation were the grandmother who had died a year earlier.
"At first there is relief. But then an addiction, a dependence, arises," warns José González, a psychologist specializing in grieving processes. "If AI literally reproduces what the person was like, there is a great danger of the grief becoming chronic, especially in very intense relationships. It is easy to indulge in the fantasy that they are not dead, and that can freeze the mourner in the denial phase," he continues.
The expert, who has worked with more than 20,000 bereaved people over 25 years, agrees that conversational videos can be useful for preserving memories, telling anecdotes or passing information between generations with emotion. They could also help reproduce some of the techniques used in therapy to resolve unfinished business that could never be settled in conversation. "I ask a few questions about the bond with the deceased person, for example 'what I liked most about you' or 'when you disappointed me the most.' With those answers, the mourner writes a letter and reads it to an empty chair," he explains. In his view, AI could in time be applied to dynamics like this, provided it is closely supervised by a professional.
González emphasizes that there is also a risk linked to what is expressed in these recordings. Farewell messages can be very powerful and can ease suffering, because they are the moment when you tell your family how much you love them; this frees them from guilt and makes grieving much more bearable. Without expert supervision, however, even the best intentions can backfire. "Imagine that I am the father of an only daughter and I say to her: 'I leave you the vital mission of taking good care of your mother.' It can be very beautiful, but it can also become a burden if the mother falls seriously ill," he illustrates. In such cases, a professional would advise the father to phrase things differently to avoid creating an emotional burden; without that oversight, the likelihood of harm increases.
An ethical problem
How faithful can an avatar be? Who does it belong to? What kind of data can be used to create it? These are just some of the questions raised by this topic. For Gry Hasselbalch, an ethicist at the European Research Council, the implications reach an existential level: "Any technology based on the fact or the idea that it can compete with humans raises the question of what it means to be human, what our limits are, and whether it can be used to push past a limit."
Hasselbalch, who is also a co-founder of the Danish think tank DataEthics.eu, believes that the proliferation of avatars of the deceased represents a dilemma that goes beyond data, consent or rights. "It could change the identity of humanity and of the human being, because it challenges the very idea of mortality," she says.
Among several potential problems, the AI ethics expert highlights the possibility of a tool that collects not only a deceased person's social media content, emails and phone messages, but also their internet search history. This could reveal hobbies or interests unknown to their relatives, from a passion for an animal or a sport to, in the worst case, a dark secret.
If artificial intelligence combines this information with other elements of a person's identity but gives more weight to certain aspects, the result could be an avatar or bot that looks nothing like what that person was like in real life. That is a scenario in which "control would be lost," she warns. And it is not the only one. "How easily could you be manipulated if a loved one you miss tells you to vote a certain way or to buy specific things? We don't know what companies will emerge behind this," she reflects.
"Deepfakes" and copyright
One of StoryFile's clients is the late Sam Walton, founder of the retail giant Walmart. "We worked with his company archive. We reviewed many hours of material, transcribed his speeches and videos, and created 2,500 answers to questions he had answered throughout his life, in the exact words he used," describes company executive Alan Dranow. The result was a digital recreation with Walton's face and voice, presented as a life-size hologram. Can it really be that convincing? "People who knew Sam get misty-eyed because of its realism," Dranow says. The businessman's family agreed to this production, but other famous figures have seen their faces and words recreated by AI without any such consent.
This is the case of the American comedian George Carlin, who died in 2008 and whose voice and style were cloned for the podcast special "George Carlin: I'm Glad I'm Dead," posted on YouTube in early January. Last week, a lawsuit was filed in a Los Angeles federal court demanding that Dudesy, the company behind the project, immediately remove the audio special. His daughter, Kelly Carlin, had previously criticized the production, in which a synthesized version of the artist's voice comments on current events. "My father spent his life perfecting his craft through his humanity, his brain and his imagination. No machine will ever replace his genius. These AI-generated products are clever attempts to recreate a mind that will never exist again. Let the artist's work speak for itself," she said on the platform.
According to StoryFile, the service incorporating the most advanced version of this technology is available only to a restricted group. "We don't offer it as a product on our website at the moment, but rather to private clients. We don't want our technology used to create a deepfake of someone," Smith adds.
However, there are alternatives that do exactly that. The company HeyGen, for example, lets users generate videos with voice cloning, lip syncing and different speaking styles. Unless you look very closely, it is almost impossible to tell that the result is an artificial creation. Although the platform is presented as a solution for personalizing and translating content in the business world, in practice it can be used for any such purpose: saying goodbye to a loved one, or making money.