AI clones Australian radio host for 6 months (and nobody notices)
The future of AI replacing humans in what have long been seen as very human jobs was revealed last month when the Australian Radio Network (ARN) was outed for failing to tell listeners that the new host of its CADA station’s 11 am to 3 pm weekday shift, Thy, was an AI-generated creation.
The show, called Workdays with Thy, offers a four-hour mix of hip hop, R&B, and pop, and started six months ago. CADA’s website depicted Thy with an image of a woman of Asian appearance.
ARN, the media company behind KIIS, home of Australia’s most expensive, most complained-about and most censured radio show, Kyle and Jackie O, confirmed to the Financial Review that Thy was an AI clone. The voice was based on an ARN finance department employee who agreed to have it copied by the AI text-to-speech company ElevenLabs and her image used for marketing.
The issue was first reported on 14 April, when journalist Stephanie Coombes received a tip-off and, after some digging, found she was unable to turn up any other digital presence or backstory for Thy. She wrote:
“But perhaps the strangest thing about Thy is that she appears to be a young woman in her 20s who has absolutely no social media presence. This is particularly unusual for someone who works in the media, where the size of your audience is proportionate to your bargaining power in the industry.
There are no photos or videos of Thy on CADA’s socials, either. It seems she was photographed just once and then promptly turned invisible.
It’s not just Thy’s lack of social media presence which is unusual. It’s her lack of presence… anywhere.
It seems very odd that CADA hired a new ethnically-diverse woman to their youth station and then just forgot to tell anyone.”
ARN project leader Fayed Tohme subsequently acknowledged the use of AI to create the voice of Thy, writing in a since-deleted LinkedIn post that Thy “sounds real” and has real fans, despite not being a real person.
“No mic, no studio, just code and vibes,” he wrote in the post, which was shared by Mediaweek. “An experiment by ARN and ElevenLabs that’s pushing the boundaries of what ‘live radio’ even means.”
ARN said in a statement that it was “exploring how new technology can support great content” and improve its output.
“We’ve been trialling AI audio tools on CADA using the voice of Thy, an ARN team member,” the statement read.
“This is a space being explored by broadcasters globally, and while the trial has offered valuable insights, it’s also reinforced the unique value that personalities bring to creating truly compelling content.”
There are currently no rules against the use of AI in broadcast content, according to the Australian Communications and Media Authority (ACMA).
The episode prompted several questions.
Firstly, Thy had been broadcasting for six months and nobody seemed to have suspected that ‘she’ was not human. What does that say about listeners’ expectations of (or attention to) a radio host?
Secondly, ARN does not have a single diverse host on any of its shows across its three major stations (KIIS, GOLD and CADA). Mediaweek asked ARN to provide details on its diversity quota, but the company chose not to respond. What does it say about the state of hiring in this country that one of the highest-profile employers in Australian media preferred to create a diverse hire artificially rather than hire a diverse human? As Startup Daily wryly noted, “ARN declares it’s ‘an inclusive workplace embracing diversity in all its forms’, and that appears to include Large Language Models.”
Thirdly, ARN wasn’t saving much (or any) money, at least for now, but what happens when humans become far more expensive than a cloned AI alternative? As Stephanie Coombes said, “You could pay someone $30 an hour to do that job. Assuming they sat in the studio for the entirety of the shift, we’re talking about $35,000. It’s a paltry amount of cash.” An employer still preferred AI even when the cost differential was modest, if it existed at all. Mediaweek reported, “Despite several attempts, the company is yet to respond to our request for clarification on whether or not the finance employee was paid for her participation.”
Finally, what’s the future for creative talent in the arts if such human-replacement tactics are happening now, not even three years after the release of ChatGPT? As Australian Association of Voice Actors vice president Teresa Lim said on LinkedIn about ARN’s ‘partnership’ with ElevenLabs, “Does it involve any voice data coming through the network being included in any other cloning? Or being replicated into new products without our consent? Are we missing out on any rightful compensation for content being created from an AI version of our voices? Worse still, are any hybrid voices of ours being created without our knowledge? Are our voices being sold off without our knowledge or consent?”
As The Guardian prophetically headlined on 13 November 2022, just over two weeks before the launch of ChatGPT, “When AI can make art – what does it mean for creativity?”
Related blogs
Could Hollywood writers have pointed the way to a sensible AI future?
AI powers its way into recruiters’ jobs with GPT-4 (and its cousins)
Would an AI-generated script from your database be an advantage or an embarrassment?
Proprietary data integrity quickly becomes a gamechanger for recruiters