Why I have cloned Nigel Farage
Applied Stories' artistic director Fin Kennedy explains why he created spoof AI-assisted podcast The Farage Files, to draw attention to the dangers of AI voice cloning
I’ve always been interested in technology because of what it can tell us about identity, and therefore the human condition. Both are perennial interests for playwrights like me.
Early in my career, my breakthrough play How To Disappear Completely And Never Be Found was an existential thriller about how, in gaming modern systems of ID documentation, anyone can generate new, fictional identities, but at the expense of ending up trapped somewhere between life and death oneself. Charlie becomes Adam, who becomes an unidentified corpse on a mortuary slab. (Needless to say, I was not a very happy young playwright.)
In my series The Good Listener, for BBC Radio 4, the increasing sophistication of the surveillance systems available to government spy agency GCHQ triggers questions of loyalty and personal morality in the conflicted central character, senior analyst Henry Morcombe (sensitively portrayed by Owen Teale).
These themes have taken a more fantastical form in some of my work for young people, including The Dream Collector, in which a school group of teenagers has to grapple with the implications of their discovery of a machine which can read and broadcast one’s dreams to an audience, like a movie.
Since starting Applied Stories in 2021, a digital production company making place-based audio drama, I’ve committed full-time to exploring how technology, in particular recorded audio drama, can be harnessed in creative community projects to elevate community casts and celebrate specific localities and their people.
Another project, in Luton - Museum of Stories: Bury Park - added geolocation technology to the mix, co-creating a new app containing a community-led walking tour, with true stories pinned to the locations where they happened.
Like everyone, I was captivated by ChatGPT’s release in 2022. My first instinct was to see how good it was at playwriting (spoiler: awful), or to give it comedy prompts like ‘Macbeth in the style of a modern-day romcom’ and post the results for us all to laugh at. I was smugly reassured that human scriptwriters were not in danger any time soon.
But many felt differently. Hollywood writers in particular won a historic victory after a long strike, securing pledges that studios would not ask writers to improve AI-generated work or edit human work with AI, among other concessions, while professional actors secured important rights around the use of their voice replicas in gaming and other media.
(Prose fiction is another matter, with Jeanette Winterson admitting to being genuinely moved and impressed by an AI short story generated by OpenAI’s new creative writing model.)
Comedy aside, ChatGPT seemed exciting and genuinely useful at first, particularly for trimming down over-long sections of (my own) funding applications to the required length. Or searching online for complex, scattered information.
But then I started realising how much it hallucinated. One particularly sensitive example came when I was asked for a quote for a new audio installation by The Holocaust Library in London.
I was sent some first-hand testimonies from middle-aged Holocaust survivors, all in German. I emailed the library, but for speed also asked ChatGPT to translate them, which it is supposed to be able to do. I noticed that the English translations seemed a bit short; I was expecting the opposite, as German is full of long words and phrases. Then I noticed that the library had actually sent me English translations already. When I compared the two, they were completely different and, when challenged, ChatGPT admitted to having made the whole thing up without reference to the German original at all!
ChatGPT is known for this. Apparently, it ‘hallucinates’ complete fabrications around 30% of the time - a deal-breakingly high rate, even for personal use. I asked it for an itinerary for a three-day visit to the Isle of Arran with children, and it confidently recommended attractions which turned out not to exist (but not before I had told my subsequently disappointed kids). I’ve also been thoroughly depressed by its widespread adoption among students to cheat at assignments, and how difficult this is for teachers in the schools I work in.
But by then I was already becoming interested in other forms of AI.
I still ended up feeling conflicted about using AI-generated images. Shouldn’t we have paid a photographer or designer for them? We did commission a designer for the main project image. But for the individual dreams, given the amount of work involved in creating such specific and wild imagery (two dolphins chatting on a country lane?), we simply didn’t have the budget.
What we needed was also far too niche to find as a usable, rights-free image online. So AI image generation was a bit of a godsend on a small-scale community project like that. But if I were a graphic designer, I would be getting worried.
I recall working for Mulberry School as a writer-in-residence, back in the days when it was just one school rather than a multi-academy trust. The debates we would have about the flyer image for each year’s Edinburgh Fringe play could go on for weeks. Once, we agreed to change the image after a photo shoot and editing process, costing the school a significant amount. Now, I could generate four new images every few seconds and tinker with them endlessly, for a small monthly subscription.
For Letters To Our Daughters, I wanted something warm, youthful, and feminine, with an east London vibe. I still really like the project image we ended up with, which we used for the e-flyer to recruit schools. But it was vetoed by the school as the front cover of the resulting book of poems we published, in favour of a piece of student artwork. Not only was this more in keeping with the spirit of the project; the head of the Trust also said it gave her something to talk about when mentioning the project publicly and handing out copies. I can imagine those conversations going less well if she were asked about an AI image, and for a school which puts creativity at the heart of the curriculum it was absolutely the right decision.
My mind was first truly blown by the vastness of AI’s capabilities when I discovered Google’s NotebookLM.
This extraordinary tool allows the user to upload a set of lengthy texts on any given subject, and then interact with their research by asking questions of it via a chat interface, identifying patterns or rooting out hidden facts. But that isn’t the most impressive part. Another button generates a highly convincing pair of AI voice hosts in conversation about your chosen topic, for anything up to 30 minutes. The resulting podcast is genuinely listenable and informative. I’m always researching obscure subjects either for play research or personal interest, and it has transformed my car journeys to be able to generate my own podcast about, say, the legal stages of a deportation process, or government reports about UFO sightings, all from uploaded links and PDFs found across the web.
But wait - a fully voiced audio podcast in mere seconds? Just consider for a moment the vast amount of data and processing involved in that synthesis, not just ingesting and understanding the written material, but writing a coherent script, and generating the many trillions of data points required for fake AI voices to speak it… And that’s just the readymade ones. AI’s processing capacity is truly mind boggling.
Then I discovered you can clone any real person’s voice from just a 30-second clip. Play HT is the platform I use (there are many), and its interface looks just like the lines of dialogue in a script. What’s more, once the script is written, the voices will render it in any language you like. New features are being added all the time, including options to add emotions via various sliding scales. Membership starts at five dollars a month for some of the most powerful technology on earth.
I was instantly captivated by this powerful new toy. It was like having actors of my choosing living inside my laptop, able to voice my lines as soon as they were written - something you’d previously have had to wait months for, in a development reading or scratch night if you were lucky, and if not, then hearing them for the first time in the rehearsal room or recording studio.
Arguably great for writers, but not surprisingly, organisations like actors’ union Equity are up in arms, and rightly so.
Their long-running campaign Stop AI Stealing The Show contains horrifying real-world examples, such as longstanding audiobook actors being let go by publishers, who can simply clone the actor’s voice from the last novel they read. In another case, an actor discovered his voice was being used without permission to narrate Venezuelan government propaganda.
Astonishingly, this is all completely legal - or rather, not yet illegal. The actors in Equity’s examples had little legal recourse, in a system which protects your image but not your voice. It turns out that we do not own our own voices. The law has not yet caught up.
You can keep up with the debate about this in real time by listening to Equity-backed podcast The Last Human Voice, in which Equity audio committee chair Marcus Hutton follows the latest developments in tech and AI law around the world, with expert guests.
There is some interesting thought leadership about AI going on in UK academia, courtesy of UK Research and Innovation (UKRI), which funds Bridging Responsible AI Divides (BRAID), a six-year national research programme on responsible AI use, based at the University of Edinburgh, whose recent ‘community gathering’ at the Lowry centre in Salford I attended last month.
It was a long but fascinating day, attended by interested parties from across academia, tech, the arts and media, who came to listen to and debate sessions on AI and Society, Countering AI Harms, Regulation and Rights, and Public Media and Democracy.
It started with a panel about police data collaboration, and the risks of automated facial recognition in law enforcement ‘baking in’ racial biases. We are on the cusp of AI being used behind the scenes everywhere, to automate everything from benefits decisions to immigration status decisions. But is this technology really ready to take on such high-stakes functions?
I learned a lot at the BRAID event, including some more benign examples. AI is undoubtedly going to be put to good use in the heritage and archives sector; its ability to ingest, analyse and organise at speed makes it great for clearing backlogs of uncatalogued records. It also holds great promise for greater efficiencies in green energy and other highly specialised technical uses.
But for every example like this, I heard another unsettling one. Consider the granular way that AI is being developed to parse and predict human emotions, by companies like Blueskye AI. The uses to which this can be put range from useful to sinister.
One speaker suggested that the question we should be asking is not ‘is society ready for AI?’ but ‘is AI ready for society?’ Such a powerful new technology should surely be introduced gradually and carefully, rather than in the free-for-all we are experiencing.
The last time a technological disruption was widely and quickly adopted, we ended up with social media straining our social fabric and threatening democracy, with the law struggling to play catch-up.
An afternoon panel touched on AI cloning of people - of human beings - and there are interesting cultural differences. In Japan, the broadcaster Ryukyu Asahi has embraced the technology, deliberately cloning visual and voice avatars of all its main presenters for emergency broadcasts, because, it says, those are the faces and voices its viewers know and trust in warnings about natural disasters. The avatars can also make announcements in multiple languages, for the country's many tourists and overseas residents.
It would be interesting to know what rights over their AI avatars the human presenters negotiated with their employer.
The latest thinking is that a new legal concept of a ‘personal digital twin’ is required, which its human originator would own, control and license. A model already exists to protect digital twins of key infrastructure.
But in our slow-moving legislative system it could be many years before this concept is extended to the person.
The tech companies argue that AI voice cloning will revolutionise customer services interfaces of various kinds, and make promotional podcasts and videos quicker and cheaper to produce en masse.
But it’s just this sort of automation which multiplies the risk.
Imagine a malign individual, organisation or state cloning a world leader announcing a nuclear strike, then being able to release it in multiple languages simultaneously. At the very least it would be a way to manipulate markets.
More prosaically, many banks now use AI voice verification in telephone banking, supposedly safer than passwords and other sensitive information, which are vulnerable to hackers through vast data dumps. But only last year a presenter for BBC Radio 4’s You and Yours managed to get into both her bank accounts with a cloned version of her voice. Already this seems like a gaping vulnerability.
Leaving aside the losses to human customer services agents and voiceover artists, any AI gains in cost-cutting for employers are surely outweighed by the nefarious uses of this technology, which is already being used by scammers and political influencers.
In 2024 alone, Keir Starmer, Joe Biden and Sadiq Khan were all targeted by unknown fraudsters, who created inflammatory deepfake voice clips timed to cause political damage.
In the case of Sadiq Khan, the faked remarks, purportedly prioritising pro-Palestinian marches in London over Armistice Day commemorations, have all the hallmarks of a far-right hit job, and nearly caused serious disorder.
Meanwhile, David Attenborough’s voice has been found narrating far-right propaganda videos in the US.
And that’s just pre-recorded voice fakes.
With live AI speech-to-speech voice cloning, a scammer can use your voice to call your parents to say you’ve been kidnapped, or mugged, and desperately need money transferred… The scammer can speak and respond to questions in real time, but the voice at the other end is yours.
At a political level, this can cause havoc if extended to the voices of powerful officials, as has happened in just the last few days with Secretary of State Marco Rubio.
I don’t mean to sound like a naysayer about AI. In other fields like medicine, its superpowers in pattern recognition are genuinely game-changing in all kinds of ways. As NotebookLM shows, our capacity to research anything is set to explode. But with AI voice cloning in particular, I am struggling to see what this technology is for, or why we need it at all.
There's an interesting example from Venezuela, where honest journalism has become impossible under dictator Nicolas Maduro. Journalists in hiding are using AI presenter avatars to report news the regime wants suppressed, so as not to put themselves at risk.
But this is a rare example of a social good from AI cloning of the body and voice. And anyway it involves fictional avatars, not clones of human beings.
Overall, I’ve come to the conclusion that there aren’t really any constructive applications for AI voice cloning. The voice is so uniquely personal, it's a bit like being able to steal someone's soul. And because almost no one even knows it is possible, we remain uniquely vulnerable to manipulation in the audio domain. When are we going to see the first military order given with a cloned voice?
To try and package some of my concerns up into a creative project, last year I took the opportunity to pitch BBC Radio 4 with an idea for a new series.
I Can Make You Say Anything was an idea for a five-part audio drama set amid Keir Starmer’s re-election campaign in 2029. Inspired by real-life events, in the drama he is plagued by a series of anonymous voice clips, purportedly surreptitious recordings of Starmer swearing at staff and being a boorish bully behind closed doors. With the party PR machine in overdrive to prove the clips are AI fakes, a young staffer investigates, because if she can trace the source back to a rival party it’s political dynamite… Or do they have a mole in the team, and the clips are real? It's no longer possible to tell.
But the real twist in the tale is that in order to make the drama, we wouldn't just clone Starmer’s voice, but would use AI voices for every character in the show. It would be an experimental radio play, the first made without using any actors at all, harking back to the now-legendary 1970s BBC Radio experiment, Andrew Sachs’ radio play without words, The Revenge.
Every actor and director I know thinks I'm an absolute monster for even considering this idea. For the record, my intention was to showcase the power of what is now possible, and kickstart a public debate via our national broadcaster, by doing it once. I certainly wouldn't have wanted to start a trend!
The BBC is not a risk-taking organisation, and to assuage my producer’s concerns I suggested reaching out to both Starmer’s team and actors’ union Equity, proposing a one-off partnership on this particular project to warn about the dangers of this technology and the need for regulation.
After some weeks of back and forth - they did briefly seem to entertain the idea - the BBC perhaps predictably didn't go for it. Quite apart from the political risks and the upset to actors, it turns out there is a blanket ban at the BBC on using AI for anything, ever, because of its main function as a news organisation.
While this is understandable, it means there is no space for creative experimentation with AI at our national broadcaster, which seems a pity.
It would be interesting, for example, to bring long-dead voices back to life - something which is already happening with presenter Michael Parkinson, with the full blessing and involvement of his estate. But via a private production company.
I have an idea I want to explore about whether it's possible to clone the voices of different animals from their baahs, bleats and moos, to hear what a cow or a pig speaking English might sound like (the experiment didn’t seem to work on Play HT, but I’m going to keep trying).
Longer term, I’m working up a stage show for national touring: a live onstage demonstration of AI voice cloning, in which the voices of audience members are (voluntarily) mixed into a live soundscape, with no control over what their own voices say.
In the meantime, and having said all that, I have to say that I think I have found one positive use of AI voice cloning after all… and that's our old British faithful - COMEDY! In particular, political satire. No longer do you need to be Rory Bremner to interview the rich and powerful. All you need is a laptop and a five-dollar-a-month subscription.
It turns out that some people’s voices clone exceptionally well. It's usually those with highly pronounced or distinctive voices or accents; deep and rich works best. I have been experimenting with a range of public figures, and discovered that by far the best of them all is the silver tongue and dulcet tones of one Nigel Farage.
In order to run my experiments, I naturally had to write some text for each voice to say. And, as sometimes happens in dramatic writing, characters start to leap off the page.
Keir Starmer was droney and apologetic; Kemi Badenoch was bombastic and angry.
David Attenborough sounded eerily good, but such an iconic voice feels almost sacred when it's saying your words; I didn't want to mess with it too much.
But one character in particular stole the show. Before I knew it, the Alan Partridge-esque Mr Farage had come alive before my very ears, answering back and scolding me for doing this at all, while desperately wanting to use the show to appeal to his man-crush Donald Trump.
The whole thing took a few days. My fellow artists will recognise the spell which took hold. But once I was released from the trance, the result appeared to be three episodes of The Farage Files, a new podcast series of me in conversation with a voice which sounds very much like a well-known far-right politician.
It's a genuinely interesting playwriting challenge to test your skills now and then by trying to write a character completely unlike yourself. It's even an exercise I sometimes set for my writing students. My intentions are sincere when I say this is the best way for me to try to understand Farage’s point of view. They're saying he might be the next Prime Minister, so I think we'd better start trying. Maybe if he got the job, he could make regulating AI voice cloning one of his priorities?
As any Musical Director in theatre will tell you, characters often express themselves best through song. So I've used AI to write the music and make him sing as well. Yes - you can take a spoken clip and feed it into a machine which makes it sing.
Obviously it isn’t Nigel Farage. It’s Farridge. It isn’t really me either. I call him the Finterviewer. Both have taken on a life of their own, like any fake identity. (I still get emails from Adam, and sometimes Charlie.)
Just to be clear: I think AI voice cloning should be banned. It’s a dangerous technology which in the wrong hands could be lethal, while the regulatory environment, never mind wider society, is nowhere near ready for it.
All this is moving so fast. Video cloning appears to be here already, yet we haven’t even started the debate about audio. My feeling is that we are naturally more sceptical of visual imagery, after decades of Photoshop and image manipulation on our phones. But we are uniquely vulnerable to audio fakes, where our natural scepticism is non-existent, simply because it's never been done before, and most people don’t even know it’s possible.
I’m on a mission to change that.
As a culture-maker, it’s my instinct to bring the issues of the day to life in as entertaining a way as I can, as a means of starting the debate. That has been drama’s function since the ancient Greeks.
I couldn't interest the BBC, so doing it independently is the next best thing.
So please have a listen, and hopefully a laugh, then let me know what you think - and tell your friends.
You can leave a comment below, or email thefaragefiles@appliedstories.co.uk, or @appliedstories on social media.
And Nigel? Sorry old pal. Someone had to be the guinea pig. You simply had the best voice.
But who knows - perhaps you will turn out to be exactly what is needed to drive this issue up the political agenda.
Nigel Farage could yet save us all.
The Farage Files is now available free on SoundCloud and all major podcast platforms.
Please note the series contains adult content, making it unsuitable for under-16s.
If you’d like to donate to hear more, including exclusive personalised audio gifts, check out our Crowdfunder.