Only a few seconds of sampled speech is enough to create an AI copy of a person's voice, and researchers are unsure why these clones are so intelligible.
From the Journal: The Journal of the Acoustical Society of America

WASHINGTON, April 21, 2026 — Synthetic voices are increasingly a part of our lives, from digital assistants like Siri and Alexa to automated telemarketers and answering machines. With the expansion of generative AI, a new type of synthetic voice has been developed: voice clones, which can recreate a facsimile of a person’s voice from only a few seconds of recorded speech.
In JASA, published on behalf of the Acoustical Society of America by AIP Publishing, a pair of researchers from University College London and the University of Roehampton compared the intelligibility of human voices and voice clones. They found that voice clones are easier to understand than human voices in noisy environments.
Voice clones differ from traditional synthetic voices in the amount of sampling they require. Synthetic voices like Siri require a voice actor to spend hours in a recording booth. In contrast, a voice clone can be made from as little as 10 seconds of speech, significantly expanding the number of potential voices as well as the number of potential applications.
Researchers Patti Adank and Han Wang specialize in studying how humans perceive unclear speech and were fascinated by the idea of machine-replicated speech. A key question they were looking to answer was just how easy voice clones are for the average person to understand. They suspected that voice clones would simply be poor imitations of actual human voices and that people would struggle to understand them. What they found could not have been more different.
“I thought initially that voice clones would be less intelligible because they were unfamiliar,” said Adank. “I found they were up to 20% more intelligible, which was quite shocking. A small part of our paper is talking about that experiment, and then a large part is me and my collaborator frantically trying to find out what it is that makes those voice clones more intelligible.”
The duo initially presented volunteers with human voices and voice clones, asking them to rate how intelligible each voice was. After finding that voice clones were consistently rated easier to understand, they repeated the experiment with older volunteers, to determine whether being hard of hearing alters the effect; with American volunteers (the original cohort was British), to judge whether accent plays a role; and with a filter designed to mimic cochlear implants. In every case, voice clones emerged victorious.
After examining over 100 acoustic measurements, Adank believes the only way to solve the mystery is to work with collaborators who specialize in text-to-speech systems to adapt an existing open-source cloning system.
“I am now going to try and recreate [the effect] by studying how synthesizers work and how they use digital signal processing to generate those voices, just to get a bit of a handle on this,” said Adank.
###
For More Information:
AIP Media
+1-301-209-3090
media@aip.org
Article Title
Authors
Patti Adank and Han Wang
Author Affiliations
University College London and the University of Roehampton