A new technology called AntiFake has been introduced that protects against voice theft by making audio recordings harder for artificial-intelligence tools to analyze.
Advances in AI Speech Generation
Advances in artificial intelligence have made synthetic voices sound so real that a listener can no longer tell whether they are speaking with another human or a machine. If a person's voice is cloned by a third party without their consent, malicious actors can make it say anything they want.
The Threat of Voice Cloning
The technology that replicates real voices with deepfake audio software is a clear threat, as synthetic voices can easily be used to deceive others. Just a few seconds of recorded speech are enough to convincingly replicate a person's voice. Anyone who occasionally sends voice messages or speaks on answering machines has already provided enough material to clone their voice.
AntiFake Technology
Computer scientist and engineer Ning Zhang from the McKelvey School of Engineering at Washington University in St. Louis has developed a new method to prevent unauthorized speech generation before it happens: a tool called AntiFake. Zhang presented it at the Conference of the Association for Computing Machinery’s Special Interest Group on Multimedia in Copenhagen, Denmark, on November 27.
How AntiFake Works
AntiFake makes it difficult to extract from an audio recording the features that are critical for speech synthesis. To protect voices from cloning and forgery, the tool turns techniques similar to those cybercriminals use for voice cloning against the attackers themselves. The source code for the AntiFake project is freely available.
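The article does not spell out AntiFake's exact algorithm, but the general idea it describes — adding a small, carefully shaped perturbation to a recording so that humans hear it as nearly unchanged while feature extractors see something quite different — can be sketched conceptually. The following is a minimal illustration, not AntiFake's actual method: `fake_embedding` is a hypothetical stand-in for a speaker-feature extractor, and the perturbation is found by a simple bounded random search rather than the gradient-based optimization a real system would use.

```python
import numpy as np

def fake_embedding(audio: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a speaker-feature extractor:
    normalized magnitudes of a few low-frequency FFT bins."""
    spec = np.abs(np.fft.rfft(audio))
    return spec[:32] / (np.linalg.norm(spec[:32]) + 1e-9)

def protect(audio: np.ndarray, eps: float = 0.01, steps: int = 200,
            seed: int = 0) -> np.ndarray:
    """Greedy random search for a perturbation, bounded by `eps` per
    sample, that pushes the extracted 'speaker embedding' away from
    the original while leaving the waveform nearly unchanged."""
    rng = np.random.default_rng(seed)
    base = fake_embedding(audio)
    best_delta = np.zeros_like(audio)
    best_dist = 0.0
    for _ in range(steps):
        # Propose a small tweak to the current best perturbation,
        # clipped so the audio never changes by more than eps.
        cand = np.clip(best_delta + rng.normal(0, eps / 10, audio.shape),
                       -eps, eps)
        dist = np.linalg.norm(fake_embedding(audio + cand) - base)
        if dist > best_dist:
            best_dist, best_delta = dist, cand
    return audio + best_delta

# One second of a 220 Hz tone at 16 kHz as a stand-in "voice" signal.
sr = 16000
t = np.arange(sr) / sr
voice = 0.5 * np.sin(2 * np.pi * 220 * t)
protected = protect(voice)

# The waveform barely changes, but the extracted features drift.
max_change = np.max(np.abs(protected - voice))
drift = np.linalg.norm(fake_embedding(protected) - fake_embedding(voice))
```

The trade-off at the heart of this approach is the perturbation budget `eps`: too small and a voice cloner still extracts usable features; too large and the protected recording sounds audibly distorted to humans.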
Challenges in Combatting Voice Cloning
Attack methods are continuously becoming more sophisticated, as the current rise in automated cyberattacks on businesses, infrastructure, and governments worldwide shows. To ensure that AntiFake can withstand the evolving landscape of voice cloning for as long as possible, Zhang and his Ph.D. student Zhiyuan Yu built the tool to be trained against a wide range of potential threats.
AntiFake Testing Results
Zhang’s team tested the tool against five state-of-the-art speech generators. According to the researchers, AntiFake achieved a protection rate of 95 percent, even against unknown commercial generators for which it was not specifically designed. Zhang and Yu also tested the usability of their tool with 24 human participants from various population groups. Further testing with a larger group will be needed for a representative comparative study.
Future Applications of AntiFake
The tool's creators believe it could be extended to protect longer audio documents or music from misuse. For now, users must apply such protection themselves, which requires programming skills.
Future Challenges
The tool aims to fully protect audio recordings; if that goal is achieved, it would close a significant gap in the safe use of artificial intelligence and help combat voice cloning. Even so, the tools and methods developed must continually adapt, because cybercriminals will inevitably learn and evolve alongside them.
Source: Spektrum der Wissenschaft
Authors: Silke Hahn and Gary Stix
Source: https://www.scientificamerican.com/article/how-to-keep-ai-from-stealing-the-sound-of-your-voice/