Deepfake audio has a tell – researchers use fluid dynamics to spot artificial imposter voices.

"This realization demonstrates that deepfake audio, even when convincing to human listeners, is far from indistinguishable from human-generated speech. By estimating the anatomy responsible for creating the observed speech, it’s possible to identify the whether the audio was generated by a person or a computer."

By Logan Blue, Patrick Traynor

theconversation.com/deepfake-a
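For readers wondering how "estimating the anatomy" could work in practice, here is a minimal Python sketch of the general idea, not the authors' published pipeline: model the vocal tract as a chain of acoustic tubes, recover a relative cross-sectional area profile from each speech frame via linear prediction, and flag frames whose estimated anatomy looks physically implausible. The filename, LPC order, and plausibility threshold are illustrative assumptions.

```python
# Minimal sketch of vocal-tract-shape estimation from speech (NOT the authors'
# published method): linear prediction -> reflection coefficients -> relative
# tube areas, then a crude plausibility check.

import numpy as np
import librosa


def lpc_to_reflection(a):
    """Step-down (reverse Levinson) recursion: LPC polynomial -> PARCOR coefficients."""
    a = np.array(a[1:], dtype=float)       # drop the leading 1 of A(z)
    p = len(a)
    k = np.zeros(p)
    for i in range(p - 1, -1, -1):
        k[i] = a[i]
        if abs(k[i]) >= 1:                 # numerically unstable frame; skip it
            return None
        if i > 0:
            a = (a[:i] - k[i] * a[i - 1::-1]) / (1 - k[i] ** 2)
    return k


def vocal_tract_areas(frame, order=12):
    """Relative cross-sectional areas of a lossless-tube vocal tract model."""
    a = librosa.lpc(frame, order=order)
    k = lpc_to_reflection(a)
    if k is None:
        return None
    areas = [1.0]                          # area at the glottis end, arbitrary units
    for ki in k:                           # area ratio from each reflection coefficient
        areas.append(areas[-1] * (1 + ki) / (1 - ki))   # sign convention varies by text
    return np.array(areas)


# Hypothetical usage: scan an utterance and flag anatomically odd frames.
y, sr = librosa.load("suspect_utterance.wav", sr=16000)   # illustrative filename
frame_len, hop = 512, 256
for start in range(0, len(y) - frame_len, hop):
    frame = y[start:start + frame_len]
    if np.max(np.abs(frame)) < 1e-6:       # skip silence
        continue
    areas = vocal_tract_areas(frame)
    if areas is not None and areas.max() / areas.min() > 50:   # assumed plausibility bound
        print(f"frame at {start / sr:.2f}s: implausible vocal tract shape")
```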

@corlin

This is good news. And deepfake videos are really shitty, from what I've seen.

@corlin

Welp, I give it a month before they incorporate that anatomical estimation into deepfake generation
