Deepfake audio has a tell – researchers use fluid dynamics to spot artificial imposter voices.
"This realization demonstrates that deepfake audio, even when convincing to human listeners, is far from indistinguishable from human-generated speech. By estimating the anatomy responsible for creating the observed speech, it’s possible to identify whether the audio was generated by a person or a computer."
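To make the "estimating the anatomy" idea concrete, here is a minimal illustrative sketch, not the authors' actual fluid-dynamics pipeline. It uses the textbook uniform lossless-tube model of the vocal tract (closed at the glottis, open at the lips), whose resonances fall at F_n = (2n-1)·c/4L. Inverting each measured formant gives an implied tract length; a real speaker's formants should imply roughly one consistent, anatomically plausible length, while synthesized audio need not. The plausibility thresholds below are illustrative assumptions, not values from the paper.

```python
C = 34300.0  # approximate speed of sound in warm, humid air, in cm/s

def implied_tract_lengths(formants_hz):
    """Invert F_n = (2n-1)*c/(4L) for each formant, giving lengths in cm."""
    return [(2 * n - 1) * C / (4 * f) for n, f in enumerate(formants_hz, start=1)]

def looks_human(formants_hz, lo=12.0, hi=20.0, spread=3.0):
    """Heuristic check: every implied tract length must be anatomically
    plausible (lo..hi cm) and mutually consistent (within `spread` cm).
    Thresholds are illustrative, not taken from the paper."""
    lengths = implied_tract_lengths(formants_hz)
    return (all(lo <= length <= hi for length in lengths)
            and max(lengths) - min(lengths) <= spread)

# A ~17.5 cm tract predicts formants near 490, 1470, and 2450 Hz,
# so these all imply the same plausible length.
print(looks_human([490.0, 1470.0, 2450.0]))

# These formants imply wildly different lengths (~9.5, ~21.4, ~15.0 cm),
# so no single human vocal tract could have produced them.
print(looks_human([900.0, 1200.0, 4000.0]))
```

Real formant tracking (e.g. via LPC analysis) and the paper's fluid-dynamics modeling are far more involved; this only shows why inconsistent implied anatomy is a usable tell.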
By Logan Blue, Patrick Traynor
Welp, I give it a month before they incorporate that anatomical estimation into deepfake generation.
@corlin
This is good news. And deep fake videos are really shitty, from what I've seen.