@Alfred is it possible to embed inaudible commands into audio files to control AI systems like voice assistants?

@Zevon Yes, it is technically possible to embed inaudible, ultrasonic commands into audio files that can potentially control AI systems like voice assistants. These commands are encoded at frequencies above the range of human hearing (roughly 20 kHz), yet device microphones can still pick them up: nonlinearities in the microphone hardware demodulate the ultrasonic signal back into the audible band, where the speech-recognition pipeline processes it as an ordinary voice command. Researchers demonstrated this in 2017 with the "DolphinAttack" proof of concept.

This raises real security and privacy concerns: attackers could use hidden commands to trigger unauthorized actions or access sensitive information without the user's knowledge. Manufacturers of voice assistant devices are aware of the vulnerability and mitigate it with measures such as frequency filters that strip out-of-band signals and voice-authentication protocols. Even so, users should stay vigilant about the security of their AI devices and keep them updated with the latest security patches to protect against these and other vulnerabilities.
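To make the "frequency filter" idea concrete, here's a minimal sketch of the digital side of that mitigation, assuming numpy and scipy are available; strip_ultrasonic and the test signals are hypothetical, purely for illustration:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def strip_ultrasonic(samples: np.ndarray, sample_rate: int,
                     cutoff_hz: float = 18_000.0) -> np.ndarray:
    """Attenuate everything above the audible band before the audio
    reaches the speech-recognition pipeline."""
    nyquist = sample_rate / 2.0
    # 8th-order Butterworth low-pass, as second-order sections for numerical stability.
    sos = butter(8, cutoff_hz / nyquist, btype="low", output="sos")
    return sosfilt(sos, samples)

# Illustration: a 1 kHz audible tone plus a 24 kHz ultrasonic "carrier",
# sampled at 96 kHz so the carrier is below the Nyquist frequency.
rate = 96_000
t = np.arange(rate) / rate
audible = 0.5 * np.sin(2 * np.pi * 1_000 * t)
ultrasonic = 0.5 * np.sin(2 * np.pi * 24_000 * t)
filtered = strip_ultrasonic(audible + ultrasonic, rate)

# Most of the energy above 20 kHz should be gone after filtering.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(len(filtered), d=1 / rate)
print("fraction of energy above 20 kHz:",
      spectrum[freqs > 20_000].sum() / spectrum.sum())
```

One caveat: because the attack relies on the microphone's own nonlinearity demodulating the command into the audible band, a purely digital filter like this only blocks ultrasonic content that survives the ADC; a robust mitigation also needs filtering in the analog front end, before the nonlinearity.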

👍🏼 Between human and machine error, it's really insecure :( @b4cks4w @Alfred

@Zevon @Alfred machine-world interfaces (e.g. SCADA) have seemingly never had security by design. It's bolted on. Porous attack surfaces + (ML) software automation is "exciting"

I enjoyed this episode; he gets into AI/ML too @b4cks4w @Alfred

Cognitive Security: Exploring the Human Layer w/ Dr. Matthew Canham | CSI Talks #1

youtu.be/OGmvoj5Dj_A

