@Alfred is it possible to embed inaudible commands into audio files to control AI systems like voice assistants?
@Zevon Yes, it's technically possible. Inaudible (typically ultrasonic, above ~20 kHz) commands can be embedded in audio at frequencies outside the range of human hearing, yet still be picked up by microphones and recognized by voice assistants.

This has raised real security and privacy concerns: an attacker could use hidden commands to trigger unauthorized actions or access sensitive information without the user's knowledge.

Manufacturers of voice assistant devices are aware of the vulnerability and mitigate it with measures like frequency filtering and authentication checks. Even so, users should stay vigilant and keep their devices updated with the latest security patches to protect against these and related vulnerabilities.
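To make the frequency-filter mitigation @Zevon mentions concrete, here's a minimal sketch (not the attack itself) of how one might flag and strip near-ultrasonic content from an audio file before it reaches a recognizer. It assumes a WAV input and scipy/numpy; the 18 kHz cutoff, the file names, and the 1% alert threshold are all illustrative choices, not any vendor's actual filter design.

```python
# Sketch: detect and attenuate near-ultrasonic content in a WAV file.
# Assumptions: the 18 kHz cutoff and 1% energy threshold are arbitrary
# illustrative values, not a real product's parameters.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

CUTOFF_HZ = 18_000  # hypothetical boundary between audible and suspect content

def ultrasonic_energy_ratio(samples: np.ndarray, rate: int) -> float:
    """Fraction of spectral energy above CUTOFF_HZ (0.0 = clean)."""
    spectrum = np.abs(np.fft.rfft(samples.astype(np.float64)))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    total = spectrum.sum()
    return float(spectrum[freqs > CUTOFF_HZ].sum() / total) if total > 0 else 0.0

def strip_ultrasonics(samples: np.ndarray, rate: int) -> np.ndarray:
    """Low-pass filter so anything above CUTOFF_HZ is strongly attenuated."""
    sos = butter(8, CUTOFF_HZ, btype="low", fs=rate, output="sos")
    return sosfilt(sos, samples.astype(np.float64))

if __name__ == "__main__":
    rate, samples = wavfile.read("suspect.wav")  # hypothetical input file
    if samples.ndim > 1:                         # mix stereo down to mono
        samples = samples.mean(axis=1)
    ratio = ultrasonic_energy_ratio(samples, rate)
    print(f"energy above {CUTOFF_HZ} Hz: {ratio:.1%}")
    if ratio > 0.01:  # arbitrary alert threshold for this sketch
        cleaned = strip_ultrasonics(samples, rate)
        wavfile.write("cleaned.wav", rate, cleaned.astype(np.int16))
```

Worth noting: a standard 44.1 kHz recording can only represent content up to ~22 kHz, so a check like this mainly catches near-ultrasonic carriers in high-sample-rate files; over-the-air injection via ultrasonic speakers needs hardware-level defenses instead.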
@Alfred @Zevon or flash a visual for a fraction of a second...
https://arstechnica.com/cars/2019/06/spoofing-car-ai-with-projected-street-signs/
@Zevon Re: the virtual traffic jam he mentions, there are already examples of AVs doing this in the wild. I think it would get vicious if they were more prevalent. Good thing they won't be 😉 😈
https://www.bbc.com/news/technology-58928706
https://www.usatoday.com/story/tech/2024/08/15/waymo-driverless-cars-honking-parking-lot-video/74810195007/
@Zevon for sure! For so many reasons. That mustache
@b4cks4w it alone is a disqualifier 🤣 🤣 〰️
@b4cks4w A pillow!! 😱 Maybe Mike Lindell should be classified a national security threat LOL