AI chatbots’ safeguards can be easily bypassed, say UK researchers

All five systems tested were found to be ‘highly vulnerable’ to attempts to elicit harmful responses

theguardian.com/technology/art

The UK’s AI Safety Institute (AISI) said the systems it had tested were “highly vulnerable” to jailbreaks.

The government declined to reveal the names of the five models it tested, but said they were already in public use.
