Follow

Prompt injection refers to a technique where users input specific prompts or instructions to influence the responses generated by a language model like ChatGPT.

threat actors mainly use this technique to mod the ChatGPT instances for several malicious purposes

gbhackers.com/hackers-compromi

An independent security researcher recently developed and launched a new tool “promptmap” that will enable users to test the prompt injection attacks on ChatGPT instances.

github.com/utkusen/promptmap

Sign in to participate in the conversation

CounterSocial is the first Social Network Platform to take a zero-tolerance stance to hostile nations, bot accounts and trolls who are weaponizing OUR social media platforms and freedoms to engage in influence operations against us. And we're here to counter it.