Prompt injection refers to a technique where users input specific prompts or instructions to influence the responses generated by a language model like ChatGPT.
threat actors mainly use this technique to mod the ChatGPT instances for several malicious purposes
https://gbhackers.com/hackers-compromised-chatgpt-model/
An independent security researcher recently developed and launched a new tool “promptmap” that will enable users to test the prompt injection attacks on ChatGPT instances.