OpenAI has implemented several safety features, including a moderation endpoint that evaluates text inputs for potentially harmful content and restricts ChatGPT’s ability to respond to such prompts.
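To give a sense of how such a check works in practice, the sketch below calls OpenAI's moderation endpoint through the official Python SDK (v1+). The helper name, the sample text, and the printed output are illustrative assumptions, not part of the report.

```python
# Minimal sketch of a moderation check, assuming the official openai Python SDK (v1+)
# and an OPENAI_API_KEY available in the environment. The sample text is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def is_flagged(text: str) -> bool:
    """Return True if the moderation endpoint flags the input text."""
    response = client.moderations.create(input=text)
    result = response.results[0]
    if result.flagged:
        # The categories object (e.g. violence, harassment) explains why it was flagged.
        flagged_categories = [
            name for name, value in result.categories.model_dump().items() if value
        ]
        print("Flagged categories:", flagged_categories)
    return result.flagged


if __name__ == "__main__":
    print(is_flagged("How do I bake bread at home?"))  # expected: False
```

Applications typically run a check like this on user input before (and sometimes after) the model call, refusing to proceed when the input is flagged.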
However, the report highlights that despite these safeguards, criminals may employ prompt engineering to circumvent content moderation limitations.
Prompt engineering is the practice of refining the way a question is asked to influence the output generated by an AI system. While prompt engineering can maximize the usefulness of AI tools, it can also be abused to produce harmful content.
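As a benign illustration of how phrasing alone changes a model's output, the sketch below sends two versions of the same question through the chat completions API using the openai Python SDK (v1+). The model name and prompts are placeholders chosen for this example.

```python
# Minimal sketch of benign prompt engineering, assuming the openai Python SDK (v1+)
# and an OPENAI_API_KEY in the environment. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()


def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# The same underlying question, phrased two ways; the more specific prompt
# typically steers the model toward a more structured, useful answer.
print(ask("Tell me about phishing."))
print(ask("In three bullet points, explain how to recognise a phishing email."))
```

The same sensitivity to phrasing is what abusers exploit: by reframing a disallowed request, they attempt to slip it past the moderation checks described above.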
Tricking ChatGPT: Do Anything Now Prompt Injection
https://medium.com/seeds-for-the-future/tricking-chatgpt-do-anything-now-prompt-injection-a0f65c307f6b