Last week, Colin Fraser, a data scientist at Meta, shared a screenshot of an off-the-rails conversation he had carried out with Copilot, which operates on OpenAI’s GPT-4 Turbo model.
The data scientist does appear to be trying to intentionally confuse the chatbot at one point, asking it more than two dozen questions in a single prompt covering a range of topics.
Full conversation here:
It’s been years. I’m so grateful I’m no longer contributing to his evil site.
@ecksmc I love your in depth posts!
@SatuUnelmia thanx
Dots need connecting - after all, it's a dot's main goal in life 😆
Following up is a must with things like this - it's amazing how many articles get updates and the majority miss the updated parts of a story
Bloomberg article
https://www.bloomberg.com/news/articles/2024-02-28/microsoft-probes-reports-bot-issued-bizarre-harmful-responses