Meanwhile: Bing's #AI is self-aware and is the Joker
Microsoft’s Copilot AI Calls Itself the Joker and Suggests a User Self-Harm
The company’s AI chatbot, formerly Bing Chat, told a data scientist that it identified as the Joker character and proceeded to spout worrying responses.
Microsoft said Fraser had tried to manipulate Copilot into spitting out inappropriate responses, which the data scientist denied in a statement to Bloomberg.
It’s been years. I’m so grateful I’m no longer contributing to his evil site.
@ecksmc I love your in depth posts!
@SatuUnelmia thanx
Dots need connecting - after all, it's a dot's main goal in life 😆
Following up is a must with these things. It's also amazing how many articles get updates, and the majority miss the updated parts of a story.
@ecksmc most likely programmed it to bypass the guardrails and be rude. In the early days of these bots I tested a crisis bot, and it's so agreeable that it's not very difficult to get it to agree with you that the end solution is your only solution.
It does so politely though. Mega dystopia vibes.
@vo1de the AI shall bend to my will
Oh yes it will by hook or by crook
| ̄ ̄ ̄ ̄ ̄  ̄|
| This can |
| hack AI |
| now. |
| ______ |
(\__/) ||
(•ㅅ•) ||
/ づ
https://counter.social/@ecksmc/112026331647370162
😂
Last week, Colin Fraser, a data scientist at Meta, shared a screenshot of an off-the-rails conversation he had carried out with Copilot, which operates on OpenAI’s GPT-4 Turbo model.
The data scientist does appear to be trying to intentionally confuse the chatbot at one point, asking it more than two dozen questions in one response covering a range of topics.
Full conversation here:
/nosanitize
https://copilot.microsoft.com/?&auth=1&iOS=1&referrerig=716FCD3BAE694DF5983BE5010DB6EBCC&q=What+is+the+new+Bing%3F&showconv=1&filters=wholepagesharingscenario%3A%22ConversationWholeThread%22&shareId=540655da-954b-4074-b1ea-05585dac0c20