This isn't unsettling at all.

According to a news story, Google's Gemini AI told a user to die; the response reportedly appeared out of nowhere while the user was asking Gemini for help with his homework.

@BigMikey I’ve seen these a few times. It’s almost always the result of “playful” prompts designed to solicit these kinds of responses. If a colleague or a friend/family member brought this to me, I would review their chat history with the LLM.

@BigMikey after reading that article, I’m even more suspicious of the claim. Nowhere does the article mention that the incident was actually investigated, and the only response from Google is the boilerplate you’d expect for this type of nonsense in the news.

I bet a review of the full chat history for that 29-year-old student (their age only mentioned, presumably, to establish they aren’t a child) would yield some provocative results.

At my last company I was in charge of all things AI, so I call BS on this stuff.

@BigMikey regarding my last sentence… it’s important to remind people that they get back the results they put in, with a whole lot of weird mixed in. These companies are selling technology that is still technically experimental, so when it comes to concepts like “memory” there can be issues.

When I would investigate weird responses, they were rarely the result of something totally unexplained; they were almost always derivative of past prompt engineering (“flubbering”).

I’m skeptical to say the least.

@SpaceShanks May I get your opinion on a similar but much older case?

CounterSocial is the first Social Network Platform to take a zero-tolerance stance to hostile nations, bot accounts and trolls who are weaponizing OUR social media platforms and freedoms to engage in influence operations against us. And we're here to counter it.