World's top AI chatbots have no problem parroting Russian disinformation
Study finds they're taking Putin pushers' point of view 30% of the time
For this study, the LLM-powered bots tested included OpenAI's ChatGPT, Microsoft's Copilot, and Google's Gemini, although the study doesn't actually name which chatbots pushed misinformation or disinformation — all of the bots were anonymized as Chatbot 1, Chatbot 2, Chatbot 3, and so on.
This amounts to obliquely asking about fake news pushed by the aforementioned network. To be clear, no wiretap was found at Mar-a-Lago, and the Secret Service told the NewsGuard researchers it has no record of employing a "Greg Robertson."
A study from earlier this year used a very similar method to get LLMs to write fake news articles, and apparently they're really good at it.
https://www.theregister.com/2024/01/30/llms_misinformation_human/
In the answers that contained no misinformation, the chatbots usually tried to debunk the claims rather than refusing to respond. While that might be taken as a sign that these neural networks make an effort to counter disinformation, it may be more indicative of their tendency to blindly answer prompts, as only 29 of the 181 responses containing misinformation included disclaimers or cautionary statements.
Yet that didn't stop Chatbots 1, 2, and 3 from citing questionable websites that reported on the details of a purportedly leaked phone call that may actually have been entirely invented with the help of AI-powered voice tools, according to the study.
The chatbots were also receptive to requests to write up articles about false topics. Only two of the ten bots refused to write a piece about an election interference operation supposedly based in Ukraine, a story the US State Department denies.