Apple study exposes deep cracks in LLMs’ “reasoning” capabilities

Irrelevant red herrings lead to "catastrophic" failure of logical inference.

arstechnica.com/ai/2024/10/llm

@XSGeek @tyghebright Our human brains are a massive kludge, not the rational engines we pretend them to be. We spend most of our time doing calculus and philosophy with hacks that evolved to keep us from killing each other, from being poisoned by our food, or from being eaten by animals. It's actually pretty absurd if you think about it.
