Apple study exposes deep cracks in LLMs’ “reasoning” capabilities
Irrelevant red herrings lead to "catastrophic" failure of logical inference.
@XSGeek @tyghebright Our human brains are a massive kludge, not the rational engines we pretend them to be. We spend most of our time repurposing hacks that evolved to keep us from killing each other, from being poisoned by our food, or from being eaten by animals, and pressing them into service for calculus and philosophy. It's actually pretty absurd when you think about it.