
"Apple study exposes deep cracks in LLMs' 'reasoning' capabilities
Irrelevant red herrings lead to 'catastrophic' failure of logical inference."

Yesterday I saw that Apple researchers believed this study showed LLMs don't actually "reason".

By this standard... almost anyone who is MAGA doesn't reason either.

But that checks out; I always thought MAGAts had to not be truly human.

cc @matuzalem

arstechnica.com/ai/2024/10/llm

