And for any folk interested, here's an excellent paper released this month on the limits of LLMs:

"We hypothesize that this decline is due to the fact that current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data. When we add a single clause that appears relevant to the question, we observe significant performance drops [across all models.]"

arxiv.org/pdf/2410.05229

@MLClark

this is funny, hilarious in fact.

as destructive as postmodern deconstruction is, it has revealed that language is ambiguous, that meaning is fluid and multiple, and that sound, effective reasoning cannot be reduced to computational processing.

what's happening to the model is a comic metaphor for the way "reasoning" works for far too many Homo sapiens, who use the very methods now prescribed as artificial intelligence. look around, it's everywhere today.

hah, hah, a thousand times, hah🤧


@MLClark

if you just stick to words and letters, and do not go by the deeper meaning, that will be like the immature leader, sustaining casualties;

if you can make a deep search of the meanings and principles and contemplate along with the text, then you are like a mature person leading an army of methods.

I Ching.

