
And for any folk interested in so-called "reasoning," here's an excellent paper released this month on the limits of LLMs:

"We hypothesize that this decline is due to the fact that current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data. When we add a single clause that appears relevant to the question, we observe significant performance drops [across all models]."

arxiv.org/pdf/2410.05229
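The perturbation the quoted passage describes can be sketched in a few lines: take a simple word problem and splice in one clause that mentions the same entities but changes nothing about the answer. This is a toy illustration, not the paper's actual pipeline; the problem text and helper name are made up for the example.

```python
# Toy sketch of the distractor-clause perturbation described in the
# quoted paper: append one seemingly relevant but arithmetically
# irrelevant clause to a word problem. All strings here are
# hypothetical stand-ins, not taken from the paper's benchmark.

base_problem = (
    "Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. "
    "How many kiwis does Oliver have?"
)

# Mentions kiwis, so it *looks* relevant, but it does not change
# the arithmetic at all.
distractor = "Five of the kiwis picked on Saturday are a bit smaller than average."

def add_distractor(problem: str, clause: str) -> str:
    """Insert the distractor clause just before the final question."""
    statement, question = problem.rsplit(". ", 1)
    return f"{statement}. {clause} {question}"

perturbed = add_distractor(base_problem, distractor)
print(perturbed)
```

The paper's claim is that models shown the perturbed prompt often subtract the "five smaller kiwis" anyway, which is the behavior the thread is laughing about: pattern-matching on surface cues rather than reasoning about what the clause actually changes.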

@MLClark nods. They are incapable of fluid reasoning, which is an essential component of human logical thinking.

@MLClark

this is funny, hilarious in fact.

as destructive as postmodern deconstruction is, it has revealed the fact that language is ambiguous, meaning is fluid, multiple—and sound, effective reasoning cannot be reduced to computational processing.

what's happening to the model is a comic metaphor for the way "reasoning" works for far too many Homo sapiens using the same methods prescribed as artificial intelligence. look around, it's everywhere today.

hah, hah, a 1000 times, hah🤧

@MLClark

if you just stick to words and letters, and do not go by the deeper meaning,

that will be like the immature leader, sustaining casualties;

If you can make a deep search of the meanings and principles and contemplate along with the text,

then you are like a mature person leading an army of methods.

I Ching.

The reason is that current neural nets are limited to two-dimensional structures. To achieve properly complex AI functionality, they would need networks of three or more dimensions.

