Tuesday! 🎉

Today, I look at uncanny similarities between the panic around LLMs and other forms of labour theft that were normalized in other industries (journalism and teaching) many years ago.

The common denominator?

Why, it's our old friend, corporate exploitation of vulnerable workers! These systems have always been ready to use the promise of simplifying labour to rewrite industries in their favour.

Don't miss the forest for the trees.
open.substack.com/pub/mlclark/

@GlytchMeister

Large language model - the more technical term for so-called AI like ChatGPT. A lot of folks use the term LLM because AI, while a convenient shorthand, comes with a lot of baggage and feeds into misunderstandings of what the technology is and can do.

@MLClark

I like that. I think we need a distinction between what a layperson thinks of when we say AI (post-singularity, sentient and sapient) and what we actually mean (a program that is slightly more flexible than “if, then” but only slightly better than a large number of chimps with typewriters).

ChatGPT is, at best, a honkin’ bunch of rats that are trained to press buttons for food, which is only slightly better than lots of apes banging randomly on keyboards.

@GlytchMeister

There's a huge discourse among tech folk around this terminology. "Large language model" is the safest term - but that's just one subset of "machine learning", which many in the tech world suspect also gives laypeople the idea of a conscious mind *doing* the learning.

Either way, lots of nifty linguistic philosophy all tangled up in our latest crises. :) That said, I'm disappointed by many in my sci-fi community, who keep mistaking Silicon Valley hype cycles for robots out to get us.

@MLClark

It’d be easier to communicate about this shit if laypeople had a basic understanding of psychology (at least as far as Pavlov and the Skinner box) and compsci (at least as far as what a transistor does, what binary is, and why computers use it).

But, y’know, it’d be easier to communicate anything if everyone knew more about that thing, so this is just kind of a “no shit, Sherlock” statement. :/

@GlytchMeister

Agreed.

The theory behind this kind of computing is very old. Roger Penrose wrote on it in The Emperor's New Mind in 1989. In 1950, Alan Turing published a (shitty) rejoinder to Ada Lovelace's 19th-century assessment of the Analytical Engine's limits.

None of this is new.

What makes this so tedious is how little collective memory we have, and how quickly people fall for hype. (Ergo my newsletter offering historical context for today's fear of stolen labour via LLMs!)
