As someone who, unlike the folks at the Future of Life Institute and other longtermist groups, actually understands how Big Data AI models work, I want to giggle at their demands that LLM research stop, and even more so at Yudkowsky's belief that we're on the precipice of superhuman intellect.

LLMs might be interesting as a cognitive model, but the greatest threat they present is in enabling us to harm ourselves with disinformation.

These longtermists should pay more attention to real issues in the *now.*

@lenaoflune Hm. They can be weaponised for targeted harassment too. Happy fun time!

@ckkyro They sure can. Thankfully they're too expensive right now for the typical bot farmer to run (those GPU-enabled VMs ain't cheap, and the A100 ones needed for fine-tuning are even more expensive), but that won't be the case forever.

The real risk of LLMs to humanity is what they will let us do to ourselves. We're more than capable of destroying ourselves with the tech -- no "AI alignment" will stop that.
