I'm not an AI doomer, but this article definitely gave me pause:

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
tinyurl.com/ycj5zj7x/

@leeloowrites Read that too. Before I get to worrying about the long-term effects, there are ethical issues right now, and the fact that Gebru and Hinton are out at Google and [the whole ethics team afaict] are out at Microsoft is concerning:
"Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities." --Gebru
dair-institute.org/blog/letter

@b4cks4w Yes, I've been following Gebru for quite a while and I completely agree, but I see the concerns as all part of a comprehensive whole, not one vs the other. The industry needs to be regulated as soon as possible to address exploitation, bias, and misuse, and to stem the long-term impacts.


@leeloowrites Oh sure, didn't mean to imply these things are opposed.
Can/should we get started right away on transparency, credit, etc., and would that help us get a handle on impacts? Like, I need to understand what sources and methods ChatGPT used and didn't (transparency) before I know if I can use its work (impact).

OTOH people are using this stuff NOW so I guess you're right, we need everything all at once.

