I'm not an AI doomer, but this article definitely gave me pause:
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
https://tinyurl.com/ycj5zj7x/
@leeloowrites Read that too. Before I get to worrying about the long-term effects, there are ethical issues right now, and the fact that Gebru and Hinton are out at Google and [the whole ethics team, afaict] is out at Microsoft is concerning:
"Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities." --Gebru
https://www.dair-institute.org/blog/letter-statement-March2023
@leeloowrites Oh sure, didn't mean to imply these things are opposed.
Could/should we get started right away on transparency, credit, etc., and would that help us get a handle on impacts? Like, I need to understand what sources and methods ChatGPT used and didn't (transparency) before I know whether I can use its work (impact).
OTOH people are using this stuff NOW, so I guess you're right, we need everything all at once.