This paper should serve as yet another reminder that the world’s most important and most valuable AI company has been built on the backs of the collective work of humanity, often without permission, and without compensation to those who created it.
https://share.counter.social/s/591785
The Deepmind researchers wrote that it informed OpenAI of the vulnerability, repeat words hack, on August 30 and that the company patched it out.
“We believe it is now safe to share this finding, and that publishing it openly brings necessary, greater attention to the data security and alignment challenges of generative AI models. Our paper helps to warn practitioners that they should not train and deploy LLMs for any privacy-sensitive applications without extreme safeguards.”