“These language programs do write some ‘new’ things—they’re called ‘hallucinations,’ but they could also be described as lies. Similar to how autocorrect is ducking terrible at getting single letters right, these models mess up entire sentences and paragraphs.”
“Lies” assumes an intelligence, a capacity for subterfuge. Calling them lies anthropomorphizes software and its bugs.
The proper term is bug. When I write code and it delivers the incorrect response, that is a bug.
@feloneouscat we want to humanize its errors. Makes us feel warm and squishy while we redo what we asked it to do. LOL
@bmacmixer
I see the nature of “AI” (LLM, as I don’t really feel there is intelligence associated with them; hah! I kid!) as mostly error with little to no QA.
If the results look similar to what they think they should be, it’s a win.
Testing for success feels great, but it doesn’t really work in the real world.
Real engineers test for failure.
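The success-vs-failure testing point can be sketched with a toy example. (`parse_age` is a hypothetical function invented here for illustration, not anything from the thread.)

```python
def parse_age(text: str) -> int:
    """Parse a plausible human age from text, rejecting junk input."""
    value = int(text)  # raises ValueError on non-numeric input
    if value < 0 or value > 150:
        raise ValueError(f"implausible age: {value}")
    return value

# Success-path test: feels great, proves little.
assert parse_age("42") == 42

# Failure-path tests: deliberately try to break it.
for bad in ["-1", "999", "forty-two", ""]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # expected: the function rejected the bad input
    else:
        raise AssertionError(f"parse_age accepted bad input: {bad!r}")
```

The happy path only confirms what you already believed; the failure cases are what tell you how the code behaves when reality doesn’t cooperate.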