“These language programs do write some “new” things—they’re called “hallucinations,” but they could also be described as lies. Similar to how autocorrect is ducking terrible at getting single letters right, these models mess up entire sentences and paragraphs.”
“Lies” assumes an intelligence capable of subterfuge. It is anthropomorphizing software and its bugs.
The proper term is “bug.” When I write code and it delivers an incorrect response, that is a bug.
We need to stop using inaccurate, misleading terms: ChatGPT doesn’t have hallucinations, it produces erroneous responses.
Bard, for example, couldn’t nail how many indictments there were under the Clinton administration. How many were convicted? A throw of the dice seems to be Bard’s way of answering.
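And that “throw of the dice” is literal: these models generate text by sampling the next token from a probability distribution, so the same question can come back with a different number each run. Here is a minimal sketch of how that sampling works; the token probabilities are invented purely for illustration, not Bard’s actual code or data:

```python
import math
import random

# Hypothetical next-token probabilities for the answer to
# "How many were convicted?" -- invented numbers, purely to
# illustrate sampling; they are not real statistics.
token_probs = {"33": 0.30, "16": 0.25, "12": 0.20, "48": 0.15, "2": 0.10}

def sample_answer(probs, temperature=1.0):
    """Pick one token at random, weighted by probability.

    Higher temperature flattens the distribution, so unlikely
    answers get picked more often.
    """
    weights = {t: math.exp(math.log(p) / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    running = 0.0
    for token, w in weights.items():
        running += w
        if r < running:
            return token
    return token  # floating-point edge case: fall back to the last token

# Ask the "model" the same question five times.
for _ in range(5):
    print(sample_answer(token_probs))
# Output varies from run to run -- a different "number" each time.
```

Nothing in that loop consults a source of truth. Whatever the dice land on is what gets typed into the answer.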
These aren’t lies; they are bugs, and they are fundamental to the FAILURE of the system as a whole.
These bugs would be inexcusable anywhere else in the REAL software industry.
The “AI” is just code: an amalgam of weighted information. If I had released traffic-light code that performed this poorly, I would be out of business.
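“An amalgam of weighted information” is not a metaphor. Strip away the branding and the core operation of these models is a weighted sum over learned numbers, pushed through a normalizer. A toy sketch follows; everything in it is invented for illustration, showing the shape of the computation, not any vendor’s code:

```python
import numpy as np

# A toy "model": one layer of learned weights turning a context
# vector into scores over a tiny vocabulary. The weights and the
# vocabulary are invented; real models just stack thousands of
# layers of this same operation.
vocab = ["red", "green", "yellow", "flashing"]
rng = np.random.default_rng(0)
W = rng.normal(size=(8, len(vocab)))  # stand-in for trained weights
context = rng.normal(size=8)          # stand-in encoding of the prompt

scores = context @ W                           # a weighted sum, nothing more
probs = np.exp(scores) / np.exp(scores).sum()  # softmax into probabilities

print(dict(zip(vocab, probs.round(3))))
# There is no fact-checking step anywhere in this pipeline; the
# output is simply the weighted combination that scored highest.
```

A traffic-light controller, by contrast, is built around hard invariants that must never be violated. There is no such invariant in a stack of weighted sums.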
There is a LOT of money being made from the hype, and a LOT of people pretending the hype is real while IGNORING the real problems in the codebases.
I’ve been writing code for over four decades, and it takes no more than three false propositions to get @Alfred to agree that “tireless” can mean one is without tires.
Perhaps “AI” is suffering from economic anxiety and that is why its performance is so poor? 🤣
@feloneouscat @DavidSalo @Alfred It was a pretty compelling argument…
@DavidSalo @Alfred
These systems are inherently flawed and fundamentally untrustworthy, and the bugs are ignored in favor of “oh, look, it’s like a human being.” Oh, no it is not.
We need to stop pretending that this is “AI”. It is not.