“These language programs do write some “new” things—they’re called “hallucinations,” but they could also be described as lies. Similar to how autocorrect is ducking terrible at getting single letters right, these models mess up entire sentences and paragraphs.”

“Lies” assumes an intelligence, a capacity for subterfuge. It anthropomorphizes software and its bugs.

The proper term is bug. When I write code and it returns an incorrect response, that is a bug.

theatlantic.com/technology/arc

We need to stop using incorrect and factually flawed terms: ChatGPT doesn’t have hallucinations, it produces erroneous responses.

Bard, for example, couldn’t nail how many indictments there were under the Clinton administration. How many were convicted? A throw of the dice seems to be Bard’s way of answering it.

These aren’t lies; they are bugs. They are fundamental to the FAILURE of the system as a whole.

These bugs would be inexcusable in any other REAL software industry.

@feloneouscat we want to humanize its errors. Makes us feel warm and squishy while we redo what we asked it to do. LOL


@bmacmixer

I see the nature of “AI” (LLMs, since I don’t really feel there is intelligence associated with them; hah! I kid!) as mostly error with little to no QA.

If the results look similar to what they think they should be, it’s a win.

Testing for success feels great, but it doesn’t really work in the real world.

Real engineers test for failure.
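To make that distinction concrete: testing for success only checks that good input produces the expected answer, while testing for failure deliberately feeds the system bad input and asserts that it fails loudly rather than quietly returning something plausible. A minimal sketch in Python, where parse_count is a hypothetical function invented purely to illustrate the idea:

```python
import pytest

def parse_count(text: str) -> int:
    """Parse a non-negative integer count from user input.

    Hypothetical example: the function must refuse bad input
    rather than guess, which is what the failure tests check.
    """
    value = int(text.strip())  # raises ValueError on non-numeric input
    if value < 0:
        raise ValueError(f"count must be non-negative, got {value}")
    return value

# Testing for success: the happy path only.
def test_parses_valid_count():
    assert parse_count("42") == 42

# Testing for failure: assert the code fails loudly on bad input
# instead of silently producing a plausible-looking answer.
@pytest.mark.parametrize("bad", ["", "abc", "-3", "4.5"])
def test_rejects_bad_input(bad):
    with pytest.raises(ValueError):
        parse_count(bad)
```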

