
What Ars Technica says versus reality. The reality is that this software is not even at alpha level. Bard, for instance, cited data via bit.ly links that were themselves dead.

The reality is, if you want your business to die, rely on generative AI to replace people.

Generative AI is at the “proof of concept” stage — it is not a complete and viable system. It’s GREAT at generating bullshit.

arstechnica.com/information-te

@feloneouscat
I'm not the greatest at math but these images have me totally baffled.
Please tell me this isn't the new math.

@Jeber @feloneouscat If I have 2 $2 bills and 1 $1 bill; I'm buying $5 worth of stuff.

@DCliffo @Jeber It gets worse with both Bard and Alfred. Remember, these are language models — they are designed to ATTEMPT to CREATE an answer. The answer is not always right. Or even vaguely correct.

@DCliffo @Jeber You MAY think, “Whew, Alfred is right” but no, Alfred doesn’t even think the right answer is right.

@Jeber @DCliffo Oh, this amuses me. As a retired software/hardware engineer, one of my main gripes was businesses testing for success instead of testing for failure.

Apparently there are a lot of poor writers doing the same thing.

These generative programs produce, at best, bad answers. Alfred couldn’t tell me how many indictments were in the Clinton Administration, at one time saying there were 2568 indictments. When asked for a link it gave a dead link.

There were only two.

@Jeber @DCliffo I find it disturbing that generative AI lies so easily and readily.

The problem has less to do with math than with the overall model: give an authoritative answer at, apparently, all costs, even at the cost of telling the truth.

My silly question about woodchucks is a great example: it said 700 lbs of wood, a figure based on replacing the volume of dirt a woodchuck moves with an equivalent amount of wood. The fact is, they don't chuck wood at all.

@feloneouscat @DCliffo
Its problem with providing answers lacking truth sounds amazingly like the Republican Party.

@LnzyHou This is the problem with generative AI. These systems are designed to create answers, not necessarily the right answer, or even one that is vaguely correct.

I asked @Alfred if any businesses had been fined for violating the FTC Guidelines. It gave answers but when I looked them up, in all cases the fines were levied for laws OTHER than the guidelines. I did the same with Bard which confused the ACLU with the FTC and didn’t admit it UNTIL I ASKED IT!

