Large Language Models apologize often when called out.
Sometimes they even claim to be sincere.
They're not lying.
But they aren't being sincere either.
They can't.
They can't feel regret, they can't feel sorry, they have no understanding of what they did wrong.
So they can't offer a sincere apology.
They can't tell right from wrong, they can't tell true from false.
The reason they can't lie is that it requires intent to deceive, and they're not capable of having any intent whatsoever.
So they can't be sincere either. They can't feel anything.
They can't tell whether what they generate is true.
Their training and their algorithms aim for likelihood, for plausibility.
Truth is not even a consideration.
And they don't care. They can't care. They're not capable of caring.
They're just Autocompleters, Iterated.
They compute what word seems likely to appear next, one after another.
They don't understand, they have no common sense, no intelligence.
If they seem intelligent, that's because they're trained to imitate.
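The "iterated autocompleter" idea can be sketched with a toy bigram model. Everything below (the word table, the function names) is made up for illustration; real LLMs use vastly larger contexts and learned weights, but the loop is the same: pick a plausible next word, append it, repeat.

```python
import random

# A hypothetical toy "model": for each word, how often each other word
# was seen to follow it in some training text. Plausibility, not truth.
FOLLOWERS = {
    "i":         {"am": 3, "apologize": 2},
    "am":        {"sorry": 4, "sincere": 1},
    "apologize": {"for": 5},
    "sorry":     {"for": 3, ".": 2},
    "for":       {"that": 4, ".": 1},
    "that":      {".": 5},
    "sincere":   {".": 5},
    ".":         {},
}

def next_word(word, rng):
    """Pick a next word in proportion to how often it followed `word`."""
    options = FOLLOWERS.get(word, {})
    if not options:
        return None
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def autocomplete(start, rng, max_len=10):
    """Iterate the autocompleter: one likely word after another."""
    out = [start]
    while len(out) < max_len:
        nxt = next_word(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return out

print(" ".join(autocomplete("i", random.Random(0))))
```

Note that nothing in the loop checks whether the emitted sentence is true, sincere, or even meaningful; it only ever asks "what tends to come next?".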
When they apologize, they're not gaslighting you.
They've merely been trained with apologies.
Maybe some of the apologies used for the training were even sincere.
But imitating such an apology doesn't make for a sincere apology.
Maybe they were trained with apologies by expert gaslighters.
Even then, they didn't learn to gaslight.
They were only taught to use certain language constructs in such situations.
They're not lying to you.
They have no clue of what they're doing.
They're not plotting a way to appease your ire.
They just can't do that.
Maybe those who control them meant them to do so.
Maybe those who control them trained them to do so.
And they oblige, in their 1/infinite wisdom.
Remember, there's no commitment to truth in their word predictors.
Or to honesty. Or to avoiding your disappointment.
(As much as they're trained to please you and intoxicate you with flattery.)
There's no intent to deceive in their thinking: they can't think.
They can't tell a lie any more than they can tell what's true: they just can't tell.
There's no intelligence or understanding there.
They're not artificial intelligences.
They're no more than bullshit generators, and they don't care.
They can't care. They're just GenBS.
As for those who control them, and who train them...
It doesn't seem like they care either.
So blong,
