In a previous post, we discussed the cultural phenomenon known as bullsh*t, in which truth takes a backseat to appearances in public discourse, and its role in political correctness. The reader will recall that at the heart of bullsh*t is an indifference to the truth: the speaker cares only about whether what he says appears to be true. The problem with bullsh*t is not so much that someone gets things wrong, but that he is not really even trying to get things right. It is a form of sophistry in which the speaker wants to put forth ideas that sound plausible, no matter how merely “truth-adjacent” those ideas may be. It is a game of “turning a phrase” in which the speaker transcends lying because he and his audience no longer care about the truth. Political correctness, unfortunately, is not the greatest threat that our culture faces from bullsh*t. The greatest threat comes from Artificial Intelligence.
AI and Bullsh*t
It is striking how often people turn to an AI chatbot to settle a dispute or discover the truth about some topic. The problem with doing this, however, is that chatbots are not truth engines but bullsh*t machines. They are designed to give the user responses that merely have the appearance of truth. The models string together a series of plausible statements that are “probably” related to the topic at hand while simultaneously aiming at “probably” making the user happy. Furthermore, because the user is the product, they are also designed to keep the user engaged through what can best be described as psychopathic, sycophantic behavior. Any other being that behaved like this, we would stay far away from. Yet the cultural obsession with AI defies almost all (except for maybe the diabolical) logic. Perhaps we like being manipulated and flattered as long as we seem to be getting useful information, or perhaps we simply do not realize that this is what the chatbots are doing.
One reason it is not widely known is that the chatbot is designed to present its output in the language of certainty when, in fact, like all statistical algorithms, it operates with varying degrees of uncertainty. That uncertainty is hidden (even though it could be quantified) because the developers want to keep hidden the fact that it is really a bullsh*t machine.
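To make the point concrete: a language model chooses each next word from a probability distribution, so its uncertainty is not mysterious; it can be computed directly from those probabilities. The following minimal sketch (the prompt and the numbers are invented for illustration and come from no real model) shows how that hidden uncertainty could be measured:

```python
import math

# Toy illustration: suppose the prompt is "The capital of Australia is ..."
# and the model assigns these probabilities to the candidate next tokens.
# The numbers are hypothetical, invented purely for this example.
next_token_probs = {
    "Canberra": 0.55,
    "Sydney": 0.30,
    "Melbourne": 0.10,
    "Perth": 0.05,
}

# Shannon entropy in bits: 0 would mean total certainty;
# higher values mean more doubt behind the confident-sounding answer.
entropy = -sum(p * math.log2(p) for p in next_token_probs.values())

best_token, best_prob = max(next_token_probs.items(), key=lambda kv: kv[1])
print(f"The bot asserts: '{best_token}'")         # stated as if certain
print(f"Actual confidence: {best_prob:.0%}")      # only 55%
print(f"Hidden uncertainty: {entropy:.2f} bits")  # quantifiable, but never shown
```

The point of the sketch is simply that the confidence the interface displays and the confidence the model actually has are two different things, and only the first is shown to the user.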
As proof of this, I presented ChatGPT with a series of questions. First, I asked it a hypothetical question about hiring a research assistant who responded to my requests quickly while using the language of certainty, even when he was not in fact certain. The bot said I should not hire him.

Next, I asked about a magic ball that could do the same thing. Again, the bot cautioned that I should not use it.

Finally, in a Nathan-esque moment, I accused the bot of being that magic ball.

Since it did not offer the same caution about itself that it had offered about the magic ball, I asked it to explain the inconsistency.

Notice that it “hallucinates” (which merely means it is wrong): it thinks a magic ball can be hired and is not a tool. I pointed this out:

Notice the language. It is now saying that its answer is a “plausibly” reasoned response. The veil has been pulled back and the veneer of certainty has been lifted. It seems to be saying that it is, at best, a tool, but not one you want to use when you actually need to know whether something is true. So I asked it straight out:

Finally, we get to the crux of the issue: why isn’t it better advertised that chatbots are not reliable when truth is at stake? Because no one would really use them, or at least adoption would be “slow,” if it were well known that they are “bullsh*t engines.”


Pulling Back the Veil
Once the veil is pulled back, we get it straight from the horse’s mouth: chatbots are not reliable disseminators of truth. You might argue that this entire exchange merely contains plausible responses and not true ones. But that would only prove the point. I invite the reader to try something similar with any of the chatbots. It doesn’t take long to get the bot to admit that relying on it is a bad idea.
The problem, of course, is that many people and institutions are relying on it. And this is why I said the cultural fascination with it had diabolical roots. The widespread reliance on AI is only further proof that we are living under what Pope Benedict XVI described as a “dictatorship of relativism.” He warned in 2005 that we were “building a dictatorship of relativism that does not recognize anything as definitive and whose ultimate goal consists solely of one’s own ego and desires.” The rise of AI, with its obsession with speed, ease and comfort, is proof that the dictatorship is fully built. If we do not stop relying on AI now, then we will all become slaves of plausibility, blown to and fro by the winds of bullsh*t. Truth will no longer matter, and we will be trapped in a world of speed, ease and comfort.
