This is part of the series Quid ex Machina, which examines the deep, under-explored, and Unseen impacts of AI on humans and our societies. See here for the rest of the series.
Today’s theme song is Packard Goose by Frank Zappa.
Information is not knowledge.
Knowledge is not wisdom.
Wisdom is not truth.
Truth is not beauty.
Beauty is not love.
(Frank Zappa, Packard Goose)
The Truth Engine
Let's indulge ourselves in a quick thought experiment. Imagine that tomorrow, OpenAI releases a truth engine called “GoodGPT” – it can tell truths from falsehoods and lies, and separate fiction from fact. GoodGPT is right, all the time. Ask it any question, and only the correct answer will follow. Search for anything, and only the most relevant results come up – no clickbait, conspiracy theories, or disinformation, ever. When you prompt it, GoodGPT spells out how to navigate life, from breaking the ice on a date to landing your dream job. With GoodGPT’s help, your ideal life comes true: a loving spouse, some adorable kids, and a fulfilling career.
Then, one night, you find out that GoodGPT disagrees with you on something really important. It says you should sacrifice your beloved first-born in an elaborate ritual, because that is what will free the child from a future of intolerable misery, and also eradicate all of humanity’s suffering.
What would you do?
You already know that GoodGPT produces objective truth, and it disagrees with your fundamental, core values. What's going to happen? Are you going to:
A) sacrifice your first-born, because GoodGPT has always been right? Or
B) pay no heed, and change your mind about GoodGPT as the truth engine? Or
C) ask GoodGPT to test your faith in truth again, but with a different truth?
If it existed, GoodGPT would be far superior to any current language AI. Google’s Bard, OpenAI’s ChatGPT, and Microsoft’s Sydney all spew wrong information, cite non-existent academic references, and leave some users heartbroken. But here’s my point:
The flaws of language AIs are actually features, not bugs – they woke us up from our collective indifference to truth. There is no truth engine, nor should there be one. The only “truth engine” is ourselves.
In the coming parts, I’ll share how I’ve come to these conclusions.
Word-spinning is Not Truth-seeking
You’ve played with ChatGPT, so you know it’s good at producing grammatically perfect sentences and sounding superbly confident. This is because today’s language AIs are, in their essence, word-spinners. They drink in an ocean of textual data, and are then trained to search for patterns in that ocean. After lots of data and lots of training, these language AIs become proficient at assembling sentences that are statistically probable. That’s why their outputs sound like plausible human talk rendered with much assurance – these language AIs have really seen it all.
But there is a big gap between “statistically probable” and “truthful”.
“The moon is made of green cheese” is a statistically probable sentence, but untrue.
“When the light turns red, cars must stop” is a statistically probable and truthful sentence.
And if you are an enterprising user of language AIs, you could trick them into asserting either sentence as an objective truth.
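To make the “word-spinner” idea concrete, here is a minimal, purely illustrative sketch in Python – a toy bigram model, nothing like the scale or architecture of real language AIs; the tiny corpus, the spin function, and every name in it are invented for illustration. It learns only which words tend to follow which, so it can assemble fluent-sounding sentences while having no concept of whether they are true:

```python
import random
from collections import defaultdict

# A toy "word-spinner": a bigram model that learns which word tends to
# follow which, then chains probable continuations together.
# It has no notion of truth -- only of what is statistically likely.
CORPUS = [
    "the moon is made of green cheese",
    "the moon is made of rock",
    "when the light turns red cars must stop",
    "when the light turns green cars may go",
]

# Count how often each word follows each other word.
follows = defaultdict(lambda: defaultdict(int))
for sentence in CORPUS:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

def spin(start_word: str, length: int = 7) -> str:
    """Assemble a 'statistically probable' sentence by sampling each
    next word in proportion to how often it followed the previous one."""
    output = [start_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break  # no observed continuation; stop spinning
        words, counts = zip(*candidates.items())
        output.append(random.choices(words, weights=counts)[0])
    return " ".join(output)

# The spinner happily produces fluent-sounding claims, true or not:
# "the moon is made of green cheese" comes out as readily as "... of rock".
print(spin("the"))
print(spin("when"))
```

Real language AIs replace these word counts with billions of learned parameters, but the objective is the same in kind: predict a probable next word, not a true one.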
Note that I didn’t use the convenient verb “lie” to describe language AIs asserting untruthful information. This is because “to lie” has a special meaning, and language AIs are not that special.
“To lie,” you must have some belief, some backbone: you think you know the truth, but for whatever reason, you choose to say something else that is untrue. As philosopher Harry Frankfurt put it:
“It is impossible for someone to lie unless he thinks he knows the truth. … A person who lies is thereby responding to the truth, and he is to that extent respectful of it…. for the liar, it is correspondingly indispensable that he considers his statements to be false.”
(Harry Frankfurt, On Bullshit)
So when liars lie, they actually show respect towards the truth: they know they are lying, and they know there is a difference between truth and falsehood.
But language AIs don’t have a true-vs-false line item in their training budget. They are in the business of spinning together words that are statistically most likely to occur – an auto-complete on steroids, if you will – and their outputs bear no relationship to either truth or falsehood.
There is a dedicated term for smart word-spinning, without any concern for truth or falsehood: bullshit.
For the bullshitter… he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.
(Harry Frankfurt, On Bullshit)
Swapping in “it” for “he”, Frankfurt describes today’s language AIs to perfection. Whatever they do, their purpose is to string the next X words together into probable sentences. Whether a sentence is true or false? It doesn’t matter.
So, language AIs spew BS, but why do we care? Everyone lies from time to time, some people BS more than others, so what’s wrong with some software producing more BS?
Nothing wrong. On the contrary, I think we need to thank these language AIs: they woke us up from our collective indifference to truth. And that’s why we care.
We’ve been Indifferent to Truth for a Long Time – Until Language AIs’ BS Woke us Up
Of truth we know nothing, for truth is in a well.
(Democritus)
I must disagree with Democritus here. Of course, for millennia, philosophers have labored and wrestled to define the notions of truth and falsehood. But in our commonsense way, we do have a practical understanding of what truth is. We all know what it means to “tell the truth” about things we are certain of: the moon is not made of green cheese; red light means “stop” for cars. On the flip side, equally clearly, we also know how to lie about those things, and what falsehood means. It’s very simple.
But if social media in the past near-decade was any indication, we live in a strange time when truth doesn’t seem to matter to many people. High-profile members of our society seem content – some even eager – to spit out lies and BS, while the rest of us have numbed ourselves into a cavalier indifference towards truth. Fake news? Fabrication? Disinformation? Falsehood? Who cares? Life is too short; BS artists and liars are too many.
The indifference towards truth prevailed, until language AIs came along. Suddenly, everyone seems to take an interest in what’s true vs. false in the responses from these machines. Did ChatGPT count correctly? Did Sydney fabricate an academic citation? Oh, and Google’s Bard gave a wrong answer in its product demo! Very soon, I observed a consensus emerge: language AIs can answer any question you throw at them – just make sure to fact-check those answers and figure out what’s true.
I find this fascinating: machines’ errors aroused humans’ desire for truth. In the depths of language AIs’ BS, we are reminded that we do care about truth, and that we need to know a lot of truths to survive and thrive.
You and I use truths to navigate the hazards of life: what not to eat, how to dress for the weather, what to feed young children. On a grander scale, truths hold a society together: red light does mean “stop” for cars. Errors, BS, lies, and ignorance, be they produced by humans or AIs, hurt all of us. If it weren’t for those language AIs’ initial BS, our unconcern for truth might have lasted much longer.
Now let’s imagine an alternative scenario. Suppose that, from the get-go, ChatGPT always produced correct answers to every question we fact-checked. How long would it take before we stopped fact-checking for its BS and errors? 200 answers? 500? 1,000?
Gradually, as we came to believe this machine always tells us truths, when would we start to entrust it with bigger and more vital questions?
After we stopped fact-checking, were it to switch from truth-saying to BS-giving mode, when would we find out?
And when this machine produced truths that disagreed with your core values, what would you do? Keep the faith and offer what truth demands (option A)? Doubt the machine’s truth-saying power (option B)? Or defer your choice and hope a different “truth” turns up (option C)?
We are the Truth Engine
With their BS, today’s language AIs have made us care about truth again. But with their perfection, the imaginary truth engines would make BS and fabrication the least of our concerns – because these engines would come to control every aspect of our lives.
Just because truth engines can and do tell the truth doesn’t mean they can’t be instruments of evil or questionable deeds. As our thought experiment at the beginning shows, their truths could win us over with honest trifles, and then betray us in random moments of great consequence.
And when those moments came, we would have relinquished our choice-making agency and responsibility to truth engines for so long that we’d be paralyzed from making choices – because we could no longer discern right from wrong, or good from evil. We, as human beings, would become tools used to reach a certain end, where some machine-dictated “truth” awaits.
In other words, if truth engines were ever invented, it would be a bad idea to use them for value judgements and moral choices. We, as human beings, should not outsource our human values, integrity, and responsibilities to machines, no matter how mighty they seem to be.
So where does this leave us? Up until now, we have only touched on “material truth,” the kind of truth that occupies the majority of our practical understanding of what truth is. But there is also “moral truth,” the kind of truth that lurks in our minds and lays the foundation of our lofty pursuits. Obtaining information, seeking knowledge, gathering wisdom, creating beauty, and celebrating love – none of these would matter if we neglected moral truths, because we’d all turn into skeptics, nihilists, and cynics who find no joy or meaning in anything at all.
Truth engines could perhaps tell us all the material truths, but only humans harbor moral truths.
It is up to us to uphold the difference between right and wrong.
It is up to us to guard the lines between good and evil.
It is up to us to stay vigilant of today’s language AIs, tomorrow’s truth engines, or any human BS artists, whose glib verbal fluency may trick us into giving up our integrity and responsibilities.
I know this sounds corny, but I still feel the urge to say it:
The only truth engine we have, for now and perhaps ever, is ourselves.
Thank you for reading Earthly Fortunes. If you like it, please share it. Subscribe for free to hear more about the earthly fortunes: dovetails, variety of life, time, and the Unseen of AI.
I’d love to hear your thoughts about AI and human society! Let me know in the comments, DM me on Twitter or Instagram, or just reply to the email!
"I know this sounds corny, but I still feel the urge to say it:
The only truth engine we have, for now and perhaps ever, is ourselves."
Um, no. That's definitely not corny. I found myself nodding along to so much of what you were saying. This was a BANGER of an essay.
I'll be coming back to read it again because it was such a joy for my brain 😌
And digging the Zappa song
I remember sitting in a lecture about morality and the law. (My memory doesn't serve me well, but here it goes).
The professor lectured about natural law, and how rationality derives from morality and conscience, which only humans can experience. I've been asking myself "are AI language models rational?" And I know the answer is no, but it's been difficult for me to explain why this is.
Your thought experiment helps me find the words. Language AIs recognize patterns. But it's up to us humans to exercise our agency to question the patterns, to recognize the lines between good and evil. I still cannot give an eloquent answer, but I'll use my own brain instead of ChatGPTing the words ;)