We all know people who seem impervious to logic. No matter how cogently our carefully reasoned statements are presented, they double down on their error. Unfortunately, the face of such a person can likely be found in your bathroom mirror, since overconfidence and confirmation bias are almost universal human tendencies. Garrison Keillor’s mythical town of Lake Wobegon, where all the children are above average, evokes a smile because we know that such a thing is impossible, even though our own children are very definitely above average in every respect. Surveys consistently show that the vast majority of people think they are better drivers than 80% of those on the road today.
And that’s just overconfidence. Add in our tendency to listen carefully to those who agree with us and dismiss those who don’t. Then layer on top of that our habit of actively seeking out information that confirms what we already believe, and closing our ears to the other kind. It is no wonder our lives are sprinkled with bad decisions and erroneous actions.
One of the promises of the AI revolution is that we will all have an unbiased, totally rational assistant in our pockets to steer us back to safety when our little ship of life is in peril of foundering on the rocks of irrationality. Just ask ChatGPT for advice and assistance whenever we are in doubt, and we will soon have the waypoints back to truth marked out for us on a map. Putting aside the fact that we don’t seem to be in doubt often enough anyway, it seems that for some reason, human-generated artifacts tend to have human biases.
An important part of intelligence is knowing when you know something and when you don’t. I’m sure of my children’s birthdays, less sure about where my keys are right now. They’re not lost; they’re probably on my bedside table, but I can’t be certain. Sometimes there is no unique right answer to a question; several approaches are valid. And sometimes we imagine we know things that are inherently unknowable.
As with humans, so it is with AI models. In a recent experiment, investigators set up debates between models, sometimes between two copies of the same model. The debates were on complex, uncertain topics, and the models were asked both to produce opinions and to express confidence in those opinions. Our dear little chipset friends acted just like people - they expressed an irrationally high degree of confidence in the face of uncertainty. Models with opposing opinions both estimated (I was careful not to say “thought”) that they had a significantly better than 50% chance of being right.
What is more, the models grew more confident when faced with adverse arguments presented by other models. Each round of the debates saw them dig in a bit more. Contrary information was somehow processed as confirmation: “I’m right; my opponent is therefore wrong; whatever new data my opponent presents must simply show how wrong it is.” Technically, this is adverse Bayesian drift. The name is not important, but the concept is vital, so please stay with me as I tell a little story.
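The ratchet described above can be caricatured in a few lines of code. This is only a sketch of the failure mode, not the experiment itself; the likelihoods are invented for illustration:

```python
# A caricature of "adverse Bayesian drift": an updater that treats
# every argument from its opponent as evidence FOR its own position.
def biased_update(belief: float, rounds: int) -> float:
    for _ in range(rounds):
        # A rational agent would ask how likely the opponent's argument
        # is under each hypothesis. This one simply assumes any argument
        # is twice as likely to appear if it is itself right, so its
        # confidence can only go up.
        belief = (2 * belief) / (2 * belief + (1 - belief))
    return belief

print(round(biased_update(0.5, 1), 3))  # one round of "debate"
print(round(biased_update(0.5, 5), 3))  # five rounds: nearly certain
```

In odds form, the agent's odds double every round regardless of what is said, so from a 50/50 start it reaches 32-to-1 confidence after five rounds of hearing nothing but opposition.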
Thomas Bayes was an 18th-century mathematician and Presbyterian minister who thought about probability as a strength-of-belief question (epistemic confidence) rather than a frequency of occurrence. He argued that we ought to have more confidence that something is true as confirming evidence accumulates. Modern statisticians and computer scientists have built a towering edifice on this thought, in part because it has the huge advantage of not requiring a mass of data at the outset. You can start with a question in your mind, estimate whether you think it is true or not, and then revise your opinion as more data comes in. This step-by-step method suits the way computers process information, so it is no wonder data scientists are attracted to it.
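The step-by-step revision Bayes proposed fits in a few lines. A minimal sketch of a single update, with made-up numbers for illustration:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E): the revised belief in hypothesis H
    after observing evidence E, via Bayes' rule."""
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_evidence

# Start with a 50/50 belief, then fold in one piece of evidence
# that is three times likelier if the hypothesis is true.
belief = 0.5
belief = bayes_update(belief, p_e_given_h=0.6, p_e_given_not_h=0.2)
print(round(belief, 2))  # 0.75
```

Each new observation just feeds the previous output back in as the next prior - which is precisely why the order and framing of the evidence matter, as the keys story below shows.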
But is it reliable? Think back to the question of my keys. Are they indeed on my bedside table, as I suppose? Of course, I could get up and look, but I’m busy typing. However, from my desk I can see the kitchen island through my study door. The kitchen island is another favorite place for me to drop my keys, and they are not there! So, according to the theory, it is now more probable that they are on the bedside table.
But what if my original thought had been not that they are safely on the bedside table, but that I have lost them, as indeed a tiny part of my brain wonders? After all, it has happened before. If they are lost, they will be neither on the bedside table nor on the kitchen island. And look! They are not on the kitchen island - confirmatory (but not decisive) evidence that they are indeed lost, and therefore less likely to be on the bedside table.
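The keys puzzle can be made concrete. In this sketch (the priors are invented), the single observation "not on the kitchen island" raises the probability of *both* remaining hypotheses at once - so whichever one I started out favoring, the evidence feels like confirmation:

```python
# Three hypotheses about where my keys are; priors invented for illustration.
priors = {"bedside": 0.6, "island": 0.3, "lost": 0.1}

# Evidence: the keys are NOT on the kitchen island.
# P(evidence | hypothesis) is 1 unless they were on the island.
likelihood = {"bedside": 1.0, "island": 0.0, "lost": 1.0}

unnorm = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

for h, p in posterior.items():
    print(f"{h}: {p:.3f}")
# bedside: 0.857, island: 0.000, lost: 0.143
```

Both "bedside" and "lost" go up (from 0.6 to about 0.86, and from 0.1 to about 0.14). The arithmetic is impeccable either way; what the evidence "confirms" depends entirely on which hypothesis I was attending to.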
An unsolved (so far) problem in AI is that inference models don’t know when they are going wrong. Is it because Bayesian inference is path-dependent, affected by the order in which information is digested? I do not know, but I do know that animals, including humans, can have doubts - nagging feelings that our assumptions and strong beliefs are incorrect. We can sort-of-believe two contradictory things simultaneously, playing with them in our minds, consulting our feelings as we do so. (For that is how humans think, in part: by continuously consulting our feelings. But that is a story for another day.)
When I was a boy, an old man taught me how to catch catfish with a bit of dried venison on a knotted string - no need for a hook. Once a catfish gets a tasty morsel in its mouth, it will never let go; you can haul it out of the water with its jaws clamped about the string.
As fly fishers know, you could never in a million years catch a trout that way. A trout is quick to revise its opinion of a fly: at any unnatural behavior on the part of the pretty little thing covering the hook, the trout is instantly elsewhere. Whatever intelligence is, most would agree that the trout exhibits more of it than the catfish.
By all means, let us capture the benefits of AI. But also let us be a little less like the boy in the picture, not wishing even to hear things that might contradict our cherished hopes and beliefs, and embrace the true intelligence that comes with doubt. Natural, not artificial intelligence. Intelligence passed to us from our parents, that we, in turn, pass on. How to behave in the face of uncertainty and doubt - and how to live well in a world that we can never know completely.