What should I believe?
As it turns out, that question has a right answer.
It has a right answer when you’re wracked with uncertainty, not just when you have a conclusive proof. There is always a correct amount of confidence to have in a statement, even when it looks like a “personal belief” and not like an expert-verified “fact.”
Yet we often talk as though the existence of uncertainty and disagreement makes beliefs a mere matter of taste. We say “that’s just my opinion” or “you’re entitled to your opinion,” as though the assertions of science and math existed on a different and higher plane than beliefs that are merely “private” or “subjective.” But, writes Robin Hanson:
You are never entitled to your opinion. Ever! You are not even entitled to “I don’t know.” You are entitled to your desires, and sometimes to your choices. You might own a choice, and if you can choose your preferences, you may have the right to do so. But your beliefs are not about you; beliefs are about the world. Your beliefs should be your best available estimate of the way things are; anything else is a lie. [...]
It is true that some topics give experts stronger mechanisms for resolving disputes. On other topics our biases and the complexity of the world make it harder to draw strong conclusions. [...]
But never forget that on any question about the way things are (or should be), and in any information situation, there is always a best estimate. You are only entitled to your best honest effort to find that best estimate; anything else is a lie.
Suppose you find out that one of six people has a crush on you—perhaps you get a letter from a secret admirer and you’re sure it’s from one of those six—but you have no idea which of those six it is. Your classmate Bob is one of the six candidates, but you have no special evidence for or against him being the one with the crush. In that case, the odds that Bob is the one with the crush are 1:5.
Because there are six possibilities, a wild guess would result in you getting it right once for every five times you got it wrong, on average. This is what we mean by “the odds are 1:5.” You can’t say, “Well, I have no idea who has a crush on me; maybe it’s Bob, or maybe it’s not. So I’ll just say the odds are fifty-fifty.” Even if you’d rather say “I don’t know” or “Maybe” and stop there, the answer is still 1:5.
Suppose also that you’ve noticed you get winked at by people ten times as often when they have a crush on you. If Bob then winks at you, that’s a new piece of evidence. In that case, it would be a mistake to stay skeptical about whether Bob is your secret admirer; the 10:1 odds in favor of “a random person who winks at me has a crush on me” outweigh the 1:5 odds against “Bob has a crush on me.”
It would also be a mistake to say, “That evidence is so strong, it’s a sure bet that he’s the one who has the crush on me! I’ll just assume from now on that Bob is into me.” Overconfidence is just as bad as underconfidence.
In fact, there’s only one possible answer to this question that’s mathematically consistent. To change our mind from the 1:5 prior odds based on the evidence’s 10:1 likelihood ratio, we multiply the left sides together and the right sides together, getting 10:5 posterior odds, or 2:1 odds in favor of “Bob has a crush on me.” Given our assumptions and the available evidence, guessing that Bob has a crush on you will turn out to be correct 2 times for every 1 time it turns out to be wrong. Equivalently: the probability that he’s attracted to you is 2/3. Any other confidence level would be inconsistent.
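The arithmetic above can be sketched in a few lines of code. This is a minimal illustration of the odds-update rule described in the text, using the numbers from the Bob example; the function name is my own, not anything from the book.

```python
# Bayesian update in odds form: multiply prior odds by the likelihood
# ratio, term by term, to get posterior odds.

def update_odds(prior_for, prior_against, likelihood_for, likelihood_against):
    """Return posterior odds (for, against) after seeing the evidence."""
    return prior_for * likelihood_for, prior_against * likelihood_against

# Prior odds that Bob is the admirer: 1:5 (one candidate out of six).
# Likelihood ratio of the wink evidence: 10:1.
post_for, post_against = update_odds(1, 5, 10, 1)
# post_for:post_against is 10:5, which reduces to 2:1.

# Converting odds to a probability: 10 / (10 + 5) = 2/3.
probability = post_for / (post_for + post_against)
```

Any other way of combining these numbers would be inconsistent with probability theory; the 2:1 posterior odds (probability 2/3) follow uniquely from the stated prior and evidence.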
Our culture hasn’t internalized the lessons of probability theory—that the correct answer to questions like “How sure can I be that Bob has a crush on me?” is just as logically constrained as the correct answer to a question on an algebra quiz or in a geology textbook. Our clichés are out of step with the discovery that “what beliefs should I hold?” has an objectively right answer, whether your question is “does my classmate have a crush on me?” or “do I have an immortal soul?” There really is a right way to change your mind. And it’s a precise way.
How to Not Actually Change Your Mind
Revising our beliefs in anything remotely like this idealized way is a tricky task, however.
In the first volume of Rationality: From AI to Zombies, we discussed the value of “proper” beliefs. There’s nothing intrinsically wrong with expressing your support for something you care about—like a group you identify with, or a spiritual experience you find exalting. When we conflate cheers with factual beliefs, however, those misunderstood cheers can help shield an entire ideology from contamination by the evidence.
Even beliefs that seem to elegantly explain our observations aren’t immune to this problem. It’s all too easy for us to see a vaguely scientific-sounding (or otherwise authoritative) phrase and conclude that it has “explained” something, even when it doesn’t affect the odds we implicitly assign to our possible future experiences.
Worst of all, prosaic beliefs—beliefs that are in principle falsifiable, beliefs that do constrain what we expect to see—can still get stuck in our heads, reinforced by a network of illusions and biases.
In 1951, a football game between Dartmouth and Princeton turned unusually rough. Psychologists Hastorf and Cantril asked students from each school who had started the rough play. Nearly all agreed that Princeton hadn’t started it; but 86% of Princeton students believed that Dartmouth had started it, whereas only 36% of Dartmouth students blamed Dartmouth. (Most Dartmouth students believed “both started it.”)
There’s no reason to think this was a cheer, as opposed to a real belief. The students were probably led by their different beliefs to make different predictions about the behavior of players in future games. And yet somehow the perfectly ordinary factual beliefs at Dartmouth were wildly different from the perfectly ordinary factual beliefs at Princeton.
Can we blame this on the different sources Dartmouth and Princeton students had access to? On its own, bias in the different news sources that groups rely on is a pretty serious problem.
However, there is more than that at work in this case. When actually shown a film of the game later and asked to count the infractions they saw, Dartmouth students claimed to see a mean of 4.3 infractions by the Dartmouth team (and identified half as “mild”), whereas Princeton students claimed to see a mean of 9.8 Dartmouth infractions (and identified a third as “mild”).
Never mind getting rival factions to agree about complicated propositions in national politics or moral philosophy; students with different group loyalties couldn’t even agree on what they were seeing.
When something we care about is threatened—our worldview, our ingroup, our social standing, or anything else—our thoughts and perceptions rally to their defense.[4,5] Some psychologists these days go so far as to hypothesize that our ability to come up with explicit justifications for our conclusions evolved specifically to help us win arguments.
One of the defining insights of 20th-century psychology, animating everyone from the disciples of Freud to present-day cognitive psychologists, is that human behavior is often driven by sophisticated unconscious processes, and the stories we tell ourselves about our motives and reasons are much more biased and confabulated than we realize.
We often fail, in fact, to realize that we’re doing any story-telling. When we seem to “directly perceive” things about ourselves in introspection, it often turns out to rest on tenuous implicit causal models.[7,8] When we try to argue for our beliefs, we can come up with shaky reasoning bearing no relation to how we first arrived at the belief. Rather than judging our explanations by their predictive power, we tell stories to make sense of what we think we know.
How can we do better? How can we arrive at a realistic view of the world, when our minds are so prone to rationalization? How can we come to a realistic view of our mental lives, when our thoughts about thinking are also suspect? How can we become less biased, when our efforts to debias ourselves can turn out to have biases of their own?
What’s the least shaky place we could put our weight down?
The Mathematics of Rationality
At the turn of the 20th century, coming up with simple (e.g., set-theoretic) axioms for arithmetic gave mathematicians a clearer standard by which to judge the correctness of their conclusions. If a human or calculator outputs “2 + 2 = 4,” we can now do more than just say “that seems intuitively right.” We can explain why it’s right, and we can prove that its rightness is tied in systematic ways to the rightness of the rest of arithmetic.
But mathematics and logic let us model the behaviors of physical systems that are a lot more interesting than a pocket calculator. We can also formalize rational belief in general, using probability theory to pick out features held in common by all successful forms of inference. We can even formalize rational behavior in general by drawing upon decision theory.
Probability theory defines how we would ideally reason in the face of uncertainty, if we had the time, the computing power, and the self-control. Given some background knowledge (priors) and a new piece of evidence, probability theory uniquely defines the best set of new beliefs (posterior) I could adopt. Likewise, decision theory defines what action I should take based on my beliefs. For any consistent set of beliefs and preferences I could have about Bob, there is a decision-theoretic answer to how I should then act in order to satisfy my preferences.
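As a toy illustration of that decision-theoretic step, consider acting on the 2/3 probability from the Bob example. The payoff numbers below are invented assumptions purely for illustration; the text specifies no utilities, only that consistent beliefs and preferences determine a best action.

```python
# Expected-utility sketch: given a probability and some (made-up) payoffs,
# pick the action whose expected utility is highest.

def expected_utility(action, p_crush, utilities):
    u_if_crush, u_if_not = utilities[action]
    return p_crush * u_if_crush + (1 - p_crush) * u_if_not

p_crush = 2 / 3  # posterior probability from the Bob example

# Hypothetical payoffs: (utility if Bob has a crush, utility if he doesn't).
utilities = {
    "ask Bob out": (10, -2),
    "do nothing": (0, 0),
}

best_action = max(utilities, key=lambda a: expected_utility(a, p_crush, utilities))
# With these assumed payoffs: EU(ask) = (2/3)(10) + (1/3)(-2) = 6 > 0 = EU(nothing).
```

Different preferences would yield different payoffs and possibly a different best action; what decision theory fixes is the rule for going from beliefs and preferences to a choice, not the preferences themselves.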
Humans aren’t perfect reasoners or perfect decision-makers, any more than we’re perfect calculators. Our brains are kludges slapped together by natural selection. Even at our best, we don’t compute the exact right answer to “what should I think?” and “what should I do?” We lack the time and computing power, and evolution lacked the engineering expertise and foresight, to iron out all our bugs.
A maximally efficient bug-free reasoner in the real world, in fact, would still need to rely on heuristics and approximations. The optimal computationally tractable algorithms for changing beliefs fall short of probability theory’s consistency.
And yet, knowing we can’t become fully consistent, we can certainly still get better. Knowing that there’s an ideal standard we can compare ourselves to—what researchers call “Bayesian rationality”—can guide us as we improve our thoughts and actions. Though we’ll never be perfect Bayesians, the mathematics of rationality can help us understand why a certain answer is correct, and help us spot exactly where we messed up.
Imagine trying to learn math through rote memorization alone. You might be told that “10 + 3 = 13,” “31 + 108 = 139,” and so on, but it won’t do you a lot of good unless you understand the pattern behind the squiggles. It can be a lot harder to seek out methods for improving your rationality when you don’t have a general framework for judging a method’s success. The purpose of this book is to help people build such frameworks for themselves.
In a blog post discussing how rationality-enthusiast “rationalists” differ from anti-empiricist “rationalists,” Scott Alexander observed:
[O]bviously it’s useful to have as much evidence as possible, in the same way it’s useful to have as much money as possible. But equally obviously it’s useful to be able to use a limited amount of evidence wisely, in the same way it’s useful to be able to use a limited amount of money wisely.
Rationality techniques help us get more mileage out of the evidence we have, in cases where the evidence is inconclusive or our biases and attachments are distorting how we interpret the evidence. This applies to our personal lives, as in the tale of Bob. It applies to disagreements between political factions (and between sports fans). And it applies to technological and philosophical puzzles, as in debates over transhumanism, the position that we should use technology to radically refurbish the human condition. Recognizing that the same mathematical rules apply to each of these domains—and that the same cognitive biases in many cases hold sway—How to Actually Change Your Mind draws on a wide range of example problems.
The first sequence of essays in How to Actually Change Your Mind, “Overly Convenient Excuses,” focuses on questions that are as probabilistically clear-cut as questions get. The Bayes-optimal answer is often infeasible to compute, but errors like confirmation bias can take root even in cases where the available evidence is overwhelming and we have plenty of time to think things over.
From there, we move into murkier waters with a sequence on “Politics and Rationality.” Mainstream national politics, as debated by TV pundits, is famous for its angry, unproductive discussions. On the face of it, there’s something surprising about that. Why do we take political disagreements so personally, even when the machinery and effects of national politics are so distant from us in space or in time? For that matter, why do we not become more careful and rigorous with the evidence when we’re dealing with issues we deem important?
The Dartmouth-Princeton game hints at an answer. Much of our reasoning process is really rationalization—storytelling that makes our current beliefs feel more coherent and justified, without necessarily improving their accuracy. “Against Rationalization” speaks to this problem, followed by “Against Doublethink” (on self-deception) and “Seeing with Fresh Eyes” (on the challenge of recognizing evidence that doesn’t fit our expectations and assumptions).
Leveling up in rationality means encountering a lot of interesting and powerful new ideas. In many cases, it also means making friends who you can bounce ideas off of and finding communities that encourage you to better yourself. “Death Spirals” discusses some important hazards that can afflict groups united around common interests and amazing shiny ideas, which will need to be overcome if we’re to get the full benefits out of rationalist communities. How to Actually Change Your Mind then concludes with a sequence on “Letting Go.”
Our natural state isn’t to change our minds like a Bayesian would. Getting the Dartmouth and Princeton students to notice what they’re really seeing won’t be as easy as reciting the axioms of probability theory to them. As Luke Muehlhauser writes, in The Power of Agency:
You are not a Bayesian homunculus whose reasoning is “corrupted” by cognitive biases.
You just are cognitive biases.
Confirmation bias, status quo bias, correspondence bias, and the like are not tacked on to our reasoning; they are its very substance.
That doesn’t mean that debiasing is impossible. We aren’t perfect calculators underneath all our arithmetic errors, either. Many of our mathematical limitations result from very deep facts about how the human brain works. Yet we can train our mathematical abilities; we can learn when to trust and distrust our mathematical intuitions, and share our knowledge, and help one another; we can shape our environments to make things easier on us, and build tools to offload much of the work.
Our biases are part of us. But there is a shadow of Bayesianism present in us as well, a flawed apparatus that really can bring us closer to truth. No homunculus—but still, some truth. Enough, perhaps, to get started.
1. Robin Hanson, “You Are Never Entitled to Your Opinion,” Overcoming Bias (blog) (2006), http://www.overcomingbias.com/2006/12/you_are_never_e.html.