“These essays are fumbling attempts to put into words lessons better taught by experience. But at least there’s underlying math, plus experimental evidence from cognitive psychology on how humans actually think. Maybe that will be enough to cross the stratospherically high threshold required for a discipline that lets you actually get it right, instead of just constraining you into interesting new mistakes.”
I. MAP AND TERRITORY
What Do We Mean By “Rationality”?: Epistemic vs instrumental; probability theory and decision theory. A Bayesian belief is one that conforms to a coherent probability distribution; Bayesian decisions are ones that maximize the expectation of a coherent utility function. We need to learn our own flaws, overcome our biases, prevent ourselves from self-deceiving, and get ourselves into good emotional shape to do what needs doing. Why have an additional word “rational” as well as “true” and “good”? Because we want to talk about systematic methods for obtaining truth and winning.
Feeling Rational: Feelings are neither irrational nor independent of rationality; it can be rational to feel. Feelings are caused by beliefs! The way to appear cultured and sophisticated has been to never let anyone see you care strongly about anything; we’re not used to seeing sane adults who visibly care.
Why truth? And…: Reasons: Curiosity. Question-driven. Utility. Answer-driven. Ethics of belief. Process-driven. (Duty.)
“For ethical performance” is a dubious reason to seek truth; it can reify wrong behaviour, e.g. Vulcan calm. An emotion is not rational if it rests on irrational epistemic conduct. Rationality: thinking in certain ways rather than others.
…What’s a bias, again?: Truth is a very small part of conceivability; error is not an exceptional condition. A cognitive bias is not produced by the cost of information, nor by limited computation; it is not a “mistake” arising from particular cognitive content (adopted beliefs or adopted duties), nor an error arising from individual pathology or socialisation. It is a universal obstacle to truth, written in the shape of our mental machinery. 1) Brains do something wrong; 2) experimentation confirms it; 3) someone identifies the problem in a System 2 fashion; 4) we call it a “bias”.
Availability: judging the probability of an event by the ease with which examples come to mind. Slovic study: people say accidents kill more than disease, homicide more than suicide, because those are more discussed and more emotional. -> Events that have never happened are not recalled, and hence deemed to have probability zero. -> The worst flood in recent memory is the upper bound on the implicitly predicted future flood. Memory is not a good guide to past probability, let alone future probability.
Burdensome Details: BEWARE details that sound neat. The conjunction fallacy occurs because we “substitute judgment of representativeness for judgment of probability”. By adding extra details, you can make an outcome seem more characteristic of the process that generates it. A story gains credulity as it gains improbability.
SO?: NOTICE the word “and”. Be wary of it; be shocked by the audacity of anyone asking you to endorse such an insanely complicated prediction. Penalize the probability substantially: a factor of four, at least.
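The “and” penalty is just multiplication of probabilities. A minimal sketch, with made-up numbers (the reform/invasion split and the exact factor-of-four penalty are hypothetical):

```python
# Toy illustration of the conjunction rule: P(A and B) <= P(A).
# All numbers here are made up for illustration.
p_reform = 0.20                  # P("country X undergoes economic reform")
p_invasion_given_reform = 0.25   # P(invasion | reform): the factor-of-four penalty

p_conjunction = p_reform * p_invasion_given_reform
assert p_conjunction <= p_reform   # adding a detail can only lose probability
print(p_conjunction)               # 0.05: the vivid two-part story is 4x less likely
```

However plausible the extra clause sounds, the conjunction can never be more probable than either conjunct alone.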
Where are all these details coming from? Where did that specific detail come from?
Planning Fallacy: The planning fallacy is that people think they can plan, ha ha. Optimism about detailed intentions: when asked for a “realistic” scenario, they envision everything going as planned, with no unexpected delays — the happy path, the “best case”. Reality usually delivers worse than the “worst case”.
So?: Use the outside view. Look at other cases, avoid the minutiae of yours. Trust a clever outsider!
Illusion of Transparency: Why No One Understands You We always know what we mean by our words, and so we expect others to know it too. Don’t blame those who misunderstand your ‘perfectly clear’ sentences. Chances are, your words are more ambiguous than you think.
Expecting Short Inferential Distances: In the EEA, all knowledge was passed down by speech and memory; all background knowledge was universal knowledge. There were no disciplines with carefully gathered evidence, generalized into elegant theories, transmitted by books whose conclusions are a hundred steps removed from universal premises. Both sides act as if they expect very short inferential distances from universal knowledge to any new knowledge. A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don’t recurse far enough, you’re just talking to yourself. Don’t forget that you may be working a dozen inferential steps away from what the audience knows, or that you have special background knowledge they lack.
The Lens That Sees Its Flaws: How is it that a subjective system can sometimes be objective? You can understand how you see your shoelaces. You can think about which thinking processes will create beliefs that mirror reality, and which will not. A mouse lives in a mental world that includes cats, holes, cheese and mousetraps, but not mouse brains; its camera does not take pictures of its own lens. Science is reflection on a reliable process for making your mind mirror the contents of the world. E.g. wishful thinking: an internal distortion of the lens, which we correct for by hedging. Nothing against happiness, but it should follow from your picture of the world, rather than tampering with the paintbrushes.
Making Beliefs Pay Rent: We can learn to model the unseen; and not only the unseen, but the unreal. We build whole networks of beliefs connected only to each other, with no constraint on expected experiences. Empiricism: constantly ask which experiences your beliefs prohibit, or you will argue about labels in your belief network forever. SO: don’t ask what to believe; ask what to anticipate. Not all beliefs are direct anticipations, but if a belief offers none, evict it.
A Fable of Science and Politics: Six responses to explosive information. 1. Tribalism, and you happen to be right: use it to destroy the enemy. 2. Tribalism, and you happen to be wrong: weep; curse god by modus tollens. 3. Doublethink: hide it; retreat to relativism, the noble lie, fictionalism. 4. Tribalism: be a heroic turncoat; virtue of relinquishment. 5. Transcendence: practical ethics, amelioration; virtue of responsibility. 6. Transcendence: rise above, secede from society; virtue of lightness, curiosity. Getting out of local maxima involves doing damage.
Belief in Belief: Where it is difficult to believe a thing, it is often easier to believe you ought to believe it. Distinguish P, wanting P, believing P, wanting to believe P, and believing that you believe P. People don’t believe in belief in belief; they just believe in belief. If all our thoughts were sentences, the mind would be easier to understand; fleeting images, flinches, masked desires account for as much as words. Liars must be accurate on some level; so too the deluded. When someone makes up excuses in advance, it would seem that belief and belief in belief have become unsynchronized.
Pretending to be Wise: Problem: the display of neutrality or suspended judgment, to signal maturity. Trying to signal wisdom by refusing to make guesses, refusing to sum up evidence, refusing to pass judgment, refusing to take sides, staying above the fray and looking down with a lofty and condescending gaze — which is to say, signaling wisdom by saying and doing nothing. Selfish. Participants are not lesser than the neutral; neutrality is itself a definite judgment. And when the rational conclusion is to suspend judgment, people conclude that any judgment is as plausible as any other.
Religion’s Claim to be Non-Disprovable: The orthogonality of religion and fact is a recent Western concept; the writers didn’t know the difference. Religion laid down a code of law, before legislative bodies; religion laid down history, before historians and archaeologists; religion laid down sexual morals, before Women’s Lib; religion described the forms of government, before constitutions; and religion answered scientific questions from biological taxonomy to the formation of stars. Religion now looks purely ethical only because every other area was taken over by better institutions. People think ethics is religion’s essence, but it is just what’s left.
Professing and Cheering: People claim to believe absurdities despite knowing enough science, flaunting the beliefs precisely because they are scientifically outrageous: credo quia absurdum.
Beyond “belief in belief”. Not belief: profession. Saying something aloud as substitute for believing it. Even if a sentence is meaningless, you can still know when you are supposed to chant it.
Belief as Attire: Proper belief: I expect the world to be a particular way. Belief in belief: it is right to model the world as p. (Trying to be passionate, and desperation mistaken for passion.) Profession: I say the world is p to convince someone else of p, or of B(p). Cheering: I endorse people associated with p. Attire: I profess the world is p to identify or fit in with my group.
Applause Lights: What does it mean to call for a “democratic” solution if you don’t have a conflict-resolution mechanism in mind? You have said the word “democracy”, so the audience is supposed to cheer. It’s not a proposition: it tells the audience when to clap. Most applause lights can be detected by a simple reversal test: if the reversal sounds crazy, the initial statement probably does not convey new information. And if no specifics follow, the sentence is probably an applause light.
Focus Your Uncertainty: Retrodiction is boundless. To pundits, possibilities don’t “conflict”, because any outcome can be papered over afterward. But you can’t spend all 100 minutes preparing the “up” excuse and all 100 minutes preparing the “down” excuse; you’ve got to prioritize. Clueless, so spend the same fraction of time on each? No: there’s a relation between how much you anticipate each outcome and how much time you want to spend preparing its excuse. Anticipation is limited, unlike excusability, but like time to prepare excuses. Even if you could get more anticipation, you won’t have any more time; your only course is to allocate your limited anticipation as best you can. “If only there were an art of focusing your uncertainty — of squeezing as much anticipation as possible into whichever outcome will actually happen!”
What Is Evidence?: An event entangled by causation with whatever you want to know about. If your retina ended up in the same state regardless of what was in front of you, you would be blind; hence “blind faith”. A chain of cause and effect between world and brain leads to a state of mind which mirrors the state of your shoes. Not only brains: film contracts shoelace-entanglement from photons, i.e. a photo is evidence. Formally: Shannon mutual information between the evidential event and the target of inquiry, relative to your current state of uncertainty about each. If your eyes and brain work correctly, your beliefs are entangled with the facts; rational thought produces beliefs which are themselves evidence. This is why testimony works. Conversely, if your beliefs are entangled with reality, they should be communicable: you should be able to explain why the thought-processes you use systematically produce beliefs that mirror reality. And only if: if you don’t believe your thought processes are entangled with reality, why believe them?
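The “entanglement” can be made quantitative. Here is a small sketch, with hypothetical sensor numbers, computing Shannon mutual information between an observation and the fact it tracks; the shoe/sensor names and the 90% accuracy figure are invented for illustration:

```python
# Sketch: "evidence" as mutual information between an observation and the
# thing you want to know about. Hypothetical numbers for a noisy sensor.
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits, given joint[x][y] probabilities summing to 1."""
    px = {x: sum(row.values()) for x, row in joint.items()}
    py = {}
    for row in joint.values():
        for y, p in row.items():
            py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for x, row in joint.items()
               for y, p in row.items() if p > 0)

# A sensor whose state tracks the world: P(tied) = 0.5, sensor right 90% of the time.
entangled = {"shoe_tied":   {"reads_tied": 0.45, "reads_untied": 0.05},
             "shoe_untied": {"reads_tied": 0.05, "reads_untied": 0.45}}
# A "blind" sensor: same reading distribution regardless of the world.
blind = {"shoe_tied":   {"reads_tied": 0.25, "reads_untied": 0.25},
         "shoe_untied": {"reads_tied": 0.25, "reads_untied": 0.25}}

print(round(mutual_information(entangled), 3))  # ~0.531 bits: the reading is evidence
print(mutual_information(blind))                # 0.0 bits: no entanglement, no evidence
```

Zero mutual information is exactly the “retina in the same state regardless” condition: the observation carries no evidence at all.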
Scientific Evidence, Legal Evidence, Rational Evidence: The police commissioner tells you the kingpin is Wulky Wilkinsen. Is this enough evidence for belief? Well, it’s prudent to act as if Wulky has a much higher probability of being a crime boss, so the commissioner’s statement was strong Bayesian evidence. Not legal evidence, and a good thing too: pragmatically, unconditional trust in cops leads to abuse. Still rational evidence, even if we exclude it as legal evidence for instrumental reasons. All legal evidence should be rational evidence, but not the other way around. As I write this sentence I am wearing white socks. Are you licensed to believe the previous statement? Yes. Could I testify to it in court? Yes. Is it a scientific statement? No: there is no experiment you can perform to verify it. Science is generalization, so that you can run new experiments which test the generalization and verify it for yourself, no trust required. Science is publicly reproducible knowledge. Is a rationalist licensed to believe in the historical existence of Alexander the Great? Yes: ancient history is untrustworthy, but better than maximum entropy. You can’t check it, so historical knowledge is not scientific knowledge. Is a rationalist licensed to believe that the Sun will rise on September 18th, 2007? Yes: the prediction is an extrapolation from a generalization, and you can test models of the Solar System yourself by experiment. Is my definition of “scientific knowledge” true? That is not a well-formed question; the standards we impose upon science are pragmatic choices. (Quine!) Can we fully trust a result if people must pay to criticize it? Only open, public knowledge counts as science. However we choose to define “science”, information in a $20,000/year closed-access journal will still count as Bayesian evidence; so too, the police commissioner’s nod.
How Much Evidence Does It Take?: How hard would you have to entangle yourself with the lottery in order to win? There are 131,115,985 possible combinations, hence a random ticket has a 0.0000007% chance of winning. Now, noisy evidence: a little black box with a 100% true-positive rate and a 25% false-positive rate gives a likelihood ratio of 4 to 1. Of 20 incorrect combinations, the box will beep on 5 by sheer chance; it doesn’t let you win the lottery by itself, but it’s better than nothing. If you want to license a strong belief that you will win the lottery — p < 1% of being wrong — you need about 34 bits of evidence. The larger the space of possibilities, or the more unlikely the hypothesis seems a priori, or the more confident you wish to be, the more evidence you need.
You cannot defy the rules; you cannot form accurate beliefs based on inadequate evidence. Of course, you can still believe based on inadequate evidence; you just won’t be able to believe accurately. It is like trying to drive without fuel because you don’t believe it ought to take fuel to go places. The further you want to go, the more fuel you need.
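The arithmetic above can be checked directly. A sketch using the lottery numbers from the text (the final beep count is my own extrapolation, not from the source):

```python
# Back-of-the-envelope: bits of evidence needed to pin down one lottery
# winner among 131,115,985 tickets, using the numbers from the text.
from math import ceil, log2

combinations = 131_115_985
bits_to_locate = log2(combinations)        # ~26.97 bits to single out one ticket
print(ceil(bits_to_locate))                # 27

# The "black box" test: 100% true-positive rate, 25% false-positive rate.
likelihood_ratio = 1.00 / 0.25             # 4:1 per beep
bits_per_beep = log2(likelihood_ratio)
print(bits_per_beep)                       # 2.0 bits per beep

# To believe "my ticket wins" with < 1% chance of being wrong, posterior odds
# must reach 99:1, so we need ~27 bits plus log2(99) more.
bits_needed = log2(combinations) + log2(99)
print(ceil(bits_needed))                   # 34, matching the text
print(ceil(bits_needed / bits_per_beep))   # 17 consecutive beeps would suffice
```

This is why “the more unlikely a priori, the more evidence you need” is literal arithmetic, not a slogan.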
Einstein’s Arrogance: Asked what he would have done had experiment disconfirmed General Relativity, Einstein famously replied: “Then I would feel sorry for the good Lord. The theory is correct.” Who can know that the theory is correct, in advance of experimental test? Theory selection requires many, many bits: the more complex an explanation is, the more evidence you need just to find it in belief-space. Traditional Rationality emphasizes justification: “to convince me of X, present me with Y amount of evidence.” This implies that you start with a hunch and gather “evidence” to confirm it, to convince the scientific community, or to justify saying you believe. But from a Bayesian perspective, you need an amount of evidence roughly equivalent to the complexity of the hypothesis just to get the hunch. It’s not a question of justifying: correctly guessing already demonstrates massive amounts of evidence. Hunchings and intuitings are brain processes.
When first formulating the hypothesis — the very first time — Einstein must already have had in his possession sufficient observational evidence to single out General Relativity for unique attention. Or he couldn’t have gotten the equations right.
If Einstein had enough observational evidence to single out the correct equations of General Relativity in the first place, then he probably had enough evidence to be damn sure that General Relativity was true. The arrogance doesn’t sound nearly as appalling when you look at it from that perspective.
(What about chance? A thousand fundamental physicists each guessing five times a year doesn’t begin to cover the hypothesis space; and anyway, Einstein did it about ten times.)
Occam’s Razor: How can we measure the complexity of an explanation? How can we determine how much evidence is required? NOT the length of an English sentence. NOT intuitiveness. NOT the number of concepts (that just hides complexity behind labels for concepts the listener already has).
If anger seems simple, it’s because we don’t see all the neural circuitry implementing the emotion. Conversely, to state Maxwell’s equations in English you’d have to explain your language, and the language behind the language, and the concept of mathematics, before you got to electricity; yet it is vastly easier to write a computer program simulating Maxwell’s equations than one simulating an intelligent emotional mind (Thor). Solomonoff: measure complexity by the shortest bitstring specifying a Turing machine. Better: let programs assign probabilities to strings. You can explain a fair coin with one function: fair_coin_prob(flips) = 0.5 ** len(flips).
How do we trade off the fit to the data against the complexity of the program? Fit only -> programs that deterministically predict the data (assign it 100% probability). If the coin shows HTTHHT, then the program that claims the coin was fixed to show HTTHHT fits the observed data 64 times better than the program which claims the coin is fair.
Complexity only -> the “fair coin” hypothesis is always simpler than any other, even if the coin turns up HTHHTHHHTHHHHTHHHHHT…
…when we see another hypothesis, not too complicated, that fits the data much better: s = "".join("H" * i + "T" for i in range(1, n + 1)). If a program stores one more bit, it halves the remaining possibilities, and hence can assign twice the probability to all points remaining; so one bit of program complexity should buy at least a factor-of-two gain in fit. If you explicitly store an outcome like HTTHHT, 6 bits of complexity exactly cancel the plausibility gained by a 64-fold improvement in fit (2^6 = 64). Otherwise, you will sooner or later decide that all fair coins are fixed. (When the program genuinely compresses the data, the complexity is a net gain.)
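The 64-vs-6-bits bookkeeping, and the pattern hypothesis for the second sequence, can both be run directly; a minimal sketch using the coin sequences from the text:

```python
# Sketch of the complexity/fit tradeoff: the "fixed to show HTTHHT" program
# fits 64x better than the fair coin, but costs 6 extra bits -- a wash.
from math import log2

data = "HTTHHT"
p_fair = 0.5 ** len(data)      # fair-coin program assigns 1/64 to the data
p_fixed = 1.0                  # memorized-outcome program fits perfectly

fit_gain_bits = log2(p_fixed / p_fair)   # 6.0 bits of improved fit
complexity_cost_bits = len(data)         # 6 bits to store HTTHHT verbatim
print(fit_gain_bits, complexity_cost_bits)   # 6.0 6 -- no net gain

# The "H"*i + "T" pattern, by contrast, genuinely compresses the second sequence:
seq = "".join("H" * i + "T" for i in range(1, 6))
print(seq)   # HTHHTHHHTHHHHTHHHHHT: 20 characters from a tiny program
```

Memorizing the data buys exactly as much fit as it costs in complexity; a real regularity buys more fit than it costs.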
Solomonoff induction: sum over all allowed programs (uncomputable if all are allowed), each with prior p = 0.5^(code length in bits), each weighted by its fit to all data observed so far. -> A weighted mixture of experts that can predict future bits.
This is the problem with “The lady down the street is a witch; she made the sequence come out 0101010101.” The accusation of witchcraft doesn’t shorten the message; you would still have to describe, in full detail, the data her witchery caused. Witchcraft “fits” in the sense of qualitatively permitting our observations; but that is because witchcraft permits everything. Like phlogiston, it doesn’t constrain anticipation.
Your Strength as a Rationalist is noticing when things contradict, when you don’t have a clear picture of something, when you are more confused by fiction than by reality, and when you act on this confusion. Belief is easier than disbelief, so we have to actively patrol our reactions to information. A hypothesis that forbids nothing fails to constrain anticipation. That tiny note that a falsehood “still feels a little forced” is one of the most important feelings a truthseeker can have.
Absence of Evidence Is Evidence of Absence: People used the eerie lack of Japanese-American sabotage as evidence of a conspiracy, a coming sabotage. When we see evidence, hypotheses that assigned it higher likelihood gain probability at the expense of those that assigned it lower likelihood; you can assign a high likelihood to the evidence and still lose mass, if another hypothesis assigns an even higher likelihood. Absence of proof is not proof of absence (treating it as such is just denying the antecedent), and a cause may not reliably produce signs of itself; but the absence of the cause is even less likely to produce the signs. The absence of an observation may be strong evidence of absence or very weak evidence of absence, depending on how likely the cause is to produce the observation; the absence of an observation that is only weakly permitted is very weak evidence of absence. But if there are no positive observations at all, it is time to worry; hence the Fermi Paradox. Only prohibitions constrain anticipation. If you don’t notice when your model makes the evidence unlikely, you might as well have no model, and also no evidence; no brain and no eyes.
Conservation of Expected Evidence: Why make predictions beforehand? To avoid hindsight bias and ad-hoccery. “Absence of evidence is evidence of absence” is a special case of Conservation of Expected Evidence: the expectation of the posterior probability, over all possible evidence, must equal the prior probability: P(H) = P(H|E)·P(E) + P(H|¬E)·P(¬E). A strong probability of seeing weak evidence in one direction must be balanced by a weak expectation of strong evidence in the other. The mere expectation of encountering evidence, before you’ve actually seen it, should not shift your prior. If you claim “no sabotage” is evidence for a Japanese-American Fifth Column, you must conversely hold that seeing sabotage would argue against a Fifth Column. If you claim that “a good and proper life” is evidence that a woman is a witch, then an evil and improper life must be evidence that she is not a witch. If you argue that God, to test humanity’s faith, refuses to reveal His existence, then the miracles described in the Bible must argue against the existence of God. You needn’t bother planning how to make any given iota of evidence confirm your theory, because for every expectation of evidence there is an equal and opposite expectation of counterevidence. It is a zero-sum game; you might as well sit back and relax while you wait for the evidence to come in… Human psychology is so screwed up.
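The identity is easy to verify numerically; a quick check with made-up numbers (the prior and likelihoods below are arbitrary):

```python
# Numeric check of Conservation of Expected Evidence: the expected posterior
# equals the prior, P(H) = P(H|E)P(E) + P(H|~E)P(~E). Arbitrary numbers.
prior_h = 0.3          # P(H): hypothetical prior on some hypothesis
p_e_given_h = 0.8      # P(E|H)
p_e_given_not_h = 0.4  # P(E|~H)

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_if_e = p_e_given_h * prior_h / p_e                   # Bayes, if E seen
posterior_if_not_e = (1 - p_e_given_h) * prior_h / (1 - p_e)   # Bayes, if ~E seen

expected_posterior = posterior_if_e * p_e + posterior_if_not_e * (1 - p_e)
print(round(expected_posterior, 10))  # 0.3: exactly the prior, whatever the likelihoods
```

Swap in any likelihoods you please; the weighted average of the two posteriors always lands back on the prior, which is why you cannot rig the game in advance.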
Hindsight Devalues Science: “Day after day social scientists go out [and] discover that people’s behavior is pretty much what you’d expect.” Of course, the “expectation” is all hindsight. Hindsight bias: subjects who know the answer to a question assign much higher probabilities that they “would have” guessed that answer, compared to subjects who must guess without knowing. How many times did you admit you would have been wrong? That’s how good your model really was. Hindsight -> undervaluing the surprisingness of scientific findings, especially those we understand, the ones that seem obvious in retrospect. This unfairly devalues the contribution of the researchers; worse, it prevents us from noticing when we are seeing evidence that doesn’t fit what we really would have expected. Knowing the outcome, we reinterpret the situation in light of that outcome; even when warned, we can’t de-interpret to empathize with someone who doesn’t know what we know. We need to make a conscious effort to be shocked enough.
The Illusion of Transparency: Closely related to hindsight: we always know what we mean by our words, and so we expect others to know it too. It’s hard to empathize with someone guided only by the words. Participants took Mark’s communicative intention as transparent, as if they assumed that June would perceive whatever intention Mark wanted her to perceive. Speakers thought they were understood in 72% of cases and were actually understood in 61%. When addressees did not understand, speakers thought they did in 46% of cases; when addressees understood, speakers thought they didn’t in only 12%. Additional subjects who merely overheard the explanation showed no such bias, expecting listeners to understand in only 56% of cases. Chamberlain wrote to Hitler intending to warn that Britain would fight if any invasion occurred; phrased in polite diplomatese, it was heard by Hitler as conciliatory. Don’t blame those who misunderstand your perfectly clear sentences. Your words are more ambiguous than you think.
Short Inferential Distances: All knowledge was common knowledge in the ancestral environment; you were unlikely to end up more than one inferential step away from anyone else, so you almost never had to explain your concepts. This is where we got our communicative norms. A good mature science is a hundred steps past present-day common knowledge. Hence the bias: the novel claim was probably false; disagreement is stupidity or madness. Both sides (expert talker and lay audience) expect very short inferential distances to any new knowledge. A clear argument has to lay out an inferential pathway starting from what the audience already accepts; if you don’t recurse far enough, you’re just talking to yourself. If you don’t realize you’re working a dozen inferential steps from the audience, your unsupported assertions just enrage them.
Fake Explanations: Beware creative explanations when “I don’t know” or “This seems impossible” is better. If you don’t understand something, you need to call it by a term that reminds you that you don’t understand it, or else you’ll think you’ve explained it when you’ve just named it. “Because of heat conduction?” is isomorphic to saying “magic”: it feels like an explanation, but it’s not. Instead of guessing, measure the heat at various points and various times, and check whether the temperatures satisfy an equilibrium of the diffusion equation with respect to the boundary conditions. The deeper error is that the students thought they were doing physics: they said the phrase “because of”, followed by the sort of words Spock might say, and thought they had thereby entered the magisterium of science.
Guessing the Teacher’s Password: explanation ≠ password; beliefs ≠ words. A physicist says “light is made of waves”; the teacher asks “What is light made of?”; the student says “Waves!” We accept “waves” as a correct answer from the physicist; wouldn’t it be unfair to reject it from the student? But words do not have intrinsic definitions. The syllables “made of waves” are not a hypothesis; they are a pattern of vibrations traveling through the air, or ink on paper. They can associate to a hypothesis in someone’s mind, but they are not, of themselves, right or wrong. Since words get the gold star, students think verbal behavior has a truth-value; even worse, if you say “I don’t know”, you have no chance of getting a gold star. Even visualizing the symbols of the diffusion equation doesn’t mean you’ve formed a hypothesis. Passwords test memory; explanations test models, anticipations of experience. The school system is all verbal behavior. If we are not strict about “Eh, maybe because of heat conduction?” being fake, the student will get stuck on some password; this happened to the whole human species for thousands of years.
Science as Attire: The X-Men comics use words like “evolution”, “mutation”, and “genetic” to place themselves in the literary genre of science. How many media people understand science only as a literary genre? How many skeptics don’t know what evolution prohibits? Wanting to be part of the scientific in-crowd is like wearing a lab coat: saying “because of evolution” identifies you with a tribe. People dismiss superintelligence because it matches the apocalypse genre, not because it contradicts their model. They don’t realize that science is about models.
What are you proud to believe? Ask yourself which future experiences your belief prohibits from happening to you. That is the sum of what you have assimilated and made a true part of yourself; anything else is probably passwords or attire.
Password: a merely verbal dangling placeholder for a belief. Attire: belief as an expression of support for a social group.
Fake Causality: Fake explanations don’t feel fake; that’s what makes them dangerous. Phlogiston used effects as evidence for the cause AND the cause as evidence for effects. What is “fire”? Why does wood transform into ash? Eighteenth-century chemists answered: “phlogiston”. Phlogiston escaped from fuel as visible fire; as it escaped, the fuel lost phlogiston and so became ash. But one didn’t use phlogiston theory to predict the outcome of a chemical transformation: you looked at the result, then used phlogiston theory to explain it. It could explain everything. This was early science; no one realized there was a problem.
Causal inference: learning that it is raining (from some source other than observing the sidewalk to be wet) sends a forward-message from Rain to Sidewalk Wet and raises our expectation of the sidewalk being wet. If you observe the sidewalk to be wet, this sends a backward-message to our belief that it is raining, and this message propagates from Rain to all neighboring nodes except Sidewalk Wet. Count evidence exactly once; no update message ever “bounces” back and forth. We learn about parent nodes from observing children, and predict child nodes from beliefs about parents. But we don’t keep separate books for the backward-message and forward-message. Until you notice that no advance predictions are being made, the non-constraining causal node is not labeled “fake”.
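Both message directions are just probability calculations; a minimal sketch of the Rain/Sidewalk Wet network with made-up conditional probabilities:

```python
# Minimal sketch of the Rain -> Sidewalk Wet network. All numbers invented.
p_rain = 0.2            # prior belief that it is raining
p_wet_given_rain = 0.9  # P(sidewalk wet | rain)
p_wet_given_dry = 0.1   # P(sidewalk wet | no rain): sprinklers, etc.

# Forward-message: belief about Rain fixes the expectation of Sidewalk Wet.
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)
print(round(p_wet, 3))  # 0.26

# Backward-message: observing the sidewalk wet updates belief in Rain (Bayes).
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(round(p_rain_given_wet, 3))  # 0.692
```

Note the asymmetry the text insists on: the backward update happens once, when the sidewalk is actually observed; the new belief about Rain is not then re-used as fresh evidence about the sidewalk.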
Hindsight bias again: humans do not rigorously separate forward and backward messages; backward messages contaminate forward ones. Those who long ago went down the path of phlogiston were not trying to be fools. Are there any fake explanations in your mind? Thanks to hindsight bias, it’s not enough to check how well your theory “predicts” facts you already know; you’ve got to predict for tomorrow, not yesterday. It’s the only way a human can send a pure forward message.
Semantic Stopsigns: Why? Why? Why? Why…? Consider the seeming paradox of the First Cause: why the Big Bang? At this point, people say “God!” -> But where did God come from? Why does the metasystem exist? Or: why is this thing allowed to be uncaused? “God is uncaused” is isomorphic to “Time began with the Big Bang.” “God!” is attire, but not only: it is a semantic stopsign, not a proposition but a traffic signal telling you not to think past this point. See also “democracy!”, “intelligence” -> “How well have liberal democracies performed, historically, on problems this tricky?” No word is a stopsign of itself; the question is its effect on a particular person. What distinguishes a semantic stopsign is failure to consider the obvious next question.
Mysterious Answers to Mysterious Questions: The year is 1800. Why do muscles move, instead of lying there like clay? Is this not magic? “It appears that animated creatures have the power of immediately applying forces by which the motions of these particles are directed to produce desired mechanical effects.” Vitalism: the mysterious difference between living matter and non-living matter was “explained” by an élan vital. “Élan vital” was a stopsign, a shrine. But the greater lesson lies in the vitalists’ eagerness to pronounce life a mystery beyond science: they submitted; they were loath to relinquish their ignorance; they worshipped it. There are no phenomena which are mysterious of themselves, and an answer cannot be mysterious. Vitalism shared with phlogiston the error of encapsulating the mystery as a substance; neither concentrated probability density. We more readily postulate mysterious inherent substances than complex underlying processes.
These theories were mysterious answers to mysterious questions
- curiosity-stopper rather than anticipation-controller.
- no moving parts: not a specific mechanism, but a blank solid. The reason the mysterious force behaves as it does is wrapped inside the black box.
- cherished ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena.
- Nothing changes: the phenomenon possesses the same wonderful inexplicability that it had at the start.
The Futility of Emergence: What’s a current fake explanation? The noun “emergence”. There’s nothing wrong with saying “X emerges from Y” where Y is some specific model with moving parts: gravity arises from the curvature of spacetime, according to a specific mathematical model; chemistry arises from interactions between atoms, according to a specific mathematical model. But this is not the way “emergence” is commonly used. “Emergence” is commonly used as an explanation in its own right. What do you know, after you have said that intelligence is “emergent”? You can make no new predictions; you do not know anything about the behavior of real-world minds that you did not know before. It feels like you believe a new fact, but you don’t anticipate any different outcomes. No moving parts; its users confess their ignorance of the internals and take pride in it; the phenomenon is still a mystery.
Human intelligence is an emergent product of neurons firing == Human intelligence is a product of neurons firing. the ant colony is the emergent outcome of many individual ants. == the ant colony is the outcome of many individual ants.
“Emergence” has become very popular, just as saying “magic” used to be very popular. “Emergence” has the same deep appeal to human psychology, for the same reason. “Emergence” is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship. Emergence is the junk food of curiosity. Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors.
Say Not “Complexity” How would an AI invent for itself the concept of an “operator,” or “macro,” the key to solving the Rubik’s Cube? “Well, I think the AI needs complexity to do X, and complexity to do Y—” “Complexity should never be a goal in itself. You may need to use a particular algorithm that adds some amount of complexity, but complexity for the sake of complexity just makes things harder.” No one can think fast enough to deliberate, in words, about each sentence of their stream of consciousness; that would require an infinite recursion.
“Saying ‘complexity’ doesn’t concentrate your probability mass.”
Concepts are not useful or useless of themselves; only usages are correct or incorrect. People constantly skip over things they don’t understand, without realizing that’s what they’re doing.
The mistake takes place below the level of words. It requires no special character flaw; it is how human beings think by default, how they have thought since ancient times. There are many words that can skip over mysteries, and some of them would be legitimate in other contexts — “complexity,” for example. But the essential mistake is the skip-over itself. You have to feel which parts of your map are still blank, and more importantly, pay attention to that feeling. Academia: huge pressure to sweep problems under the rug so that you can present a paper with the appearance of completeness.
When we run into something we don’t understand, say “X magically does Y” — an explicit placeholder that carries no illusion of understanding.
Positive Bias: Look into the Dark 2-4-6 Yes. 4-6-8 Yes. 10-12-14 Yes. Turns out “Add two” is wrong. (The rule: ascending.) Subjects who attempt the 2-4-6 task usually try to generate positive examples rather than negative ones (triplets they expect to be labeled “Yes”). Test the triplet 8-10-12, hear that it fits, and confidently announce the rule. Not just “confirmation bias” (trying to preserve the belief you started with). By instinct, humans only live in half the world. You have to learn to flinch toward the zero, instead of away from it. THINK OF NONEXAMPLES So much of rationalist skill is below the level of words. Do you search for positive examples of positive bias, or spare a fraction of attention for what positive bias leads you to not see?
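The 2-4-6 structure can be sketched in code; the hidden “ascending” rule and the “add two” hypothesis are from the text, while the specific triplet 1-5-20 is an illustrative negative test:

```python
def hidden_rule(t):
    # The experimenter's actual rule: strictly ascending.
    return t[0] < t[1] < t[2]

def add_two(t):
    # The subject's hypothesis: each number is the previous plus two.
    return t[1] == t[0] + 2 and t[2] == t[1] + 2

# Positive tests of the hypothesis: every one also fits the hidden rule,
# so hearing "Yes" each time cannot distinguish the two rules.
positive_tests = [(4, 6, 8), (10, 12, 14), (8, 10, 12)]
assert all(hidden_rule(t) and add_two(t) for t in positive_tests)

# Looking into the dark: a triplet the hypothesis says should fail.
negative_test = (1, 5, 20)
assert not add_two(negative_test)
assert hidden_rule(negative_test)  # "Yes" -- the add-two hypothesis is dead
```

Only the test that the hypothesis predicts should get a “No” carries discriminating evidence.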
Lawful Uncertainty Predict next card red/blue given base rate: 70% are blue. Winning strategy: always predict the more common event; yields a 70% success rate. Subjects tended to match probabilities: predict blue 70% of the time, red 30% of the time; yields a 58% success rate. The most important idea in all of rationality: subjects cannot believe they cannot predict. You should bet blue every time, even while constructing hypotheses in your head. But the all-blue strategy just didn’t occur to the subjects. Counterintuitive, that the optimal strategy is to behave lawfully in an environment with random elements. When your knowledge is incomplete — meaning that the world will seem to have randomness — randomizing takes you further from the target, not closer. In a world already foggy, throwing away your intelligence makes things worse. So there are not many rationalists: most who perceive a chaotic world will try to fight chaos with chaos. There are lawful forms of thought that still generate the best response, even when faced with an opponent who breaks those laws. Decision theory does not burst into flames and die when faced with an opponent who disobeys decision theory. Each bet that you make on red is an expected loss, and so too with every departure from the Way in your own thinking. How many Star Trek episodes are thus refuted? How many theories of AI?
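The 70% vs. 58% success rates follow from the expected accuracy of each strategy; a minimal sketch:

```python
def expected_accuracy(p_blue, q_predict_blue):
    # Probability a single guess is right, when cards are blue with
    # probability p_blue and we guess blue with probability q_predict_blue.
    return p_blue * q_predict_blue + (1 - p_blue) * (1 - q_predict_blue)

p = 0.70
maximizing = expected_accuracy(p, 1.00)  # always predict the majority color
matching = expected_accuracy(p, 0.70)    # probability matching
# maximizing -> 0.70, matching -> 0.58: every bet on red is an expected loss
```

Any q below 1 just trades away accuracy, which is the sense in which randomizing takes you further from the target.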
My Wild and Reckless Youth Do not attempt long chains of reasoning or complicated plans. I gave a mysterious answer to a mysterious question once: neurons were exploiting quantum gravity No retrospective predictions. To a Bayesian: if a hypothesis does not have a favorable likelihood ratio over “I don’t know,” it raises the question of why you today believe anything more complicated than “I don’t know.”
Careful not to believe in magic, mysticism, carbon chauvinism, or anything of that sort. As if you could save magic from being isomorphic to magic, by calling it quantum gravity. I avoided everything that Traditional Rationality told me was forbidden, but what was left was still magic.
Why do people who call themselves “rationalists” not rule the world? You need a lot of rationality before it does anything but lead you into new and interesting mistakes.
Could’ve spent thirty years on my silly idea, so long as I succeeded in falsifying it eventually, and was honest about my predictions, and accepted the disproof when it arrived. In Traditional Rationality, you’re allowed to guess, and then test your guess. But experience has taught me that if you don’t know, and you guess, you’ll end up being wrong.
Failing to Learn from History I didn’t realize that solving a mystery should make it feel less confusing. But I was trying to explain the Mysterious Phenomenon! not render it mundane, something not calling for an unusual explanation. Stars and matter and life were mysteries for hundreds of years and thousands of years, from the dawn of human thought. I thought the lesson of history was that astrologers and alchemists and vitalists had a character flaw. “But surely, if a phenomenon really was very weird, a weird explanation might be in order?” We read history but we don’t live it, we don’t experience it. If only I had personally postulated astrological mysteries and then discovered Newtonian mechanics, postulated alchemical mysteries and then discovered chemistry, postulated vitalistic mysteries and then discovered biology. “No way am I falling for that again.”
Making History Available Fallacy of generalization from fictional evidence: The Terminator is available as if it were an illustrative historical case. The inverse: failing to be sufficiently moved by historical evidence. The trouble with fictional evidence is that it’s not from the same distribution as our universe; history is, but it reaches us only through dry texts. In our ancestral environment, there were no movies; what you saw with your own eyes was true. Is it any wonder that fictions we see in lifelike moving pictures have too great an impact on us? The inverse error is to treat history as mere story, to process it with the same part of your mind that handles the novels you read. I realized that the invention and destruction of vitalism had actually happened to real people. To feel the force of history, I try to think as if everything I read about in history books actually happened to me. Is there so much difference between seeing an event through your eyes — which is actually a causal chain involving reflected photons, not a direct connection — and seeing an event through a history book? Photons and history books both descend by causal chains from the event itself. The Earth became older, of a sudden. I should remember being a thousand peasants for every ruler. People sometimes wonder if overcoming bias is important. Don’t you remember how many times your biases have killed you? Don’t imagine how you could have predicted the change, for that is amnesia. Remember that, in fact, you did not guess. Remember how, century after century, the world changed in ways you did not guess. Maybe then you will be less shocked by what happens next.
Explain/Worship/Ignore? Ignore: you could simply not ask why. Explain: find which substances or processes cause the thing. Worship: enjoy the mysteriousness. Each time you hit Explain, science grinds for a while, returns an explanation, and then another dialog box pops up. We can hit Explain for the Big Bang, wait, and maybe someday it will return a perfectly good explanation. But then that will just bring up another dialog box. So, if we continue long enough, we come to an Explanation That Needs No Explanation, a place where the chain ends — and this, maybe, is the only explanation worth knowing. There — I just hit Worship. There are more ways to worship something than lighting candles around an altar. If I’d said, “Huh, that does seem paradoxical. I wonder how the apparent paradox is resolved?” then I would have hit Explain, which does sometimes take a while to produce an answer. If the whole issue seems to you unimportant, or irrelevant, if you’d rather put off thinking about it, you have hit Ignore.
“Science” as Curiosity-Stopper Imagine I caused a brilliant light, flaring in empty space beyond my outstretched hands. Most people would be fairly curious. I want to cast my spells whenever and wherever I please. Is there a spell that stops curiosity? Yes indeed! Whenever anyone asks “How did you do that?” I just say “Science!” You don’t actually know anything more than you knew before I said the magic word. If you thought the light bulb was scientifically inexplicable, it would seize the entirety of your attention. You would drop whatever else you were doing, and focus on that light bulb. “scientifically explicable” means someone else knows how the light bulb works. Because someone else knows, it devalues the knowledge in your eyes. Look at yourself in the mirror. Do you know what you’re looking at? Do you know what looks out from behind your eyes? Do you know what you are? Some of that Science knows, some of it Science does not. Why should that distinction matter to your curiosity, if you don’t know? Prioritize, if you must. But do not complain that cruel Science has emptied the world of mystery.
Truly Part of You Beware names. You can write a trivial program and name it ‘HAPPINESS’. If you delete the suggestive English names, they don’t grow back. How can you realize that you shouldn’t trust your seeming knowledge that “light is waves”? One test is, “Could I regenerate this knowledge if it were somehow deleted from my mind?” If you acquire your beliefs about beavers by someone else telling you facts about “beavers,” you may not be able to recognize a beaver when you see one. If you don’t have enough experience to regenerate beliefs when they are deleted, then do you have enough experience to connect that belief to anything at all? If no one had ever explained mathematical proof to me, would I be able to reinvent that? Someone invented it. What was it that they noticed? Would I notice, if I saw something equally novel and equally important? Would I be able to think that far outside the box? How much of your knowledge could you regenerate? It’s not just a test to cast out insufficiently connected beliefs. It’s a way of absorbing a fountain of knowledge, not just one fact.
The Simple Truth Truth is correspondence from a mental model to external reality.
II. HOW TO ACTUALLY CHANGE YOUR MIND
Tsuyoku Naritai! (I Want to Become Stronger) ‘the Torah loses knowledge in every generation; science gains. So science must surpass Torah.’ So long as you keep moving forward you will reach your destination. tsuyoku naritai, the will to transcendence. the Ashamnu does not end, “But that was this year, and next year I will do better.” “We are all biased, we are all irrational, not fully informed, overconfident, poorly calibrated…” Tell me how you plan to become less irrational, more informed, less overconfident, better calibrated. Do not glory in your awareness of your flaws. Never confess you are just as flawed as I am unless you can tell me what you plan to do about it. Afterward you will still have plenty of flaws left, but that’s not the point; the important thing is to do better, to keep moving ahead, to take one more step forward.
The Proper Use of Humility Good science requires humility. What kind? Creationist?: “Who can know whether evolution is correct? It is just a theory.” -> motivated skepticism, disconfirmation bias. Safety engineer?: fail-safes, even though they’re damn sure the machinery won’t fail? -> Risk mindset, outside view. Lazy maths student?: “No matter how much I check, I can’t ever be certain I’m correct”, doesn’t check even once? -> Social modesty (regulating status, rather than accuracy). Latter is default, ancient. Scientific humility is recent. “If you do not seek perfection you will halt before taking your first steps.” It is not normal to not compromise. So science disturbs social people. Scientists are getting above themselves — they think they’re chiefs of the whole tribe! Where people have vague models, they usually end up believing whatever they started out wanting to believe. Always ask: “Does acting humbly make you stronger, or weaker?” Are you adding a few extra support cables or shrugging? Buy a lottery ticket: “you can’t know I’ll lose.” Disbelieves evolution: “you can’t prove it’s true.” Humility, in its most commonly misunderstood form, is a fully general excuse not to believe something. Beware of fully general excuses! The point of thinking is to shape our plans. “To be humble is to take action in anticipation of errors. To confess your fallibility and then do nothing is boasting of your modesty.”
Tsuyoku vs. the Egalitarian Instinct A successful hunter-gatherer downplayed accomplishments, to avoid envy. If you’re ashamed of wanting to do better than others — the median’s where you stop. Unhealthy? I’ll take all the useful motivation I can get. Unhealthy to be ashamed of doing better. Zero-sum? So is Go; it doesn’t mean we should abolish that human activity, if fun or interesting. Just run as fast as you can. Perhaps wise to downplay the accomplishment. Fun to proudly display your modesty, so long as everyone knows how much you have to be modest about. Even if you only whisper it: Tsuyoku, tsuyoku! Then set a higher target. Even though I fall behind, I’ll run as fast as I can.
- The Third Alternative
- “Believing in Santa Claus gives children a sense of wonder and encourages them to behave well.
- Therefore, even though Santa-belief is false, it is a Noble Lie, preserved because useful.”
- False dilemma / package-deal fallacy. Other policies supply a sense of wonder, such as a space launch or scifi. Praise without bribes leads to unsurveilled good behavior. Noble Lies are generally package-deal fallacies; but if we need the supposed gain, construct an alternative.
- 1. Decide to look for one; 2. Search; 3. Accept it. One factory for false dilemmas is pointing to a supposed benefit of the status quo over nothing. Finding a Third Alternative would destroy the justification. If the goal is to justify a particular strategy by claiming it helps people, an alternative is a competitor. We have to generate options before comparing; this is costly. But we always have resources for justifying our current policy. Believing whatever you started out with is more convenient than updating. One does not find Noble Liars who calculate an optimal new Noble Lie; they keep whatever lie they started with. “If I saw there’s a superior alternative, would I be glad?” If no, you may not have searched for a Third Alternative. Did I spend five minutes with my eyes closed, brainstorming wild and creative options? Were you careful not to think of a good one? There are mental searches we secretly wish to fail; when success is uncomfortable, people take the earliest possible excuse.
Lotteries: A Waste of Hope Is the real benefit of lottery play non-financial? A dollar for a day’s worth of fantasy? But it occupies your valuable brain with a negligible fantasy. Money sink and emotional sink. A passive and infinitesimal replacement for self-improvement. Dreams are for noticing possible actions. But how can such reality-limited fare compete with the artificially sweetened prospect? Our intuitive anticipations aren’t very flexible. People don’t realize that expected utility ought to override imprecise instincts, and instead treat the calculation as merely one emotionally weak argument. Overcoming bias requires (1) noticing the bias, (2) analyzing it, (3) deciding it’s bad, (4) figuring out a workaround, and (5) implementing it. (3) should be the easiest of the five.
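The expected-utility point can be made concrete; the ticket price, jackpot, and odds below are illustrative assumptions, not figures from the text:

```python
# Illustrative lottery: $1 ticket, $10M jackpot, 1-in-100M odds (assumed).
ticket_price = 1.00
jackpot = 10_000_000.00
p_win = 1 / 100_000_000

expected_value = p_win * jackpot - ticket_price
# expected_value = -0.90: each ticket is an expected loss of ninety cents,
# however vividly the win is imagined.
```

The calculation is one line; the hard part, per the text, is letting it override the imprecise instinct.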
New Improved Lottery “big difference between zero chance of becoming wealthy, and epsilon.” No: there is an order-of-epsilon difference. Grant that the lottery sells people epsilon hope. Improve it. Pays out every 5 years on average, at a random time. Buy in once and get years of epsilon. Mobile app to see their instantaneous chances of winning. Tastier than grad school, getting home early.
People are willing to pay; it must be valuable. The alternative is that consumers are making mistakes, and we all know that can’t happen. If you believe the lottery is a service, it is clearly an enormously overpriced service; it’s your solemn duty as a citizen to demand the New Improved Lottery instead.
But There’s Still a Chance, Right? A: “Maybe chimp and human DNA is similar by coincidence.” B: “The odds of that are like googol : 1.” A: “But there’s still a chance, right?” B: “Practically, no.” Human intuitions make a qualitative distinction between “No chance” and “A very tiny chance.” Probability theory lets us calculate a chance too tiny to be worth the mental space — but by that time, you’ve already calculated it. We can use words to describe numbers that small, but not feelings — a feeling that small doesn’t exist. Confirmation bias: Once an idea gets into your head, you tend to find support for it everywhere Also qualitative distinction between “certain” and “uncertain” arguments: if not certain, allowed to ignore it. If you’re going to ignore it when the likelihood is one over googol, why not also ignore likelihood zero? Why is it so much worse to ignore certain evidence than uncertain evidence? “But you can’t prove me wrong.” If you’re going to ignore a probabilistic counterargument, why not ignore a proof, too?
The Fallacy of Gray A: “No one does pure good or pure bad. It’s all gray. No one is better than anyone else.” B: “You conclude all grays are the same shade. You mock the simplicity of the two-color view, yet replace it with a one-color view…”
Treating all probabilities as simply “uncertain” licenses you to ignore them all. Banks: “every society imposes some of its values on those raised within it, but some societies try to maximize that, and some try to minimize it.” Even if you can’t switch something from on to off, you can still increase it or decrease it. Many have said of Overcoming Bias: “It is impossible, no one can completely eliminate bias.” That which I cannot eliminate may be well worth reducing. Gandhi was imperfect and Stalin was imperfect, but not the same shade of imperfection. “Every scientific paradigm imposes some of its assumptions on how it interprets experiments.” But there are worldviews which try to minimize that imposition, and worldviews which glory in it. “Science is based on faith too, so there!” “So the priests of science can blatantly, publicly, verifiably walk on the Moon as a faith-based miracle, and your priests can’t do the same.” NB: “This isn’t as light as you think, because of specific problems X, Y, and Z” is good. If there is no black and white, there is yet lighter and darker.
- Absolute Authority
The Authoritative Way vs the Quantitative Way. Religious vs scientific.
“Science doesn’t really know anything. You can’t know for certain you’re right.
You changed your minds about gravity — who’s to say tomorrow you won’t about evolution?”
To the unenlightened, there is only binary authority.
“What changes cannot be an authority, can never again be trusted — they’re a witness caught lying.”
When someone is accustomed to certainty, you can’t just say, “Science is probabilistic, like all knowledge.”
Scriptures come from infallible God; any flaw would destroy their authority utterly; claiming certainty is mandatory whether you’re certain or not.
Also school: when the teacher tells you something, you have to parrot it back. But when a pupil makes a suggestion in class, you’re free to agree or disagree, and no one punishes you. Belief as social authority.
Authorities must be yielded to, while suggestions can be obeyed or discarded as a matter of personal preference. Science, since it confesses itself to have a possibility of error, must belong in the second class.
They believe they have knowledge more reliable than probabilistic guesses: more reliable than Science (they say) because it never admits error, never changes its mind, no matter how often it is contradicted.
The power of science comes from changing our minds. If you’ve never admitted you’re wrong, it doesn’t mean you’ve made fewer mistakes.
Anyone can say they’re absolutely certain. Scientists have higher standards for saying that.
When a scientist says the same thing, it means that they think the probability is so tiny that you couldn’t see it with an electron microscope, but the scientist is willing to see the evidence in the extremely unlikely event that you have it.
How do you teach someone to live in a universe without certainty?
- you can live without certainty — it does not deprive you of moral or factual distinctions. You do not need absolute knowledge of absolutely good options and absolutely evil options in order to be moral. You can have uncertain knowledge of relatively better and relatively worse options, and still choose.
- doubt, questioning, and confession of error are not terrible shameful things.
- There’s the whole notion of gaining information by looking at things.
- Calibrated confidence — “probability” =/= emotional commitment to an idea. If anything, statements people are really fanatic about are less likely to be correct. Fanatic professions of belief do not arise in the absence of opposition. Scientists should say “We are not INFINITELY certain” rather than “We are not certain.” (Suggests you know some specific reason for doubt.) Reversed stupidity is not intelligence. You can’t arrive at a correct answer by reversing every single line of an argument that ends with a bad conclusion.
- How to Convince Me That 2 + 2 = 3
Unconditional facts are not the same as unconditional beliefs.
I cannot conceive of 2 + 2 = 4 being false. But belief can always be contradicted.
Imagine evidential crossfire: physical observation, mental visualization, social agreement.
- neurological fault?
- someone messing with me (hypnosis?) We got 2+2 = 4 from empirical entanglement. Two possibilities, for a belief — either it got there via a mind-reality entangling, or not. If not, the belief can’t be correct except by coincidence. For beliefs with any shred of internal complexity, the space of possibilities is large enough that coincidence vanishes.
Infinite Certainty Universal in the territory =/= certain in the map. Degree of truth =/= degree of uncertainty. CALIBRATION: to justify “99% confidence,” imagine being asked 100 such questions and erring only once. Humans who say “99% confident” do not have 99% accuracy, in general. Flip a coin and don’t look: the coin is heads or tails — and you’re completely unsure which. The credibility of “2 + 2 = 4 is always true” far exceeds the credibility of any particular philosophical position on what “true,” “always,” or “is” means. I could have hallucinated all that previous evidence, or misremembered it. Maybe 99.99% on 2 + 2 = 4. Not “53 is prime.” Even if I’m 99% confident of p, it doesn’t mean I think p is true 99 times out of 100. 100% confidence means infinite certainty. 99.9999% means one error in one million statements: years of faultless talking. ‘Once I assign a probability of 1 to a proposition, I can never undo it. No matter what I see or learn, I have to reject everything that disagrees with the axiom.’
0 And 1 Are Not Probabilities Infinities are not integers: they are symbols for talking about integers. They don’t even behave like integers; they violate the field axioms (5 + inf = inf, inf − inf != 5). In the real world, you don’t need a whole lot of infinity. Probabilities and odds are isomorphic. Odds (O = P/(1 − P)) don’t allow probability 1. Odds are simpler for Bayesian updates, probabilities for additive questions like “chance of 1 to 4 on a d6?” Cox’s Theorem: all ways of representing uncertainties that obey certain constraints are isomorphic. Log odds are even simpler for updates, and don’t allow 0 or 1. Reaching the ends seems simple in additive terms: 0.9999 and 0.99999 seem only 0.00009 apart; to get to probability 1 from probability 0.99999, you would travel merely 0.00001. But in odds, 0.9999 and 0.99999 go to 9,999 and 99,999. In log odds, 0.9999 and 0.99999 go to 40 decibels and 50 decibels. The log-odds distance between two uncertainties equals the evidence you need to go from one to the other. This exposes the fact that reaching infinite certainty requires infinitely strong evidence. I propose: 1 and 0 are not in the probabilities. If you made a magical symbol to stand for “all possibilities I haven’t considered,” then you could marginalize over events including this, and get a magical symbol “T” meaning infinite certainty. Is there some way to derive a theorem without using magic symbols? There are mathematicians who refuse to believe in the excluded middle or infinite sets; I’d like to not believe in absolute certainty.
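A minimal sketch of the odds and log-odds transforms (decibels here are 10·log10 of the odds, matching the text’s 40 dB and 50 dB figures):

```python
import math

def odds(p):
    # Odds O = P / (1 - P); blows up as p approaches 1, which is the point.
    return p / (1 - p)

def log_odds_db(p):
    # Log odds measured in decibels of evidence.
    return 10 * math.log10(odds(p))

# 0.9999 and 0.99999 look only 0.00009 apart as probabilities, but:
#   odds:     ~9,999  vs  ~99,999
#   log odds: ~40 dB  vs  ~50 dB
# Each extra nine costs another ~10 dB; probability 1 needs infinite evidence.
```

In the log-odds representation, probability 1 sits at positive infinity and probability 0 at negative infinity, so neither is reachable by any finite update.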
Your Rationality Is My Business Is it my business if someone else chooses to believe what is pleasant rather than what is true? Well, why do you care whether I care whether someone else cares about the truth? But seriously: it is right to have an interest in what human civilization becomes in the future. One of those interests is the human pursuit of truth, which has strengthened slowly over the generations. I wish to strengthen that pursuit further, in this generation. That is a wish of mine, for the Future. And we are all players on that vast gameboard, whether we accept responsibility or not. People hate normative epistemology because of heretic persecution. Non sequitur. Let’s argue against bad ideas but not set their bearers on fire. Science: factual disagreements should be decided with experiments and mathematics, not violence & edicts. You should have to win by convincing people, and should not be allowed to burn them. Advocates of relativism or selfishness are not truly relativistic or selfish. Real relativist wouldn’t judge realists. Real egoist would get on with making money. Relativism: goal is to prevent players from making certain kinds of judgments. Selfishness: goal is to make all players selfish. If there are any true Relativists or Selfishes, we do not hear them — they remain silent. I cannot help but care how you think, because each time a human being turns away from the truth, the unfolding story gets a little darker. Lying to yourself does not shadow humanity’s history as much as public lies or setting people on fire. Yet a part of me cannot help but mourn.
Politics is the Mind-Killer Politics was once maximally emotional: life and death. Today, arguments about the minimum wage retain this drama, as if losing still meant death. When making a general point, avoid examples from contemporary politics. Politics is important and needs rationality — but it is a terrible place to learn or discuss rationality. ‘Arguments are soldiers. You must support all arguments of your side; otherwise you are stabbing your soldiers in the back — providing aid and comfort to the enemy.’ Scientists can suddenly turn into slogan-chanting zombies. Why would anyone pick such a distracting example to illustrate nonmonotonic reasoning? Not mentioning politics is like trying to resist a chocolate cookie. It doesn’t matter whether (you think) the Republican Party really is at fault. It’s just better for the community’s growth to discuss it without invoking color politics.
Policy Debates Should Not Appear One-Sided Hanson: “Let’s have Banned Goods Stores.” Yudkowsky: Some mother of five will buy “Sulfuric Acid Drink” for her arthritis and die. Why did people think I was in favor of regulation? They deny all costs of a favored policy; policy tradeoffs seem simpler than they actually are. On questions of fact it’s legitimate to expect one-sidedness; the facts point one way. Complex actions with many consequences will not be one-sided. To the politically infested, policy debates appear one-sided — drawbacks of your favored policy are enemy soldiers, to be attacked. But thinking it’s deep to just average policy positions is also a failure mode. It’s an unfair universe. Humans have strong negative reactions to perceived unfairness; it’s stressful. -> change view of the facts — deny that unfair events happened, or edit history to make it appear fair. -> change morality — deny that the events are unfair.
Birth lottery for intelligence — unfairness so extreme that many people choose to deny the facts. Some people born to environments where the witch doctor tells them that it is wrong to be skeptical. Are you so smart that you’d have been a proper scientific skeptic even if born in 500 CE?
Saying “People who buy dangerous products deserve to get hurt!” is refusing to live in an unfair universe. Economist: “They don’t deserve to get hurt but the cost-benefit is in favour of keeping it open, sorry.” I draw a line at capital punishment. If you’re dead, you can’t learn from your mistakes. Unfortunately the universe doesn’t agree with me.
The Scales of Justice, the Notebook of Rationality Scales are zero-sum: a gross distortion. Discussion as combat; only two sides, and points scored are irrelevant; binary win/lose. Everything the winner says must be true, and everything the loser says must be wrong. But the facts don’t know whose side they’re on. People tend to judge problems by an overall good or bad feeling. “Reactor produces less waste” -> “probability of meltdown lower”: mixing up logically distinct questions — treating facts like soldiers on different sides. All Bayesian evidence consists of probability flows between hypotheses; there is no such thing as evidence that “supports” or “contradicts” a single hypothesis, except insofar as other hypotheses do worse or better. For a strictly factual question with a binary answer space, the scales would be appropriate. But not all arguments reduce to mere up or down.
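The “probability flows between hypotheses” claim is just Bayes in odds form; the likelihoods below are illustrative assumptions, not from the text:

```python
# Two hypotheses about one factual question, starting equally plausible.
prior_odds = 1.0
p_e_given_h1 = 0.6   # how strongly H1 predicted the evidence (assumed)
p_e_given_h2 = 0.3   # how strongly H2 predicted it (assumed)

posterior_odds = prior_odds * (p_e_given_h1 / p_e_given_h2)
# posterior_odds ~ 2.0: e "supports" H1 only insofar as H2 predicted it
# worse; the probability mass flowing toward H1 is exactly the mass
# flowing away from H2.
```

Evidence is always a comparison between hypotheses, never a point scored for one side in isolation.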
Correspondence Bias CB: tendency to draw inferences about a person’s essence from behaviors entirely explained by context. Simplistic correspondence from their action to their essence. Fundamental attribution error: tendency to overattribute others’ behaviors to their dispositions, while reversing this tendency for ourselves. False consensus effect: we overestimate how likely others are to respond the same way as us. You kick a vending machine for no visible reason, and are “an angry person.” I kick the vending machine because the bus was late and now the damned machine has eaten my lunch money again. We attribute our actions to our situations, seeing our behaviors as normal responses. But when someone else kicks a vending machine, we don’t see their history trailing behind them. Priors: more late buses in the world than mutants born with unnaturally high anger levels. Similarly, any given aspect of someone’s disposition is probably not very far from average. Even when informed of situational causes, we don’t properly discount the behavior. Mechanisms sound more complicated than essences; they are harder to think of, less available. So ask what situations people see themselves as being in. Most people see themselves as perfectly normal. Even people you hate, people who do terrible things, are not exceptional mutants. When you understand this, you are ready to stop being surprised by human events.
Are Your Enemies Innately Evil? When someone offends us — commits an action of which we (rightly or wrongly) disapprove — correspondence bias redoubles.
Evil deeds seem to demand the evil-disposition hypothesis. What might the Enemy believe about their situation that would reduce the bizarreness of their behavior? Are al-Qaeda mutants who hate freedom? The Enemy’s self-narrative is not going to make the Enemy look bad. If you think of them as self-consciously evil, you are wrong. But rhetorical incentives lead to a spiral of widespread inaccuracy. It also simplifies combat. People hate people who take this ITT view; understanding is mistaken for agreement or justification. When you accurately estimate the Enemy’s psychology, you should feel unbearable sadness or empathic horror. Enmity plus environmental influences implies tragedy. Welcome to Earth.
- Reversed Stupidity Is Not Intelligence
Human silliness operates orthogonally to alien intervention: there would be flying saucer cults whether or not there were flying saucers.
The conditional probability P(cults | aliens) isn’t less than P(cults | ¬aliens).
“Flying saucer cults exist” is not evidence against the existence of flying saucers, nor for it.
Stupidity does not reliably anticorrelate with truth.
If someone were reliably wrong 99.99% of the time, you could get 99.99% accuracy by reversing them; but nobody is that reliably wrong, because being so would itself require intelligence.
A car with a broken engine cannot drive backward at 200 mph, even if the engine is really really broken.
- To really argue against an idea, argue against its best arguments. Arguing against weak advocates proves nothing, because even the strongest idea attracts weak advocates.
- Exhibiting sad, pathetic lunatics is no evidence against the Idea itself.
- Your willingness to believe an idea shifts with your willingness to affiliate with the people associated with it.
- Not “system x+y+z is broken, therefore build a new system with none of x, y, z.” Maybe just z is at fault.
- If a hundred inventors fail to build flying machines using metal and wood and canvas, it doesn’t imply that what you really need is a flying machine of bone and flesh. Until you understand the problem, hopeful reversals are exceedingly unlikely to hit the solution.
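The likelihood-ratio point above can be made concrete with Bayes’ rule (the 1% prior and the 0.5s are illustrative numbers, not from the text): a hypothesis that makes the observation exactly as likely as its negation does gains nothing from that observation.

```python
import math

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule for a binary hypothesis H, given evidence E."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# Cults are about equally likely with or without aliens, so the
# likelihood ratio is 1 and the belief doesn't move at all:
prior_aliens = 0.01
after_cults = posterior(prior_aliens, 0.5, 0.5)
assert math.isclose(after_cults, prior_aliens)
```

Only a source whose reports genuinely anticorrelate with the truth (likelihood ratio far from 1) would be worth reversing, and no fool is that reliably wrong.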
- Argument Screens Off Authority
- What counts as an argument from authority? Is trusting an expert a fallacy?
- A good technical argument eliminates reliance on the authority of the speaker.
- If we have all the steps, all of the support, all the other authorities appealed to, we can ignore credentials.
- (Assuming we have enough technical ability to process the argument. Otherwise, our belief depends a great deal on the speaker’s authority.)
- There is an asymmetry between argument and authority. If we know your authority, we are still interested in hearing the arguments; but if we know the arguments fully, we have very little left to learn from authority. Are authority and argument fundamentally different kinds of evidence? How do we represent the difference in probability theory?
Night -> Sprinkler -> Slippery
If we know the sprinkler is on, the probability is 90% that the sidewalk is slippery. The sprinkler is on 10% of nights (and the sidewalk is otherwise dry), so if we know only that it’s night, P(slippery) is 9%. If we know it’s night and the sprinkler’s on, P(slippery) is 90%. I.e., P(Slippery | Night, Sprinkler) = P(Slippery | Sprinkler). The Night node is screened off by the Sprinkler node (Slippery is conditionally independent of Night, given Sprinkler).
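The sprinkler numbers can be checked by brute-force enumeration of the joint distribution. A minimal sketch, with assumed fillers the notes leave implicit: P(night) = 0.5, the sprinkler never runs by day, and the sidewalk is never slippery unless the sprinkler is on.

```python
from itertools import product

# Chain: Night -> Sprinkler -> Slippery, with the numbers from the notes.
P_NIGHT = 0.5
P_SPRINKLER = {True: 0.1, False: 0.0}   # P(sprinkler on | night?)
P_SLIPPERY = {True: 0.9, False: 0.0}    # P(slippery | sprinkler?)

def joint(night, sprinkler, slippery):
    p = P_NIGHT if night else 1 - P_NIGHT
    p *= P_SPRINKLER[night] if sprinkler else 1 - P_SPRINKLER[night]
    p *= P_SLIPPERY[sprinkler] if slippery else 1 - P_SLIPPERY[sprinkler]
    return p

def p_slippery(**given):
    """P(Slippery | given), by summing the joint over consistent worlds."""
    num = den = 0.0
    for n, s, w in product([True, False], repeat=3):
        world = {"night": n, "sprinkler": s, "slippery": w}
        if all(world[k] == v for k, v in given.items()):
            den += joint(n, s, w)
            if w:
                num += joint(n, s, w)
    return num / den

print(p_slippery(night=True))                  # ≈ 0.09
print(p_slippery(sprinkler=True))              # ≈ 0.9
print(p_slippery(night=True, sprinkler=True))  # ≈ 0.9: Night is screened off
```

Learning Night on top of Sprinkler changes nothing; the last two queries agree, which is exactly the screening-off claim.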
Truth -> Argument strength -> Expert belief
So the path is blocked, and P(truth | argument, expert) = P(truth | argument).
In practice you can never completely eliminate reliance on authority. Good authorities know about counterevidence. It is also very hard to reduce arguments to pure math; short of that, judging the strength of an inferential step may rely on intuitions you can’t duplicate without thirty years of experience.
Hug the query. Hypothesis -> Direct implication -> Indirect implication. You can never get more information from distant nodes. You want to stay as close to the original question as you can, to screen off as much as possible. authority(Kelvin) > authority(Wright Brothers), but this means nothing once the plane takes off. Argue physics > argue credentials. Argue physics > argue rationality. Argue the original issue > argue any proxy. Who was more rational? If we can check their calculations, we don’t have to care! Sometimes you don’t know the background, or the private information, or there isn’t time. Then it is often worthwhile to judge the speaker’s rationality. But do it with a hollow feeling.
Rationality and the English Language. Orwell was an artist-rationalist. His adversary was not Nature, but human evil: evil hides in muddy thinking. To imprison people for years without trial, you must prevent clear images from outraging conscience. “Unreliable elements were subjected to an alternative justice process.” The passive voice removes the actor, leaving only the subject: subjected by whom? Static noun phrases keep anything unpleasant from actually happening. It sounds more authoritative to say “The subjects were administered Progenitorivox” than “I gave each college student a bottle of 20 Progenitorivox, and told them to take one every night until they were gone.” The first is too abstract to show what actually happened: the postdoc with the bottle in their hand, trying to look stern; the student listening with a nervous grin. Likewise, clichés.
Nonfiction conveys knowledge, fiction conveys experience.
Authors: What the audience thinks you said is what you said; you can’t argue with the audience. A fictional experience is a continuous stream of first impressions. A rationalist should become consciously aware of the experiences which words create. Meaning does not excuse impact! “A speaker who uses that kind of [aggressive cliché] has gone some distance toward turning himself into a machine… his brain is not involved, as it would be if he were choosing his words for himself. In prose, the worst thing one can do with words is surrender to them.” “When you think of something abstract you use words from the start, and unless you make an effort to prevent it, the existing dialect rushes in, at the expense of blurring or even changing your meaning.” “It is better to put off using words as long as possible and get one’s meaning through pictures and sensations.”
Human Evil and Muddled Thinking. Orwell worked for the future, with clear writing. Muddled language is muddled thinking. “Political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness,” because the real rationales are unpalatable, vote-losers. “If you simplify your English, when you make a stupid remark its stupidity will be obvious, even to yourself.” For evil to avoid its natural opposition, revulsion must remain latent; clarity must be avoided. “Since you don’t know what Fascism is, how can you struggle against Fascism?” Where there is human evil in the world, where there is cruelty and torture and deliberate murder, there are biases enshrouding it. Our last enemy is ourselves; and this is a war, and we are soldiers.
Knowing About Biases Can Hurt People Knowing that experts are miscalibrated lets you dismiss anyone you don’t like. If you’re irrational to start with, having more knowledge can hurt you
- Prior attitude effect. Subjects who feel strongly evaluate supportive arguments more favorably.
- Disconfirmation bias. Subjects spend more time and resources denigrating contrary arguments.
- Confirmation bias. Subjects free to choose info sources seek out supportive sources.
- Attitude polarization. A balanced set of pro and con arguments exaggerates initial polarization.
- Attitude strength effect. stronger attitudes -> more prone to the above.
- Sophistication effect. Politically knowledgeable more prone to the above (more special cases).
too much ready ammo is a primary way people with high mental agility end up stupid Even “sophisticated arguer” can be deadly, if it leaps too readily to mind when you encounter an intelligent person who says something you don’t like.
Never mention calibration and overconfidence unless you have first talked about disconfirmation bias, motivated skepticism, sophisticated arguers, and dysrationalia in the mentally agile. First, do no harm!
NB: “Yeah but experts are miscalibrated” is a dangerous Fully General Counterargument.
Update Yourself Incrementally. Active confirmation bias / explaining everything in prospect is invalid by definition (there must be an equal and opposite expectation of counterevidence). Taking hits occasionally is okay and expected. But public opinion is binary, qualitative, and demands perfection: as if it were “impossible for a false theory to fit even a single event. Thus, any confirming evidence is all a theory needs.” This is how humans have argued, trying to defeat all enemy arguments, denying the enemy even a single shred of support, as if allowing a single item of probabilistic counterevidence would be the end of the world. But what about instrumental stubbornness? “If you concede that any counterarguments exist, the Enemy will harp on them over and over! You’ll lose!” Rationality is not for winning debates, it’s for deciding which side to join. If choosing the wrong side is viscerally terrifying, you’d best integrate all the evidence. Even with a correct model, if it is not an exact model, you will sometimes need to revise your belief down. If the theory is true, supporting evidence will come in shortly. If the theory is false, you don’t really want it anyway. When not even a single contrary observation is allowed, each one produces cognitive dissonance and has to be argued away; this rules out incremental progress. If you think you already know what evidence will come in, you’re already sure of your theory, and there is little room for its probability to go up. However unlikely the disconfirming evidence, the resulting downward shift must be large enough to precisely balance the anticipated gain on the other side: the weighted mean of your expected posterior probability must equal your prior probability. It is silly to be terrified of revising downward if you’re bothering to investigate a matter at all. On average, you must anticipate as much downward shift as upward shift from every individual observation. Perhaps an iota of antisupport comes in over and over, and your belief drifts down and further down. Until, finally, you realize.
no point in constructing excuses. In that moment of realization, you have already relinquished your cherished belief. You can’t become stronger by keeping the beliefs you started with.
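The “weighted mean” claim above is the conservation-of-expected-evidence identity, easy to verify numerically:

```python
# Conservation of expected evidence: the probability-weighted mean of
# your possible posteriors equals your prior. (The 0.3 / 0.8 / 0.2
# figures are arbitrary; the identity holds for any choice.)
prior = 0.3
p_e_h, p_e_not_h = 0.8, 0.2                   # P(E | H), P(E | not-H)

p_e = p_e_h * prior + p_e_not_h * (1 - prior)
post_e = p_e_h * prior / p_e                        # belief rises on E
post_not_e = (1 - p_e_h) * prior / (1 - p_e)        # ...and falls on not-E

expected = p_e * post_e + (1 - p_e) * post_not_e
assert abs(expected - prior) < 1e-12
```

Whatever upward jump you anticipate on seeing E must be exactly paid for by the downward shift you anticipate on not seeing it.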
One Argument Against An Army. Arguing the same issue repeatedly can lead to double-counting and n-tuple counting (“rehearsing arguments”). Imagine you have 3 arguments for p, and someone offers 1 against p; you keep your belief that p. Someone else offers another 1 against p; you recite your 3 again, keep your belief that p, and strengthen your conviction. But you’ve already taken those 3 into account! Imagine a scientist who has 50 subjects, fails to obtain statistically significant results, and so counts the data twice. Rehearsing old arguments selectively double-counts only some evidence: you still have to shift the probability down from whatever it was before you heard the contrary evidence. With wrong reasoning, a handful of support can hold off an army of contradictions.
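In odds form, the double-counting is explicit (the likelihood ratios are made-up illustrative strengths):

```python
def bayes_update(p, likelihood_ratio):
    """Update a probability by a likelihood ratio, via odds."""
    odds = p / (1 - p) * likelihood_ratio
    return odds / (1 + odds)

p = 0.5
for lr in (4, 4, 4):             # three supporting arguments, LR 4 each
    p = bayes_update(p, lr)       # ends at 64/65 ≈ 0.985

# A new counterargument (LR 1/4) must push p down, full stop:
p = bayes_update(p, 1 / 4)        # 16/17 ≈ 0.941

# Reciting the original three arguments in reply double-counts them,
# manufacturing conviction out of evidence already spent:
p_inflated = bayes_update(p, 4 ** 3)   # 1024/1025 ≈ 0.999
```

The correct response to each new counterargument is a simple downward shift; the three old arguments were already in the prior.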
The Bottom Line. Two algorithms: 1. rationality: enumerate arguments -> work out conclusion; 2. rationalization: decide conclusion -> search for supporting arguments. Only the actual causes of your beliefs determine your rationality. To imagine probability is to imagine a collection of worlds: close possible worlds / Everett branches / Tegmark duplicates. Seeing the word “therefore” can confuse you in a Stroop-like way; it’s not evidence, but it looks like it. The handwriting of the curious inquirer is entangled with the signs and portents and the contents of the boxes, whereas the handwriting of the clever arguer is evidence only of which owner paid the higher bid. There is a great difference in the indications of ink, though a foolish reading of the ink-shapes will not reveal it. The clever arguer is a marketer, not a scientist: not entangled with the world, not dependent on it. If your brakes squeal, you can look for reasons your car doesn’t need fixing. No epistemic alarms will sound. But the percentage of you surviving across Everett branches is determined by the algorithm that decided which conclusion you would seek arguments for. Your real algorithm is “never repair anything expensive.” The arguments you write afterward, above the bottom line, change nothing.
NB: “My opponent is [just] a clever arguer” is a dangerous Fully General Counterargument
What Evidence Filtered Evidence? How do you handle adversarial poisoning / selection bias? A Bayesian must always condition on all known evidence, on pain of paradox. But then the clever arguer can make you believe anything they choose. You don’t know how strong evidence is unless you know the sampling algorithm. condition on the subset of worlds where a speaker following some particular algorithm said, “The 4th coinflip came up heads.” Utterance != the fact. condition on the facts and on the additional fact of their presentation by A.
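A toy model of why the sampling algorithm matters (the coin biases and the five-flip budget are assumed for illustration): the same utterance, “here is a head,” supports the biased-coin hypothesis strongly from an honest reporter and barely at all from a clever arguer who was nearly guaranteed to find one.

```python
# Two hypotheses about a coin: fair (P(heads) = 0.5) or heads-biased
# (P(heads) = 0.8), with a 50/50 prior. (Toy numbers, assumed.)
def p_biased(p_report_if_fair, p_report_if_biased, prior=0.5):
    num = p_report_if_biased * prior
    return num / (num + p_report_if_fair * (1 - prior))

# Honest reporter: shows one random flip, and it's a head.
honest = p_biased(0.5, 0.8)                      # ≈ 0.615

# Clever arguer: flips 5 times and shows you a head if any exists.
# Same utterance, far weaker evidence once you condition on the algorithm:
filtered = p_biased(1 - 0.5 ** 5, 1 - 0.2 ** 5)  # ≈ 0.508
```

Conditioning on “a head was presented by this algorithm,” rather than on the bare fact, is what deflates the filtered report.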
For binary outcomes, setting two biased adversaries against each other is an OK solution: someone has a motive to present any given piece of evidence, so the court sees all the evidence. But reality has many-sided problems not readily covered by Blues and Greens shouting at each other. In a way, no one should really trust the theory of natural selection until after listening to creationists for five minutes; then they know it’s solid.
NB: “That argument was filtered, therefore I can ignore it” is a dangerous Fully General Counterargument.
Rationalization. A poor word: you cannot “rationalize” what is not already rational, any more than “lying” could be called “truthization.” It precludes change: not every change is an improvement, but every improvement is necessarily a change. Rationality is the forward flow that gathers evidence, weighs it, and outputs a conclusion. Curiosity is the first virtue, without which questioning is purposeless and skill without direction.
A Rational Argument. You defeat yourself the instant you specify your conclusion in advance. Should a campaign publish facts unfavorable to its candidate? To let others be epistemically rational, yes. There is only one way to do politics rationally: before anyone hires you, gather all the evidence, state your criteria, and select the best candidate.
Avoiding Your Belief’s Real Weak Points. We are more likely to spontaneously self-attack our strong points, the ones with comforting replies already rehearsed, and likely to stop at the first reply rather than criticizing the reply. Religion is sustained by people just-not-thinking-about the real weak points of their religion. When doubting your most cherished belief, close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts the most. Ask what smart people who disagree would say to your first reply, and your second. What is true is already so. Owning up to it doesn’t make it worse. People can stand what is true, for they are already enduring it.
Motivated Stopping and Motivated Continuation. You have to generate options, and this isn’t free. In real life evidence is costly, so we need a stopping threshold. Search: sampling; dynamic relative preferences; marginal exploration; exhaustion and final choice. Motivated skeptic: “does the evidence compel me to accept this?” Motivated credulist: “does the evidence allow me to accept this?” Motivated stopping: a hidden motive for choosing the “best” current option -> reject further consideration. Motivated continuation: a hidden motive to reject the current best option -> suspend judgment, generate more options, wait. Fisher’s motivated skepticism (shill and smoker) was motivated continuation. There is nothing virtuous about refusing to integrate the evidence you have. You can always change your mind later. Motivated stopping occurs when a third alternative is feared, when you have an argument whose obvious counterargument you would rather not see, when you feel good just for acting, so you’d rather not investigate effects. Wherever your beliefs and anticipations get out of sync. The decision to terminate search is, like the search procedure itself, subject to bias and hidden motives.
Fake Justification Most self-criticism is like free elections in a one-party country. Worse than motivated stopping: absolutely no search for alternatives, post-hoc rationalisation. Someone who buys a million-dollar laptop was really thinking, “Ooh, shiny,” and that was the causal history of their decision. No amount of “justification” can change this, unless the justification is a genuine, new search process that can change the conclusion. Writing the justification of “literary quality” above the bottom line of “I <3 the Bible” misrepresents how the bottom line got there. Real criticism changes the entanglement of your conclusion over possible worlds. With all those open minds out there, you’d think there’d be more belief-updating. If rolling dice likely wouldn’t produce the correct answer, how likely is it to pop out of other irrational processes?
Is That Your True Rejection? The reason for rejection people tell you may not be what actually makes the difference; fixing it may, or may not, change anything. Pattern recognition is not reason. Most reject transhumanism because “weird idea” or “science fiction” or “cult” or “callow youth” is activated; a post-hoc search for justification then yields “the speaker does not have a PhD.” Expect persistent disagreements to be based in things hard to communicate or hard to expose, in order of depth: uncommon well-supported scientific knowledge or math; long inferential distances; hard-to-verbalize intuitions, perhaps stemming from specific visualizations; zeitgeists inherited from a profession (that may have good reason for them); patterns perceptually recognized from experience; sheer habit; emotional commitments to believing in an outcome; fear that a past mistake could be exposed; deep self-deception for the sake of pride or other personal benefit. If true rejections could be laid on the table, the disagreement would probably never have lasted past the first meeting. “Is this my true rejection?” is something both disagreers should be asking themselves, to make things easier on the other person. You can openly ask, “Is that simple reason your true rejection, or does it come from intuition-X or professional-zeitgeist-Y?” More embarrassing possibilities are left to the Other’s conscience, as their responsibility.
Entangled Truths, Contagious Lies A single pebble implies our physics, thus kind of the whole universe. I don’t know pebbles well enough to guess the signatures by which falsehood is caught. In one sense every event is entangled with its whole past lightcone. If you said, “Everything is entangled with something else” or, “Everything is inferentially entangled and some entanglements are much stronger than others,” you might be really wise instead of just Deep. Great Web of Causality: the list of noticeable entanglements is much shorter, and is something like a network. Tangled web: Occasionally someone lies about a fact, and then has to lie about an entangled fact, and then another. Not all lies are uncovered, not all liars are punished. But the Great Web is very commonly underestimated. Compared to outright lies, honesty or silence involves less exposure to recursively propagating risks you don’t know you’re taking.
Of Lies and Black Swan Blowups
A tangled web blowing up in a Black Swan epic fail. Example: a pillar of the community went to jail for two years over a series of perjuries and lies that started with a $77 speeding ticket; the investigation also uncovered his purchased PhD, etc.
- Dark Side Epistemology
A single Lie That Must Be Protected can block someone’s whole reasoning. Once you tell a lie, the truth is your enemy ever after; and every truth connected to that truth, and every ally of truth in general.
Things bear the marks of their places in a lawful universe; in that web, a lie is out of place.
- Instead of lying about the connected nodes in the network, you can lie about the laws governing the links. Then cover that up with lies about the rules of science: what it means to call something a “theory,” what “not absolutely certain” entails.
- From lying about specific facts, to lying about general laws, to lying about the rules of reasoning. Which brings us to self-deception. Someone says, “In general, beliefs require evidence.” This is a soldier fighting on the other side, so you say: “Not all beliefs require evidence.” “There’s a reason for the rule that beliefs require evidence. To draw a correct map of the city, you have to walk through the streets and make lines on paper that correspond; if you sit in your living room and draw lines on the paper at random, the map is going to be wrong with extremely high probability.” “Then there’s still a chance, right? I don’t have to believe if it’s not absolutely certain.” “…The dragon is in a separate magisterium.” Having false beliefs isn’t good, but it can be temporary: if, when you discover them, you get over it. The dangerous thing is belief-in-belief. Thence the Dark Side. “Everyone has a right to their own opinion.” Where was that proverb generated? Lies propagate recursively through the network of causality, and the network of general empirical rules, and the rules of reasoning themselves, and the understanding behind those rules. You have to refute a proposition in itself, not by accusing its inventor of bad intentions. Fear is the path that leads to the Dark Side, and one betrayal can turn you.
Anchoring and Adjustment. We latch on to any point of reference, no matter how stupid, and under-adjust from the anchor, stopping at the first satisfying-sounding answer (“sliding adjustment”). Obvious applications in salary negotiations or buying a car. I won’t suggest that you exploit it, but watch out. Try to notice when you are adjusting a figure in search of an estimate. If the initial guess sounds implausible, try to throw it away entirely and come up with a new estimate. Try to think of an anchor in the opposite direction, clearly too small or too large, and dwell on it briefly.
Priming and Contamination Semantic priming: show string “water” -> quicker recognition of “drink” Contamination: known false or totally irrelevant “information” can influence estimates. Discloses the nature of neural architecture: top-down and parallelised. Is the more common mechanism for anchoring: priming compatible thoughts and memories. And contamination is another face of confirmation bias. Once an idea gets into your head, it primes information compatible with it— ensuring its continued existence. Never mind political arguments; confirmation bias is built into our hardware. A single fleeting image can prime associated words for recognition: the birth of bias, the nudge to a bad bottom line.
Do We Believe Everything We’re Told? Descartes: when told a proposition p, we 1. comprehend p, 2. evaluate p, and 3. accept or reject. Spinoza: we 1. comprehend-and-accept p, then 2. consider and possibly 3. reject. If Descartes is right, distraction should interfere equally with accepting truths and rejecting falsehoods. If Spinoza is right, distraction should cause subjects to remember falsehoods as true, but not truths as false. (The Spinozan pattern is what the experiments find.) The undistracted can devote more attention to “unbelieving.” Be careful when you glance at that newspaper in the supermarket. Does disfluency lead to believing falsehoods rather than activating System 2???
Cached Thoughts “hundred-step rule”: any neural operation has to complete in less than 100 steps — as parallel as you like, but never more than 100 spikes in sequence. Imagine having to program using 100Hz CPUs. You’d need a hundred billion to get anything done in realtime. If you needed to write realtime programs for a hundred billion processors, you’d use as much caching as possible. Store previous results and look them up next time, instead of recomputing. Recognition, association, pattern completion. Most cognition consists of cache lookups. Some hits we’d be better off recomputing. If you don’t consciously realize the pattern needs correction, you’ll be left with a completed pattern. Worse: we use lookups to other people’s caches. You save on computing power by caching their conclusion. No one can think fast enough to recapitulate the wisdom of a hunter-gatherer tribe in one lifetime from scratch. e.g. “You can’t prove or disprove a religion by factual evidence” -> false as probability theory, AND as psychology: a few centuries ago, this would have gotten you burned. e.g. “Death gives meaning to life.” e.g. “Maybe the human species doesn’t deserve to survive.” e.g. “Love isn’t rational.” What patterns are being completed, inside your mind, that you never chose to be there? It can be hard to see with fresh eyes. There may be no better answer than the standard one, but you can’t think about the answer until you can stop your brain from filling in automatically. And don’t let “cached thought” become a cached thought! Think!
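The software analogy is literal; a minimal sketch of the trade, with `fib` standing in for any expensive computation:

```python
from functools import lru_cache

call_count = 0

def fib(n):
    """Naive recursion: recomputes every subproblem, every time."""
    global call_count
    call_count += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    """Store previous results and look them up instead of recomputing."""
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

fib(25)         # ~240,000 calls to reach the answer
fib_cached(25)  # 26 cache entries, each computed exactly once
# The failure mode is the same as for minds: a stale entry, a hit we'd
# be better off recomputing, is returned instantly, without thought.
```

A cache never re-examines its own contents; that re-examination is exactly the “Think!” the notes are asking for.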
Original Seeing. The brain tries not to really look. Pirsig on cache invalidation: “couldn’t think of anything to say,” even though every fact has an infinity of hypotheses; blocked because she was trying to repeat things she had already heard. Look and see freshly, without primary regard for what has been said before. Narrowing down to one brick destroyed the blockage, because it was so obvious she had to do some original seeing.
The Virtue of Narrowness. You can say more about specific objects because you don’t have to abstract. People grasp the importance of narrowness in their specialty; outside it, they try to go as wide as possible. Isn’t it more glorious, more wise, more impressive, to talk about all the apples in the world, to explain human thought in general, than to tackle smaller questions like how humans invent techniques for solving a Rubik’s Cube? Isn’t it poetic to give one word many meanings, and thereby spread connotation all around? No: you must focus narrowly, on unusual pebbles with some special quality. Alas, some unfortunates use “evolution” to cover blindly selected patterns of replicating life, and the accidental structure of stars, and the intelligently configured structure of technology. “If people use the same word, it must all be the same thing.” What could be more virtuous than seeing connections?
A fully connected graph conveys the same amount of information as a graph with no edges. When the unenlightened ones try to be profound, they draw endless verbal comparisons between this topic, and that topic, which is like this, which is like that; until their graph is fully connected and also totally useless. When you understand things, you see how they are not alike: subtract edges off your graph! Good hypotheses only explain some possible outcomes. Sneering at narrowness like Greeks who thought that actually looking at things was techne, slave work. Rationalists and poets need narrow words to express precise thoughts, categories that include only some things. If you make your words too broad, you end up with something that isn’t true and doesn’t even make good poetry.
- Stranger than History
120 years ago it would have been difficult to separate out the truth-value of these two sets of claims:
- If you paint yourself a certain exact color between blue and green, it will reverse the force of gravity on you and cause you to fall upward.
- In the future, the sky will be filled by billions of floating black spheres. Each sphere will be larger than all the zeppelins that have ever existed put together. If you offer a sphere money, it will lower a male prostitute out of the sky on a bungee cord.
- Your grandchildren will think it is not just foolish, but evil, to put thieves in jail instead of spanking them.
- There is an absolute speed limit on how fast two objects can seem to be traveling relative to each other, which is exactly 670,616,629.2 miles per hour. Oh, and time itself changes, too.
- There will be a superconnected global network of billions of adding machines, each with more power than all 1901 adding machines put together. One primary use is moving pictures of lesbian sex by pretending they are made out of numbers.
- Your grandchildren will think it is not just foolish, but evil, to say that someone should not be President of the United States because she is black.
The Logical Fallacy of Generalization from Fictional Evidence. “Oh, you mean like the Terminator movies!” What’s wrong with using fiction as a starting point? Why not take advantage of the thinking already done? Not every misstep consists of belief in a falsehood. Sci-fi isn’t rational forecasting; it’s biased towards fun. When was the last time you saw a movie about humankind suddenly going extinct (without warning and without being replaced)? A story is never a rational attempt at analysis, because it doesn’t use probability distributions; it is a single draw. The author can’t say “I don’t know.” In problems with large answer spaces, the greatest difficulty is not verifying the correct answer but simply locating it in answer space. The “preliminary” step of locating possibilities worthy of consideration includes: weighing what you know and don’t know, what you can and can’t predict; making a deliberate effort to avoid absurdity bias and widen confidence intervals; pondering which questions are the important ones, trying to adjust for possible Black Swans. “The Matrix: Yes or No?” skips all of this, anchoring to complex and unjustified parts of event space. Lost are: considering more than one mind design; dependence on initial conditions; the sheer power and unpredictability of smarter-than-human intelligence; people taking the whole matter seriously and trying to do something about it. In Chess or Go, every wasted move is a loss; in rationality, any non-evidential influence is (on average) entropic. Do movie-viewers succeed in unbelieving what they see? The movie is not believed, but it is available. “Vinge chose to depict Tunç as crippled, for reasons that may or may not have had anything to do with his personal best forecast.” The Matrix is not an example! The logical fallacy of arguing from imaginary evidence: updating on evidence predicted, but not observed. The most damaging aspect of using others’ imagination is that it stops people from using their own.
Remembered fictions substitute for seeing—the deadliest convenience.
We Change Our Minds Less Often Than We Think. Once I can assign a higher probability to one option, I have already decided. We become able to guess what our answer will be within half a second of hearing the question: a tiny window for intelligence to act in. Once your belief is fixed, no amount of argument will alter the truth-value; once your decision is fixed, no amount of argument will alter the consequences. You might think that you could arrive at a belief, or a decision, by non-rational means, and then try to justify it, and if you found you couldn’t justify it, reject it. You can think of occasions when you’ve changed your mind. We all can. How about the occasions when you didn’t? Between hindsight bias, fake causality, positive bias, anchoring, and above all confirmation bias, once an idea gets into your head, it’s probably going to stay.
Hold Off On Proposing Solutions. “Do not propose solutions until the problem has been discussed as thoroughly as possible.” This prevents anchoring, factionalism, greedy search, and rationalisation. The very toughest problems are the most likely to provoke immediate proposals. After you write the bottom line, it is too late to write more reasons above. If you make your decision early on, it will be based on very little thought, no matter how many amazing arguments you come up with afterward. Traditional Rationality emphasizes falsification: the ability to relinquish an initial opinion when confronted by clear counterevidence. But once an idea gets into your head, it will require way too much evidence to get it out again. A more powerful (more difficult) method is to hold off answering: to draw out that tiny moment when we can’t yet guess what our answer will be, giving our intelligence a longer time to act. Even half a minute would be an improvement.
The Genetic Fallacy: attacking a belief based on the context of discovery. Specific origin != general warrant. Justification is the sum of all current support and antisupport. But if the causes of a belief do not determine its systematic reliability, what does? It’s a matter of degree: origin can be relevant to evidence evaluation, e.g. a claim from a trusted expert; Kekulé saw the benzene ring in a dream, but we don’t disbelieve in aromaticity, because the later evidence is so strong. Clear your mind when you suspect your ideas came from a flawed source. Reversed stupidity is not intelligence: the goal is to shake loose, not to let “negate the Bible” be your algorithm. Only conclusive evidence completely screens off the context of discovery. In the absence of clear-cut evidence, you need to attend to the sources of ideas. It is only a fallacy when justification exists, but the genetic accusation is presented as settling the issue. Genetic heuristic: a correct appeal to a claim’s origins.
Be suspicious of genetic accusations against beliefs that you dislike. Don’t think you can get good information about a technical issue just by sagely psychoanalyzing personalities involved. When suspicion is cast on one of your fundamental sources, doubt all the leaves that grew from it. Be extremely suspicious if you find that you still believe the early suggestions of a source you later reject.
- The Affect Heuristic: using emotional impressions to judge the value of outcomes.
- Subjects judged a disease as more dangerous when described as killing 1,286 out of 10,000 people than a disease that was 24.14% likely to be fatal, even though 1,286/10,000 is only 12.86%. The mental image of a thousand corpses trumps an abstract per-case risk.
- “It’s a worse probability, yes, but you’re still more likely to win.”
- Presenting high benefits makes people perceive lower risks; presenting higher risks made people perceive lower benefits; and so on. Halo: People conflate judgments about particular good/bad aspects of something into an overall good or bad feeling. Time pressure greatly increased the inverse relationship between perceived risk and perceived benefit.
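The disease comparison two bullets up becomes stark the moment you actually do the division; a trivial check:

```python
# "1,286 deaths out of 10,000 cases" vs. "24.14% likely to be fatal":
# the vivid frequency framing describes the LESS dangerous disease.
deaths_framing = 1286 / 10000      # = 0.1286, i.e. 12.86% fatal
percent_framing = 24.14 / 100      # = 0.2414

assert deaths_framing < percent_framing
print(f"{deaths_framing:.2%} < {percent_framing:.2%}")
```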
- Evaluability (And Cheap Holiday Shopping)
- Subjectively, without seeing both gifts side-by-side: a $45 scarf (expensive for a scarf) feels like a more generous gift than a $55 coat (cheap for a coat).
- We can only evaluate value against a reference example. Without one, we choose absurdly, based on trivially evaluable properties.
- P(payout) comes apart from option attractiveness.
- You can make a gamble more attractive by adding a strict loss! Isn’t psychology fun?
- Of course, it only works if the subjects don’t see the two gambles side-by-side.
- Which is better value for the money? Ah, but that question only makes sense if you see the two side-by-side.
- To display your friendship, rather than to actually help, deliberately don’t shop for value: buy something that is expensive for its category.
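The “adding a strict loss” result above can be made concrete with expected values. The payoff numbers below are my reconstruction of the Slovic-style gamble and may differ from the study the notes summarize:

```python
# Gamble A: 7/36 chance to win $9, otherwise nothing.
# Gamble B: 7/36 chance to win $9, 29/36 chance to lose 5 cents.
ev_a = (7 / 36) * 9.00
ev_b = (7 / 36) * 9.00 + (29 / 36) * (-0.05)

assert ev_b < ev_a  # B is strictly worse in expectation...
# ...yet, rated in isolation, B was judged MORE attractive: the nickel
# loss gives the $9 a comparator, making the win feel like a good deal.
print(f"EV(A) = ${ev_a:.3f}, EV(B) = ${ev_b:.3f}")
```

As the next bullet says, this only works because subjects never see the two gambles side-by-side.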
- Unbounded Scales, Huge Jury Awards, & Futurism
How much more acoustic energy does it take before a noise sounds twice as loud? More like eight times as much.
unbounded scale: zero is “not audible at all” but no upper end.
For a subject rating a single sound, on an unbounded scale, without a fixed comparator, nearly all the variance is due to the arbitrary choice of modulus, rather than the sound itself.
Compare juries deliberating on punitive damages.
- Rate the outrageousness of the defendant’s actions, on a bounded scale,
- Rate the degree to which the defendant should be punished, on a bounded scale, or
- Assign a dollar value to punitive damages. If you knew the scenario presented — the child whose clothes caught on fire — you could guess the punishment rating and the rank-ordering of the dollar award relative to other cases, but the dollar award itself would be completely unpredictable. A jury award for punitive damages isn’t an economic valuation: it’s an attitude expression on an unbounded scale with no standard modulus. As if I’d asked, “On a scale where zero is ‘not difficult at all,’ how difficult does the AI problem feel to you?” So people just make up a number. Then they tack “years” on the end, and that’s their futurist prediction. If these “time estimates” represent anything other than attitude expressions on an unbounded scale with no modulus, I’m unable to determine it.
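The “eight times” figure for loudness falls out of a Stevens-style power law; the exponent values below are a conventional ballpark for loudness (an assumption here, not from the notes):

```python
# Stevens' power law: perceived loudness ~ intensity ** k.
# Doubling loudness means finding r with r ** k == 2, so r = 2 ** (1 / k).
for k in (0.27, 0.30, 0.33):
    ratio = 2 ** (1 / k)
    print(f"exponent {k:.2f}: twice as loud needs ~{ratio:.0f}x the energy")
# For k near 1/3 this is about 8x, the figure in the note above.
```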
- The Halo Effect
- the manifestation of the affect heuristic in social psychology. “we automatically assign to good-looking individuals favorable traits: talent, kindness, honesty, and intelligence, without being aware that physical attractiveness plays a role …attractive defendants were twice as likely to avoid jail as unattractive defendants.” Bias, since judgments of honesty and attractiveness aren’t legitimately correlated. Finding the truth, and saying the truth, are not as widely separated in nature as looking pretty and looking smart. There may be a halo effect for kindness, or intelligence. Be suspicious if people seem to separate too cleanly into devils and angels. Be just a little more skeptical of the more attractive political candidates.
Superhero Bias Halo effect for strong characters: they seem to possess more courage and heroism, though logically they display less. “How tough can it be to act all brave and courageous when you’re pretty much invulnerable?” Fame seems to combine additively with all other personality characteristics. Gandhi was protected by his celebrity. What about the others? Gandhi’s fame score seems to get added to his altruism score. Similarly, which is greater—to risk your life to save two hundred children, or to risk your life to save three adults? In the sense of “Which should I choose if I have to do one or the other?” it is greater to save two hundred than three. But in the sense of revealed virtue, someone who’d risk their life to save 3 lives reveals more courage than someone who’d do it for 200 but not for 3. Someone who risks their life because they want to be virtuous has revealed far less virtue than someone who risks their life because they want to save others. You cannot reveal virtue by trying to reveal virtue. Truly virtuous people will constantly seek to save more lives with less effort, which means that less of their virtue will be revealed. Superman is a mere superhero.
Affective Death Spirals Positive feedback cycle of credulity and confirmation: bias means overestimating how well our beloved theory explains things; the more phenomena you use your favored theory to explain, the truer it seems, and the more likely you are to question evidence that conflicts with it. As the favored theory seems more general, you will seek to use it in more explanations. That’s nothing compared to the affective death spiral: positive characteristics enhance the perception of every other positive characteristic -> chain reaction. Every time they use the Great Idea to interpret another event, the Great Idea is confirmed all the more. It feels better, and when something feels good, we want to believe it all the more. When it feels good enough to make you seek out new opportunities to feel even better about it, applying it every day, the resonance of positive affect is like a chamber full of mousetraps loaded with ping-pong balls.
Resist the Happy Death Spiral Francis Bacon was the only crackpot in all history to claim godlike benefit to humanity and turn out to be completely right. Some ideas really are that good. Science is legitimately related, one way or another, to just about every important facet of human existence. How can we resist the happy death spiral with respect to Science itself? Cached thoughts: “Science gave us air conditioning, but it also made the hydrogen bomb” or “Science can tell us about stars, but it can never prove or disprove gods.” No good: darksider. Generally skeptical of people who claim that one bias can be used to counteract another. Whatever the solution, it ought to involve believing true things, rather than believing you believe things that you believe are false. It is an emotional problem arising from a perceptual problem: the halo effect plus the conjunction fallacy. Apply enough critical reasoning to keep the halos subcritical. The whole problem starts with people not bothering to critically examine additional burdensome details — demanding sufficient evidence to compensate for complexity, searching for flaws as well as support, invoking curiosity — once they’ve accepted some core premise. Do: cut the Idea into smaller independent parts, and treat them as independent; treat every additional detail as burdensome; think about the specifics of the causal chain instead of the good or bad feelings; don’t rehearse evidence; and don’t add happiness from claims that “you can’t prove are wrong.” Don’t: refuse to admire anything too much; conduct a biased search for negative points until you feel unhappy again; or forcibly shove it into a safe box.
Uncritical Supercriticality Arguing over the meaning of a word nearly always means that you’ve lost track of the original question. “religious person” (1): someone with a definite opinion about the existence of God(s), probability < 10% or > 90% to Zeus “religious person” (2): someone with a positive opinion (probability > 90%) on the existence of God(s). Redefining a word won’t change the facts of history one way or the other. Deuteronomy, Stalin, Hitler: “do not debate the critic; do not perform experiments or examine history; turn him in.” Supercritical halo <- when it feels wrong to argue against any positive feature about the Idea. Supernaturalist claims are worth distinguishing, because they always turn out to be wrong for fairly fundamental reasons. But it’s still just one kind of mistake. Affective death spiral can nucleate around supernatural beliefs, particularly if asserting supernatural punishments for disbelief. Or political innovation, charismatic leader, belief in racial destiny, an economic hypothesis. Faith. If you don’t place an appropriate burden of proof on each and every additional nice claim, the affective resonance gets started very easily. There is never an Idea so true that it’s wrong to criticize any argument that supports it. The vast majority of possible beliefs are false, and the vast majority of possible supporting arguments for a true belief are also false, and not even the happiest idea can change that. Bad argument gets counterargument. Does not get bullet. Never ever never for ever.
Evaporative Cooling of Group Beliefs Why would group belief become stronger after encountering counterevidence? “Cognitive dissonance - must find reinforcing thoughts to counter the shock, and become more fanatical. Increased group fanaticism is the result of increased individual fanaticism.” Who gets fed up and leaves first? An average cult member? Or a relative skeptic, previously a voice of moderation? Remaining discussions are between the extreme fanatics on one end and the slightly less extreme fanatics on the other end. e.g. Ayn Rand Institute is more fanatical after the breakup than the original Objectivists. One reason to tolerate dissent. Wait until after it seems justified to eject a member, before ejecting. If you get rid of the old outliers, the group position will shift, and someone else will become the oddball. Converse Kuhn: A science can only make real progress once it abandons outside accessibility, and assumes familiarity with large cores of technical material.
When None Dare Urge Restraint 9/11: “The overreaction to this will be ten times worse than the original event.” No one dared to be the voice of restraint. Thus the spiral of hate. It’s really hard to aim low enough that you’re pleasantly surprised around as often and as much as you’re unpleasantly surprised. The vast majority of all complex statements are untrue, so the vast majority of negative things you can say about anyone, even the worst person in the world, are untrue. It is too dangerous for there to be any target, the Jews or Adolf Hitler, about whom saying negative things trumps saying accurate things. The US spent billions of dollars and thousands of soldiers shooting off its own foot more effectively than any terrorist group could dream. Whoever argues for a greater response is a patriot. Whoever dissects a patriotic claim is a traitor. Once restraint becomes unspeakable, no matter where the discourse starts out, the level of fury and folly can only rise with time.
Every Cause Wants To Be A Cult Wikipedia editor crackdown on criticism. But explained by ordinary human nature, not by extraordinary human nature. Ingroup-outgroup dichotomy is part of ordinary human nature. Cult is a high-entropy state into which the system trends, an attractor in human psychology. Status games, ingroup-outgroup bias, affective spirals, leader-gods. A noble cause won’t make its adherents something other than human. Every group of people with an unusual goal will trend toward cult without constant resistance; you must counter the equilibrium. The Objectivists managed to become cultish despite centring “Rationality! Objective reality!” Labeling the Great Idea “rationality” won’t protect you any more than putting up a sign over your house that says “Cold!” The question is not, “Cultish, yes or no?” but, “How much cultishness and where?” The worthiness of the Cause does not mean you can spend any less effort in resisting the attractor. If you point to current battle lines, it does not mean you confess your Noble Cause unworthy. If you believe that it was the Inherent Impurity of those Foolish Other Causes that made them go wrong, if you laugh at the folly of “cult victims,” you will not expend the necessary effort to pump against entropy — to resist being human.
Two Cult Koans “You spend all this time listening to your master, talking of ‘rational’ this and ‘rational’ that — you’re in a cult!” The novice put on the cowboy hat. Ougi said, “How long will you repeat my words and ignore the meaning? Disordered thoughts begin as feelings of attachment to preferred conclusions. You are too anxious about your self-image as a rationalist. You came to me to seek reassurance. If you had been truly curious, not knowing one way or the other, you would have thought of ways to resolve your doubts. Because you needed to resolve your cognitive dissonance, you were willing to put on a silly hat. If I had been an evil man, I could have made you pay a hundred silver coins. When you concentrate on a real-world question, the worth or worthlessness of your understanding will soon become apparent.” “Since you are so concerned about the interactions of clothing with probability theory,” Ougi said, “it should not surprise you that you must wear a special hat to understand.”
Asch’s Conformity Experiment How many people, placed in this situation, would say “C”—giving an obviously incorrect answer that agrees with the unanimous answer of the other subjects? Three-quarters gave a “conforming” answer at least once. A third of the subjects conformed more than half the time. most subjects claimed to have not believed their conforming answers, but some said they did. Three honest people say something you don’t see. What are the odds that only you are the one who’s right? I hope I would notice my own severe confusion and then assign >50% probability to the majority vote. In terms of group rationality, the proper thing to say is, “How surprising, it looks to me like B is the same size as X. But if we’re all looking at the same diagram and reporting honestly, I have no reason to believe that my assessment is better than yours.” So the conformists are not automatically convicted of irrationality… so far. Adding a single dissenter reduces conformity very sharply, down to 5–10% of subjects. people who are emotionally nervous about being the odd one out. Subjects in the one-dissenter condition did not think their nonconformity had been enabled by the dissenter. Like 90% of drivers who think they’re above-average. People are not self-aware, which weighs against attempts to argue that conformity is rational. Being the first dissenter is a valuable (and costly!) social service, but you’ve got to keep it up. Consistently within and across experiments, all-female groups (a female subject alongside female confederates) conform significantly more often than all-male groups. Around one-half the women conform more than half the time, versus a third of the men. If you argue that the average subject is rational, then apparently women are too agreeable and men are too disagreeable, so neither group is actually rational . . . 
Ingroup-outgroup manipulations (e.g., a handicapped subject alongside other handicapped subjects) similarly show that conformity is significantly higher among ingroup. When subjects can respond in a way that will not be seen by the group, conformity also drops, which also argues against an Aumann interpretation.
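The “odds that only you are right” question above has a clean naive-Bayes answer if the reports really are honest and independent; the 90% per-person accuracy below is an arbitrary assumption for illustration:

```python
# You see line B matching X; three honest, independent observers say C.
# Assume each observer (you included) reads the diagram correctly
# with probability acc, and a 50/50 prior between B and C.
acc = 0.9
prior_b = 0.5

like_b = acc * (1 - acc) ** 3        # reports (B, C, C, C) if B is correct
like_c = (1 - acc) * acc ** 3        # reports (B, C, C, C) if C is correct

posterior_b = prior_b * like_b / (prior_b * like_b + (1 - prior_b) * like_c)
print(f"P(B is correct | all four reports) = {posterior_b:.3f}")
```

Even at 90% individual accuracy, the posterior on your lone dissent comes out near 1%, which is why deferring to three honest reporters is not automatically irrational.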
On Expressing Your Concerns Being a voice of dissent can bring real benefits to the group. But it has a fixed cost. And you have to keep it up. Plus you could be wrong. Don’t expect thanks. Individual working alone will have natural doubts because there’s nothing impolite about doubting your own competence. But group becomes more optimistic than either would be on their own, each quelled by the other’s seeming confidence. Disagreeing with the group is serious business. You can’t wave it off with “Everyone is entitled to their own opinion.” Distinguish “expressing concern” from “disagreement.” Raising a point others haven’t voiced is not a promise to disagree at the end of discussion. But once you speak out, you’ve become the nail sticking up and you can’t undo that. If everyone refrains from voicing their doubts, that will lead groups into madness. Cynical self-help books (e.g. Machiavelli) advise you to mask your nonconformity entirely.
Lonely Dissent Difference between joining the rebellion and leaving the pack: Lonely dissent doesn’t feel like going to school dressed in black. It feels like wearing a clown suit. People know how to relate to vegetarians, whether stock impressedness or dismissal. Same with goths. Outside the System in a standard way. Not, y’know, actually outside. What takes real courage is braving outright incomprehension. They don’t hate you as a rebel. They just think you’re, like, weird, and turn away. In ancestral times this might have concluded, not with the band splitting, but with you being driven out alone. Cryonics shows that the fear of thinking really different is stronger than the fear of death. We are just built such that many more go skydiving than get frozen. The fear of lonely dissent is a hindrance to good ideas, but not every dissenting idea is good. Most of the difficulty in having a new true scientific thought is in the “true” part. This is the true courage of lonely dissent, which every rock band tries to fake. If you do things differently only when you see an overwhelmingly good reason, you will have more than enough trouble to last you the rest of your life. if you think you would totally wear that clown suit, don’t be too proud of that either! It just means that you need to make an effort in the opposite direction to avoid dissenting too easily. That’s what I have to do. Other people do have reasons for thinking what they do, and ignoring that completely is as bad as being afraid to contradict them. You wouldn’t want to end up as a free thinker. It’s not a virtue, you see — just a bias either way.
Cultish Countercultishness Joining a cult is one of the worst things that can happen to you. Best-case you end up among sincere but deluded people, making an honest mistake, and you spend a lot of time and money but end up with nothing. Real cults are vastly worse. “Love bombing”, sleep deprivation, hard labor, distant communes, daily self-criticism meetings; they take everything to make you dependent. Serious brainwashing, serious harm. “Cults” and “non-cults” aren’t natural kinds. The human mind prefers essences to attractors. “It is a cult” and the task of classification is done once and for all. But if a certain group exhibits ingroup-outgroup polarization and a positive halo around their Favorite Thing, you cannot deduce whether they have achieved uncriticality. The characteristics of cultness are not all present or all absent. You cannot get an accurate picture of a group’s reasoning using this essentialism. Also wrong: “if you infer that a group is a cult, therefore their beliefs must be false”, because false beliefs are characteristic of cults. Nervously seeking reassurance is not the best frame of mind to evaluate questions of rationality; it is not genuine curiosity or strict testing. “If it doesn’t have fur, it must not be a cat!” phew. If you decide, “It’s not a cult!”, you won’t continue to push back ordinary tendencies toward cultishness. You’ll decide the cult-essence is absent, and stop pumping against the attractor. The halo effect doesn’t become okay just because everyone does it; if everyone else walked off a cliff, that wouldn’t make it safe for you. Why are cults called “religions” once they’ve been around for a hundred years? Nervousness is not the fear of believing falsely, or the fear of physical harm. It is the fear of lonely dissent. Groups whose beliefs have been around long enough to seem “normal” don’t inspire the same nervousness. “Cult” is used as a label for anything weird. That which you want to do better, you have no choice but to do differently.
the first and foremost characteristic of “cult members” is that they are Outsiders with Peculiar Ways. unusualness is a risk factor, not the disease. lofty goals are a risk factor, not the disease. Giving people advice about how to think is inherently dangerous. a risk factor, not a disease. Being afraid of your friends looking at you disapprovingly is exactly the effect that real cults use to convert and keep members! When you’re out, it keeps you out. But when you’re in, it keeps you in. Just look at the group’s reasoning for yourself, and decide whether it’s something you want to be part of, once you get rid of the fear of weirdness. so long as someone needs reassurance — even reassurance about being a rationalist — that will always be a flaw in their armor. When you know what you’re trying to do and why, you’ll know whether a group is helping you or hindering you.
Singlethink The journey begins when you see a flaw in your art and discover a drive to improve, to create new skills. Refused to play spank game. Refused to play a negative-sum game. Or refused because I didn’t want to get hurt, and standing in the corner was an acceptable price. I had always known this — the real memory lurking in a corner of my mind, my mental eye glancing and then looking away. Caught the feeling — generalized — “So that’s what it feels like to shove away an unwanted truth! Now I’m going to notice every time!” doublethink: forget, and then forget you have forgotten singlethink: notice you are forgetting, and then remember. One keeps on discovering new mechanisms by which your brain shoves things out of the way. But I swept out quite a few corners with that first broom.
The Importance of Saying “Oops” Enron executives never admitted to having made a large mistake. “how are we going to hide the problem on our balance sheet?” as opposed to, “I’ve been stupid.” If we only admit small local errors, we will only make small local changes. Big change comes from acknowledging a big mistake. I switched because I realized that Traditional Rationality’s fuzzy verbal tropes had been insufficient to prevent a large mistake. A series of small concessions, grudgingly conceded, realizing as little as possible of my mistake on each occasion, admitting failure only in tolerable nibbles. I could have moved so much faster if I had simply screamed “OOPS!” It is important to have the watershed moment, to not divide it into palatable bite-size mistakes. Do not become proud of admitting errors. Get it right the first time - but if you do make an error, see it all at once. The alternative is stretching out the battle with yourself over years. They do their best to minimize embarrassment by saying I was right in principle, or It could have worked, or I still want to embrace the true essence. Defending their pride ensures they will make the same mistake, again need to defend their pride. Better to swallow the entire bitter pill in one terrible gulp.
The Crackpot Offer Failed disproof -> “I’ll get that theorem eventually!” -> look for other disproofs. But now that I’d spotted my mistake, there was no reason to suspect Cantor’s Diagonal Argument any more than other major theorems. The path to becoming a math crank. No: gave a small laugh, and let it go. How many people writing green-ink letters were 13 when they made that misstep? If I had reinterpreted my mistake as virtuous, insisted on being at least a little right, then I would not have let go. Until you admit you were wrong, you cannot get on with your life. Whenever tempted to hold on to a thought, you have the opportunity to become a crackpot. It’s nothing but a thought you should never have thought.
Just Lose Hope Already When you refuse to lose hope, you are stuck in a loop of bad decisions. LTCM refused to lose hope: equity of $5 billion, assets of $125 billion, derivative positions of $1.25 trillion. Every profession has a different way to be smart. So how does “rationality” as a general discipline contribute? How not to be stupid has a great deal in common across domains. If you teach someone how to not turn little mistakes into big mistakes, it’s nearly the same in hedge funds or romance, and one key is: be ready to admit you lost.
The Proper Use of Doubt “Organized belief systems exist to flee from doubt.” “What about Jesuits?” “Fake doubt? Predetermined outcome?” “No: they are to imagine that their doubts may become stronger.” OK, not “fleeing from doubt”. Still suspicious - a program of desensitization for something very scary. Doubt should not be scary. Does it matter if their reasons are flawed? Every doubt exists in order to annihilate a particular belief. If a doubt fails, the doubt dies unfulfilled — but still resolved. A doubt that destroys neither itself nor its target might as well have never existed. The resolution of doubts, not the mere act of doubting, drives the ratchet of rationality forward. Not all doubts are rational. Wearing doubts doesn’t make you a rationalist. Rational doubt: caused by specific reason to suspect the specific belief is wrong. implies an investigation to resolve it. A doubt that is not investigated might as well not exist. Performative doubt to connote rationality, modesty. You can be proud when you have torn a cherished belief to shreds.
You Can Face Reality Gendlin: “People can stand what is true, for they are already enduring it.”
The Meditation on Curiosity Criticizing yourself from a sense of duty leaves you wanting to have investigated yourself (not the same as wanting to investigate). Aim is not truth but removing cognitive dissonance. -> motivated stopping. done your duty. When you’re really curious, you’ll gravitate to inquiries most promising to shift belief, least like the ones tried before. Your posterior likely should not look like the prior. either direction is equally fine to you. LeGuin: “In innocence there is no strength against evil. But there is strength in it for good.” We can try to keep the lightness and eager reaching of innocence. No substitute for genuine curiosity. Itch to know > solemn truth-seeking vow. Sometimes, all we have is our mere solemn vows. Keep an eye out for sparks of genuine intrigue, or even genuine ignorance. Conservation of Expected Evidence: For every new inquiry, for every piece of new evidence you look at, the expected posterior probability should equal your prior probability. Belief should always be evenly poised to shift in either direction. this may help to keep you interested or even curious about the microprocess of inquiry. If the argument you are considering is not new, why are you going here? Is this where you would look if you were genuinely curious? Are you rehearsing the evidence? Restorative for curiosity, the Litany of Tarski: If the box contains a diamond, I desire to believe that the box contains a diamond; If the box does not contain a diamond, I desire to believe that the box does not contain a diamond; Let me not become attached to beliefs I may not want. Then meditate on the possibility that there is no diamond, and the advantage that will come to you if you believe not, and the disadvantage if you believe there is a diamond. If you find the slightest shred of true uncertainty, guard it like a forester nursing a campfire. If you can make it into a flame of curiosity, it will make you light and give purpose to your questioning.
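Conservation of Expected Evidence, as stated above, is easy to verify numerically; the prior and likelihoods below are arbitrary:

```python
# E[P(H|E)] over the possible observations must equal the prior P(H).
p_h = 0.3        # prior on hypothesis H
p_e_h = 0.8      # P(evidence observed | H)
p_e_nh = 0.4     # P(evidence observed | not H)

p_e = p_h * p_e_h + (1 - p_h) * p_e_nh          # P(E) by total probability
post_e = p_h * p_e_h / p_e                      # posterior if E is seen
post_ne = p_h * (1 - p_e_h) / (1 - p_e)         # posterior if E is absent

expected_posterior = p_e * post_e + (1 - p_e) * post_ne
assert abs(expected_posterior - p_h) < 1e-12    # back to the prior exactly
```

Seeing the evidence shifts belief up (to about 0.46) and not seeing it shifts belief down (to 0.125), but the probability-weighted average of the shifts is zero: belief is evenly poised to move in either direction.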
No One Can Exempt You From Rationality’s Laws Traditional Rationality: social rules, with irrationality as defections. To accept a belief from you, you are obligated to provide a certain amount of evidence. If you try to get out of it, we know you’re cheating. A theory is obligated to make bold predictions for itself, not just steal predictions. A theory is obligated to expose itself to falsification; you must pay your dues. Similar to deep customs so easy to pass on by word of mouth. Humans detect social cheating better than isomorphic logical rules. -> “Well, you can’t justify your belief in science!” “How dare you criticize me for having unjustified beliefs, you hypocrite!” Bayes brain: processes entangled evidence into a map that reflects the territory. Rationality is laws in the same sense as the Second Law of Thermodynamics: obtaining a reliable belief requires a calculable amount of entangled evidence. Water doesn’t freeze itself. If rationality is custom, then X is excused if you point out others are doing the same thing. We will mercifully excuse you from your social obligation to provide evidence for your belief. If rationality is mathematical law, then trying to justify evidence-free belief by pointing to someone else doing the same thing will be as effective as listing reasons you shouldn’t fall off a cliff if you jump. If two engineers design their engines equally poorly, neither engine will work. As a matter of human law in liberal democracies, everyone is entitled to their own beliefs. As a matter of Nature’s law, you are not entitled to accuracy. Physicists don’t decide the laws of physics, they just guess them. Rationalists don’t decide the laws of rationality, we just guess them. You cannot “rationalize” anything that is not rational to begin with. Even “We don’t decide” is too anthropomorphic. There is no higher authority to exempt you. There is only cause and effect.
Leave a Line of Retreat “Always allow the enemy an escape route, an alternative to death.” — Sun Tzu When we’re faced with an uncomfortable idea, our first impulse is naturally to think of all the reasons why it can’t possibly be so. Try figuring out how you’d deal with it if true. “visualize what the world would be like if there are no souls, and what you would do about that. Don’t think about all the reasons that it can’t be that way; just accept it as a premise and then visualize the consequences. rather than it being too horrifying to face. As a matter of human nature, it helps to make a belief less uncomfortable, before you try to evaluate the evidence for it.” Plan your retreat in detail — visualize every step — before you come to the battlefield. Calculate. Only then can you fairly assess the probability. It takes less courage to visualize an uncomfortable state of affairs as a thought experiment, than to consider how likely it is to be true. But after you do the former, it becomes easier to do the latter. Require a certain minimum of self-honesty to use correctly: admit which ideas scare you, which you are attached to. If theists could visualize their real reaction to believing that God did not exist, they’d realize they wouldn’t go around slaughtering babies. People get over things. if the scary belief turned out to be true, you would come to terms with it. If that world is already actual, visualizing it won’t make it worse. if the scary thing really is true, you’d want to believe it, and you should visualize that too; not believing wouldn’t help you. Leaving a line of retreat is a powerful technique, but it’s not easy. Honest visualization doesn’t take as much effort as admitting outright that God doesn’t exist, but it does take an effort.
- Crisis of Faith Many retain beliefs whose flaws a ten-year-old could point out. “Had the idea of god not come along until the scientific age, only an exceptionally weird person would invent such an idea and pretend it explained anything.” This should scare you. You can be a world-class scientist conversant with Bayesian mathematics and still fail to reject an absurd belief. Not “How can I reject long-held false belief X?” but “How do I know if long-held belief X is false?” How to create a true crisis of faith, that could just as easily go either way? If you stay with cached thoughts, you will not conduct a crisis of faith. Meta-cognition is relatively rare. “Try to stop your mind from completing the pattern the usual way!” “Imagine what a skeptic would say — and then imagine their reply to your response — and then imagine what else they’d say, harder to answer.” “Try to think the thought that hurts the most.” Put forth the same level of desperate effort it would take for a theist to reject their religion. Without a convulsive, wrenching effort to be rational, the kind of effort it would take to throw off a religion, how dare you believe anything, when Robert Aumann believes in God? By the time you know a belief is an error, it is already defeated. We’re talking about a desperate effort to figure out if you should be throwing off the chains, or keeping them. Self-honesty is at its most fragile when we don’t know which path we’re supposed to take — when rationalizations are not obviously sins. Consider a Crisis of Faith when: A belief has long remained in your mind; It is surrounded by a cloud of known arguments and refutations; You have sunk costs in it (time, money, public declarations); The belief has emotional consequences (remember that this does not make it wrong); It has gotten mixed up in your personality. A belief that will take more than ordinary effort to doubt.
Admit “My belief has these warning signs,” without having to say “My belief is false.” NB: these criteria hold for Richard Dawkins’s evolutionary biology, not just the Pope’s Catholicism. The point is not to have shallow beliefs, but to have a map that reflects the territory. When should you stage a Crisis of Faith? Don’t try to do it haphazardly, in an ad-hoc spare moment, and don’t rush so that you can say, “I have doubted, as I was obliged to do.” Allocate uninterrupted hours. Find somewhere quiet. Clear your cache; try original seeing. Make a desperate effort at true doubt — the kind that would destroy a false, and only a false, deeply held belief. Attack Your Belief’s Real Weak Points. Be curious if you can. Tsuyoku Naritai! The Genetic Heuristic. Say “Oops” — it hurts less to swallow the entire pill at once. Singlethink: become aware of what you are not thinking, so you can think it. Resist the Happy Death Spiral — it takes a Crisis of Faith to shake one loose. Hold Off On Proposing Solutions. Isshoukenmei: the lifelong, uncompromising effort to be so incredibly rational that you rise above the level of stupid damn mistakes. I wish you the best of luck against your opponent, yourself. Have a wonderful crisis!
III. THE MACHINE IN THE GHOST
xx. How An Algorithm Feels From Inside
Our philosophical intuitions are generated by algorithms in the brain. They are how particular cognitive algorithms feel from the inside.
To dissolve a philosophical dilemma, it often suffices to understand the cognitive algorithm that generates the appearance of a dilemma — but only if you understand the algorithm in sufficient detail. Otherwise “An algorithm does it!” might as well be “Magic does it!”
Carnap, Quine, Wittgenstein!
“If a tree falls in the forest, and no one hears it, does it make a sound?”
Really a dispute about the definition of “sound”: “Are there acoustic vibrations?” versus “Are there auditory experiences?” Why do people get into such an argument? Consider a sorting task: separate blue egg-shaped objects (“bleggs”) from red cubes (“rubes”), because bleggs contain vanadium and rubes contain palladium. Except that 2% of blue egg-shaped objects contain palladium instead.
If you find a blue egg-shaped thing that contains palladium, should you call it a “rube” instead? You’re going to put it in the rube bin. But nearly all bleggs glow faintly in the dark, and blue egg-shaped objects that contain palladium are just as likely to glow. If you ask “Which bin does the object go in?”, you choose as if it’s a rube.
If you ask “If I turn off the light, will it glow?”, you predict as if the object is a blegg.
“Is it a blegg?” is ambiguous between “Which bin does it go in?” and “Will it glow in the dark?” Even after you know whether an object is blue or red, egg or cube, furred or smooth, bright or dark, and whether it contains vanadium or palladium, it feels like there’s a leftover, unanswered question: But is it really a blegg? Network 1 has one major advantage over Network 2: every unit in the network corresponds to a testable query. If you observe every observable, clamping every value, there are no units in the network left over. Network 2 is like the human mind: fast, cheap, scalable — and it has an extra dangling unit in the center, whose activation can still vary even after we’ve observed all surrounding nodes. We know where Pluto is, and where it’s going; we know Pluto’s shape, and Pluto’s mass — but is it a planet? When you look at Network 2, you’re seeing the algorithm from the outside. People don’t think to themselves, “Should the central unit fire, or not?” any more than you think “Should neuron #12,234,320,242 in my visual cortex fire, or not?” But if you are a brain running Network 2, then even after you know every characteristic of the object, you still find yourself wondering: “But is it a blegg, or not?” We don’t instinctively see our intuitions as “intuitions”; we just see them as the world. When people argue over whether the tree makes a sound, or whether Pluto is a planet, they don’t see themselves as arguing over whether a categorization should be active in their neural networks. If you were a mind constructed along the lines of Network 1, you wouldn’t say “It depends on how you define ‘planet’”; you would just say, “Given that we know Pluto’s orbit and shape and mass, there is no question left to ask.” Or that’s how it would feel — like there was no question left. Before you can question your intuitions, you have to realize that what you’re ‘looking’ at is an intuition — a cognitive algorithm seen from the inside.
People cling to their intuitions because they can’t see intuitions as the way their cognitive algorithms look from the inside. Everything you try to say about how the native cognitive algorithm goes astray, ends up being contrasted to their direct perception of the Way Things Really Are — and discarded as obviously wrong. “A wheel that can be turned though nothing else turns with it, is not part of the mechanism.”
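The contrast between the two networks can be sketched in code. This is a toy illustration only: the feature names, the uniform weights, and the averaging rule are assumptions made for the sketch, not details from the essay. What it demonstrates is the dangling unit: in a Network-2-style architecture, the central “blegg” node keeps an activation of its own even after every observable feature has been clamped.

```python
# Toy sketch of "Network 2" from the blegg/rube parable.
# Feature set and uniform weighting are illustrative assumptions.

FEATURES = ["blue", "egg_shaped", "furred", "glows", "contains_vanadium"]

def network2_central_activation(observed):
    """Central 'blegg' unit: its activation is the fraction of
    blegg-typical features present (uniform weights assumed)."""
    return sum(observed[f] for f in FEATURES) / len(FEATURES)

def network2_predict(observed, unknown_feature):
    """Network 2 predicts an unobserved feature via the central unit:
    compute the central activation from the known features, then read
    the prediction for the unknown feature off that single node."""
    known = [v for f, v in observed.items() if f != unknown_feature]
    return sum(known) / len(known)

# The atypical object: blue, egg-shaped, furred, glows -- palladium inside.
odd_object = {"blue": 1, "egg_shaped": 1, "furred": 1,
              "glows": 1, "contains_vanadium": 0}

# Every observable is clamped, yet the central unit still has a value of
# its own: the dangling "but is it REALLY a blegg?" question.
print(network2_central_activation(odd_object))  # 0.8

# Asked to predict the metal from the surface features alone, the
# network guesses vanadium with full confidence -- the 2% failure case.
print(network2_predict(odd_object, "contains_vanadium"))  # 1.0
```

A Network-1-style mind would have no `network2_central_activation` to query: every node would be a testable observable, and once all five were clamped there would be nothing left to wonder about.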
Traditional and Bayesian rationality
Trad: goal is metaphysical justification
Bayes: goal is instrumental success
Hollywood: Spock, HAL.
Trad: Sagan, Gardner, Popper, Feynman.
Bayes: Jaynes, Kahneman.
Trad: extreme underweighting of merely rational evidence; only scientific evidence counts.
Bayes: Priors and accuracy first
Trad: social rules, academic prestige, process
Bayes: individual enquiry, result
Trad: Binary belief, NHST
Bayes: Degrees of credence, Bayesian inference
Trad: attacking fallacies
Trad: “fitting” the facts by merely failing to prohibit them
Trad: falsely zeroing out authority (reputation)
Trad: agree to disagree
Trad: The more complex, the more evidence needed to assert it.
Bayes: The more complex, the more evidence you need just to individuate it.
Trad: internal justification: “to convince me of hunch X, present me with Y amount of evidence.”
Bayes: externalism: you need an amount of evidence roughly proportional to the hypothesis’s complexity just to arrive at the hunch in the first place. It’s not a question of justifying after the fact: guessing the right hypothesis at all already shows that massive amounts of evidence have been processed. Hunches and intuitions are themselves brain processes, and obey the laws of evidence.
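The last two contrasts can be made concrete with a little arithmetic. A minimal sketch, assuming odds-form Bayesian updating and a uniform hypothesis space; the specific numbers (a 1:99 prior, 10:1 likelihood ratios, a billion candidate hypotheses) are illustrative, not from the text:

```python
import math

def posterior_odds(prior_odds, likelihood_ratio):
    """Odds-form Bayes: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def bits_to_individuate(hypothesis_space_size):
    """Evidence (in bits) needed just to single out one hypothesis from
    a space of equally complex alternatives: log2 of the space's size."""
    return math.log2(hypothesis_space_size)

# Degrees of credence, not binary belief: start at 1:99 against, then see
# three independent observations, each 10x likelier under the hypothesis.
odds = 1 / 99
for _ in range(3):
    odds = posterior_odds(odds, 10)
p = odds / (1 + odds)  # credence moved smoothly; no accept/reject verdict

# Individuation: merely picking one hypothesis out of a billion costs
# about 30 bits of evidence, before any question of "justification".
print(round(p, 2), round(bits_to_individuate(10**9), 1))  # 0.91 29.9
```

The second number is the externalist point: if your hunch singled out the right hypothesis from a billion, roughly 30 bits of evidence had to be processed somewhere in your brain for the guess to be better than chance.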
Trad (evidence as social obligation): “If you want me to accept a belief from you, you are obligated to provide me with a certain amount of evidence. If you try to get out of it, we all know you’re cheating on your obligation. A theory is obligated to make bold predictions for itself, not just steal predictions that other theories have labored to make. A theory is obligated to expose itself to falsification — if it tries to duck out, that’s like trying to duck out of a fearsome initiation ritual; you must pay your dues.”