The Undoing Project: A Friendship That Changed Our Minds
Michael Lewis

Ended: Jan. 24, 2017

A lot of people saw in Oakland’s approach to building a baseball team a more general lesson: If the highly paid, publicly scrutinized employees of a business that had existed since the 1860s could be misunderstood by their market, who couldn’t be? If the market for baseball players was inefficient, what market couldn’t be? If a fresh analytical approach had led to the discovery of new knowledge in baseball, was there any sphere of human activity in which it might not do the same?
But—they went on to say—the author of Moneyball did not seem to realize the deeper reason for the inefficiencies in the market for baseball players: They sprang directly from the inner workings of the human mind. The ways in which some baseball expert might misjudge baseball players—the ways in which any expert’s judgments might be warped by the expert’s own mind—had been described, years ago, by a pair of Israeli psychologists, Daniel Kahneman and Amos Tversky. My book wasn’t original. It was simply an illustration of ideas that had been floating around for decades and had yet to be fully appreciated by, among others, me.
From his stint as a consultant he learned something valuable, however. It seemed to him that a big part of a consultant’s job was to feign total certainty about uncertain things. In a job interview with McKinsey, he was told that he was not certain enough in his opinions. “And I said it was because I wasn’t certain. And they said, ‘We’re billing clients five hundred grand a year, so you have to be sure of what you are saying.’” The consulting firm that eventually hired him was forever asking him to exhibit confidence when, in his view, confidence was a sign of fraudulence. They’d asked him to forecast the price of oil for clients, for instance. “And then we would go to our clients and tell them we could predict the price of oil. No one can predict the price of oil. It was basically nonsense.”
People who didn’t know Daryl Morey assumed that because he had set out to intellectualize basketball he must also be a know-it-all. In his approach to the world he was exactly the opposite. He had a diffidence about him—an understanding of how hard it is to know anything for sure. The closest he came to certainty was in his approach to making decisions. He never simply went with his first thought. He suggested a new definition of the nerd: a person who knows his own mind well enough to mistrust it.
“Confirmation bias,” he’d heard this called. The human mind was just bad at seeing things it did not expect to see, and a bit too eager to see what it expected to see. “Confirmation bias is the most insidious because you don’t even realize it is happening,” he said. A scout would settle on an opinion about a player and then arrange the evidence to support that opinion. “The classic thing,” said Morey, “and this happens all the time with guys: If you don’t like a prospect, you say he has no position. If you like him, you say he’s multipositional. If you like a player, you compare his body to someone good. If you don’t like him, you compare him to someone who sucks.” Whatever prejudice a person brought to the business of selecting amateur players he tended to preserve, even when it served him badly, because he was always looking to have that prejudice confirmed. The problem was magnified by the tendency of talent evaluators—Morey included—to favor players who reminded them of their younger selves.
You saw someone who reminded you of you, and then you looked for the reasons why you liked him.
Maybe the mind’s best trick of all was to lead its owner to a feeling of certainty about inherently uncertain things.
When the NBA returned to work he made yet another unsettling discovery. Just before the draft, the Toronto Raptors called and offered to trade their high first-round draft pick for Houston’s backup point guard, Kyle Lowry. Morey talked about it with his staff, and they were on the brink of not doing the deal when one of the Rockets executives said, “You know, if we had the pick we’re thinking of trading for and they offered Lowry for it, we wouldn’t even consider it as a possibility.” They stopped and analyzed the situation more closely: The expected value of the draft pick exceeded, by a large margin, the value they placed on the player they’d be giving up for it. The mere fact that they owned Kyle Lowry appeared to have distorted their judgment about him. Looking back over the previous five years, they now saw that they’d systematically overvalued their own players whenever another team tried to trade for them.
And yet even Leslie Alexander, the only owner with both the inclination and the nerve to hire someone like him back in 2006, could grow frustrated with Daryl Morey’s probabilistic view of the world. “He will want certainty from me, and I have to tell him it ain’t coming,” said Morey. He’d set out to be a card counter at a casino blackjack table, but he could live the analogy only up to a point. Like a card counter, he was playing a game of chance. Like a card counter, he’d tilted the odds of that game slightly in his favor. Unlike a card counter—but a lot like someone making a life decision—he was allowed to play only a few hands. He drafted a few players a year. In a few hands, anything could happen, even with the odds in his favor.
If by some freak accident he found himself at a gathering of his fellow human beings that held no appeal for him, he’d become invisible. “He’d walk into a room and decide he didn’t want anything to do with it and he would fade into the background and just vanish,” says Dona. “It was like a superpower. And it was absolutely an abnegation of social responsibility. He didn’t accept social responsibility—and so graciously, so elegantly, didn’t accept it.”
Amos liked to say that stinginess was contagious and so was generosity, and since behaving generously made you happier than behaving stingily, you should avoid stingy people and spend your time only with generous ones.
He paid attention to what Edwards was up to without paying a lot of attention to Edwards himself.
People thought Tel Aviv was like New York but that New York was not like Tel Aviv. People thought that the number 103 was sort of like the number 100, but that 100 wasn’t like 103. People thought a toy train was a lot like a real train but that a real train was not like a toy train. People often thought that a son resembled his father, but if you asked them if the father resembled his son, they just looked at you strangely. “The directionality and asymmetry of similarity relations are particularly noticeable in similes and metaphors,” Amos wrote.
When people compared one thing to another—two people, two places, two numbers, two ideas—they did not pay much attention to symmetry. To Amos—and to no one else before Amos—it followed from this simple observation that all the theories that intellectuals had dreamed up to explain how people made similarity judgments had to be false. “Amos comes along and says you guys aren’t asking the right question,” says University of Michigan psychologist Rich Gonzalez. “What is distance? Distance is symmetric.”
Amos had his own theory, which he called “features of similarity.” He argued that when people compared two things, and judged their similarity, they were essentially making a list of features. These features are simply what they notice about the objects. They count up the noticeable features shared by two objects: The more they share, the more similar they are; the more they don’t share, the more dissimilar they are. Not all objects have the same number of noticeable features: New York City had more of them than Tel Aviv, for instance. Amos built a mathematical model to describe what he meant—and to invite others to test his theory, and prove him wrong.
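A sketch of that model, the contrast model from Tversky’s 1977 paper “Features of Similarity,” scores the similarity of a to b as a weighted contrast between shared and distinctive features:

$$S(a,b) \;=\; \theta\, f(A \cap B) \;-\; \alpha\, f(A \setminus B) \;-\; \beta\, f(B \setminus A)$$

Here A and B are the sets of noticed features of the two objects, f measures the salience of a set of features, and the weights θ, α, β need not be equal. Because the subject’s distinctive features can be weighted differently from the referent’s, “Tel Aviv is like New York” and “New York is like Tel Aviv” can legitimately come out with different scores.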
By changing the context in which two things are compared, you submerge certain features and force others to the surface.
A banana and an apple seem more similar than they otherwise would because we’ve agreed to call them both fruit. Things are grouped together for a reason, but, once they are grouped, their grouping causes them to seem more like each other than they otherwise would. That is, the mere act of classification reinforces stereotypes. If you want to weaken some stereotype, eliminate the classification.
Amos was not merely an optimist; Amos willed himself to be optimistic, because he had decided pessimism was stupid. When you are a pessimist and the bad thing happens, you live it twice, Amos liked to say. Once when you worry about it, and the second time when it happens.
“Belief in the Law of Small Numbers” teased out the implications of a single mental error that people commonly made—even when those people were trained statisticians. People mistook even a very small part of a thing for the whole. Even statisticians tended to leap to conclusions from inconclusively small amounts of evidence. They did this, Amos and Danny argued, because they believed—even if they did not acknowledge the belief—that any given sample of a large population was more representative of that population than it actually was.
The Oregon researchers went and tested the hypothesis anyway. It turned out to be true. If you wanted to know whether you had cancer or not, you were better off using the algorithm that the researchers had created than you were asking the radiologist to study the X-ray. The simple algorithm had outperformed not merely the group of doctors; it had outperformed even the single best doctor. You could beat the doctor by replacing him with an equation created by people who knew nothing about medicine and had simply asked a few questions of doctors.
“If these findings can be generalized to other sorts of judgmental problems,” Goldberg wrote, “it would appear that only rarely—if at all—will the utilities favor the continued employment of man over a model of man.” But how could that be? Why would the judgment of an expert—a medical doctor, no less—be inferior to a model crafted from that very expert’s own knowledge? At that point, Goldberg more or less threw up his hands and said, Well, even experts are human. “The clinician is not a machine,” he wrote. “While he possesses his full share of human learning and hypothesis-generating skills, he lacks the machine’s reliability. He ‘has his days’: Boredom, fatigue, illness, situational and interpersonal distractions all plague him, with the result that his repeated judgments of the exact same stimulus configuration are not identical.”
Londoners in the Second World War thought that German bombs were targeted, because some parts of the city were hit repeatedly while others were not hit at all. (Statisticians later showed that the distribution was exactly what you would expect from random bombing.) People find it a remarkable coincidence when two students in the same classroom share a birthday, when in fact there is a better than even chance, in any group of twenty-three people, that two of its members will have been born on the same day. We have a kind of stereotype of “randomness” that differs from true randomness. Our stereotype of randomness lacks the clusters and patterns that occur in true random sequences.
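The twenty-three-person figure is easy to check directly; here is a minimal sketch that computes the chance that every birthday in a group is distinct, under the usual simplifying assumption of 365 equally likely birthdays:

```python
# Probability that at least two people in a group of n share a birthday,
# assuming 365 equally likely birthdays (a standard simplification).
def shared_birthday_probability(n: int) -> float:
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

print(shared_birthday_probability(23))  # ~0.507, just better than even
```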
Danny and Amos had noticed how oddly, and often unreliably, their own minds recalculated the odds, in light of some recent or memorable experience. For instance, after they drove past a gruesome car crash on the highway, they slowed down: Their sense of the odds of being in a crash had changed. After seeing a movie that dramatizes nuclear war, they worried more about nuclear war; indeed, they felt that it was more likely to happen. The sheer volatility of people’s judgment of the odds—their sense of the odds could be changed by two hours in a movie theater—told you something about the reliability of the mechanism that judged those odds.
Here, clearly, was another source of error: not just that people don’t know what they don’t know, but that they don’t bother to factor their ignorance into their judgments.
Amos liked to say that if you are asked to do anything—go to a party, give a speech, lift a finger—you should never answer right away, even if you are sure that you want to do it. Wait a day, Amos said, and you’ll be amazed how many of those invitations you would have accepted yesterday you’ll refuse after you have had a day to think it over.
Unless you are kicking yourself once a month for throwing something away, you are not throwing enough away, he said.
“people do not appear to follow the calculus of chance or the statistical theory of prediction. Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic error.”
“Evidently, people respond differently when given no specific evidence and when given worthless evidence,” wrote Danny and Amos. “When no specific evidence is given, the prior probabilities are properly utilized; when worthless specific evidence is given, prior probabilities are ignored.”
The instructors in a flight school adopted a policy of consistent positive reinforcement recommended by psychologists. They verbally reinforced each successful execution of a flight maneuver. After some experience with this training approach, the instructors claimed that contrary to psychological doctrine, high praise for good execution of complex maneuvers typically results in a decrement of performance on the next try. What should the psychologist say in response? The subjects to whom they posed this question offered all sorts of advice. They surmised that the instructors’ praise didn’t work because it led the pilots to become overconfident. They suggested that the instructors didn’t know what they were talking about. No one saw what Danny saw: that the pilots would have tended to do better after an especially poor maneuver, or worse after an especially great one, if no one had said anything at all. Man’s inability to see the power of regression to the mean leaves him blind to the nature of the world around him. We are exposed to a lifetime schedule in which we are most often rewarded for punishing others, and punished for rewarding them.
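The flight-school effect falls out of a simulation in which feedback does nothing at all. A minimal sketch, assuming each maneuver’s score is a fixed skill level plus random noise (all numbers here are illustrative):

```python
# Regression to the mean with no feedback effect: each score is skill + noise.
import random

random.seed(0)
skill = 50.0
scores = [skill + random.gauss(0, 10) for _ in range(100_000)]

# Look at the maneuver immediately following an unusually bad or good one.
after_bad = [scores[i + 1] for i in range(len(scores) - 1) if scores[i] < 35]
after_good = [scores[i + 1] for i in range(len(scores) - 1) if scores[i] > 65]

print(sum(after_bad) / len(after_bad))    # ~50: apparent "improvement" after a bad maneuver
print(sum(after_good) / len(after_good))  # ~50: apparent "decline" after a great one
```

An instructor who criticizes after the bad maneuver and praises after the great one will see criticism “work” and praise “backfire,” even though neither changed anything.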
“Historical Interpretation: Judgment Under Uncertainty,” Amos had called it. With a flick of the wrist, he showed a roomful of professional historians just how much of human experience could be reexamined in a fresh, new way, if seen through the lens he had created with Danny. In the course of our personal and professional lives, we often run into situations that appear puzzling at first blush. We cannot see for the life of us why Mr. X acted in a particular way, we cannot understand how the experimental results came out the way they did, etc. Typically, however, within a very short time we come up with an explanation, a hypothesis, or an interpretation of the facts that renders them understandable, coherent, or natural. The same phenomenon is observed in perception. People are very good at detecting patterns and trends even in random data. In contrast to our skill in inventing scenarios, explanations, and interpretations, our ability to assess their likelihood, or to evaluate them critically, is grossly inadequate. Once we have adopted a particular hypothesis or interpretation, we grossly exaggerate the likelihood of that hypothesis, and find it very difficult to see things any other way. Amos was polite about it. He did not say, as he often said, “It is amazing how dull history books are, given how much of what’s in them must be invented.”
When Richard Nixon announced his surprising intention to visit China and Russia, Fischhoff asked people to assign odds to a list of possible outcomes—say, that Nixon would meet Chairman Mao at least once, that the United States and the Soviet Union would create a joint space program, that a group of Soviet Jews would be arrested for attempting to speak with Nixon, and so on. After the trip, Fischhoff went back and asked the same people to recall the odds they had assigned to each outcome. Their memories were badly distorted: they all believed that they had assigned higher probabilities to what actually happened than they had. That is, once they knew the outcome, they thought it had been far more predictable than they had found it to be before, when they had tried to predict it. A few years after Amos described the work to his Buffalo audience, Fischhoff named the phenomenon “hindsight bias.”
In Redelmeier’s experience, doctors did not think statistically. “Eighty percent of doctors don’t think probabilities apply to their patients,” he said. “Just like 95 percent of married couples don’t believe the 50 percent divorce rate applies to them, and 95 percent of drunk drivers don’t think the statistics that show that you are more likely to be killed if you are driving drunk than if you are driving sober applies to them.”
Redelmeier was newly struck by the inability of human beings to judge risks, even when their misjudgment might kill them. When making judgments, people obviously could use help—say, by requiring all motorcyclists to wear helmets. Later Redelmeier said as much to one of his fellow students, an American. What is it with you freedom-loving Americans? he asked. Live free or die. I don’t get it. I say, “Regulate me gently. I’d rather live.”
When you told people that they had a 90 percent chance of surviving surgery, 82 percent of patients opted for surgery. But when you told them that they had a 10 percent chance of dying from the surgery—which was of course just a different way of putting the same odds—only 54 percent chose the surgery. People facing a life-and-death decision responded not to the odds but to the way the odds were described to them. And not just patients; doctors did it, too. Working with Amos, Sox said, had altered his view of his own profession. “The cognitive aspects are not at all understood in medicine,” he said.
The Samuelson bet was named for Paul Samuelson, the economist who had cooked it up. As Amos explained it, people who are offered a single bet with a 50-50 chance either to win $150 or to lose $100 usually decline it. But if you offer those same people the chance to make the same bet one hundred times over, most of them accept it.
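A minimal sketch of the arithmetic behind the bet: a single play loses money half the time, while a hundred plays have an expected gain of $2,500 and only a small chance of ending in the red. The simulation below is illustrative, not from the book:

```python
# Samuelson's bet: 50-50 to win $150 or lose $100, played once vs. 100 times.
import random

random.seed(1)

def play(n_bets: int) -> int:
    return sum(150 if random.random() < 0.5 else -100 for _ in range(n_bets))

trials = 20_000
single = [play(1) for _ in range(trials)]
hundred = [play(100) for _ in range(trials)]

print(sum(x < 0 for x in single) / trials)   # ~0.5: one bet loses money half the time
print(sum(x < 0 for x in hundred) / trials)  # only a couple of percent over 100 bets
print(sum(hundred) / trials)                 # ~2,500: expected total gain over 100 bets
```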
The secret to doing good research is always to be a little underemployed. You waste years by not being able to waste hours.
Amos had a clear idea of how people misperceived randomness, for instance. They didn’t understand that random sequences seemed to have patterns in them: people had an incredible ability to see meaning in these patterns where none existed. Watch any NBA game, Amos explained to Redelmeier, and you saw that the announcers, the fans, and maybe even the coaches seemed to believe that basketball shooters had the “hot hand.” Simply because some player had made his last few shots, he was thought to be more likely to make his next shot. Amos had collected data on NBA shooting streaks to see if the so-called hot hand was statistically significant—and he could already persuade you that it was not.
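The statistical check is simple to sketch: simulate a shooter whose makes are independent of one another and see whether making a shot predicts making the next. The shooting percentage below is an illustrative assumption, not Amos’s data:

```python
# A shooter with no "hot hand": every shot is an independent coin flip.
import random

random.seed(2)
p_make = 0.46  # assumed shooting percentage, purely illustrative
shots = [random.random() < p_make for _ in range(200_000)]

# Hit rate on the shot immediately after a made shot.
after_make = [shots[i + 1] for i in range(len(shots) - 1) if shots[i]]

print(sum(shots) / len(shots))            # overall rate, ~0.46
print(sum(after_make) / len(after_make))  # rate after a make, also ~0.46
```

Streaks still show up in the simulated sequence; they just carry no information about the next shot, which is the point Amos was making.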
People had a miserable time for most of their vacation and then returned home and remembered it fondly; people enjoyed a wonderful romance but, because it ended badly, looked back on it mainly with bitterness. They didn’t simply experience fixed levels of happiness or unhappiness. They experienced one thing and remembered something else.
To answer the question, Redelmeier ran an experiment on roughly seven hundred people over a period of a year. One group of patients had the colonoscope yanked out of their rear ends at the end of their colonoscopy without ceremony; the other group felt the tip of the scope lingering in their rectums for an extra three minutes. Those extra three minutes were not pleasant. They were merely less unpleasant than the other procedure. The patients in the first group were on the receiving end of an old-fashioned wham-bam-thank-you-ma’am colonoscopy; those in the second group enjoyed a sweeter, or less painful, ending. The sum total of pain experienced by the second group was, however, greater. The patients in the second group experienced all the pain that those in the first group experienced, plus the extra three minutes’ worth. An hour after the procedure, the researchers entered the recovery room and asked the patients to rate their experience. Those who had been given the less unhappy ending remembered less pain than did the patients who had not. More interestingly, they proved more likely to return for another colonoscopy when the time came. Human beings who had never imagined that they might prefer more pain to less could nearly all be fooled into doing so. As Redelmeier put it, “Last impressions can be lasting impressions.”
“Cognitive Limitations and Public Decision Making.” It was troubling to consider, he began, “an organism equipped with an affective and hormonal system not much different from that of the jungle rat being given the ability to destroy every living thing by pushing a few buttons.” Given the work on human judgment that he and Amos had just finished, he found it further troubling to think that “crucial decisions are made, today as thousands of years ago, in terms of the intuitive guesses and preferences of a few men in positions of authority.” The failure of decision makers to grapple with the inner workings of their own minds, and their desire to indulge their gut feelings, made it “quite likely that the fate of entire societies may be sealed by a series of avoidable mistakes committed by their leaders.”
Both Amos and Danny thought that voters and shareholders and all the other people who lived with the consequences of high-level decisions might come to develop a better understanding of the nature of decision making. They would learn to evaluate a decision not by its outcomes—whether it turned out to be right or wrong—but by the process that led to it. The job of the decision maker wasn’t to be right but to figure out the odds in any decision and play them well. As Danny told audiences in Israel, what was needed was a “transformation of cultural attitudes to uncertainty and to risk.”
Danny was stunned: If a 10 percent increase in the chances of full-scale war with Syria wasn’t enough to interest the director-general in Kissinger’s peace process, how much would it take to convince him? That number represented the best estimate of the odds. Apparently the director-general didn’t want to rely on the best estimates. He preferred his own internal probability calculator: his gut. “That was the moment I gave up on decision analysis,” said Danny. “No one ever made a decision because of a number. They need a story.” As Danny and Lanir wrote, decades later, after the U.S. Central Intelligence Agency asked them to describe their experience in decision analysis, the Israeli Foreign Ministry was “indifferent to the specific probabilities.” What was the point of laying out the odds of a gamble, if the person taking it either didn’t believe the numbers or didn’t want to know them?
Happy people did not dwell on some imagined unhappiness the way unhappy people imagined what they might have done differently so that they might be happy. People did not seek to avoid other emotions with the same energy they sought to avoid regret. When they made decisions, people did not seek to maximize utility. They sought to minimize regret.
“In defiance of logic, there is a definite sense that one comes closer to winning the lottery when one’s ticket number is similar to the number that won,” Danny wrote in a memo to Amos, summarizing their data. In another memo, he added that “the general point is that the same state of affairs (objectively) can be experienced with very different degrees of misery,” depending on how easy it is to imagine that things might have turned out differently.
Few regretted what both Danny and Amos thought they should most regret: the Israeli government’s reluctance to give back the territorial gains from the 1967 war. Had Israel given back the Sinai to Egypt, Sadat would quite likely never have felt the need to attack in the first place. Why didn’t people regret Israel’s inaction? Amos and Danny had a thought: People regretted what they had done, and what they wished they hadn’t done, far more than what they had not done and perhaps should have. “The pain that is experienced when the loss is caused by an act that modified the status quo is significantly greater than the pain that is experienced when the decision led to the retention of the status quo,” Danny wrote in a memo to Amos. “When one fails to take action that could have avoided a disaster, one does not accept responsibility for the occurrence of the disaster.”
But what was this thing that everyone had been calling “risk aversion”? It amounted to a fee that people paid, willingly, to avoid regret: a regret premium.
It was instantly obvious to them that if you stuck minus signs in front of all these hypothetical gambles and asked people to reconsider them, they behaved very differently than they had when faced with nothing but possible gains. “It was a eureka moment,” said Danny. “We immediately felt like fools for not thinking of that question earlier.” When you gave a person a choice between a gift of $500 and a 50-50 shot at winning $1,000, he picked the sure thing. Give that same person a choice between losing $500 for sure and a 50-50 risk of losing $1,000, and he took the bet. He became a risk seeker. The odds that people demanded to accept a certain loss over the chance of some greater loss crudely mirrored the odds they demanded to forgo a certain gain for the chance of a greater gain. For example, to get people to prefer a 50-50 chance of $1,000 over some certain gain, you had to lower the certain gain to around $370. To get them to prefer a certain loss to a 50-50 chance of losing $1,000, you had to lower the loss to around $370.
Actually, they soon discovered, you had to reduce the amount of the certain loss even further if you wanted to get people to accept it. When choosing between sure things and gambles, people’s desire to avoid loss exceeded their desire to secure gain.
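One way to see how a figure like $370 can arise is to run the 50-50 gamble through a prospect-theory-style value function and probability weight. The parameters below are the ones Kahneman and Tversky later estimated (roughly α = 0.88, λ = 2.25, γ = 0.61); they are an illustrative assumption here, not figures from this book:

```python
# A hedged illustration of the certainty equivalent of a 50-50 chance at $1,000,
# using later prospect-theory parameter estimates (assumed for illustration).

def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    # Gains are discounted by a concave power; losses loom larger by a factor lam.
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p: float, gamma: float = 0.61) -> float:
    # Probability weighting: moderate probabilities are underweighted.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

gamble_value = weight(0.5) * value(1000)
certainty_equivalent = gamble_value ** (1 / 0.88)
print(round(certainty_equivalent))  # ~374 with these parameters, near the $370 in the text
```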
“The greater sensitivity to negative rather than positive changes is not specific to monetary outcomes,” wrote Amos and Danny. “It reflects a general property of the human organism as a pleasure machine. For most people, the happiness involved in receiving a desirable object is smaller than the unhappiness involved in losing the same object.” It wasn’t hard to imagine why this might be—a heightened sensitivity to pain was helpful to survival. “Happy species endowed with infinite appreciation of pleasures and low sensitivity to pain would probably not survive the evolutionary battle,” they wrote.
When you gave them one bet with a 90 percent chance of working out and another with a 10 percent chance of working out, they did not behave as if the first was nine times as likely to work out as the second. They made some internal adjustment, and acted as if a 90 percent chance was actually slightly less than a 90 percent chance, and a 10 percent chance was slightly more than a 10 percent chance. They responded to probabilities not just with reason but with emotion. Whatever that emotion was, it became stronger as the odds became more remote. If you told them that there was a one-in-a-billion chance that they’d win or lose a bunch of money, they behaved as if the odds were not one in a billion but one in ten thousand. They feared a one-in-a-billion chance of loss more than they should and attached more hope to a one-in-a-billion chance of gain than they should. People’s emotional response to extremely long odds led them to reverse their usual taste for risk, and to become risk seeking when pursuing a long-shot gain and risk avoiding when faced with the extremely remote possibility of loss. (Which is why they bought both lottery tickets and insurance.)
The reference point was a state of mind. Even in straight gambles you could shift a person’s reference point and make a loss seem like a gain, and vice versa. In so doing, you could manipulate the choices people made, simply by the way they were described. They gave the economists a demonstration of the point:
Problem A. In addition to whatever you own, you have been given $1,000. You are now required to choose between the following options: Option 1. A 50 percent chance to win $1,000. Option 2. A gift of $500. Most everyone picked option 2, the sure thing.
Problem B. In addition to whatever you own, you have been given $2,000. You are now required to choose between the following options: Option 3. A 50 percent chance to lose $1,000. Option 4. A sure loss of $500. Most everyone picked option 3, the gamble.
The two questions were effectively identical.
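A quick check that the two problems describe the same final positions: in both, the gamble ends up $1,000 or $2,000 ahead with equal odds, and the sure option ends up $1,500 ahead; only the stated reference point differs.

```python
# Final wealth change in each version of the problem.
problem_a = {"gamble": (1000 + 0, 1000 + 1000), "sure": 1000 + 500}
problem_b = {"gamble": (2000 - 1000, 2000 - 0), "sure": 2000 - 500}

print(sorted(problem_a["gamble"]) == sorted(problem_b["gamble"]))  # True: same gamble outcomes
print(problem_a["sure"] == problem_b["sure"])                      # True: same sure outcome
```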
People did not choose between things. They chose between descriptions of things. Economists, and anyone else who wanted to believe that human beings were rational, could rationalize, or try to rationalize, loss aversion. But how did you rationalize this? Economists assumed that you could simply measure what people wanted from what they chose. But what if what you want changes with the context in which the options are offered to you?
A lot of the items on it fell into a bucket that he eventually would label “The Endowment Effect.” The endowment effect was a psychological idea with economic consequences. People attached some strange extra value to whatever they happened to own, simply because they owned it, and so proved surprisingly reluctant to part with their possessions, or endowments, even when trading them made economic sense. But in the beginning, Thaler wasn’t thinking in categories. “At the time, I’m just collecting a list of stupid things people do,” he said. Why were people so slow to sell vacation homes that, if they hadn’t bought them in the first place and were offered them now, they would never buy? Why were NFL teams so reluctant to trade their draft picks when it was obvious that they could often get more than the players were worth in exchange? Why were investors so reluctant to sell stocks that had fallen in value, even when they admitted that they would never buy those stocks at their current market prices? There was no end of things people did that economic theory had trouble explaining. “When you start looking for the endowment effect,” Thaler said, “you see it everywhere.”
Danny now had an idea that there might be a fourth heuristic—to add to availability, representativeness, and anchoring. “The simulation heuristic,” he’d eventually call it, and it was all about the power of unrealized possibilities to contaminate people’s minds. As they moved through the world, people ran simulations of the future. What if I say what I think instead of pretending to agree? What if they hit it to me and the grounder goes through my legs? What happens if I say no to his proposal instead of yes? They based their judgments and decisions in part on these imagined scenarios. And yet not all scenarios were equally easy to imagine; they were constrained, much in the way that people’s minds seemed constrained when they “undid” some tragedy. Discover the mental rules that the mind obeyed when it undid events after they had occurred and you might find, in the bargain, how it simulated reality before it occurred.
Regret was the most obvious counterfactual emotion, but frustration and envy shared regret’s essential trait. “The emotions of unrealized possibility,” Danny called them, in a letter to Amos. These emotions could be described using simple math. Their intensity, Danny wrote, was a product of two variables: “the desirability of the alternative” and “the possibility of the alternative.” Experiences that led to regret and frustration were not always easy to undo. Frustrated people needed to undo some feature of their environment, while regretful people needed to undo their own actions. “The basic rules of undoing, however, apply alike to frustration and regret,” he wrote. “They require a more or less plausible path leading to the alternative state.”
Envy was different. Envy did not require a person to exert the slightest effort to imagine a path to the alternative state. “The availability of the alternative appears to be controlled by a relation of similarity between oneself and the target of envy. To experience envy, it is sufficient to have a vivid image of oneself in another person’s shoes; it is not necessary to have a plausible scenario of how one came to occupy those shoes.” Envy, in some strange way, required no imagination.
The force that led human judgment astray in this case was what Danny and Amos had called “representativeness,” or the similarity between whatever people were judging and some model they had in their mind of that thing. The minds of the students in the first Linda experiment, latching onto the description of Linda and matching its details to their mental model of “feminist,” judged the special case to be more likely than the general one.
The paper Amos and Danny set out to write about what they were now calling “the conjunction fallacy” must have felt to Amos like an argument ender—that is, if the argument was about whether the human mind reasoned probabilistically, instead of the ways that Danny and Amos had suggested. They walked the reader through how and why people violated “perhaps the simplest and the most basic qualitative law of probability.” They explained that people chose the more detailed description, even though it was less probable, because it was more “representative.” They pointed out some places in the real world where this kink in the mind might have serious consequences. Any prediction, for instance, could be made to seem more believable, even as it became less likely, if it was filled with internally consistent details. And any lawyer could at once make a case seem more persuasive, even as he made the truth of it less likely, by adding “representative” details to his description of people and events.
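The “qualitative law” at issue is the conjunction rule: for any two events, the probability that both hold can never exceed the probability of either one alone, because every case in which both hold is already a case in which each holds:

$$P(A \text{ and } B) \;\le\; P(A)$$

Adding detail to a scenario can therefore only lower its probability, even as the detail makes the scenario feel more “representative.”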
“The brain appears to be programmed, loosely speaking, to provide as much certainty as it can,” he once said, in a talk to a group of Wall Street executives. “It is apparently designed to make the best possible case for a given interpretation rather than to represent all the uncertainty about a given situation.” The mind, when it dealt with uncertain situations, was like a Swiss Army knife. It was a good enough tool for most jobs required of it, but not exactly suited to anything—and certainly not fully “evolved.” “Listen to evolutionary psychologists long enough,” Amos said, “and you’ll stop believing in evolution.”