We can almost always detect antifragility (and fragility) using a simple test of asymmetry: anything that has more upside than downside from random events (or certain shocks) is antifragile; the reverse is fragile.
Which brings us to the largest fragilizer of society, and greatest generator of crises: the absence of “skin in the game.” Some become antifragile at the expense of others by getting the upside (or gains) from volatility, variations, and disorder and exposing others to the downside risks of losses or harm.
At no point in history have so many non-risk-takers, that is, those with no personal exposure, exerted so much control. The chief ethical rule is the following: Thou shalt not have antifragility at the expense of the fragility of others.
An annoying aspect of the Black Swan problem—in fact the central, and largely missed, point—is that the odds of rare events are simply not computable. We know a lot less about hundred-year floods than five-year floods—model error swells when it comes to small probabilities. The rarer the event, the less tractable it is, and the less we know about how frequently it occurs—yet the rarer the event, the more confident these “scientists” involved in predicting, modeling, and using PowerPoint in conferences with equations on multicolor backgrounds have become.
In short, the fragilista (medical, economic, social planning) is one who makes you engage in policies and actions, all artificial, in which the benefits are small and visible, and the side effects potentially severe and invisible.
Commerce is fun, thrilling, lively, and natural; academia as currently professionalized is none of these. And for those who think that academia is “quieter” and an emotionally relaxing transition after the volatile and risk-taking business life, a surprise: when in action, new problems and scares emerge every day to displace and eliminate the previous day’s headaches, resentments, and conflicts. A nail displaces another nail, with astonishing variety. But academics (particularly in social science) seem to distrust each other; they live in petty obsessions, envy, and icy-cold hatreds, with small snubs developing into grudges, fossilized over time in the loneliness of the transaction with a computer screen and the immutability of their environment. Not to mention a level of envy I have almost never seen in business.… My experience is that money and transactions purify relations; ideas and abstract matters like “recognition” and “credit” warp them, creating an atmosphere of perpetual rivalry. I grew to find people greedy for credentials nauseating, repulsive, and untrustworthy.
This mechanism of overcompensation hides in the most unlikely places. If tired after an intercontinental flight, go to the gym for some exertion instead of resting. Also, it is a well-known trick that if you need something urgently done, give the task to the busiest (or second busiest) person in the office. Most humans manage to squander their free time, as free time makes them dysfunctional, lazy, and unmotivated—the busier they get, the more active they are at other tasks. Overcompensation, here again.
This obsession with the skeleton got started when I found a paper published in the journal Nature in 2003 by Gerard Karsenty and colleagues. The tradition has been to think that aging causes bone weakness (bones lose density, become more brittle), as if there were a one-way relationship possibly brought about by hormones (females start experiencing osteoporosis after menopause). It turns out, as shown by Karsenty and others who have since embarked on the line of research, that the reverse is also largely true: loss of bone density and degradation of the health of the bones also causes aging, diabetes, and, for males, loss of fertility and sexual function. We just cannot isolate any causal relationship in a complex system. Further, the story of the bones and the associated misunderstanding of interconnectedness illustrates how lack of stress (here, bones under a weight-bearing load) can cause aging, and how depriving stress-hungry antifragile systems of stressors brings a great deal of fragility, which we will transport to political systems in Book II.
We saw that stressors are information, in the right context. For the antifragile, harm from errors should be less than the benefits. We are talking about some, not all, errors, of course; those that do not destroy a system help prevent larger calamities. The engineer and historian of engineering Henry Petroski presents a very elegant point. Had the Titanic not had that famous accident, as fatal as it was, we would have kept building larger and larger ocean liners and the next disaster would have been even more tragic. So the people who perished were sacrificed for the greater good; they unarguably saved more lives than were lost. The story of the Titanic illustrates the difference between gains for the system and harm to some of its individual parts. The same can be said of the debacle of Fukushima: one can safely say that it made us aware of the problem with nuclear reactors (and small probabilities) and prevented larger catastrophes. (Note that the errors of naive stress testing and reliance on risk models were quite obvious at the time; as with the economic crisis, nobody wanted to listen.)
Further, my characterization of a loser is someone who, after making a mistake, doesn’t introspect, doesn’t exploit it, feels embarrassed and defensive rather than enriched with a new piece of information, and tries to explain why he made the mistake rather than moving on. These types often consider themselves the “victims” of some large plot, a bad boss, or bad weather. Finally, a thought. He who has never sinned is less reliable than he who has only sinned once. And someone who has made plenty of errors—though never the same error more than once—is more reliable than someone who has never made any.
Saudi Arabia is the country that at present worries and offends me the most; it is a standard case of top-down stability enforced by a superpower at the expense of every single possible moral and ethical metric—and, of course, at the expense of stability itself. So a place “allied” to the United States is a total monarchy, devoid of a constitution. But that is not what is morally shocking. A group of between seven and fifteen thousand members of the royal family runs the place, leading a lavish, hedonistic lifestyle in open contradiction with the purist ideas that got them there. Look at the contradiction: the stern desert tribes whose legitimacy is derived from Amish-like austerity can, thanks to a superpower, turn to hedonistic uninhibited pleasure seeking—the king openly travels for pleasure with a retinue that fills four Jumbo jets. Quite a departure from his ancestors. The family members amassed a fortune now largely in Western safes. Without the United States, the country would have had its revolution, a regional breakup, some turmoil, then perhaps—by now—some stability. But preventing noise makes the problem worse in the long run. Clearly the “alliance” between the Saudi royal family and the United States was meant to provide stability. What stability? How long can one confuse the system? Actually “how long” is irrelevant: this stability is similar to a loan one has to eventually pay back. And there are ethical issues I leave to Chapter 24, particularly casuistry, when someone finds a justification “for the sake of” to violate an otherwise inflexible moral rule.2 Few people are aware of the fact that the bitterness of Iranians toward the United States comes from the fact that the United States—a democracy—installed a monarch, the repressive Shah of Iran, who pillaged the place but gave the United States the “stability” of access to the Persian Gulf. The theocratic regime in Iran today is largely the result of such repression. We need to learn to think in second steps, chains of consequences, and side effects.
An ethical problem arises when someone is put in charge. Greenspan’s actions were harmful, but even if he knew that, it would have taken a bit of heroic courage to justify inaction in a democracy where the incentive is to always promise a better outcome than the other guy, regardless of the actual, delayed cost.
There is an element of deceit associated with interventionism, accelerating in a professionalized society. It’s much easier to sell “Look what I did for you” than “Look what I avoided for you.” Of course a bonus system based on “performance” exacerbates the problem. I’ve looked in history for heroes who became heroes for what they did not do, but it is hard to observe nonaction; I could not easily find any. The doctor who refrains from operating on a back (a very expensive surgery), instead giving it a chance to heal itself, will not be rewarded and judged as favorably as the doctor who makes the surgery look indispensable, then brings relief to the patient while exposing him to operating risks, while accruing great financial rewards to himself. The latter will be driving the pink Rolls-Royce. The corporate manager who avoids a loss will not often be rewarded. The true hero in the Black Swan world is someone who prevents a calamity and, naturally, because the calamity did not take place, does not get recognition—or a bonus—for it. I will be taking the concept deeper in Book VII, on ethics, about the unfairness of a bonus system and how such unfairness is magnified by complexity.
Few understand that procrastination is our natural defense, letting things take care of themselves and exercise their antifragility; it results from some ecological or naturalistic wisdom, and is not always bad—at an existential level, it is my body rebelling against its entrapment. It is my soul fighting the Procrustean bed of modernity. Granted, in the modern world, my tax return is not going to take care of itself—but by delaying a non-vital visit to a doctor, or deferring the writing of a passage until my body tells me that I am ready for it, I may be using a very potent naturalistic filter. I write only if I feel like it and only on a subject I feel like writing about—and the reader is no fool. So I use procrastination as a message from my inner self and my deep evolutionary past to resist interventionism in my writing. Yet some psychologists and behavioral economists seem to think that procrastination is a disease to be remedied and cured.1 Given that procrastination has not been sufficiently pathologized yet, some associate it with the condition of akrasia discussed in Plato, a form of lack of self-control or weakness of will; others with aboulia, lack of will. And pharmaceutical companies might one day come up with a pill for it. The benefits of procrastination apply similarly to medical procedures: we saw that procrastination protects you from error as it gives nature a chance to do its job, given the inconvenient fact that nature is less error-prone than scientists. Psychologists and economists who study “irrationality” do not realize that humans may have an instinct to procrastinate only when no life is in danger. I do not procrastinate when I see a lion entering my bedroom or fire in my neighbor’s library. I do not procrastinate after a severe injury. I do so with unnatural duties and procedures. I once procrastinated and kept delaying a spinal cord operation as a response to a back injury—and was completely cured of the back problem after a hiking vacation in the Alps, followed by weight-lifting sessions. These psychologists and economists want me to kill my naturalistic instinct (the inner b****t detector) that allowed me to delay the elective operation and minimize the risks—an insult to the antifragility of our bodies. Since procrastination is a message from our natural willpower via low motivation, the cure is changing the environment, or one’s profession, by selecting one in which one does not have to fight one’s impulses. Few can grasp the logical consequence that, instead, one should lead a life in which procrastination is good, as a naturalistic-risk-based form of decision making. Actually I select the writing of the passages of this book by means of procrastination. If I defer writing a section, it must be eliminated. This is simple ethics: Why should I try to fool people by writing about a subject for which I feel no natural drive?2 Using my ecological reasoning, someone who procrastinates is not irrational; it is his…
If you want to accelerate someone’s death, give him a personal doctor. I don’t mean provide him with a bad doctor: just pay for him to choose his own. Any doctor will do. This may be the only possible way to murder someone while staying squarely within the law. We can see from the tonsillectomy story that access to data increases intervention, causing us to behave like the neurotic fellow. Rory Sutherland signaled to me that someone with a personal doctor on staff should be particularly vulnerable to naive interventionism, hence iatrogenics; doctors need to justify their salaries and prove to themselves that they have a modicum of work ethic, something that “doing nothing” doesn’t satisfy. Indeed, Michael Jackson’s personal doctor has been sued for something equivalent to overintervention-to-stifle-antifragility (but it will take the law courts a while to become directly familiar with the concept). Did you ever wonder why heads of state and very rich people with access to all this medical care die just as easily as regular persons? Well, it looks like this is because of overmedication and excessive medical care.
The previous two chapters showed how you can use and take advantage of noise and randomness; but noise and randomness can also use and take advantage of you, particularly when totally unnatural, as with the data you get on the Web or through the media. The more frequently you look at data, the more noise you are disproportionately likely to get (rather than the valuable part, called the signal); hence the higher the noise-to-signal ratio. And there is a confusion which is not psychological at all, but inherent in the data itself. Say you look at information on a yearly basis, for stock prices, or the fertilizer sales of your father-in-law’s factory, or inflation numbers in Vladivostok. Assume further that for what you are observing, at a yearly frequency, the ratio of signal to noise is about one to one (half noise, half signal)—this means that about half the changes are real improvements or degradations, the other half come from randomness. This ratio is what you get from yearly observations. But if you look at the very same data on a daily basis, the composition would change to 95 percent noise, 5 percent signal. And if you observe data on an hourly basis, as people immersed in the news and market price variations do, the split becomes 99.5 percent noise to 0.5 percent signal. That is two hundred times more noise than signal—which is why anyone who listens to news (except when very, very significant events take place) is one step below sucker.
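The arithmetic behind these ratios can be sketched in a few lines. Here is a minimal sketch in Python, under the assumption that real change (signal) accumulates in proportion to time while randomness (noise) grows with the square root of time, as for a random walk; the yearly one-to-one split is taken from the text, and the hourly figure comes out near 1 percent under this convention, close to the 0.5 percent quoted:

```python
import math

# Assumption: signal (real change) scales linearly with the observation
# horizon, while noise (randomness) scales with its square root.
def signal_fraction(observations_per_year: int) -> float:
    """Fraction of each observed change that is signal rather than noise."""
    h = 1.0 / observations_per_year   # horizon as a fraction of a year
    signal = h                        # drift scales with time
    noise = math.sqrt(h)              # volatility scales with sqrt(time)
    return signal / (signal + noise)

for label, n in [("yearly", 1), ("daily", 365), ("hourly", 365 * 24)]:
    print(f"{label:>6}: {signal_fraction(n):.1%} signal")
# yearly: 50.0% signal
#  daily: 5.0% signal (about 95 percent noise)
# hourly: 1.1% signal (the text's 0.5 percent uses a rougher convention)
```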
Consider the iatrogenics of newspapers. They need to fill their pages every day with a set of news items—particularly those news items also dealt with by other newspapers. But to do things right, they ought to learn to keep silent in the absence of news of significance. Newspapers should be of two-line length on some days, two hundred pages on others—in proportion with the intensity of the signal. But of course they want to make money and need to sell us junk food. And junk food is iatrogenic.
There is so much noise coming from the media’s glorification of the anecdote. Thanks to this, we are living more and more in virtual reality, separated from the real world, a little bit more every day while realizing it less and less. Consider that every day, 6,200 persons die in the United States, many of preventable causes. But the media only report the most anecdotal and sensational cases (hurricanes, freak accidents, small plane crashes), giving us a more and more distorted map of real risks. In an ancestral environment, the anecdote, the “interesting,” is information; today, no longer. Likewise, by presenting us with explanations and theories, the media induce an illusion of understanding the world. And the understanding of events (and risks) on the part of members of the press is so retrospective that they would put the security checks after the plane ride, or what the ancients call post bellum auxilium, sending troops after the battle. Owing to domain dependence, we forget the need to check our map of the world against reality. So we are living in a more and more fragile world, while thinking it is more and more understandable. To conclude, the best way to mitigate interventionism is to ration the supply of information, as naturalistically as possible. This is hard to accept in the age of the Internet. It has been very hard for me to explain that the more data you get, the less you know what’s going on, and the more iatrogenics you will cause. People are still under the illusion that “science” means more data.
Consider Sweden and other Nordic states, often offered as paragons of the large state “that works”—the government represents a large portion of the total economy. How could we have the happiest nation in the world, Denmark (assuming happiness is both measurable and desirable), and a monstrously large state? Is it that these countries are all smaller than the New York metropolitan area? I wondered, until my coauthor, the political scientist Mark Blyth, showed me that there, too, was a false narrative: it was almost the same story as in Switzerland (but with a worse climate and no good ski resorts). The state exists as a tax collector, but the money is spent in the communes themselves, directed by the communes—for, say, skills training locally determined as deemed necessary by the communities themselves, to respond to private demand for workers. The economic elites have more freedom than in most other democracies—this is far from the statism one might assume from the outside. Further, illustrating a case of gaining from disorder, Sweden and other Nordic countries experienced a severe recession at the end of the cold war, around 1990, to which they responded admirably with a policy of fiscal toughness, effectively shielding themselves from the severe financial crisis that took place about two decades later.
There are ample empirical findings to the effect that providing someone with a random numerical forecast increases his risk taking, even if the person knows the projections are random. All I hear is complaints about forecasters, when the next step is obvious yet rarely taken: avoidance of iatrogenics from forecasting. We understand childproofing, but not forecaster-hubris-proofing.
Further, after the occurrence of an event, we need to switch the blame from the inability to see an event coming (say a tsunami, an Arabo-Semitic spring or similar riots, an earthquake, a war, or a financial crisis) to the failure to understand (anti)fragility, namely, “why did we build something so fragile to these types of events?” Not seeing a tsunami or an economic event coming is excusable; building something fragile to them is not.
In spite of their bad press, some people in the nuclear industry seem to be among the rare ones to have gotten the point and taken it to its logical consequence. In the wake of the Fukushima disaster, instead of predicting failure and the probabilities of disaster, these intelligent nuclear firms are now aware that they should instead focus on exposure to failure—making the prediction or nonprediction of failure quite irrelevant. This approach leads to building small enough reactors and embedding them deep enough in the ground with enough layers of protection around them that a failure would not affect us much should it happen—costly, but still better than nothing.
Another illustration, this time in economics, is the Swedish government’s focus on total fiscal responsibility after their budget troubles in 1991—it makes them much less dependent on economic forecasts. This allowed them to shrug off later crises.
Alas, men of leisure become slaves to inner feelings of dissatisfaction and interests over which they have little control. The freer Nero’s time, the more compelled he felt to compensate for lost time in filling gaps in his natural interests, things that he wanted to know a bit deeper. And, as he discovered, the worst thing one can do to feel one knows things a bit deeper is to try to go into them a bit deeper. The sea gets deeper as you go further into it, according to a Venetian proverb. Curiosity is antifragile, like an addiction, and is magnified by attempts to satisfy it—books have a secret mission and ability to multiply, as everyone who has wall-to-wall bookshelves knows well. Nero lived, at the time of writing, among fifteen thousand books, with the stress of how to discard the empty boxes and wrapping material after the arrival of his daily shipment from the bookstore. One subject Nero read for pleasure, rather than the strange duty-to-read-to-become-more-learned, was medical texts, for which he had a natural curiosity. The curiosity came from having had two brushes with death, the first from a cancer and the second from a helicopter crash that alerted him to both the fragility of technology and the self-healing powers of the human body. So he spent a bit of his time reading textbooks (not papers—textbooks) in medicine, or professional texts.
Fat Tony did not believe in predictions. But he made big bucks predicting that some people—the predictors—would go bust. Isn’t that paradoxical? At conferences, Nero used to meet physicists from the Santa Fe Institute who believed in predictions and used fancy prediction models while their business ventures based on predictions did not do that well—while Fat Tony, who did not believe in predictions, got rich from prediction. You can’t predict in general, but you can predict that those who rely on predictions are taking more risks, will have some trouble, perhaps even go bust. Why? Someone who predicts will be fragile to prediction errors. An overconfident pilot will eventually crash the plane. And numerical prediction leads people to take more risks. Fat Tony is antifragile because he takes a mirror image of his fragile prey. Fat Tony’s model is quite simple. He identifies fragilities, makes a bet on the collapse of the fragile unit, lectures Nero and trades insults with him about sociocultural matters, reacting to Nero’s jabs at New Jersey life, collects big after the collapse. Then he has lunch.
To show how eminently modern this is, I will next reveal how I’ve applied this brand of Stoicism to wrest back psychological control of the randomness of life. I have always hated employment and the associated dependence on someone else’s arbitrary opinion, particularly when much of what’s done inside large corporations violates my sense of ethics. So I have, accordingly, except for eight years, been self-employed. But, before that, for my last job, I wrote my resignation letter before starting the new position, locked it up in a drawer, and felt free while I was there. Likewise, when I was a trader, a profession rife with a high dose of randomness, with continuous psychological harm that drills deep into one’s soul, I would go through the mental exercise of assuming every morning that the worst possible thing had actually happened—the rest of the day would be a bonus. Actually the method of mentally adjusting “to the worst” had advantages way beyond the therapeutic, as it made me take a certain class of risks for which the worst case is clear and unambiguous, with limited and known downside. It is hard to stick to a good discipline of mental write-off when things are going well, yet that’s when one needs the discipline the most. Moreover, once in a while, I travel, Seneca-style, in uncomfortable circumstances (though unlike him I am not accompanied by “one or two” slaves). An intelligent life is all about such emotional positioning to eliminate the sting of harm, which as we saw is done by mentally writing off belongings so one does not feel any pain from losses. The volatility of the world no longer affects you negatively.
As to growth in GDP (gross domestic product), it can be obtained very easily by loading future generations with debt—and the future economy may collapse upon the need to repay such debt. GDP growth, like cholesterol, seems to be a Procrustean bed reduction that has been used to game systems. So just as, for a plane that has a high risk of crashing, the notion of “speed” is irrelevant, since we know it may not get to its destination, economic growth with fragilities is not to be called growth, something that has not yet been understood by governments. Indeed, growth was very modest, less than 1 percent per head, throughout the golden years surrounding the Industrial Revolution, the period that propelled Europe into domination. But as low as it was, it was robust growth—unlike the current fools’ race of states shooting for growth like teenage drivers infatuated with speed.
I initially used the image of the barbell to describe a dual attitude of playing it safe in some areas (robust to negative Black Swans) and taking a lot of small risks in others (open to positive Black Swans), hence achieving antifragility. That is extreme risk aversion on one side and extreme risk loving on the other, rather than just the “medium” or the beastly “moderate” risk attitude that in fact is a sucker game (because medium risks can be subjected to huge measurement errors). But the barbell also results, because of its construction, in the reduction of downside risk—the elimination of the risk of ruin. Let us use an example from vulgar finance, where it is easiest to explain, but misunderstood the most. If you put 90 percent of your funds in boring cash (assuming you are protected from inflation) or something called a “numeraire repository of value,” and 10 percent in very risky, maximally risky, securities, you cannot possibly lose more than 10 percent, while you are exposed to massive upside. Someone with 100 percent in so-called “medium” risk securities has a risk of total ruin from the miscomputation of risks. This barbell technique remedies the problem that risks of rare events are incomputable and fragile to estimation error; here the financial barbell has a maximum known loss. For antifragility is the combination aggressiveness plus paranoia—clip your downside, protect yourself from extreme harm, and let the upside, the positive Black Swans, take care of itself. We saw Seneca’s asymmetry: more upside than downside can come simply from the reduction of extreme downside (emotional harm) rather than improving things in the middle.
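As a toy illustration of why the barbell clips ruin, here is a minimal sketch; the 5 percent chance of a thirtyfold payoff is an invented assumption, chosen only to show the shape of the payoff, not a claim about any market:

```python
import random

def barbell(wealth: float, risky_fraction: float = 0.10) -> float:
    safe = wealth * (1.0 - risky_fraction)   # cash, assumed protected from inflation
    risky = wealth * risky_fraction
    if random.random() < 0.05:               # the rare positive Black Swan
        return safe + risky * 30.0           # open-ended upside
    return safe                              # the risky sleeve expires worthless

outcomes = [barbell(100.0) for _ in range(100_000)]
print(f"worst case: {min(outcomes):.0f}")    # 90: the loss is clipped at 10 percent
print(f"best case:  {max(outcomes):.0f}")    # 390: massive upside remains open
```

Whatever numbers one assumes for the risky sleeve, the worst case is known in advance; only the upside depends on the estimates, which is the point of the construction.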
It also means letting people experience some, not too much, stress, to wake them up a bit. But, at the same time, they need to be protected from high danger—ignore small dangers, invest your energy in protecting them from consequential harm. And only consequential harm. This can visibly be translated into social policy, health care, and many more matters. One finds similar ideas in ancestral lore: a Yiddish proverb says “Provide for the worst; the best can take care of itself.” This may sound like a platitude, but it is not: just observe how people tend to provide for the best and hope that the worst will take care of itself. We have ample evidence that people are averse to small losses, but not so much toward very large Black Swan risks (which they underestimate), since they tend to insure for small probable losses, but not large infrequent ones. Exactly backwards.
Never ask people what they want, or where they want to go, or where they think they should go, or, worse, what they think they will desire tomorrow. The strength of the computer entrepreneur Steve Jobs was precisely in distrusting market research and focus groups—those based on asking people what they want—and following his own imagination. His modus was that people don’t know what they want until you provide them with it.
This kind of sum I’ve called in my vernacular “f*** you money”—a sum large enough to get most, if not all, of the advantages of wealth (the most important one being independence and the ability to only occupy your mind with matters that interest you) but not its side effects, such as having to attend a black-tie charity event and being forced to listen to a polite exposition of the details of a marble-rich house renovation. The worst side effect of wealth is the social associations it forces on its victims, as people with big houses tend to end up socializing with other people with big houses. Beyond a certain level of opulence and independence, gents tend to be less and less personable and their conversation less and less interesting.
Harvard’s former president Larry Summers got in trouble (clumsily) explaining a version of the point and lost his job in the aftermath of the uproar. He was trying to say that males and females have equal intelligence, but the male population has more variations and dispersion (hence volatility), with more highly unintelligent men, and more highly intelligent ones. For Summers, this explained why men were overrepresented in the scientific and intellectual community (and also why men were overrepresented in jails or failures). The number of successful scientists depends on the “tails,” the extremes, rather than the average. Just as an option does not care about the adverse outcomes, or an author does not care about the haters.
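The tails argument is easy to verify numerically. A minimal sketch with invented numbers: two populations share the same average, but one is 10 percent more dispersed; the extreme tail, where “success” is assumed to live, is dominated by the more volatile group:

```python
import random

random.seed(1)
N, cutoff = 1_000_000, 4.0   # "success" set, arbitrarily, at 4 sigma above the mean

modest_spread = sum(random.gauss(0.0, 1.0) > cutoff for _ in range(N))
wider_spread = sum(random.gauss(0.0, 1.1) > cutoff for _ in range(N))

print(modest_spread, wider_spread)
# roughly 30 versus 130: a small increase in dispersion multiplies the
# population of the extreme tail, while the averages stay identical
```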
This property allowing us to be stupid, or, alternatively, allowing us to get more results than the knowledge may warrant, I will call the “philosopher’s stone” for now, or “convexity bias,” the result of a mathematical property called Jensen’s inequality. The mechanics will be explained later, in Book V when we wax technical, but take for now that evolution can produce astonishingly sophisticated objects without intelligence, simply thanks to a combination of optionality and some type of a selection filter, plus some randomness, as we see next.
To crystallize, take this description of an option:

Option = asymmetry + rationality

The rationality part lies in keeping what is good and ditching the bad, knowing to take the profits. As we saw, nature has a filter to keep the good baby and get rid of the bad. The difference between the antifragile and the fragile lies there. The fragile has no option. But the antifragile needs to select what’s best—the best option. It is worth insisting that the most wonderful attribute of nature is the rationality with which it selects its options and picks the best for itself—thanks to the testing process involved in evolution. Unlike the researcher afraid of doing something different, it sees an option—the asymmetry—when there is one. So it ratchets up—biological systems get locked in a state that is better than the previous one, the path-dependent property I mentioned earlier. In trial and error, the rationality consists in not rejecting something that is markedly better than what you had before.
As I said, in business, people pay for the option when it is identified and mapped in a contract, so explicit options tend to be expensive to purchase, much like insurance contracts. They are often overhyped. But because of the domain dependence of our minds, we don’t recognize it in other places, where these options tend to remain underpriced or not priced at all.
An option hides where we don’t want it to hide. I will repeat that options benefit from variability, but also from situations in which errors carry small costs. So these errors are like options—in the long run, happy errors bring gains, unhappy errors bring losses. That is exactly what Fat Tony was taking advantage of: certain models can have only unhappy errors, particularly derivatives models and other fragilizing situations.
In addition, these arguments about “long shots” are ludicrously cherry-picked. If you list the businesses that have generated the most wealth in history, you would see that they all have optionality. There is unfortunately the optionality of people stealing options from others and from the taxpayer (as we will see in the ethical section in Book VII), such as CEOs of companies with upside and no downside to themselves. But the largest generators of wealth in America historically have been, first, real estate (investors have the option at the expense of the banks), and, second, technology (which relies almost completely on trial and error). Further, businesses with negative optionality (that is, the opposite of having optionality) such as banking have had a horrible performance through history: banks lose periodically every penny made in their history thanks to blowups.
The other point of the chapter and Book IV is that the option is a substitute for knowledge—actually I don’t quite understand what sterile knowledge is, since it is necessarily vague and sterile. So I make the bold speculation that many things we think are derived by skill come largely from options, but well-used options, much like Thales’ situation—and much like nature—rather than from what we claim to be understanding. The implication is nontrivial. For if you think that education causes wealth, rather than being a result of wealth, or that intelligent actions and discoveries are the result of intelligent ideas, you will be in for a surprise. Let us see what kind of surprise.

1 I suppose that the main benefit of being rich (over just being independent) is to be able to despise rich people (a good concentration of whom you find in glitzy ski resorts) without any sour grapes. It is even sweeter when these farts don’t know that you are richer than they are.
Cherry-picking has optionality: the one telling the story (and publishing it) has the advantage of being able to show the confirmatory examples and completely ignore the rest—and the more volatility and dispersion, the rosier the best story will be (and the darker the worst story). Someone with optionality—the right to pick and choose his story—is only reporting on what suits his purpose. You take the upside of your story and hide the downside, so only the sensational seems to count.
This lesson “not the same thing” is quite general. When you have optionality, or some antifragility, and can identify betting opportunities with big upside and small downside, what you do is only remotely connected to what Aristotle thinks you do. There is something (here, perception, ideas, theories) and a function of something (here, a price or reality, or something real). The conflation problem is to mistake one for the other, forgetting that there is a “function” and that such function has different properties. Now, the more asymmetries there are between the something and the function of something, the more the two differ. They may end up having nothing to do with each other.
Sometimes, even when an economic theory makes sense, its application cannot be imposed from a model, in a top-down manner, so one needs the organic self-driven trial and error to get us to it. For instance, the concept of specialization that has obsessed economists since Ricardo (and before) blows up countries when imposed by policy makers, as it makes the economies error-prone; but it works well when reached progressively by evolutionary means, with the right buffers and layers of redundancies. Another case where economists may inspire us but should never tell us what to do—more on that in the discussion of Ricardian comparative advantage and model fragility in the Appendix.
Overconfidence leads to reliance on forecasts, which causes borrowing, then to the fragility of leverage. Further, there is convincing evidence that a PhD in economics or finance causes people to build vastly more fragile portfolios. George Martin and I listed all the major financial economists who were involved with funds, calculated the blowups by funds, and observed a far higher proportional incidence of such blowups on the part of finance professors—the most famous one being Long Term Capital Management, which employed Fragilistas Robert Merton, Myron Scholes, Chi-Fu Huang, and others.
So let us pick on Harvard Business School professors who deserve it quite a bit. When it comes to the first case (the error of ignoring positive asymmetries), one Harvard Business School professor, Gary Pisano, writing about the potential of biotech, made the elementary inverse-turkey mistake, not realizing that in a business with limited losses and unlimited potential (the exact opposite of banking), what you don’t see can be both significant and hidden from the past. He writes: “Despite the commercial success of several companies and the stunning growth in revenues for the industry as a whole, most biotechnology firms earn no profit.” This may be correct, but the inference from it is wrong, possibly backward, on two counts, and it helps to repeat the logic owing to the gravity of the consequences. First, “most companies” in Extremistan make no profit—the rare event dominates, and a small number of companies generate all the shekels. And whatever point he may have, in the presence of the kind of asymmetry and optionality we see in Figure 7, it is inconclusive, so it is better to write about another subject, something less harmful that may interest Harvard students, like how to make a convincing PowerPoint presentation or the difference in managerial cultures between the Japanese and the French. Again, he may be right about the pitiful potential of biotech investments, but not on the basis of the data he showed. Now why is such thinking by the likes of Professor Pisano dangerous? It is not a matter of whether or not he would inhibit research in biotech. The problem is that such a mistake inhibits everything in economic life that has antifragile properties (more technically, “right-skewed”). And it would fragilize by favoring matters that are “sure bets.” Remarkably, another Harvard professor, Kenneth Froot, made the exact same mistake, but in the opposite direction, with the negative asymmetries. Looking at reinsurance companies (those that insure catastrophic events), he thought that he found an aberration. They made too much profit given the risks they took, as catastrophes seemed to occur less often than what was reflected in the premia. He missed the point that catastrophic events hit them only negatively, and tend to be absent from past data (again, they are rare). Remember the turkey problem. One single episode, the asbestos liabilities, bankrupted families of Lloyd’s underwriters, losing income made over generations. One single episode. We will return to these two distinct payoffs, with “bounded left” (limited losses, like Thales’ bet) and “bounded right” (limited gains, like insurance or banking). The distinction is crucial, as most payoffs in life fall in either one or the other category.
Let me stop to issue rules based on the chapter so far.
(i) Look for optionality; in fact, rank things according to optionality.
(ii) Prefer open-ended, not closed-ended, payoffs.
(iii) Do not invest in business plans but in people, so look for someone capable of changing six or seven times over his career, or more (an idea that is part of the modus operandi of the venture capitalist Marc Andreessen); one gets immunity from the backfit narratives of the business plan by investing in people. It is simply more robust to do so.
(iv) Make sure you are barbelled, whatever that means in your business.
Socrates’ technique was to make his interlocutor, who started with a thesis, agree to a series of statements, then proceed to show him how the statements he agreed to were inconsistent with the original thesis, thus establishing that he had no clue as to what he was talking about. Socrates used it mostly to show people how lacking in clarity they were in their thoughts, how little they knew about the concepts they used routinely—and the need for philosophy to elucidate these concepts.
When I last met Alison Wolf we discussed this dire problem with education and illusions of academic contribution, with Ivy League universities becoming in the eyes of the new Asian and U.S. upper class a status luxury good. Harvard is like a Vuitton bag or a Cartier watch. It is a huge drag on the middle-class parents who have been plowing an increased share of their savings into these institutions, transferring their money to administrators, real estate developers, professors, and other agents. In the United States, we have a buildup of student loans that automatically transfer to these rent extractors. In a way it is no different from racketeering: one needs a decent university “name” to get ahead in life; but we know that collectively society doesn’t appear to advance with organized education.
In the mid-1990s, I quietly deposited my necktie in the trash can at the corner of Forty-fifth Street and Park Avenue in New York. I decided to take a few years off and locked myself in the attic, trying to express what was coming out of my guts, trying to frame what I called “hidden nonlinearities” and their effects. What I had wasn’t quite an idea, rather, just a method, for the deeper central idea eluded me. But using this method, I produced close to a six-hundred-page-long discussion of managing nonlinear effects, with graphs and tables. Recall from the prologue that “nonlinearity” means that the response is not a straight line. But I was going further and looking at the link with volatility, something that should be clear soon. And I went deep into the volatility of volatility, and such higher-order effects. The book that came out of this solitary investigation in the attic, finally called Dynamic Hedging, was about the “techniques to manage and handle complicated nonlinear derivative exposures.” It was a technical document that was completely ab ovo (from the egg), and as I was going, I knew in my guts that the point had vastly more import than the limited cases I was using in my profession; I knew that my profession was the perfect platform to start thinking about these issues, but I was too lazy and too conventional to venture beyond. That book remained by far my favorite work (before this one), and I fondly remember the two harsh New York winters in the near-complete silence of the attic, with the luminous effect of the sun shining on the snow warming up both the room and the project. I thought of nothing else for
After the crisis of the late 2000s, I went through an episode of hell owing to contact with the press. I was suddenly deintellectualized, corrupted, extracted from my habitat, propelled into being a public commodity. I had not realized that it is hard for members of the media and the public to accept that the job of a scholar is to ignore insignificant current affairs, to write books, not emails, and not to give lectures dancing on a stage; that he has other things to do, like read in bed in the morning, write at a desk in front of a window, take long walks (slowly), drink espressos (mornings), chamomile tea (afternoons), Lebanese wine (evenings), and Muscat wines (after dinner), take more long walks (slowly), argue with friends and family members (but never in the morning), and read (again) in bed before sleeping, not keep rewriting one’s book and ideas for the benefit of strangers and members of the local chapter of Networking International who haven’t read it.
Then I opted out of public life. When I managed to retake control of my schedule and my brain, recovered from the injuries deep into my soul, learned to use email filters and autodelete functions, and restarted my life, Lady Fortuna brought two ideas to me, making me feel stupid—for I realized I had had them inside me all along.
We can see applications of the point across economic domains: central banks can print money; they print and print with no effect (and claim the “safety” of such a measure), then, “unexpectedly,” the printing causes a jump in inflation. Many economic results are completely canceled by convexity effects—and the happy news is that we know why. Alas, the tools (and culture) of policy makers are based on the overly linear, ignoring these hidden effects. They call it “approximation.” When you hear of a “second-order” effect, it means convexity is causing the failure of approximation to represent the real story.
One Saturday evening in November 2011, I drove to New York City to meet the philosopher Paul Boghossian for dinner in the Village—typically a forty-minute trip. Ironically, I was meeting him to talk about my book, this book, and more particularly, my ideas on redundancy in systems. I have been advocating the injection of redundancy into people’s lives and had been boasting to him and others that, since my New Year’s resolution of 2007, I have never been late to anything, not even by a minute (well, almost). Recall in Chapter 2 my advocacy of redundancies as an aggressive stance. Such personal discipline forces me to build buffers, and, as I carry a notebook, it allowed me to write an entire book of aphorisms. Not counting long visits to bookstores. Or I can sit in a café and read hate mail. With, of course, no stress, as I have no fear of being late. But the greatest benefit of such discipline is that it prevents me from cramming my day with appointments (typically, appointments are neither useful nor pleasant). Actually, by another rule of personal discipline I do not make appointments (other than lectures) except the very same morning, as a date on the calendar makes me feel like a prisoner, but that’s another story.
The other problem is that of misunderstanding the nonlinearity of natural resources, or anything particularly scarce and vital. Economists have the so-called law of scarcity, by which things increase in value according to the demand for them—but they ignore the consequences of nonlinearities on risk. My former thesis director, Hélyette Geman, and I are currently studying a “law of convexity” that makes commodities, particularly vital ones, even dearer than previously thought.
Do not cross a river if it is on average four feet deep.

You have just been informed that your grandmother will spend the next two hours at the very desirable average temperature of seventy degrees Fahrenheit (about twenty-one degrees Celsius). Excellent, you think, since seventy degrees is the optimal temperature for grandmothers. Since you went to business school, you are a “big picture” type of person and are satisfied with the summary information. But there is a second piece of data. Your grandmother, it turns out, will spend the first hour at zero degrees Fahrenheit (around minus eighteen Celsius), and the second hour at one hundred and forty degrees (around 60°C), for an average of the very desirable Mediterranean-style seventy degrees (21°C). So it looks as though you will most certainly end up with no grandmother, a funeral, and, possibly, an inheritance. Clearly, temperature changes become more and more harmful as they deviate from seventy degrees. As you see, the second piece of information, the variability, turned out to be more important than the first. The notion of average is of no significance when one is fragile to variations—the dispersion in possible thermal outcomes here matters much more. Your grandmother is fragile to variations of temperature, to the volatility of the weather. Let us call that second piece of information the second-order effect, or, more precisely, the convexity effect.
The number of cars is the something, a variable; traffic time is the function of something. The behavior of the function is such that it is, as we said, “not the same thing.” We can see here that the function of something becomes different from the something under nonlinearities.

(a) The more nonlinear the function, the more the function of something divorces itself from the something. If traffic were linear, then there would be no difference in travel time between the two following situations: 90,000, then 110,000 cars on the one hand, or 100,000 cars both times on the other.

(b) The more volatile the something—the more uncertainty—the more the function divorces itself from the something. Let us consider the average number of cars again. The function (travel time) depends more on the volatility around the average. Things degrade if there is unevenness of distribution: for the same average you would prefer to have 100,000 cars in both time periods; 80,000 then 120,000 would be even worse than 90,000 then 110,000.

(c) If the function is convex (antifragile), then the average of the function of something is going to be higher than the function of the average of something. And the reverse when the function is concave (fragile).
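A minimal numerical sketch of points (a) through (c), using an invented convex travel-time curve in which time blows up as the road nears an assumed capacity of 150,000 cars; the functional form is for illustration only:

```python
def travel_time(cars: int) -> float:
    """Minutes of travel as an assumed convex function of road load."""
    capacity = 150_000
    return 30.0 / (1.0 - cars / capacity)

even = travel_time(100_000)                                # same load both periods
mild = (travel_time(90_000) + travel_time(110_000)) / 2    # mild unevenness
wild = (travel_time(80_000) + travel_time(120_000)) / 2    # wilder unevenness

print(f"{even:.0f} {mild:.0f} {wild:.0f}")
# 90 94 107: the average load is identical in all three cases,
# yet travel time worsens as the dispersion around that average grows
```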
As an example for (c), which is a more complicated version of the bias, assume that the function under question is the squaring function (multiply a number by itself). This is a convex function. Take a conventional die (six sides) and consider a payoff equal to the number it lands on, that is, you get paid a number equivalent to what the die shows—1 if it lands on 1, 2 if it lands on 2, up to 6 if it lands on 6. The square of the expected (average) payoff is then ((1+2+3+4+5+6)/6)² = 3.5², here 12.25. So the function of the average equals 12.25. But the average of the function is as follows. Take the square of every payoff, (1²+2²+3²+4²+5²+6²)/6, that is, the average square payoff, and you can see that the average of the function equals 15.17. So, since squaring is a convex function, the average of the square payoff is higher than the square of the average payoff. The difference here between 15.17 and 12.25 is what I call the hidden benefit of antifragility—here, a 24 percent “edge.” There are two biases: one elementary convexity effect, leading to mistaking the properties of the average of something (here 3.5) and those of a (convex) function of something (here 15.17), and the second, more involved, in mistaking an average of a function for the function of an average, here 15.17 for 12.25. The latter represents optionality. Someone with a linear payoff needs to be right more than 50 percent of the time. Someone with a convex payoff, much less. The hidden benefit of antifragility is that you can guess worse than random and still end up outperforming. Here lies the power of optionality—your function of something is very convex, so you can be wrong and still do fine—the more uncertainty, the better. This explains my statement that you can be dumb and antifragile and still do very well. This hidden “convexity bias” comes from a mathematical property called Jensen’s inequality. This is what the common discourse on innovation is missing. If you ignore the convexity bias, you are missing a chunk of what makes the nonlinear world go round. And it is a fact that such an idea is missing from the discourse. Sorry.
Let us take the same example as before, using as the function the square root (the exact inverse of squaring, which is concave, but much less concave than the square function is convex). The square root of the expected (average) payoff is then √((1+2+3+4+5+6)/6) = √3.5, here 1.87. The function of the average equals 1.87. But the average of the function is as follows. Take the square root of every payoff, (√1+√2+√3+√4+√5+√6)/6, that is, the average square root payoff, and you can see that the average of the function equals 1.80. The difference is called the “negative convexity bias” (or, if you are a stickler, “concavity bias”). The hidden harm of fragility is that you need to be much, much better than random in your prediction and knowing where you are going, just to offset the negative effect. Let me summarize the argument: if you have favorable asymmetries, or positive convexity, options being a special case, then…
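Both dice computations are quick to check; a minimal verification in Python:

```python
from statistics import mean

die = [1, 2, 3, 4, 5, 6]

# Convex function (squaring): the average of the function exceeds
# the function of the average — the hidden benefit of antifragility.
print(mean(die) ** 2)               # 12.25   (function of the average)
print(mean(x ** 2 for x in die))    # ≈ 15.17 (average of the function)

# Concave function (square root): the average of the function falls
# below the function of the average — the hidden harm of fragility.
print(mean(die) ** 0.5)             # ≈ 1.87  (function of the average)
print(mean(x ** 0.5 for x in die))  # ≈ 1.80  (average of the function)
```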
I am simplifying a bit. There may be a few degrees’ variation around 70 at which the grandmother might be better off than just at 70, but I skip this nuance here. In fact younger humans are antifragile to thermal variations, up to a point, benefiting from some variability, then losing such antifragility with age (or disuse, as I suspect that thermal comfort ages people and makes them fragile).
The grandmother does better at 70 degrees Fahrenheit than at an average of 70 degrees with one hour at 0, another at 140 degrees. The more dispersion around the average, the more harm for her. Let us see the counterintuitive effect in terms of x and function of x, f(x). Let us write the health of the grandmother as f(x), with x the temperature. We have a function of the average temperature, f{(0 + 140)/2}, showing the grandmother in excellent shape. But {f(0) + f(140)}/2 leaves us with a dead grandmother at f(0) and a dead grandmother at f(140), for an “average” of a dead grandmother. We can see an explanation of the statement that the properties of f(x) and those of x become divorced from each other when f(x) is nonlinear. The average of f(x) is different from f(average of x).
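In code, with an invented concave “health” function peaking at 70 degrees (the exact shape is an assumption for illustration; only its concavity matters):

```python
def health(temp_f: float) -> float:
    """Grandmother's condition: 100 = excellent shape, 0 = dead.
    An assumed concave function of temperature, peaking at 70°F."""
    return max(0.0, 100.0 - (temp_f - 70.0) ** 2 / 10.0)

print(health((0 + 140) / 2))            # f(average of x) = 100: excellent shape
print((health(0) + health(140)) / 2)    # average of f(x) = 0: a dead grandmother
```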
So the central tenet of the epistemology I advocate is as follows: we know a lot more what is wrong than what is right, or, phrased according to the fragile/robust classification, negative knowledge (what is wrong, what does not work) is more robust to error than positive knowledge (what is right, what works). So knowledge grows by subtraction much more than by addition—given that what we know today might turn out to be wrong but what we know to be wrong cannot turn out to be right, at least not easily. If I spot a black swan (not capitalized), I can be quite certain that the statement “all swans are white” is wrong. But even if I have never seen a black swan, I can never hold such a statement to be true. Rephrasing it again: since one small observation can disprove a statement, while millions can hardly confirm it, disconfirmation is more rigorous than confirmation. This idea has been associated in our times with the philosopher Karl Popper, and I quite mistakenly thought that he was its originator (though he is at the origin of an even more potent idea on the fundamental inability to predict the course of history). The notion, it turned out, is vastly more ancient, and was one of the central tenets of the skeptical-empirical school of medicine of the postclassical era in the Eastern Mediterranean. It was well known to a group of nineteenth-century French scholars who rediscovered these works. And this idea of the power of disconfirmation permeates the way we do hard science.
As you can see, we can link this to the general tableaus of positive (additive) and negative (subtractive): negative knowledge is more robust. But it is not perfect. Popper has been criticized by philosophers for his treatment of disconfirmation as hard, unequivocal, black-and-white. It is not clear-cut: it is impossible to figure out whether an experiment failed to produce the intended results—hence “falsifying” the theory—because of the failure of the tools, because of bad luck, or because of fraud by the scientist. Say you saw a black swan. That would certainly invalidate the idea that all swans are white. But what if you had been drinking Lebanese wine, or hallucinating from spending too much time on the Web? What if it was a dark night, in which all swans look gray? Let us say that, in general, failure (and disconfirmation) are more informative than success and confirmation, which is why I claim that negative knowledge is just “more robust.”
And, as expected, via negativa is part of classical wisdom. For the Arab scholar and religious leader Ali Bin Abi-Taleb (no relation), keeping one’s distance from an ignorant person is equivalent to keeping company with a wise man. Finally, consider this modernized version in a saying from Steve Jobs: “People think focus means saying yes to the thing you’ve got to focus on. But that’s not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I’m actually as proud of the things we haven’t done as the things I have done. Innovation is saying no to 1,000 things.”
When it comes to health care, Ezekiel Emanuel showed that half the population accounts for less than 3 percent of the costs, with the sickest 10 percent consuming 64 percent of the total pie. Bent Flyvbjerg (of Chapter 18) showed in his Black Swan management idea that the bulk of cost overruns by corporations are simply attributable to large technology projects—implying that that’s what we need to focus on instead of talking and talking and writing complicated papers.
I discovered that I had been intuitively using the less-is-more idea as an aid in decision making (contrary to the method of putting a series of pros and cons side by side on a computer screen). For instance, if you have more than one reason to do something (choose a doctor or veterinarian, hire a gardener or an employee, marry a person, go on a trip), just don’t do it. It does not mean that one reason is better than two, just that by invoking more than one reason you are trying to convince yourself to do something. Obvious decisions (robust to error) require no more than a single reason. Likewise the French army had a heuristic to reject excuses for absenteeism when more than one reason was given, like death of grandmother, cold virus, and being bitten by a boar. If someone attacks a book or idea using more than one argument, you know it is not real: nobody says “he is a criminal, he killed many people, and he also has bad table manners and bad breath and is a very poor driver.” I have often followed what I call Bergson’s razor: “A philosopher should be known for one single idea, not more” (I can’t source it to Bergson, but the rule is good enough). The French essayist and poet Paul Valéry once asked Einstein if he carried a notebook to write down ideas. “I never have ideas” was the reply (in fact he just did not have chickens***t ideas). So, a heuristic: if someone has a long bio, I skip him—at a conference a friend invited me to have lunch with an overachieving hotshot whose résumé “can cover more than two or three lives”; I skipped it to sit at a table with the trainees and stage engineers. Likewise when I am told that someone has three hundred academic papers and twenty-two honorary doctorates, but no other single compelling contribution or main idea behind it, I avoid him like the bubonic plague.
Next I present an application of the fooled by randomness effect. Information has a nasty property: it hides failures. Many people have been drawn to, say, financial markets after hearing success stories of someone getting rich in the stock market and building a large mansion across the street—but since failures are buried and we don’t hear about them, investors are led to overestimate their chances of success. The same applies to the writing of novels: we do not see the wonderful novels that are now completely out of print; we just think that, because the novels that have done well are well written (whatever that means), what is well written will do well. So we confuse the necessary and the causal: because all surviving technologies have some obvious benefits, we are led to believe that all technologies offering obvious benefits will survive. I will leave the discussion of what impenetrable property may help survival to the section on Empedocles’ dog. But note here the mental bias that causes people to believe in the “power of” some technology and its ability to run the world.
So with so many technologically driven and modernistic items—skis, cars, computers, computer programs—it seems that we notice differences between versions rather than commonalities. We even rapidly tire of what we have, continuously searching for versions 2.0 and similar iterations. And after that, another “improved” reincarnation. These impulses to buy new things that will eventually lose their novelty, particularly when compared to newer things, are called treadmill effects. As the reader can see, they arise from the same generator of biases as the one about the salience of variations mentioned in the section before: we notice differences and become dissatisfied with some items and some classes of goods. This treadmill effect has been investigated by Danny Kahneman and his peers when they studied the psychology of what they call hedonic states. People acquire a new item, feel more satisfied after an initial boost, then rapidly revert to their baseline of well-being. So, when you “upgrade,” you feel a boost of satisfaction with changes in technology. But then you get used to it and start hunting for the new new thing.
Simple, quite simple decision rules and heuristics emerge from this chapter. Via negativa, of course (by removal of the unnatural): only resort to medical techniques when the health payoff is very large (say, saving a life) and visibly exceeds its potential harm, such as incontrovertibly needed surgery or lifesaving medicine (penicillin). It is the same as with government intervention. This is squarely Thalesian, not Aristotelian (that is, decision making based on payoffs, not knowledge). For in these cases medicine has positive asymmetries—convexity effects—and the outcome will be less likely to produce fragility. Otherwise, in situations in which the benefits of a particular medicine, procedure, or nutritional or lifestyle modification appear small—say, those aiming for comfort—we have a large potential sucker problem (hence putting us on the wrong side of convexity effects). Actually, one of the unintended side benefits of the theorems that Raphael Douady and I developed in our paper mapping risk detection techniques (in Chapter 19) is an exact link between (a) nonlinearity in exposure or dose-response and (b) potential fragility or antifragility.
I also extend the problem to epistemological grounds and make rules for what should be considered evidence: as with whether a cup should be considered half-empty or half-full, there are situations in which we focus on absence of evidence, others in which we focus on evidence. In some cases one can be confirmatory, in others not—it depends on the risks. Take smoking, which was, at some stage, viewed as bringing small gains in pleasure and even health (truly, people thought it was a good thing). It took decades for its harm to become visible. Yet had someone questioned it, he would have faced the canned-naive-academized and faux-expert response “do you have evidence that this is harmful?” (the same type of response as “is there evidence that polluting is harmful?”). As usual, the solution is simple, an extension of via negativa and Fat Tony’s don’t-be-a-sucker rule: the non-natural needs to prove its benefits, not the natural—according to the statistical principle outlined earlier that nature is to be considered much less of a sucker than humans. In a complex domain, only time—a long time—is evidence. For any decision, the unknown will preponderate on one side more than the other. The “do you have evidence” fallacy, mistaking no evidence of harm for evidence of no harm, is similar to the one of misinterpreting NED (no evidence of disease) as evidence of no disease. This is the same error as mistaking absence of evidence for evidence of absence, the one that tends to affect smart and educated people, as if education made people more confirmatory in their responses and more liable to fall into simple logical errors. And recall that under nonlinearities, the simple statements “harmful” or “beneficial” break down: it is all in the dosage.
In the emergency room, the doctor and staff insisted that I should “ice” my nose, meaning apply an ice-cold patch to it. In the middle of the pain, it hit me that the swelling that Mother Nature gave me was most certainly not directly caused by the trauma. It was my own body’s response to the injury. It seemed to me that it was an insult to Mother Nature to override her programmed reactions unless we had a good reason to do so, backed by proper empirical testing to show that we humans can do better; the burden of evidence falls on us humans. So I mumbled to the emergency room doctor, asking whether he had any statistical evidence of benefits from applying ice to my nose or whether it resulted from a naive version of interventionism. His response was: “You have a nose the size of Cleveland and you are now interested in … numbers?” I recall developing from his blurry remarks the thought that he had no answer. Effectively, he had no answer, because as soon as I got to a computer, I was able to confirm that there is no compelling empirical evidence in favor of the reduction of swelling. At least, not outside of the very rare cases in which the swelling would threaten the patient, which was clearly not the case. It was pure sucker-rationalism in the mind of doctors, following what made sense to boundedly intelligent humans, coupled with interventionism, this need to do something, this defect of thinking that we knew better, and denigration of the unobserved. This defect is not limited to our control of swelling: this confabulation plagues the entire history of medicine, along with, of course, many other fields of practice. The researchers Paul Meehl and Robin Dawes pioneered a tradition of cataloging the tension between “clinical” and actuarial (that is, statistical) knowledge, and of examining how many things believed to be true by professionals and clinicians aren’t so and don’t match empirical evidence. The problem is of course that these researchers did not have a clear idea of where the burden of empirical evidence lies (the difference between naive or pseudo empiricism and rigorous empiricism)—the onus is on the doctors to show us why reducing fever is good, why eating breakfast before engaging in activity is healthy (there is no evidence), or why bleeding patients is the best alternative (they’ve stopped doing so). Sometimes I get the answer that they have no clue when they have to utter defensively “I am a doctor” or “are you a doctor?” But worse, I sometimes get letters of support and sympathy from the alternative medicine fellows, which makes me go postal: the approach in this book is ultra-orthodox, ultra-rigorous, and ultra-scientific, certainly not in favor of alternative medicine.
And there is a simple statistical reason that explains why we have not been able to find drugs that make us feel unconditionally better when we are well (or unconditionally stronger, etc.): nature would have been likely to find this magic pill by itself. But consider that illness is rare, and the more ill the person the less likely nature would have found the solution by itself, in an accelerating way. A condition that is, say, three units of deviation away from the norm is more than three hundred times rarer than normal; an illness that is five units of deviation from the norm is more than a million times rarer!
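As a rough check (a minimal sketch assuming a Gaussian bell curve for the trait in question, which if anything understates rarity under fat tails), the odds can be computed directly from the two-sided tail of the normal distribution:

    import math

    def odds_against(k):
        """Odds against landing k or more standard deviations from the norm
        (two-sided Gaussian tail, via the complementary error function)."""
        tail = math.erfc(k / math.sqrt(2))  # P(|Z| > k)
        return 1 / tail

    print(f"3 deviations: about 1 in {odds_against(3):,.0f}")  # ~1 in 370
    print(f"5 deviations: about 1 in {odds_against(5):,.0f}")  # ~1 in 1,700,000

Each additional unit of deviation multiplies the rarity enormously, which is the “accelerating” part: nature has had correspondingly fewer trials on which to stumble on a remedy.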
Further, pharmaceutical companies are under financial pressures to find diseases and satisfy the security analysts. They have been scraping the bottom of the barrel, looking for disease among healthier and healthier people, lobbying for reclassifications of conditions, and fine-tuning sales tricks to get doctors to overprescribe. Now, if your blood pressure is in the upper part of the range that used to be called “normal,” you are no longer “normotensive” but “pre-hypertensive,” even if there are no symptoms in view. There is nothing wrong with the classification if it leads to a healthier lifestyle and robust via negativa measures—but what is behind such classification, often, is a drive for more medication. I am not against the function and mission of pharma, but rather against its business practice: pharmaceutical companies should focus, for their own benefit, on extreme diseases, not on reclassifications or on pressuring doctors to prescribe medicines. Indeed, pharma plays on the interventionism of doctors.
We are necessarily antifragile to some dose of radiation—at naturally found levels. It may be that small doses prevent injuries and cancers coming from larger ones, as the body develops some kind of immunity. And, talking about radiation, few wonder why, after hundreds of millions of years of having our skins exposed to sun rays, we suddenly need so much protection from them—is it that our exposure is more harmful than before because of changes in the atmosphere, or because populations live in an environment mismatching the pigmentation of their skin—or rather, that makers of sun protection products need to make some profits?
The list of such attempts to outsmart nature driven by naive rationalism is long—always meant to “improve” things—with continuous first-order learning, that is, banning the offending drug or medical procedure but not figuring out that we could be making the mistake again, elsewhere. Statins. Statin drugs are meant to lower cholesterol in your blood. But there is an asymmetry, and a severe one. One needs to treat fifty high-risk persons for five years to avoid a single cardiovascular event. Statins can potentially harm people who are not very sick, for whom the benefits are either minimal or totally nonexistent. We will not be able to get an evidence-based picture of the hidden harm in the short term (we need years for that—remember smoking) and, further, the arguments currently made in favor of the routine administration of these drugs often rest on a few statistical illusions or even manipulation (the experiments used by drug companies seem to play on nonlinearities and bundle the very ill and the less ill, in addition to assuming that the metric “cholesterol” equates 100 percent with health). Statins fail, in their application, the first principle of iatrogenics (unseen harm); further, they certainly do lower cholesterol, but as a human your objective function is not to lower a certain metric to get a grade to pass a school-like test, but to get in better health. Further, it is not certain whether these indicators people try to lower are causes or manifestations that correlate to a condition—just as muzzling a baby would certainly prevent him from crying but would not remove the cause of his emotions. Metric-lowering drugs are particularly vicious because of a legal complexity. The doctor has the incentive to prescribe them because, should the patient have a heart attack, he would be sued for negligence; but the error in the opposite direction is not penalized at all, as side effects do not readily appear as being caused by the medicine. The same problem of naive interpretation mixed with intervention bias applies to cancer detection: there is a marked bias in favor of treatment, even when it brings more harm, because the legal system favors intervention.
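The asymmetry is easy to put in numbers (a minimal sketch: the number needed to treat comes from the passage above, while the side-effect rate is a deliberately hypothetical placeholder, since the whole point is that the hidden harm is not yet measurable):

    # Number needed to treat (NNT): fifty high-risk patients over five years
    # for one cardiovascular event avoided.
    nnt = 50
    p_benefit = 1 / nnt              # 2% chance a given treated patient benefits

    # Hypothetical hidden side-effect rate, for illustration only:
    p_harm = 0.05
    print(f"chance a treated patient benefits: {p_benefit:.0%}")
    print(f"harmed per helped at p_harm=5%:    {p_harm / p_benefit:.1f} to 1")

All fifty bear whatever hidden harm exists; only one collects the visible benefit: the wrong side of the asymmetry for the mildly ill.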
Antibiotics. Every time you take an antibiotic, you help, to some degree, the mutation of germs into antibiotic-resistant strains. Add to that the toying with your immune system. You transfer the antifragility from your body to the germ. The solution, of course, is to do it only when the benefits are large. Hygiene, or excessive hygiene, has the same effect, particularly when people clean their hands with chemicals after every social exposure.
Here are some verified and potential examples of iatrogenics (in terms of larger downside outside of very ill patients, whether such downside has been verified or not):
Vioxx, the anti-inflammatory medicine with delayed heart problems as side effects.
Antidepressants (used beyond the necessary cases).
Bariatric surgery (in place of starvation of overweight diabetic patients).
Cortisone.
Disinfectants, cleaning products potentially giving rise to autoimmune diseases.
Hormone replacement therapy.
Hysterectomies.
Cesarean births beyond the strictly necessary.
Ear tubes in babies as an immediate response to ear infection.
Lobotomies.
Iron supplementation.
Whitening of rice and wheat—it was considered progress.
The sunscreen creams suspected to cause harm.
Hygiene (beyond a certain point, hygiene may make you fragile by denying hormesis—our own antifragility). We ingest probiotics because we don’t eat enough “dirt” anymore.
Lysol and other disinfectants killing so many “germs” that kids’ developing immune systems are robbed of necessary workout (or robbed of the “good” friendly germs and parasites).
Dental hygiene: I wonder if brushing our teeth with toothpaste full of chemical substances is not mostly to generate profits for the toothpaste industry—the brush is natural, the toothpaste might just be there to counter the abnormal products we consume, such as starches, sugars, and high-fructose corn syrup. Speaking of which, high-fructose corn syrup was the result of neomania, financed by a Nixon administration in love with technology and victim of some urge to subsidize corn farmers.
Insulin injections for Type II diabetics, based on the assumption that the harm from diabetes comes from blood sugar, not insulin resistance (or something else associated with it).
Soy milk.
Cow milk for people of Mediterranean and Asian descent.
Heroin, the most dangerously addictive substance one can imagine, developed as a morphine substitute for cough suppression and believed not to have morphine’s addictive side effects.
Psychiatry, particularly child psychiatry—but I guess I don’t need to convince anyone about its dangers.
I stop here.
Again, my statements here are risk-management-based: if the person is very ill, there are no iatrogenics to worry about. So it is the…
The cases I have been discussing so far are easy to understand, but some applications are far more subtle. For instance, counter to “what makes sense” at a primitive level, there is no clear evidence that sugar-free sweetened drinks make you lose weight in accordance with the calories saved. But it took thirty years of confusing the biology of millions of people for us to start asking such questions. Somehow those recommending these drinks are under the impression—a naive translation from thermodynamics—that the notion that we gain weight from calories is sufficient for further analysis. This would certainly be true in thermodynamics, as in a simple machine responding to energy without feedback, say, a car that burns fuel. But the reasoning does not hold in an informational dimension in which food is not just a source of energy; it conveys information about the environment (like stressors). The ingestion of food combined with one’s activity brings about hormonal cascades (or something similar that conveys information),…
I was in a gym in Barcelona next to the senior partner of a consulting firm, a profession grounded in building narratives and naive rationalization. Like many people who have lost weight, the fellow was eager to talk about it—it is easier to talk about weight loss theories than to stick to them. The fellow told me that he did not believe in such diets as the low-carbohydrate Atkins or Dukan diet, until he was told of the mechanism of “insulin,” which convinced him to embark on the regimen. He then lost thirty pounds—he had to wait for a theory before taking any action. That was in spite of the empirical evidence showing people losing one hundred pounds by avoiding carbohydrates, without changing their total food intake—just the composition! Now, being the exact opposite of the consultant, I believe that “insulin” as a cause is a fragile theory but that the phenomenology, the empirical effect, is real. Let me introduce the ideas of the postclassical school of the skeptical empiricists. We are built to be dupes for theories. But theories come and go; experience stays. Explanations change all the time, and have changed all the time in history (because of causal opacity, the invisibility of causes) with people involved in the incremental development of ideas thinking they always had a definitive theory; experience remains constant.
An attribution problem arises when the person imputes his positive results to his own skills and his failures to luck. Nicocles, as early as the fourth century B.C., asserted that doctors claimed responsibility for success and blamed failure on nature, or on some external cause. The very same idea was rediscovered by psychologists some twenty-four centuries later, and applied to stockbrokers, doctors, and managers of companies.
In addition, we now know that the craze against fats and the “fat free” slogans result from an elementary mistake in interpreting the results of a regression: when two variables are jointly responsible for an effect (here, carbohydrates and fat), a regression can wrongly assign sole responsibility to one of them. Many fell into the error of attributing problems arising under joint consumption of fat and carbohydrates to fat rather than carbohydrates. Further, the great statistician and debunker of statistical misinterpretation David Freedman showed (very convincingly) with a coauthor that the link everyone is obsessing about between salt and blood pressure has no statistical basis. It may exist for some hypertensive people, but it is more likely the exception than the rule.
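A minimal simulation makes the mechanism concrete (all numbers are placeholders, not dietary claims; the only assumption is that fat and carbohydrate intake move together in real diets, so a regression on one variable inherits the effect of the other):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Intakes that move together, as in typical diets:
    carbs = rng.normal(50, 10, n)
    fat = 0.8 * carbs + rng.normal(0, 6, n)

    # Hypothetical ground truth: the effect is driven by carbohydrates alone.
    weight_gain = 0.5 * carbs + rng.normal(0, 5, n)

    # Regressing on fat alone "finds" fat solely responsible...
    X1 = np.column_stack([np.ones(n), fat])
    b1 = np.linalg.lstsq(X1, weight_gain, rcond=None)[0]
    print(f"fat alone: {b1[1]:.2f}")  # strongly positive (~0.4)

    # ...while the joint regression reassigns the effect to carbohydrates.
    X2 = np.column_stack([np.ones(n), fat, carbs])
    b2 = np.linalg.lstsq(X2, weight_gain, rcond=None)[0]
    print(f"jointly -> fat: {b2[1]:.2f}, carbs: {b2[2]:.2f}")  # fat ~0, carbs ~0.5

Run alone, fat shows “sole responsibility”; include both variables and its coefficient collapses to zero.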
Life expectancy has increased (conditional on no nuclear war) because of the combination of many factors: sanitation, penicillin, a drop in crime, life-saving surgery, and of course, some medical practitioners operating in severe life-threatening situations. If we live longer, it is thanks to medicine’s benefits in cases that are lethal, in which the condition is severe—hence low iatrogenics; as we saw, the convex cases. So it is a serious error to infer that, because we live longer thanks to medicine, all medical treatments make us live longer.
Now I speculate the following, having looked closely at data with my friend Spyros Makridakis, a statistician and decision scientist whom we introduced a few chapters ago as the first to find flaws in statistical forecasting methods. We estimated that cutting medical expenditures by a certain amount (while limiting the cuts to elective surgeries and treatments) would extend people’s lives in most rich countries, especially the United States. Why? Simple basic convexity analysis; a simple examination of conditional iatrogenics: the error of treating the mildly ill puts them in a concave position. And it looks as if we know very well how to do this. Just raise the hurdle of medical intervention in favor of cases that are most severe, for which the iatrogenics effect is very small. It may even be better to increase expenditures on these and reduce those on elective treatments. In other words, reason backward, starting from the iatrogenics to the cure, rather than the other way around. Whenever possible, replace the doctor with human antifragility. But otherwise don’t be shy with aggressive treatments.
So there are many hidden jewels in via negativa applied to medicine. For instance, telling people not to smoke seems to be the greatest medical contribution of the last sixty years. Druin Burch, in Taking the Medicine, writes: “The harmful effects of smoking are roughly equivalent to the combined good ones of every medical intervention developed since the war.… Getting rid of smoking provides more benefit than being able to cure people of every possible type of cancer.”
From a scientific perspective, it seems that the only way we may manage to extend people’s lives is through caloric restriction—which seems to cure many ailments in humans and extend lives in laboratory animals. But, as we will see in the next section, such restriction does not need to be permanent—just an occasional (but painful) fast might do. We know we can cure many cases of diabetes by putting people on a very strict starvation-style diet, shocking their system—in fact the mechanism must have been known heuristically for a long time, since there are institutes and sanatoria for curative starvation in Siberia. It has been shown that many people benefit from the removal of products that did not exist in their ancestral habitat: sugars and other carbohydrates in unnatural format, wheat products (those with celiac disease, but almost all of us are somewhat ill-adapted to this new addition to the human diet), milk and other cow products (for those of non–Northern European origin who did not develop lactose tolerance), sodas (both diet and regular), wine (for those of Asian origin who do not have the history of exposure), vitamin pills, food supplements, the family doctor, headache medicine and other painkillers. Reliance on painkillers encourages people to avoid addressing the cause of the headache with trial and error, which can be sleep deprivation, tension in the neck, or bad stressors—it allows them to keep destroying themselves in a Procrustean-bed-style life. But one does not have to go far: just start removing the medications that your doctor gave you, or, preferably, remove your doctor—as Oliver Wendell Holmes Sr. put it, “if all the medications were dumped in the sea, it would be better for mankind but worse for the fishes.” My father, an oncologist (who also did research in anthropology), raised me under that maxim (alas, while not completely following it in practice; he cited it enough, though).
I, for my part, resist eating fruits not found in the ancient Eastern Mediterranean (I use “I” here in order to show that I am not narrowly generalizing to the rest of humanity). I avoid any fruit that does not have an ancient Greek or Hebrew name, such as mangoes, papayas, even oranges. Oranges seem to be the postmedieval equivalent of candy; they did not exist in the ancient Mediterranean. Apparently, the Portuguese found a sweet citrus tree in Goa or elsewhere and started breeding it for sweeter and sweeter fruits, like a modern confectionery company. Even the apples we see in the stores are to be regarded with some suspicion: original apples were devoid of sweet taste, and fruit corporations bred them for maximal sweetness—the mountain apples of my childhood were acid, bitter, crunchy, and much smaller than the shiny variety in U.S. stores said to keep the doctor away. As to liquid, my rule is to drink no liquid that is not at least a thousand years old—so its fitness has been tested. I drink just wine, water, and coffee. No soft drinks. Perhaps the most deceitfully noxious drink is the orange juice we make poor innocent people imbibe at the breakfast table while, thanks to marketing, we convince them it is “healthy.” (Aside from the point that the citrus our ancestors ingested was not sweet, they never ingested carbohydrates without large, very large quantities of fiber. Eating an orange or an apple is not biologically equivalent to drinking orange or apple juice.) From such examples, I derived the rule that what is called “healthy” is generally unhealthy, just as “social” networks are antisocial, and the “knowledge”-based economy is typically ignorant.
I would add that, in my own experience, a considerable jump in my personal health has been achieved by removing offensive irritants: the morning newspapers (the mere mention of the names of the fragilista journalists Thomas Friedman or Paul Krugman can lead to explosive bouts of unrequited anger on my part), the boss, the daily commute, air-conditioning (though not heating), television, emails from documentary…
Note that medical iatrogenics is the result of wealth and sophistication rather than poverty and artlessness, and of course the product of partial knowledge rather than ignorance. So this idea of shedding possessions to go to the desert can be quite potent as a via negativa–style subtractive strategy. Few have considered that money has its own iatrogenics, and that separating some people from their fortune would simplify their lives and bring great benefits in the form of healthy stressors. So being poorer might not be completely devoid of benefits if one does it right. We need modern civilization for many things, such as the legal system and emergency room surgery. But just imagine how, by the subtractive perspective, via negativa, we can be better off by getting tougher: no sunscreen, no sunglasses if you have brown eyes, no air-conditioning, no orange juice (just water), no smooth surfaces, no soft drinks, no complicated pills, no loud music, no elevator, no juicer, no … I stop.
I wonder how people can accept that the stressors of exercise are good for you, but fail to transfer the point to food deprivation, which can have the same effect. But scientists are in the process of discovering the effects of episodic deprivation of some, or all, foods. Somehow, evidence shows, we get sharper and fitter in response to the stress of the constraint. We can look at biological studies not to generalize or use in the rationalistic sense, but to verify the existence of a human response to hunger: that biological mechanisms are activated by food deprivation. And we have experiments on cohorts showing the positive effect of hunger—or deprivation of a food group—on the human body. Researchers now rationalize with the mechanism of autophagy (eating oneself): the theory is that, when deprived of external sources, your cells start eating themselves, or breaking down proteins and recombining amino acids to provide material for building other cells. It is assumed by some researchers (for now) that the “vacuum cleaner” effect of autophagy is the key to longevity—though my ideas of the natural are impervious to their theories: as I will show further down, occasional starvation produces some health benefits and that’s that. The response to hunger, our antifragility, has been underestimated. We’ve been telling people to eat a good meal for breakfast so they can face the travails of the day. And it is not a new theory by empirically blind modern-day nutritionists—for instance I was struck by a dialogue in Stendhal’s monumental novel Le rouge et le noir in which the protagonist, Julien Sorel, is told “the work for the day will be long and rough, so let us fortify ourselves with a breakfast” (which in the French of the period was called “the first lunch”). Indeed, the idea of breakfast as a main meal with cereals and other such materials has been progressively shown to be harming humans—I wonder why it took so long before anyone realized that such an unnatural idea needed to be tested; further, the tests show harm or, at least, no benefit derived from breakfast unless one has worked for it beforehand. Let us remember that we are not designed to receive foods from the delivery person. In nature, we had to expend some energy to eat. Lions hunt to eat; they don’t eat their meal and then hunt for pleasure. Giving people food before they expend energy would certainly confuse their signaling process. And we have ample evidence that intermittently (and only intermittently) depriving organisms of food engenders beneficial effects on many functions—Valter Longo, for instance, noted that prisoners in concentration camps got less sick in the first phase of food restriction, then broke down later. He then tested the effect experimentally and found that mice, in the initial phases of starvation, can withstand high doses of chemotherapy without visible side effects. Scientists use the narrative that starvation causes the expression of a…
Some people claim that we need more fat than carbohydrates; others offer the opposite (they all tend to agree on protein, though few realize we need to randomize protein intake). Both sides still advocate nonrandomness in the mixing and ignore the nonlinearities from sequence and composition. The principal disease of abundance can be seen in habituation and jadedness (what biologists currently call dulling of receptors); Seneca: “To a sick person, honey tastes better.”
(To repeat: forecasts induce risk taking; they are more toxic to us than any other form of human pollution.)
So counter to the entire idea of the intellectual and commentator as a detached and protected member of society, I am stating here that I find it profoundly unethical to talk without doing, without exposure to harm, without having one’s skin in the game, without having something at risk. You express your opinion; it can hurt others (who rely on it), yet you incur no liability. Is this fair?
The psychologist Gerd Gigerenzer has a simple heuristic. Never ask the doctor what you should do. Ask him what he would do if he were in your place. You would be surprised at the difference.
Another blatant case of insulation. Sometimes the divorce between one’s “tawk” and one’s life can be overtly and convincingly visible: take people who want others to live a certain way but don’t really like it for themselves. Never listen to a leftist who does not give away his fortune or does not live the exact lifestyle he wants others to follow. What the French call “the caviar left,” la gauche caviar, or what Anglo-Saxons call champagne socialists, are people who advocate socialism, sometimes even communism, or some political system with sumptuary limitations, while overtly leading a lavish lifestyle, often financed by inheritance—not realizing the contradiction that they want others to avoid just such a lifestyle. It is not too different from the womanizing popes, such as John XII, or the Borgias. The contradiction can exceed the ludicrous, as with French president François Mitterrand, who, coming in on a socialist platform, emulated the pomp of French monarchs. Even more ironic, his traditional archenemy, the conservative General de Gaulle, led a life of old-style austerity and had his wife sew his socks.
I developed a friendship over the past few years with the activist Ralph Nader and saw contrasting attributes. Aside from an astonishing amount of personal courage and total indifference toward smear campaigns, he exhibits absolutely no divorce between what he preaches and his lifestyle, none. Just like saints who have soul in their game. The man is a secular saint.
A blatant manifestation of the agency problem is the following. There is a difference between a manager running a company that is not his own and an owner-operated business in which the manager does not need to report numbers to anyone but himself, and for which he has a downside. Corporate managers have incentives without disincentives—something the general public doesn’t quite get, as they have the illusion that managers are properly “incentivized.” Somehow these managers have been given free options by innocent savers and investors. I am concerned here with managers of businesses that are not owner-operated.
Now consider companies like Coke or Pepsi, which I assume are, as the reader is poring over these lines, still in existence—which is unfortunate. What business are they in? Selling you sugary water or substitutes for sugar, putting into your body stuff that messes up your biological signaling system, causing diabetes and making diabetes vendors rich thanks to their compensatory drugs. Large corporations certainly can’t make money selling you tap water and cannot produce wine (wine seems to be the best argument in favor of the artisanal economy). But they dress their products up with a huge marketing apparatus, with images that fool the drinker and slogans such as “125 years of providing happiness” or some such. I fail to see why the arguments we’ve used against tobacco firms don’t apply—to some extent—to all other large companies that try to sell us things that may make us ill.
The mechanism of cheapest-to-deliver-for-a-given-specification pervades whatever you see on the shelves. Corporations, when they sell you what they call cheese, have an incentive to provide you with the cheapest-to-produce piece of rubber containing the appropriate ingredients that can still be called cheese—and do their homework by studying how to fool your taste buds. Actually, it is more than just an incentive: they are structurally designed and extremely expert at delivering the cheapest possible product that meets their specifications. The same with, say, business books: publishers and authors want to grab your attention and put in your hands the most perishable journalistic item available that still can be called a book. This is optimization at work, in maximizing (image and packaging) or minimizing (costs and efforts).
There is a phenomenon called the treadmill effect, similar to what we saw with neomania: you need to make more and more to stay in the same place. Greed is antifragile—though not its victims. Back to the sucker problem in believing that wealth makes people more independent. We need no more evidence for it than what is taking place now: recall that we have never been richer in the history of mankind. And we have never been more in debt (for the ancients, someone in debt was not free, he was in bondage). So much for “economic growth.” At the local level, it looks like we get socialized in a certain milieu, hence exposed to a treadmill. You do better, move to Greenwich, Connecticut, then become a pauper next to a twenty-million-dollar mansion and million-dollar birthday parties. And you become more and more dependent on your job, particularly as your neighbors get big tax-sponsored Wall Street bonuses.
The glass is dead; living things are long volatility. The best way to verify that you are alive is by checking if you like variations. Remember that food would not have a taste if it weren’t for hunger; results are meaningless without effort, joy without sadness, convictions without uncertainty, and an ethical life isn’t so when stripped of personal risks.
Iatrogenics: Harm done by the healer, as when the doctor’s interventions do more harm than good. Generalized Iatrogenics: By extension, applies to the harmful side effects of actions by policy makers and activities of academics.
Hormesis: A bit of a harmful substance, or stressor, in the right dose or with the right intensity, stimulates the organism and makes it better, stronger, healthier, and prepared for a stronger dose the next exposure. (Think of bones and karate.)
Doxastic Commitment, or “Soul in the Game”: You must only believe predictions and opinions from those who have committed themselves to a certain belief and who have something to lose—who stand to pay a cost for being wrong.
Lindy Effect: A technology, or anything nonperishable, increases in life expectancy with every day of its life—unlike perishable items (such as humans, cats, dogs, and tomatoes). So a book that has been a hundred years in print is likely to stay in print another hundred years.
TECHNICAL VERSION OF FAT TONY’S “NOT THE SAME ‘TING,’ ” OR THE CONFLATION OF EVENTS AND EXPOSURE TO EVENTS

This note will also explain a “convex transformation.” f(x) is exposure to the variable x. f(x) can equivalently be called “payoff from x,” “exposure to x,” even “utility of payoff from x” where we introduce in f a utility function. x can be anything.

Example: x is the intensity of an earthquake on some scale in some specific area, f(x) is the number of persons dying from it. We can easily see that f(x) can be made more predictable than x (if we force people to stay away from a specific area or build to some standards, etc.).

Example: x is the number of meters of my fall to the ground when someone pushes me from height x, f(x) is a measure of my physical condition from the effect of the fall. Clearly I cannot predict x (who will push me); I can, however, work on f(x).

Example: x is the number of cars in NYC at noon tomorrow, f(x) is travel time from point A to point B for a certain agent. f(x) can be made more predictable than x (take the subway, or, even better, walk).

Some people talk about f(x) thinking they are talking about x. This is the problem of the conflation of event and exposure. This error, present in Aristotle, is virtually ubiquitous in the philosophy of probability (say, Hacking). One can become antifragile to x without understanding x, through convexity of f(x). The answer to the question “what do you do in a world you don’t understand?” is, simply, work on the undesirable states of f(x). It is often easier to modify f(x) than to get better knowledge of x. (In other words, robustification rather than forecasting Black Swans.)

Example: If I buy insurance against the market, here x, dropping more than 20 percent, f(x) will be independent of the part of the probability distribution of x that is below 20 percent and impervious to changes in its scale parameter. (This is an example of a barbell.)
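A minimal numerical sketch of that last example (the 20 percent strike is from the note; the fat-tailed market simulation and the 2 percent premium are illustrative assumptions):

    import random

    random.seed(1)

    def x_market():
        """Illustrative fat-tailed return: mostly quiet, with rare crashes."""
        return random.gauss(0.0, 0.1) - (0.9 if random.random() < 0.01 else 0.0)

    def f_insured(x, strike=-0.20, premium=0.02):
        """Exposure with insurance (a put) struck at -20%: losses below the
        strike are passed to the insurer; the premium is paid up front."""
        return max(x, strike) - premium

    draws = [x_market() for _ in range(100_000)]
    print(f"worst raw x:        {min(draws):.2f}")  # can fall far below -0.22
    print(f"worst insured f(x): {min(f_insured(x) for x in draws):.2f}")  # floored at -0.22

Whatever the left tail of x turns out to be, f(x) cannot lose more than 22 percent: the exposure has been robustified without any improvement in our knowledge of x.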
We use the following deficit example owing to the way calculations by governments and government agencies currently miss convexity terms (and have a hard time accepting it). Really, they don’t take them into account. The example illustrates: (a) missing the stochastic character of a variable known to affect the model but deemed deterministic (and fixed), and (b) missing that B, the function of such a variable, is convex or concave with respect to that variable.

Say a government estimates unemployment for the next three years as averaging 9 percent; it uses its econometric models to issue a forecast balance B of a two-hundred-billion deficit in the local currency. But it misses (like almost everything in economics) that unemployment is a stochastic variable. Unemployment over a three-year period has fluctuated by 1 percent (one percentage point) on average. We can calculate the effect of the error with the following:

Unemployment at 8%: Balance B(8%) = −75 bn (improvement of 125 bn)
Unemployment at 9%: Balance B(9%) = −200 bn
Unemployment at 10%: Balance B(10%) = −550 bn (worsening of 350 bn)

The concavity bias, or negative convexity bias, from underestimation of the deficit is −112.5 bn, since ½ {B(8%) + B(10%)} = −312.5 bn, not −200 bn. This is the exact case of the inverse philosopher’s stone.
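The arithmetic, spelled out with the numbers above:

    # Balance B (in bn of local currency) under the three unemployment scenarios:
    B = {8: -75, 9: -200, 10: -550}

    point_forecast = B[9]          # deterministic model: plug in the 9% average
    expected = (B[8] + B[10]) / 2  # unemployment at 8% or 10%, equal odds

    # Concavity (negative convexity) bias missed by the deterministic forecast:
    print(f"expected balance: {expected} bn")       # -312.5 bn
    print(f"bias: {expected - point_forecast} bn")  # -112.5 bn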
Corporate Finance: In short, corporate finance seems to be based on point projections, not distributional projections; thus if one perturbates cash flow projections, say, in the Gordon valuation model, replacing the fixed—and known—growth (and other parameters) by continuously varying jumps (particularly under fat-tailed distributions), companies deemed “expensive,” or those with high growth but low earnings, could markedly increase in expected value, something the market prices heuristically but without explicit reason.
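A minimal sketch of the effect (the dividend, discount rate, and the ±2 percent perturbation are illustrative assumptions, not figures from the text): the Gordon valuation P = D/(r − g) is convex in the growth rate g, so making g stochastic raises the expected value.

    def gordon(d=1.0, r=0.08, g=0.04):
        """Gordon model: value of a dividend d growing at rate g, discounted at r."""
        assert g < r
        return d / (r - g)

    # Point projection with fixed growth:
    print(f"P at g = 4%:           {gordon(g=0.04):.1f}")  # 25.0

    # Same average growth, but perturbated: g is 2% or 6% with equal odds.
    expected = 0.5 * gordon(g=0.02) + 0.5 * gordon(g=0.06)
    print(f"E[P] with g = 4% ± 2%: {expected:.1f}")        # 33.3

Jensen’s inequality does the work: since 1/(r − g) is convex in g, volatility in growth adds to, rather than subtracts from, expected value, one reason point projections can underprice high-growth names.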