Good Calories, Bad Calories
Gary Taubes

Ended: April 26, 2013

Dietary cholesterol, for instance, has an insignificant effect on blood cholesterol. It might elevate cholesterol levels in a small percentage of highly sensitive individuals, but for most of us, it’s clinically meaningless.*5 Nonetheless, the advice to eat less cholesterol—avoiding egg yolks, for instance—remains gospel. Telling people they should worry about cholesterol in their blood but not in their diet has been deemed too confusing.
Kinsell and Edward “Pete” Ahrens of Rockefeller University then demonstrated that the crucial factor in controlling cholesterol was not whether the fat was from an animal or a vegetable, but its degree of “saturation,” as well as what’s known as the chain length of the fats. This saturation factor is a measure of whether or not the molecules of fat—known as triglycerides—contain what can be considered a full quotient of hydrogen atoms, as they do in saturated fats, which tend to raise cholesterol, or whether one or more are absent, as is the case with unsaturated fats, which tend, in comparison, to lower it. This kind of nutritional wisdom is now taught in high school, along with the erroneous idea that all animal fats are “bad” saturated fats, and all “good” unsaturated fats are found in vegetables and maybe fish.
The reality is that both animal and vegetable fats and oils are composed of many different kinds of fats, each with its own chain length and degree of saturation, and each with a different effect on cholesterol. Half of the fat in beef, for instance, is unsaturated, and most of that fat is the same monounsaturated fat as in olive oil. Lard is 60 percent unsaturated; most of the fat in chicken fat is unsaturated as well.
“The method of science is the method of bold conjectures and ingenious and severe attempts to refute them,” said Karl Popper, the dean of the philosophy of science. Popper also noted that an infinite number of possible wrong conjectures exist for every one that happens to be right. This is why the practice of science requires an exquisite balance between a fierce ambition to discover the truth and a ruthless skepticism toward your own work. This, too, is the ideal, albeit not the reality, of research in medicine and public health.
By 1980, this link between cancer and low cholesterol was appearing in study after study. The most consistent association was between colon cancer and low cholesterol in men. In the Framingham Study, those men whose total cholesterol levels were below 190 mg/dl were more than three times as likely to get colon cancer as those men with cholesterol greater than 220; they were almost twice as likely to contract any kind of cancer as those with cholesterol over 280 mg/dl. This finding was met with “surprise and chagrin,” Manning Feinleib, a National Heart, Lung, and Blood Institute (NHLBI) epidemiologist, told Science.
In reality, those who repudiate a theory that they had once proposed, or a theory that they had accepted enthusiastically and with which they had identified themselves, are very rare. The great majority of them shut their ears so as not to hear the crying facts, and shut their eyes so as not to see the glaring facts, in order to remain faithful to their theories in spite of all and everything. MAURICE ARTHUS, Philosophy of Scientific Investigation, 1921
Wishful science eventually devolves to the point where it is kept alive simply by the natural reluctance of its advocates to recognize or acknowledge error, rather than compelling evidence that it is right. “These are cases where there is no dishonesty involved,” explained the Nobel Prize–winning chemist Irving Langmuir in a celebrated 1953 lecture, “but where people are tricked into false results by a lack of understanding about what human beings can do to themselves in the way of being led astray by subjective effects, wishful thinking or threshold interactions.” Whereas good science would blossom over time, Langmuir noted, this “pathological science” would not. The most concise statement of this philosophy may be an unwritten rule of experimental physics credited originally to Wolfgang Panofsky, a former Manhattan Project physicist and presidential science adviser. “If you throw money at an effect and it doesn’t get larger,” Panofsky said, “that means it is not real.”
In 1964, when the physicist Richard Feynman presented what would become a renowned series of lectures at Cornell University, he observed that it was a natural condition of scientists to be biased or prejudiced toward their beliefs. That bias, Feynman said, would ultimately make no difference, “because if your bias is wrong a perpetual accumulation of experiments will perpetually annoy you until they cannot be disregarded any longer.” They could be disregarded, he said, only if “you are absolutely sure ahead of time” what the answer must be. In the case of Keys’s hypothesis, the annoying evidence was consistently disregarded from the beginning. Because the totality of evidence was defined as only those data that confirmed the hypothesis, Keys’s hypothesis would always appear monolithic. Annoying observations could not force a reanalysis of the underlying assumptions, because each of those observations would be discarded immediately as being inconsistent with the totality of the evidence. This was a self-fulfilling phenomenon. It was unlikely, however, to lead to reliable knowledge about either the cause of heart disease or the routes to prevention. It did not mean the hypothesis was false, but its truth could never be established, either.
Despite the century of debate in the United Kingdom on the merits of white flour and wholemeal flour and the potential evils of sugar, it wouldn’t be until the 1990s that epidemiologists began to distinguish between refined and unrefined carbohydrates in their dietary analyses. Even in 1989, when the National Academy of Sciences published its seven-hundred-page Diet and Health report, the authors made little attempt to differentiate refined carbohydrates from unrefined, other than occasionally to note studies in which sugar intake by itself was studied. When Keys linked the low-fat, high-carbohydrate diet of the Japanese in the late 1950s to the extremely low incidence of heart disease, he paid no attention to sugar consumption. Fat consumption in Japan was extremely low, as were heart-disease rates, and so he concluded that the lower the fat the better. But the consumption of sugars in Japan was very low, too—less than forty pounds per person per year in 1963, and still under fifty pounds in 1980—equivalent to the yearly per-capita consumption recorded in the United States or in the United Kingdom a century earlier.
In 1975, Richard Doll and Bruce Armstrong published a seminal analysis of diet and cancer, in which they noted that, the higher the sugar intake in different nations, the higher both the incidence of and mortality from cancer of the colon, rectum, breast, ovary, uterus, prostate, kidney, nervous system, and testicles.*35 Still, epidemiologists fixated on the fat-cancer hypothesis and made no attempt to measure the refined carbohydrates in the populations they studied. As a result, a joint 1997 report of the World Cancer Research Fund and the American Institute for Cancer Research, entitled Food, Nutrition and the Prevention of Cancer, said this: “The degree to which starch is refined in diets, particularly when the intake of starch is high, may itself be an important factor in cancer risk, as may the volume of refined starches and sugars in diets. Epidemiological studies have not, however, generally distinguished between degrees of refining or processing of starches, and there are, as yet, no reliable epidemiological data specifically on the effects of refining on cancer risk.”
It would be easy, Cleave suggested, to gather together the twenty fattest people in any neighborhood and find that “they wouldn’t have a sweet tooth among them, and they wouldn’t like sugar”—they would all be beer drinkers. “Beer is full of malt sugar and enormously fattening,” he said.
Through the 1960s, Yudkin published the results of a series of experiments implicating sugar in heart disease. He fed sugar and starch to rats, mice, chickens, rabbits, and pigs, and reported that the sugar, depending on the particular animal involved, raised some combination of cholesterol, triglycerides, and insulin levels. Triglycerides are a form of fat molecule found in the blood, and a series of researchers beginning with Pete Ahrens at Rockefeller University and Margaret Albrink of Yale had suggested that triglyceride levels were a better predictor of heart disease than was cholesterol. (Diabetics, as Joslin had noted, all too often died of atherosclerosis, and they, too, inevitably had high levels of triglycerides.) Yudkin also fed high-sugar diets to college students and reported that it raised their cholesterol and particularly their triglycerides; their insulin levels rose, and their blood cells became stickier, which he believed could explain the blood clots that seemed to precipitate heart attacks.
As in the Tokelau study, the dominant approach over the past fifty years toward understanding the chronic diseases of civilization has been to assume that they are only coincidentally related, that each disease has its unique causal factors associated with the Western diet and lifestyle, although dietary fat, saturated fat, serum cholesterol, and excess weight invariably remain prime suspects. The less common approach to this synchronicity of diseases has been to assume, as Peter Cleave did, that related diseases have related or common causes; that they are manifestations of a single underlying disorder. Cleave called it the saccharine disease because he believed sugar and other refined carbohydrates were responsible. By this philosophy, if diabetes, coronary heart disease, obesity, gout, and hypertension appear simultaneously in populations, as they did in the Tokelauan experience, and are frequently found together in the same patients, then they are very likely to be manifestations of a single underlying pathology. If nothing else, Cleave argued, this common-cause hypothesis was the simplest possible explanation for the evidence, and thus the one that should be presumed true until compelling evidence refuted it. This was Occam’s razor, and it should be the guiding principle of all scientific endeavors.
Only in the late 1980s did they begin to come together, when the Stanford diabetologist Gerald Reaven proposed the name Syndrome X to describe the metabolic abnormalities common to obesity, diabetes, and heart disease, all, at the very least, exacerbated by the consumption of sugar, flour, and other easily digestible carbohydrates. Syndrome X included elevated levels of the blood fats known as triglycerides; low levels of HDL cholesterol, now known as the good cholesterol; hypertension; and three phenomena that are considered precursors of adult-onset diabetes—chronically high levels of insulin (hyperinsulinemia), a condition known as insulin resistance (a relative insensitivity of cells to insulin), and the related condition of glucose intolerance (an inability to metabolize glucose properly). Over the years, other abnormalities have been added to this list: the presence of predominantly small, dense LDL particles, and high levels of a protein called fibrinogen that increases the likelihood of blood-clot formation. Elevated uric-acid concentrations in the blood, a precursor of gout, have been linked to Syndrome X, as has a state of chronic inflammation, marked by a high concentration in the blood of a protein known as C-reactive protein.
In the last decade, Syndrome X has taken on a variety of names as authorities, institutions, and associations have slowly come to accept its validity. It is often referred to as insulin resistance syndrome. The National Heart, Lung, and Blood Institute belatedly recognized the existence of Syndrome X in 2001, calling it metabolic syndrome. It has even been referred to as insulin resistance/metabolic syndrome X, or MSX, by those investigators attempting to cover all bases.*39 By any name, this metabolic syndrome is as much a disorder of carbohydrate metabolism as is adult-onset diabetes, and is certainly a consequence…
All other hormones, however, are secondary to the role of insulin in energy production, utilization, and storage. Historically, physicians have viewed insulin as though it has a single primary function: to remove and store away sugar from the blood after a meal. This is the most conspicuous function impaired in diabetes. But the roles of insulin are many and diverse. It is the primary regulator of fat, carbohydrate, and protein metabolism; it regulates the synthesis of a molecule called glycogen, the form in which glucose is stored in muscle tissue and the liver; it stimulates the synthesis and storage of fats in fat depots and in the liver, and it inhibits the release of that fat. Insulin also stimulates the synthesis of proteins and of molecules involved in the function, repair, and growth of cells, and even of RNA and DNA molecules, as well. Insulin, in short, is the one hormone that serves to coordinate and regulate everything having to do with the storage and use of nutrients and thus the maintenance of homeostasis and, in a word, life. It’s all these aspects of homeostatic regulatory systems—in particular, carbohydrate and fat metabolism, and kidney and liver functions—that are malfunctioning in the cluster of metabolic abnormalities associated with metabolic syndrome and with the chronic diseases of civilization. As metabolic syndrome implies, and as John Yudkin observed in 1986, both heart disease and diabetes are associated with a host of metabolic and hormonal abnormalities that go far beyond elevations in cholesterol levels and so, presumably, any possible effect of saturated fat in the diet.
Despite the intimate association of these diseases, public-health authorities for the past thirty years have insisted that salt is the dietary cause of hypertension and the increase in blood pressure that accompanies aging. Textbooks recommend salt reduction as the best way for diabetics to reduce or prevent hypertension, along with losing weight and exercising. This salt-hypertension hypothesis is nearly a century old. It is based on what medical investigators call biological plausibility—it makes sense and so seems obvious. When we consume salt—i.e., sodium chloride—our bodies maintain the concentration of sodium in our blood by retaining more water along with it. The kidneys should then respond to the excess by excreting salt into the urine, thus relieving both excess salt and water simultaneously. Still, in most individuals, a salt binge will result in a slight increase in blood pressure from the swelling of this water retention, and so it has always been easy to imagine that this rise could become chronic over time with continued consumption of a salt-rich diet. That’s the hypothesis. But in fact it has always been remarkably difficult to generate any reasonably unambiguous evidence that it’s correct. In 1967, Jeremiah Stamler described the evidence in support of the salt-hypertension connection as “inconclusive and contradictory.” He still called it “inconsistent and contradictory” sixteen years later, when he described his failure in an NIH-funded trial to confirm the hypothesis that salt consumption raises blood pressure in school-age children. The NIH has funded subsequent studies, but little progress has been made. The message conveyed to the public, nonetheless, is that salt is a nutritional evil—“the deadly white powder,” as Michael Jacobson of the Center for Science in the Public Interest called it in 1978. Systematic reviews of the evidence, whether published by those who believe that salt is responsible for hypertension or by those who don’t, have inevitably concluded that significant reductions in salt consumption—cutting our average salt intake in half, for instance, which is difficult to accomplish in the real world—will drop blood pressure by perhaps 4 to 5 mm Hg in hypertensives and 2 mm Hg in the rest of us. If we have hypertension, however, even if just stage 1, which is the less severe form of the condition, it means our systolic blood pressure is already elevated at least 20 mm Hg over what’s considered healthy. If we have stage 2 hypertension, our blood pressure is elevated by at least 40 mm Hg over healthy levels. So cutting our salt intake in half and decreasing our systolic blood pressure by 4 to 5 mm Hg makes little difference.
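To make the closing arithmetic concrete, here is a minimal sketch (Python, using only the figures quoted in the passage above) of how small the salt effect is relative to the elevation that defines hypertension:

```python
# Rough arithmetic from the passage above: how much of a hypertensive's excess
# systolic pressure would halving salt intake actually remove?
bp_drop = 5            # mm Hg, the generous end of the 4-5 mm Hg cited for hypertensives
excess_stage1 = 20     # mm Hg above healthy, the minimum elevation for stage 1
excess_stage2 = 40     # mm Hg above healthy, the minimum elevation for stage 2

for label, excess in [("stage 1", excess_stage1), ("stage 2", excess_stage2)]:
    print(f"{label}: halving salt removes at most {bp_drop / excess:.0%} of the excess pressure")
# stage 1: at most 25%; stage 2: at most 12% -- hence "makes little difference"
```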
The “remarkable sodium and water retaining effect of concentrated carbohydrate food,” as the University of Wisconsin endocrinologist Edward Gordon called it, was then explained physiologically in the mid-1960s by Walter Bloom, who was studying fasting as an obesity treatment at Atlanta’s Piedmont Hospital, where he was director of research. As Bloom reported in the Archives of Internal Medicine and The American Journal of Clinical Nutrition, the water lost on carbohydrate-restricted diets is caused by a reversal of the sodium retention that takes place routinely when we eat carbohydrates. Eating carbohydrates prompts the kidneys to hold on to salt, rather than excrete it. The body then retains extra water to keep the sodium concentration of the blood constant. So, rather than having water retention caused by taking in more sodium, which is what theoretically happens when we eat more salt, carbohydrates cause us to retain water by inhibiting the excretion of the sodium that is already there. Removing carbohydrates from the diet works, in effect, just like the antihypertensive drugs known as diuretics, which cause the kidneys to excrete sodium, and water along with it.
Since the late 1970s, investigators have demonstrated the existence of other hormonal mechanisms by which insulin raises blood pressure—in particular, by stimulating the nervous system and the same fight-or-flight response incited by adrenaline. This was first reported by Lewis Landsberg, an endocrinologist who was then at Harvard Medical School and would later become dean of the Northwestern University School of Medicine. Landsberg showed that, by stimulating the activity of the nervous system, insulin increases heart rate and constricts blood vessels, thereby raising blood pressure. The higher the insulin level, the greater the stimulation of the nervous system, Landsberg noted. If insulin levels remained high, so Landsberg’s research suggested, then the sympathetic nervous system would be constantly working to raise blood pressure. The heart-disease research community has paid attention to Landsberg’s work, but has considered it relevant only for the obese. Because obesity is associated with higher insulin levels, and because it’s now believed that obesity causes higher insulin levels (whereas obesity itself is allegedly caused by the consumption of excess calories of all types), any possible link to carbohydrate consumption or “carbohydrate overfeeding” is overlooked. Even Landsberg has concentrated almost exclusively on the obesity-insulin-hypertension connection and ignored the idea that the increase in insulin levels due to excessive carbohydrate consumption, or due to the consumption of refined and easily digestible carbohydrates, might have a similar effect.
The first (Chapter 9) covers the research that directly challenged the fundamental premise of Keys’s hypothesis that cholesterol itself is the critical component in heart disease, and instead implicated triglycerides and the kinds of molecules known as lipoproteins that carry cholesterol through the blood, both of which are effectively regulated by the carbohydrate content of the diet rather than saturated fat. The chapter then explains how this research, despite its refutation of the fat-cholesterol hypothesis, has been assimilated into it nonetheless. The second thread (Chapter 10) follows the evolution of the science of insulin resistance and hyperinsulinemia, the condition of having chronically elevated insulin levels, and how that emerged out of attempts to understand the intimate relationship of obesity, heart disease, and diabetes and led to the understanding of metabolic syndrome and the entire cluster of metabolic and hormonal abnormalities that it entails. The third (Chapter 11) discusses the implications of metabolic syndrome in relation to diabetes and the entire spectrum of diabetic complications. The fourth (Chapter 12) discusses table sugar and high-fructose corn syrup, in particular, and the research suggesting that they have negative health effects that are unique among refined carbohydrate foods. The last section of this history (Chapter 13) discusses how metabolic syndrome, and particularly high blood sugar, hyperinsulinemia, and insulin resistance, have physiological repercussions that can conceivably explain the appearance of even Alzheimer’s disease and cancer. Throughout these five chapters, the science will be more technical than has typically been the case in popular discussions of what we should eat and what we shouldn’t. I believe it is impossible, though, to make the argument that nutritionists for a half century oversimplified the science to the point of generating false ideas and erroneous deductions, without discussing the science at the level of complexity that it deserves.
Eventually, researchers came to identify these different classes of lipoproteins by their density. Of those that appeared to play obvious roles in heart disease, three in particular stood out even in the early 1950s. Two of these are familiar today: the low-density lipoproteins, known as LDL, the bad cholesterol, and the high-density variety, known as HDL, the good cholesterol. (This is an oversimplification, as I will explain shortly.) The third class is known as VLDL, which stands for “very low-density lipoproteins,” and these play a critical role in heart disease. Most of the triglycerides in the blood are carried in VLDL; much of the cholesterol is found in LDL. That LDL and HDL are the two species of lipoproteins that physicians now measure when we get a checkup is a result of the oversimplification of the science, not the physiological importance of the particles themselves.
“While it is true that, for certain individuals, the amount of dietary fat is an important factor,” Gofman explained, “it turns out that there are other more significant factors that need to be considered. Human metabolism is so regulated that factors other than the actual dietary intake of one of these constituents may determine the amount of that constituent that will circulate in the bloodstream. Indeed, important observations have been made which indicate that certain substances in the diet that are not fatty at all may still have the effect of increasing the concentration of the fat-bearing lipoprotein substances in the blood.”
Though Gofman’s studies had demonstrated that the amount of LDL in the blood can indeed be elevated by the consumption of saturated fats, it was carbohydrates, he reported, that elevated VLDL—containing some cholesterol and most of the triglycerides in the blood—and only by restricting carbohydrates could VLDL be lowered. This fact was absolutely critical to the dietary prevention of heart disease, Gofman said. If a physician put a patient with high cholesterol on a low-fat diet, that might lower the patient’s LDL, but it would raise VLDL. If LDL was abnormally elevated, then this low-fat diet might help, but what Gofman called the “carbohydrate factor” in these low-fat diets might raise VLDL so much that the diet would do more harm than good. Indeed, in Gofman’s experience, when LDL decreased, VLDL tended to rise disproportionately. And if VLDL was abnormally elevated to begin with, then prescribing a low-fat, high-carbohydrate diet would certainly increase the patient’s risk of heart disease.
First, Fredrickson, Levy, and Lees proposed a simplified classification of lipoproteins (perhaps an oversimplification, they acknowledged), which divided the lipoproteins in the bloodstream into four categories: LDL, which typically carried most of the cholesterol; VLDL, which carried most of the triglycerides; the high-density lipoproteins, HDL; and chylomicrons, which carry dietary fat from the intestine to the fat tissue. Then they proposed a classification scheme for disorders of lipoprotein metabolism, each delineated by a roman numeral, that included both those characterized by abnormally high amounts of LDL cholesterol, which they suggested might be ameliorated by low-fat diets, and those characterized by abnormally high triglycerides carried in VLDL, which would be ameliorated by low-carbohydrate diets. Four of the five lipoprotein disorders described in this series were characterized by abnormally elevated levels of triglycerides in the very low-density lipoproteins. For this reason, Fredrickson, Levy, and Lees also warned against the dangers of advocating low-fat diets for all patients, because these diets increased carbohydrate consumption and so would elevate triglycerides and VLDL even further. By far the most common of the five lipoprotein disorders was the one designated Type IV, characterized by elevated VLDL triglycerides—“sometimes considered synonymous with ‘carbohydrate-induced hyperlipemia,’” they wrote—and it had to be treated with a low-carbohydrate diet. “Patients with this syndrome,” Lees later wrote, “form a sizable fraction of the population suffering from coronary heart disease.”
In 1985, Scott Grundy and his colleague Fred Mattson provided what appeared to be the ideal compromise—a dietary means both to lower LDL cholesterol and to raise HDL cholesterol without consuming more carbohydrates or saturated fats. This was monounsaturated fats, such as the oleic acid found in olive oil, and it served to keep the focus on the fat in the diet, rather than the carbohydrates. In the 1950s, Keys had assumed that monounsaturated fats were neutral, because they had no effect on total cholesterol. But this apparent neutrality, as Grundy reported, was due to the ability of these fats simultaneously to raise HDL cholesterol and lower LDL cholesterol. Saturated fats raise both HDL and LDL cholesterol. Carbohydrates lower LDL cholesterol but also lower HDL. Grundy and Mattson’s discovery of the double-barreled effect of monounsaturated fats, and particularly oleic acid, reignited the popular interest in the Mediterranean diet as the ideal heart-healthy diet, though it seemed to be heart-healthy only in some Mediterranean regions and not in others, and such diets, as even Grundy conceded, had never been tested. When they finally were tested in two clinical trials in the 1990s—the Lyon Diet Heart Trial and an Italian study known as GISSI-Prevenzione—both supported the contention that the diet prevented heart attacks, but neither provided evidence that it did so by either raising HDL or lowering LDL, which was how it was now alleged to work.
The observation that monounsaturated fats both lower LDL cholesterol and raise HDL also came with an ironic twist: the principal fat in red meat, eggs, and bacon is not saturated fat, but the very same monounsaturated fat as in olive oil. The implications are almost impossible to believe after three decades of public-health recommendations suggesting that any red meat consumed should at least be lean, with any excess fat removed. Consider a porterhouse steak with a quarter-inch layer of fat. After broiling, this steak will reduce to almost equal parts fat and protein.*48 Fifty-one percent of the fat is monounsaturated, of which 90 percent is oleic acid. Saturated fat constitutes 45 percent of the total fat, but a third of that is stearic acid, which will increase HDL cholesterol while having no effect on LDL. (Stearic acid is metabolized in the body to oleic acid, according to Grundy’s research.) The remaining 4 percent of the fat is polyunsaturated, which lowers LDL cholesterol but has no meaningful effect on HDL. In sum, perhaps as much as 70 percent of the fat content of a porterhouse steak will improve the relative levels of LDL and HDL cholesterol, compared with what they would be if carbohydrates such as bread, potatoes, or pasta were consumed. The remaining 30 percent will raise LDL cholesterol but will also raise HDL cholesterol and will have an insignificant effect, if any, on the ratio of total cholesterol to HDL. All of this suggests that eating a porterhouse steak in lieu of bread or potatoes would actually reduce heart-disease risk, although virtually no nutritional authority will say so publicly. The same is true for lard and bacon.
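The “perhaps as much as 70 percent” figure can be tallied directly from the percentages quoted in this passage; a minimal sketch of that arithmetic, using only the numbers given above:

```python
# Tallying the porterhouse figures quoted above to see where ~70 percent comes from.
mono = 0.51                 # monounsaturated share of the fat (90% of it oleic acid)
saturated = 0.45            # saturated share of the fat
stearic = saturated / 3     # about a third of the saturated fat is stearic acid,
                            # which raises HDL while having no effect on LDL
poly = 0.04                 # polyunsaturated share, which lowers LDL

favorable = mono + stearic + poly   # fats that improve the relative LDL/HDL picture
remainder = saturated - stearic     # saturated fat that raises LDL and HDL alike

print(f"improves LDL/HDL relative to carbohydrate: {favorable:.0%}")  # ~70%
print(f"raises LDL and HDL alike:                  {remainder:.0%}")  # ~30%
```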
Our understanding of the nutritional causes of heart disease started with Keys’s original oversimplification that heart disease is caused by the effect of all dietary fat on total serum cholesterol. Total cholesterol gave way to HDL and LDL cholesterol and even triglycerides. All fat gave way to animal and vegetable fat, which gave way to saturated, monounsaturated, and polyunsaturated fat, and then polyunsaturated fats branched into omega-three and omega-six polyunsaturated fats. By the mid-1980s, these new levels of complexity had still not deterred the AHA and NIH…
In 1983, they reported that the disproportionate elevation in the apo B protein in heart-disease patients was due to a disproportionate elevation in the amount of the smallest and densest of the low-density lipoproteins. This explained what Krauss had set out to understand: why two people can have identical LDL-cholesterol levels and yet one develops atherosclerosis and coronary heart disease and the other doesn’t—why LDL cholesterol is only a marginal risk factor for heart disease. If we have low LDL cholesterol, but it’s packaged almost exclusively in small, dense LDL particles—the smaller balloons—that translates to a higher risk of heart disease. If we have high LDL cholesterol, but it’s packaged in a smaller number of large, fluffy LDL particles—the larger balloons—then our heart-disease risk is significantly lower. Small, dense LDL, simply because it is small and dense, appears to be more atherogenic, more likely to cause atherosclerosis. Small, dense LDL can squeeze more easily through damaged areas of the artery wall to form incipient atherosclerotic plaques. Sniderman describes small, dense LDL as the equivalent of “little bits of sand” that get in everywhere and stick more avidly. The relative dearth of cholesterol in these particles may also cause structural changes in the protein that make it easier for it to adhere to the artery wall to begin with. And because small, dense LDL apparently remains in the bloodstream longer than larger and fluffier LDL, it has more time and greater opportunities to do its damage. Finally, it’s possible that LDL has to be oxidized—the biological equivalent, literally, of rusting—before it can play a role in atherosclerosis, and the existing evidence suggests that small, dense LDL oxidizes more easily than the larger, fluffier variety.
The notion that carbohydrates determine the ultimate atherogenicity of lipoproteins is surprisingly easy to explain by the current understanding of fat-and-cholesterol transport. This model also accounts neatly for the observed relationship between heart disease, triglycerides, and cholesterol, and so constitutes another level of the physiological mechanisms underlying the carbohydrate hypothesis. The details are relatively straightforward, but, not surprisingly, they represent a radical shift from the mechanisms envisioned by Keys and others, in which coronary artery disease is caused by the simple process of saturated fat raising total-cholesterol or LDL-cholesterol levels. This is another way in which the subspecialization of medical researchers works against progress. For most epidemiologists, cardiologists, internists, nutritionists, and dieticians, their knowledge of lipoprotein metabolism dates to their medical or graduate-school training. Short of reading the latest biochemistry textbooks or the specialized journals devoted to this research, they have few available avenues (and little reason, as they see it) for keeping up-to-date, and so the current understanding of these metabolic processes escapes them. The details of lipoprotein metabolism circa 2007 remain a mystery to the great proportion of clinicians and investigators involved in the prevention of heart disease. One key fact to remember in this discussion is that LDL and LDL cholesterol are not one and the same. The LDL carries cholesterol, but the amount of cholesterol in each LDL particle will vary. Increasing the LDL cholesterol is not the same as increasing the number of LDL particles. There are two ways to increase the amount of cholesterol in LDL. One is to increase the amount of cholesterol secreted to begin with; the other is to decrease the rate of disposal of cholesterol once it’s been created (which is apparently what happens when we eat saturated fat). Either method will eventually result in elevated LDL cholesterol. Joseph Goldstein and Michael Brown worked out the details of the clearance-and-disposal mechanism in the 1970s, and this work won them the Nobel Prize.
As for secretion, the key point is that most low-density lipoproteins, LDL, begin their lives as very low-density lipoproteins, VLDL. (This was one implication of the observation that both LDL and VLDL are composed of the same apo B protein, and it was established beyond reasonable doubt in the 1970s.) This is why VLDL is now commonly referred to as a precursor of LDL, and LDL as a remnant of VLDL. If the liver synthesizes more cholesterol, we end up with more total cholesterol and so more LDL cholesterol, although apparently not more LDL particles. If the liver synthesizes and secretes more VLDL, we will also end up with more LDL cholesterol, but we will have more LDL particles as well, and they’ll be smaller and denser. This process is easier to understand if we picture what’s actually happening in the liver. After we eat a carbohydrate-rich meal, the bloodstream is flooded with glucose, and the liver takes some of this glucose and transforms it into fat—i.e., triglycerides—for temporary storage. These triglycerides are no more than droplets of oil. In the liver, the oil droplets are fused to the apo B protein and to the cholesterol that forms the outer membrane of the balloon. The triglycerides constitute the cargo that the lipoproteins drop off at tissues throughout the body. The combination of cholesterol and apo B is the delivery vehicle. The resulting lipoprotein has a very low density, and so is a VLDL particle, because the triglycerides are lighter than either the cholesterol or the apo B. (In the same way, the more air in the hold of a ship, the less dense the ship and the higher it floats in the water.) For this reason, the larger the initial oil droplet, the more triglycerides packaged in the lipoprotein, the lower its density. The liver then secretes this triglyceride-rich VLDL into the blood, and the VLDL sets about delivering its cargo of triglycerides around the body. Throughout this process, known poetically as the delipidation cascade, the lipoprotein gets progressively smaller and denser until it ends its life as a low-density lipoprotein—LDL. One result is that any factor that enhances the synthesis of VLDL will subsequently increase the number of LDL particles as well. As long as sufficient triglycerides remain in the lipoprotein to be deposited in tissues, this evolution to progressively smaller and denser LDL continues. It’s this journey from VLDL to LDL that explains why most men who have high LDL cholesterol will also have elevated VLDL triglycerides. “It’s the overproduction of VLDL and apo B that is the most common cause of high LDL in our society,” says Ernst Schaefer, director of the lipid-metabolism laboratory at the Jean Mayer USDA Human Nutrition Research Center on Aging at Tufts University. None of this, so far, is controversial; the details are described in recent editions of biochemistry textbooks.
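The density logic here is simply mass divided by volume: triglyceride is the light component, so the more of it a particle carries, the lower the particle’s overall density, and shedding that cargo makes the particle denser. A minimal sketch of that relationship, with illustrative compositions and rough component densities that I am assuming for the example (they are not figures from the book):

```python
# Illustrative only: why a lipoprotein gets denser as it gives up its triglyceride cargo.
# Component densities are rough assumed values for the sketch, not figures from the book.
RHO = {"triglyceride": 0.90, "cholesterol": 1.05, "protein": 1.35}  # g/mL, approximate

def particle_density(mass_fractions):
    """Density of a mixture: total mass / total volume (mass fractions sum to 1)."""
    volume = sum(frac / RHO[component] for component, frac in mass_fractions.items())
    return 1.0 / volume

# A triglyceride-rich particle fresh from the liver, and the same particle after most
# of its triglyceride cargo has been dropped off at tissues (illustrative compositions).
fresh    = {"triglyceride": 0.60, "cholesterol": 0.30, "protein": 0.10}
depleted = {"triglyceride": 0.10, "cholesterol": 0.60, "protein": 0.30}

print(f"triglyceride-rich particle: {particle_density(fresh):.3f} g/mL")     # ~0.97, very low density
print(f"after delivering its cargo: {particle_density(depleted):.3f} g/mL")  # ~1.11, denser
```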
How this process is regulated is less well established. In Krauss’s model, based on his own research and that of the Scottish lipid-metabolism researcher Chris Packard and others, the rate at which triglycerides accumulate in the liver controls the size of the oil droplet loaded onto the lipoprotein, and which of two pathways the lipoprotein then follows. If triglycerides are hard to come by, as would be the case with diets low in either calories or carbohydrates, then the oil droplets packaged with apo B and cholesterol will be small ones. The ensuing lipoproteins secreted by the liver will be of a subspecies known as intermediate-density lipoproteins—which are less dense than LDL but denser than VLDL—and these will end their lives as relatively large, fluffy LDL. The resulting risk of heart disease will be relatively low, because the liver had few triglycerides to dispose of initially. If the liver has to dispose of copious triglycerides, then the oil droplets are large, and the resulting lipoproteins put into the circulation will be triglyceride-rich and very low-density. These then progressively give up their triglycerides, eventually ending up, after a particularly extended life in the circulation, as the atherogenic small, dense LDL. This triglyceride-rich scenario would take place whenever carbohydrates are consumed in abundance. “I am now convinced it is the carbohydrate inducing this atherogenic [profile] in a reasonable…
total-cholesterol levels and apparent absence of heart disease. Such populations lived on carbohydrate-rich diets out of economic necessity rather than choice. Their diets were predominantly unrefined carbohydrates because that’s what they cultivated and it was all they could afford. As Ahrens had noted, the great proportion of individuals in such populations barely eked out enough calories to survive. This was true not only of Japan in the years after World War II, but of Greece and other areas of the Mediterranean as well. If these populations indeed had low cholesterol and suffered little from heart disease, a relative lack of calories and a near-complete absence of refined carbohydrates would have been responsible, not the low intake of saturated fat. In developed nations—the United States, for…
Even the diabetes community found it easier to accept Reaven’s science than its dietary implications. Reaven’s observations and data “speak for themselves,” as Robert Silverman of the NIH suggested at a 1986 consensus conference on diabetes prevention and treatment. But they placed nutritionists in an awkward position. “High protein levels can be bad for the kidneys,” said Silverman. “High fat is bad for your heart. Now Reaven is saying not to eat high carbohydrates. We have to eat something.” “Sometimes we wish it would go away,” Silverman added, “because nobody knows how to deal with it.” This is what psychologists call cognitive dissonance, or the tension that results from trying to hold two incompatible beliefs simultaneously. When the philosopher of science Thomas Kuhn discussed cognitive dissonance in scientific research—“the awareness of an anomaly in the fit between theory and nature”—he suggested that scientists will typically do what they have invariably done in the past in such cases: “They will devise numerous articulations and ad hoc modifications of their theory in order to eliminate any apparent conflict.” And that’s exactly what happened with metabolic syndrome and its dietary implications. The syndrome itself was accepted as real and important; the idea that it was caused or exacerbated by the excessive consumption of carbohydrates simply vanished.
In 1993, Krauss and Reaven together reported that small, dense LDL was another of the metabolic abnormalities commonly found in Reaven’s Syndrome X. Small, dense LDL, they noted, was associated with insulin resistance, hyperinsulinemia, high blood sugar, hypertension, and low HDL as well. They also reported that the two best predictors of the presence of insulin resistance and the dominance of small, dense LDL are triglycerides and HDL cholesterol—the higher the triglycerides and the lower the HDL, the more likely it is that both insulin resistance and small, dense LDL are present. This offers yet another reason to believe the carbohydrate hypothesis of heart disease, since metabolic syndrome is now considered perhaps the dominant heart-disease risk factor—a “coequal partner to cigarette smoking as contributors to premature [coronary heart disease],” as the National Cholesterol Education Program describes it—and both triglycerides and HDL cholesterol are influenced by carbohydrate consumption far more than by any fat. Nonetheless, when small, dense LDL and metabolic syndrome officially entered the orthodox wisdom as risk factors for heart disease in 2002, the cognitive dissonance was clearly present. First the National Cholesterol Education Program published its revised guidelines for cholesterol testing and treatment. This was followed in 2004 by two conference reports: one describing the conclusions of a joint NIH-AHA meeting on scientific issues related to metabolic syndrome, and the other, in which the American Diabetes Association joined in as well, describing joint treatment guidelines. Scott Grundy of the University of Texas was the primary author of all three documents. When I interviewed Grundy in May 2004, he acknowledged that metabolic syndrome was the cause of most heart disease in America, and that this syndrome is probably caused by the excessive consumption of refined carbohydrates. Yet his three reports—representing the official NIH, AHA, and ADA positions—all remained firmly wedded to the fat-cholesterol dogma. They acknowledge metabolic syndrome as an emerging risk factor for heart disease, but identify LDL cholesterol as “the primary driving force for coronary atherogenesis.” Thus, heart disease in America, as the National Cholesterol Education Program report put it, was still officially caused by “mass elevations of serum LDL cholesterol result[ing] from the habitual diet in the United States, particularly diets high in saturated fats and cholesterol.” There was no mention that carbohydrates might be responsible for causing or exacerbating either metabolic syndrome or the combination of low HDL, high triglycerides, and small, dense LDL, which is described as occurring “commonly in persons with premature [coronary heart disease].”*53 In the now established version of the alternative hypothesis—that metabolic syndrome leads to heart disease—the carbohydrates that had always been considered the causative agent had been officially…
DESPITE NEARLY A CENTURY’S WORTH OF therapeutic innovations, the likelihood of a diabetic’s contracting coronary artery disease is no less today than it was in 1921, when insulin was first discovered. Type 2 diabetics can still expect to die five to ten years prematurely, with much of this difference due to atherosclerosis and what Joslin’s Diabetes Mellitus has called an “extraordinarily high incidence” of coronary disease. Diabetes specialists have historically perceived this plague of atherosclerosis among their patients as though it has little relevance to the atherosclerosis and heart disease that affect the rest of us.
The extreme example of this thinking has been the assumption that saturated fat is the nutritional agent of heart disease in diabetics, just as it supposedly is in everyone else. “The frequent cardiovascular complications seen in past years among persons with diabetes,” the 1988 Surgeon General’s Report on Nutrition and Health says, are caused by the “traditional restriction of carbohydrate intake in persons with diabetes” and thus an increased intake of fat, “usually, saturated.” This was the logic that led the American Diabetes Association, from the early 1970s, to recommend that diabetics eat more carbohydrates rather than less, despite a complete absence of clinical trials that might demonstrate that the benefits of doing so outweigh the risks, and the decades of clinical experience establishing carbohydrate restriction as an effective method of controlling blood sugar. If atherosclerosis was accelerated in diabetics, the thinking went, it was accelerated because they ate more saturated fat than nondiabetics. Diabetologists believed they could safely prescribe a carbohydrate-rich diet to their patients, because a diet that is low in fat will be high in carbohydrates.
But the research on metabolic syndrome suggests an entirely different scenario. If the risk of heart disease is elevated in metabolic syndrome and elevated still further with diabetes, then maybe the flow of knowledge about heart disease should proceed from diabetics, who suffer the most extreme manifestation of the disease, to the rest of us, and not the other way around. Maybe diabetics have such extreme atherosclerosis because there is something about the diabetic condition that causes the disease. Perhaps the metabolic abnormalities of the diabetic condition are the essential cause of atherosclerosis and coronary heart disease in everyone; diabetics simply suffer to a greater extent.
Now let’s apply the same reasoning to metabolic syndrome and diabetes. Diabetics suffer more virulent atherosclerosis and die of heart disease more frequently than those with metabolic syndrome, and much more frequently than healthy individuals who manifest neither condition. Some aspect of the diabetic condition must be the cause—most likely, either high blood sugar, hyperinsulinemia, or insulin resistance, all three of which will tend to be worse in diabetics than in those with metabolic syndrome. Indeed, the existence of metabolic syndrome tells us that these same abnormalities exist in nondiabetics, although to a lesser extent, and though individuals with metabolic syndrome suffer an increased risk of heart disease, they do so to a lesser extent than diabetics. And because dietary carbohydrates and particularly refined carbohydrates elevate blood sugar and insulin and, presumably, induce insulin resistance, the implication is that eating these carbohydrates increases heart-disease risk not only in diabetics but in healthy individuals. By this reasoning, the atherogenic American diet is a carbohydrate-rich diet. Hence, cognitive dissonance. The logic of this argument has to be taken one step further, however, even if the cognitive dissonance is elevated with it. Both diabetes and metabolic syndrome are associated with an elevated incidence of virtually every chronic disease, not just heart disease. Moreover, the diabetic condition is associated with a host of chronic blood-vessel-related problems known as vascular complications: stroke, a stroke-related dementia called vascular dementia, kidney disease, blindness, nerve damage in the extremities, and atheromatous disease in the legs that often leads to amputation. One obvious possibility is that the same metabolic and hormonal abnormalities that characterize the diabetic condition—in particular, elevated blood sugar, hyperinsulinemia, and insulin resistance—may also cause these complications and the associated chronic diseases. And otherwise healthy individuals, therefore, would be expected to increase their risk of all these conditions by the consumption of refined and easily digestible carbohydrates, which inflict their damage first through their effects on blood sugar and insulin, and then, indirectly, through triglycerides, lipoproteins, fat accumulation, and assuredly other factors as well.
Among the immediate examples that follow from this logic is the particularly disconcerting possibility that insulin itself causes or exacerbates atherosclerosis. Since insulin resistance and hyperinsulinemia characterize Type 2 diabetes, it’s certainly possible that chronically elevated levels of insulin are the cause of the persistently high incidence of atherosclerosis in diabetics, quite aside from any other effects insulin might have on triglycerides, lipoproteins, or blood pressure. And if this is the case, then the excessive secretion of insulin—induced by the consumption of refined carbohydrates and sugars—might be responsible for causing or exacerbating atherosclerosis in those of us who are not diabetic.
This insulin-atherogenesis hypothesis is the simplest possible explanation for the intimate association of diabetes and atherosclerosis: the excessive secretion of insulin accelerates atherosclerosis and perhaps other vascular complications. It also implies, as Stout suggested, that any dietary factor—refined carbohydrates in particular—that increases insulin secretion will increase risk of heart disease. This did not, however, become the preferred explanation. Even Reaven chose to ignore it.*54 But Reaven’s hypothesis proposed that heart disease was caused primarily by insulin resistance through its influence on triglycerides. He considered hyperinsulinemia to be a secondary phenomenon. Stout considered hyperinsulinemia the primary cause of atherosclerosis.
One protein that seems particularly susceptible to glycation and cross-linking is collagen, which is a fundamental component of bones, cartilage, tendons, and skin. The collagen version of an AGE accumulates in the skin with age and, again, does so excessively in diabetics. This is why the skin of young diabetics will appear prematurely old, and why, as the Case Western Reserve University pathologist Robert Kohn first suggested, diabetes can be thought of as a form of accelerated aging, a notion that is slowly gaining acceptance. It’s the accumulation and cross-linking of this collagen version of AGEs that causes the loss of elasticity in the skin with age, as well as in joints, arteries, and the heart and lungs. The process can be compared to the toughening of leather. Both the meat and hide of an old animal are tougher and stiffer than those of a young animal, because of the AGE-related cross-linking that occurs inevitably with age. As Cerami explains, the aorta, the main artery running out of the heart, is an example of this stiffening effect of accumulated and cross-linked AGEs. “If you remove the aorta from someone who died young,” says Cerami, “you can blow it up like a balloon. It just expands. Let the air out, it goes back down. If you do that to the aorta from an old person, it’s like trying to inflate a pipe. It can’t be expanded. If you keep adding more pressure, it will just burst. That is part of the problem with diabetes, and aging in general. You end up with stiff tissue: stiffness of hearts, lungs, lenses, joints…. That’s all caused by sugars reacting with proteins.”
But there is no reason to believe that glucose-induced damage is limited only to diabetics, or to those with metabolic syndrome, in whom blood sugar is also chronically elevated. Glycation and oxidation accompany every fundamental process of cellular metabolism. They proceed continuously in all of us. Anything that raises blood sugar—in particular, the consumption of refined and easily digestible carbohydrates—will increase the generation of oxidants and free radicals; it will increase the rate of oxidative stress and glycation, and the formation and accumulation of advanced glycation end-products. This means that anything that raises blood sugar, by the logic of the carbohydrate hypothesis, will lead to more atherosclerosis and heart disease, more vascular disorders, and an accelerated pace of physical degeneration, even in those of us who never become diabetic.
Since the early 1980s, both simple and complex carbohydrates have been judged by their glycemic index, a measure of how quickly carbohydrates are digested and absorbed into the circulation and so converted into blood sugar. This concept of a glycemic index has had profound consequences for the official and public perception of the risks of starches and sugar in the diet. But it has done so by ignoring the effect of fructose—in sugar and high-fructose corn syrup—on anything other than its short-term ability to elevate blood sugar and elicit an insulin response.
Paradoxically, the glycemic index appears to have had its most significant influence not on the clinical management of diabetes but on the public perception of sugar itself. The key point is that the glycemic index of sucrose is lower than that of flour and starches—white bread and potatoes, for instance—and fructose is the reason why. The carbohydrates in starches are broken down upon digestion, first to maltose and then to glucose, which moves directly from the small intestine into the bloodstream. This leads immediately to an elevation of blood sugar, and so a high glycemic index. Table sugar, on the other hand—i.e., sucrose—is composed of both glucose and fructose. To be precise, a sucrose molecule is composed of a single glucose molecule bonded to a single fructose molecule. This bond is broken upon digestion. The glucose moves into the bloodstream and raises blood sugar, just as if it came from a starch, but the fructose can be metabolized only in the liver, and so most of the fructose consumed is channeled from the small intestine directly to the liver. As a result, fructose has little immediate effect on blood-sugar levels, and so only the glucose half of sugar is reflected in the glycemic index.
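If only the glucose half of sucrose promptly shows up as blood sugar, the arithmetic of its glycemic index follows directly. A minimal sketch, where the fructose and white-bread index values are commonly cited approximations I am assuming for illustration (they are not figures from the book):

```python
# Why sucrose scores lower on the glycemic index than starch, even though it is "simple":
# only its glucose half promptly raises blood sugar.
GI_GLUCOSE = 100      # the reference value, by definition
GI_FRUCTOSE = 20      # approximate; fructose barely raises blood sugar directly
GI_WHITE_BREAD = 70   # approximate; starch digests to maltose and then straight to glucose

# Sucrose is one glucose molecule bonded to one fructose molecule, so weight the halves equally.
gi_sucrose_estimate = 0.5 * GI_GLUCOSE + 0.5 * GI_FRUCTOSE
print(f"estimated glycemic index of sucrose: {gi_sucrose_estimate:.0f}")  # ~60
print(f"glycemic index of white bread:       {GI_WHITE_BREAD}")           # higher
```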
By defining carbohydrate foods as good or bad on the basis of their glycemic index, diabetologists and public-health authorities effectively misdiagnosed the impact of fructose on human health. The key is the influence of glucose or fructose not on blood sugar but on the liver. Glucose goes directly into the bloodstream and is taken up by tissues and organs to use as energy; only 30–40 percent passes through the liver. Fructose passes directly to the liver, where it is metabolized almost exclusively. As a result, fructose “constitutes a metabolic load targeted on the liver,” the Israeli diabetologist Eleazar Shafrir says, and the liver responds by converting it into triglycerides—fat—and then shipping it out on lipoproteins for storage. The more fructose in the diet, the higher the subsequent triglyceride levels in the blood.
As Peter Mayes has explained it, our bodies will gradually adapt to long-term consumption of high-fructose diets, and so the “pattern of fructose metabolism” will change over time. This is why, the more fructose in the diet and the longer the period of consumption, the greater the secretion of triglycerides by the liver. Moreover, fructose apparently blocks both the metabolism of glucose in the liver and the synthesis of glucose into glycogen, the form in which the liver stores glucose locally for later use. As a result, the pancreas secretes more insulin to overcome this glucose traffic-jam at the liver, and this in turn induces the muscles to compensate by becoming more insulin resistant. The research on this fructose-induced insulin resistance was done on laboratory animals, but it confirmed what Reiser at the USDA had observed in humans and published in 1981: given sufficient time, high-fructose diets can induce high insulin levels, high blood sugar, and insulin resistance, even though in the short term fructose has little effect on either blood sugar or insulin and so a very low glycemic index. It has also been known since the 1960s that fructose elevates blood pressure more than an equivalent amount of glucose does, a phenomenon called fructose-induced hypertension.
In the quarter-century since Doll and Peto published their analysis, it has been cited in nearly two thousand journal articles, and yet the fundamental implications have been largely lost. The two most important conclusions in their analysis were that man-made chemicals—in pollution, food additives, and occupational exposure—played a minimal role in human cancers, and that diet played the largest role—causing 35 percent of all cancers, though the uncertainties were considered so vast that the number could be as low as 10 percent or as high as 70 percent.
Yudkin had noted that the five nations with the highest breast-cancer mortality in women in the late 1970s (in descending order: the United Kingdom, the Netherlands, Ireland, Denmark, and Canada) had the highest sugar consumption (in descending order: the United Kingdom, the Netherlands, Ireland, Canada, and Denmark), and those with the lowest mortality rates (Japan, Yugoslavia, Portugal, Spain, and Italy) had the lowest sugar consumption (Japan, Portugal, Spain, Yugoslavia, and Italy).
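Taking the two rank orders in this passage at face value, the agreement between them can be quantified with a Spearman rank correlation. A minimal sketch using only the ranks quoted above; the resulting coefficient is my own calculation, not a figure reported by Yudkin or Taubes:

```python
# Spearman rank correlation between the breast-cancer-mortality ranking and the
# sugar-consumption ranking quoted above for the five highest-mortality nations.
def spearman_rho(rank_x, rank_y):
    n = len(rank_x)
    d_squared = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Order by mortality: UK, Netherlands, Ireland, Denmark, Canada (ranks 1-5).
# Order by sugar:     UK, Netherlands, Ireland, Canada, Denmark (Denmark and Canada swap).
mortality_rank = [1, 2, 3, 4, 5]
sugar_rank     = [1, 2, 3, 5, 4]

print(f"Spearman rho: {spearman_rho(mortality_rank, sugar_rank):.2f}")  # 0.90
```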
Those cancers apparently caused by diet or lifestyle and not related to tobacco use are either cancers of the gastrointestinal tract, including colon and rectal cancer, or cancers of what are technically known as endocrine-dependent organs—breast, uterus, ovaries, and prostate—the functions of which are regulated by hormones. This connection between these diet- and lifestyle-related cancers and hormones has been reinforced by the number of hormone-dependent factors linked to cancers of the breast and the endometrium (the lining of the uterus). All suggest that estrogen plays an important role. All these cancers, with the possible exception of pancreatic and prostate cancer, appear to increase in incidence with weight gain. These associations together imply both a metabolic and a hormonal connection between diet and cancer. This in turn led breast-cancer researchers to focus their attention on the likely possibility that obesity increases the incidence of breast cancer by increasing estrogen production.
Within this Darwinian environment, insulin provides fuel and growth signals to incipient cancer cells. Its more lethal effects, however, might come through the actions of insulin-like growth factor (IGF). Growth hormone itself is secreted by the pituitary gland and works throughout the body; IGF is secreted both by the liver and by tissues and cells throughout the body, and it then works locally, where concentrations are highest. Most tissues require at least two growth factors to grow at an optimal rate, and IGF is almost invariably one of the two, and perhaps the primary regulator. Insulin-like growth factor is sufficiently similar in structure to insulin that it can actually mimic its effects. IGF can stimulate muscle cells to take up blood sugar, just as insulin does, though not as well. Researchers now believe that IGF serves as the necessary intermediary between the growth hormone secreted by the pituitary gland, and the actual amount of food that is available to build new cells and tissues. If insufficient food is available, then IGF levels will stay low even if growth-hormone levels are high, and so cell and tissue growth will proceed slowly if at all. Add the necessary food and IGF levels increase, and so will the rate of growth. Unlike insulin, which responds immediately to the appearance of glucose in the bloodstream and so varies considerably from hour to hour, IGF concentrations in the circulation change only slowly over days or weeks, and thus better reflect the long-term availability of food in the environment.
At a 2003 meeting in London to discuss the latest work on IGF, researchers speculated that the development of cancerous cells and even benign tumors is a natural side effect of aging. What’s not natural is the progression of these cells and tumors to lethal malignancies. Such a transformation requires the chronically high levels of insulin and IGF induced by modern diets. This hypothesis is supported by epidemiological studies linking hyperinsulinemia and elevated levels of IGF to an increased risk of breast, prostate, colorectal, and endometrial cancer. This hypothesis, if not refuted, would constitute a significant shift in our understanding of the development of malignant cancer. It would mean that the decisive factor in malignant cancer is not the accumulation of genetic damage in cells, much of which is unavoidable, but how diets change the environment around cells and tissues to promote the survival, growth, and then metastasis of the cancer cells that do appear. “People were thinking a bit too much that diet could be a risk factor for cancer almost exclusively based on the idea that it contained carcinogenic substances,” explains Rudolf Kaaks, director of the Hormones and Cancer Group at the International Agency for Research on Cancer. “Now the idea is that there is a change in the endocrine and growth-factor environment of cells that pushes cells to proliferate further and grow more easily and skip the programmed cell-death events.”
The characteristics that all these long-lived organisms appear to share are reduced insulin resistance and abnormally low levels of blood sugar, insulin, and insulin-like growth factor. As a result, the current thinking is that a lifelong reduction in blood sugar, insulin, and IGF bestows a longer and healthier life. The reduction in blood sugar also leads to reduced oxidative stress and to a decrease in glycation (the haphazard binding of sugars to proteins) and in glycation end-products, with all the toxic sequelae that follow. The decrease in insulin and IGF also apparently bestows on the organism an enhanced ability to protect against oxidative stress and to ward off other pathogens.
The enigma of Type 2 diabetes, Neel observed, is that it bestows significant evolutionary disadvantages upon anyone who has it. Diabetic women are more likely to die in childbirth and more likely to have stillbirths than healthy women; their children are more likely to be diabetic than those of healthy women. This implies that any genes that might predispose someone to become diabetic would quickly be selected out of the population, but this did not seem to have happened. One way to reconcile these observations is to imagine a scenario in which having a genetic predisposition to become diabetic is advantageous in some circumstances. (In a similar way, having the gene for sickle-cell anemia, normally a disadvantage, provides protection against malaria, a major advantage in malarial areas, as Neel himself reported.)
The thrifty gene could be the answer only if diabetes was of long duration in the species—and there is no evidence of that. The disease seems to appear only after populations have access to sugar and other refined carbohydrates. In the Pima, diabetes appeared to be “a relatively recent phenomenon,” as Neel himself later noted. When Russell and Hrdlička discussed the health of the Pima in the early 1900s, they made no mention of diabetes, even while noting the presence of such “rare” diseases as lupus, epilepsy, and elephantiasis.*71 As late as 1940, when Elliott Joslin reviewed the medical records of the hospitals and physicians in Arizona, he concluded that the prevalence of diabetes was no higher among the Pima and other local tribes than anywhere else in the United States. Only in the 1950s, in studies from the Bureau of Indian Affairs, was there compelling reason to believe that diabetes had become common. When Neel tested adolescent Yanomamo for the condition known as glucose intolerance, which might indicate a predisposition to diabetes, he found none, so had no reason to believe that diabetes existed before such isolated populations began eating Western foods. The same was true of an isolated tribe of Pima, discovered living in the Sierra Madre Mountains of northern Mexico. “The high frequency of [Type 2 diabetes] in reservation Amerindians,” Neel later explained, “must predominantly reflect lifestyle changes.”
Even the type of fat found in animals and humans is regulated in a way that accommodates differing internal and external environments. The fat in our limbs, for instance, is less saturated than the fat around our organs, and so is less likely to stiffen in cold weather. We will also change the fatty-acid composition of our subcutaneous fat with temperature—the colder it gets, the more unsaturated the fats. This same phenomenon, independent of the type of fat consumed, has been observed in pigs, rats, and hibernators. Another example of the evolutionary specificity of fat deposits can be seen in those desert animals that do not store fat subcutaneously, as humans and most animals do, apparently because it would inhibit heat loss and cooling. So there are fat-rumped and fat-tailed sheep, and fat-tailed marsupial mice, all desert-dwellers that carry their fat almost exclusively in the so-named locations.
The fact that appetite and thus calories consumed will increase to compensate for physical activity, however, was lost along the way. Clinicians, public-health authorities, and even exercise physiologists had taken to thinking and talking about hunger as though it were a phenomenon that was exclusive to the brain, a question of willpower rather than the natural consequence of a physiological drive to replace whatever calories may have been expended. When we are physically active, we work up an appetite. Hunger increases in proportion to the calories we expend, just as restricting the calories in our diet will leave us hungry until we eventually make good the deficit, if not more. The evidence suggests that this is true for both the fat and the lean. It is one of the fundamental observations we have to explain if we’re to understand why we gain weight and how to lose it.
Rubner gets credit for being the first to demonstrate that the law of conservation of energy holds in living organisms. Rubner studied the heat expenditure and respiration of a dog for forty-five days and published his findings in 1891. Eight years later, Francis Benedict and Wilbur Atwater confirmed the observation in humans: the calories we consume will indeed either be burned as fuel—metabolized or oxidized—or they’ll be stored or excreted. The research of Rubner, Benedict, and Atwater is the origin of the pronouncement often made by nutritionists with regard to weight-reducing diets that “calories in are equal to calories out.” As Marian Burros of the New York Times observed, there’s no violating the laws of thermodynamics.
This observation would be confirmed repeatedly: The obese tend to expend more energy than lean people of comparable height, sex, and bone structure, which means their metabolism is typically burning off more calories, not fewer. When people grow fat, their lean body mass also increases. They put on muscle and connective tissue and fat, and these will increase total metabolism (although not by the same amount).
This faith in the laws of thermodynamics is founded on two misinterpretations of thermodynamic law, not on the laws themselves. When these misconceptions are corrected, they alter our perceptions of weight regulation and the forces at work. The first misconception is the assumption that an association implies cause and effect. Here the context is the first law of thermodynamics, the law of energy conservation. This law says that energy is neither created nor destroyed, and so the calories we consume will be either stored, expended, or excreted. This in turn implies that any change in body weight must equal the difference between the calories we consume and the calories we expend, and thus the positive or negative energy balance. Known as the energy-balance equation, it looks like this:
Change in energy stores = Energy intake − Energy expenditure
The first law of thermodynamics dictates that weight gain—the increase in energy stored as fat and lean-tissue mass—will be accompanied by or associated with positive energy balance, but it does not say that it is caused by a positive energy balance—by “a plethora of calories,” as Russell Cecil and Robert Loeb’s 1951 Textbook of Medicine put it. There is no arrow of causality in the equation. It is equally possible, without violating this fundamental truth, for a change in energy stores, the left side of the above equation, to be the driving force in cause and effect; some regulatory phenomenon could drive us to gain weight, which would in turn cause a positive energy balance—and thus overeating and/or sedentary behavior. Either way, the calories in will equal the calories out, as they must, but what is cause in one case is effect in the other. All those who have insisted (and still do) that overeating and/or sedentary behavior must be the cause of obesity have done so on the basis of this same fundamental error: they observe correctly that positive caloric balance must be associated with weight gain, but then they assume without justification that positive caloric balance is the cause of weight gain. This simple misconception has led to a century of misguided obesity research.
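The symmetry of that equation is easy to demonstrate with a toy calculation. In the minimal sketch below, the daily numbers are arbitrary and the two “stories” are caricatures rather than physiology: in one, a fixed caloric surplus drives fat storage; in the other, a hormonally driven storage quota pulls intake up to cover it. Both satisfy the first law identically, which is precisely why the equation alone cannot assign cause and effect.

```python
# Toy bookkeeping for the energy-balance equation. The daily numbers are
# arbitrary illustrations, not physiological data; the point is only that
# two opposite causal stories satisfy the same conservation law.

def story_intake_drives_storage(days=30):
    """Overeating comes first: intake is fixed high, and storage absorbs the surplus."""
    stores = 0.0
    for _ in range(days):
        intake, expenditure = 2700.0, 2500.0     # kcal/day, chosen arbitrarily
        stores += intake - expenditure           # conservation: change in stores = in - out
    return stores

def story_storage_drives_intake(days=30):
    """A drive to store fat comes first: hunger raises intake to cover the diverted fuel."""
    stores = 0.0
    for _ in range(days):
        driven_storage = 200.0                   # kcal/day diverted into fat, hypothetically
        expenditure = 2500.0                     # kcal/day the tissues still expend
        intake = expenditure + driven_storage    # compensatory eating
        stores += intake - expenditure           # same arithmetic, opposite causal arrow
    return stores

# Both accounts end with the same positive energy balance (+6,000 kcal here),
# so the equation by itself cannot distinguish cause from effect.
assert story_intake_drives_storage() == story_storage_drives_intake()
```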
When the law of energy conservation is interpreted correctly, either of two possibilities is allowed. It may be true that overeating and/or physical inactivity (positive caloric balance) can cause overweight and obesity, but the evidence and the observations, as we’ve discussed, argue otherwise. The alternative hypothesis reverses the causality: we are driven to get fat by “primary metabolic or enzymatic defects,” as Hilde Bruch phrased it, and this fattening process induces the compensatory responses of overeating and/or physical inactivity. We eat more, move less, and have less energy to expend because we are metabolically or hormonally driven to get fat.
The second misinterpretation of the law of energy conservation inevitably accompanies the first and is equally unjustifiable. The idea that obesity is caused by the slow accumulation of excess calories, day in and day out, over years or decades, and the associated idea that it can be prevented by reductions in caloric intake and/or increases in physical activity, are both based on an assumption about how the three variables in the energy-balance equation—energy storage, energy intake, and energy expenditure—relate to each other. They assume that energy intake and energy expenditure are what mathematicians call independent variables; we can change one without affecting the other. “We cannot get away from the fact that, given no change in physical activity [my italics], increased food means increased weight,” as John Yudkin phrased it in 1959. “Yet this simple expression of the laws of conservation of mass and of energy is still received with indignation by very many people.” But Yudkin’s purportedly inescapable truth included an assumption that may not be physiologically plausible: “given no change in physical activity.” The question is whether one can actually change energy intake in a living organism without prompting compensatory changes in energy expenditure.
The protein in the diet, as Rubner discovered, dominates this effect. The more protein digested beyond the amount necessary to maintain tissues and organs, the greater the heat generation. This specific dynamic effect of protein, as Rubner called it, is the phenomenon usually invoked as the rationale for eating high-protein diets for weight loss: the excess calories lost as heat in the process of digesting and utilizing protein can’t then be stored as fat or used for fuel.
According to Flatt’s calculations, the “metabolic cost” of storing the calories we consume in adipose tissue—the proportion of energy dissipated in the conversion-and-storage process—is only 7 percent for fat, compared with 28 percent for carbohydrates.
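Flatt’s percentages lend themselves to a back-of-the-envelope check. The sketch below simply applies the 7 percent and 28 percent figures to a hypothetical 100-kilocalorie surplus routed into adipose tissue; the surplus itself is an arbitrary illustration.

```python
# Back-of-the-envelope arithmetic using only the two percentages cited above,
# applied to a hypothetical 100-kcal surplus routed into adipose tissue.

surplus_kcal = 100.0

stored_via_fat  = surplus_kcal * (1 - 0.07)   # ~93 kcal stored; ~7 kcal dissipated in conversion and storage
stored_via_carb = surplus_kcal * (1 - 0.28)   # ~72 kcal stored; ~28 kcal dissipated converting carbohydrate to fat

print(f"stored when the surplus arrives as fat:          {stored_via_fat:.0f} kcal")
print(f"stored when the surplus arrives as carbohydrate: {stored_via_carb:.0f} kcal")
```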
By perceiving obesity as an eating disorder, a defect of behavior rather than physiology, and by perceiving excessive hunger as the cause of obesity, rather than a symptom that accompanies the drive to gain weight, those investigators concerned with human obesity had managed to dissociate the perception of hunger and satiety from any underlying metabolic conditions. They rarely considered the possibility that hunger, satiety, and level of physical activity might be symptomatic of underlying physiological conditions. Imagine if diabetologists had perceived the ravenous hunger that accompanies uncontrolled diabetes as a behavioral disorder, to be treated by years of psychotherapy or behavioral modification rather than injections of insulin. These researchers simply never confronted the possibility that the nutrient composition of the diet might have a fundamental effect on eating behavior and energy expenditure, and thus on the long-term regulation of weight.
Also, carbohydrates in these diets can be restricted, but the standard thinking is that they have to remain sufficiently high so that the brain and central nervous system derive all their necessary fuel from this dietary source of glucose. Nutritionists will often insist that 130 grams a day of carbohydrates is the minimal safe amount in a human diet. Glucose is a primary fuel for the brain, but it is not the only fuel, and dietary carbohydrates are not the only source of that glucose. If the diet includes less than 130 grams of carbohydrates, the liver increases its synthesis of molecules called ketone bodies, and these supply the necessary fuel for the brain and central nervous system. If the diet includes no carbohydrates at all, ketone bodies supply three-quarters of the energy to the brain. The rest comes from glucose synthesized from the amino acids in protein, either from the diet or from the breakdown of muscle, and from a compound called glycerol that is released when triglycerides in the fat tissue are broken down into their component fatty acids. In these cases, the body is technically in a state called ketosis, and the diet is often referred to as a ketogenic diet. Whether the diet is ketogenic or anti-ketogenic—representing a difference of a few tens of grams of carbohydrates each day—might influence the response to the diet, complicating the question of whether carbohydrates are responsible for some effect or whether there is another explanation. (Ketosis is often incorrectly described by nutritionists as “pathological.” This confuses ketosis with the ketoacidosis of uncontrolled diabetes. The former is a normal condition; the latter is not. The ketone-body level in diabetic ketoacidosis typically exceeds 200 mg/dl, compared with the 5 mg/dl ketone levels that are typically experienced after an overnight fast—twelve hours after dinner and before eating breakfast—and the 5–20 mg/dl ketone levels of a severely carbohydrate-restricted diet with only 5–10 percent carbohydrates.)
Because it is still common to assume that a meat-rich, plant-poor diet will result in nutritional deficiencies, it’s worth pausing to investigate this issue. The assumption dates to the early decades of the twentieth century, the golden era of research on vitamins and vitamin-deficiency diseases, as one disease after another—scurvy, pellagra, beriberi, rickets, anemia—was found to be caused by a lack of essential vitamins and minerals. This was The Newer Knowledge of Nutrition, as it was called by the Johns Hopkins nutritionist Elmer McCollum; it dictated that the only way to ensure all the essential elements for health was to eat as many types of foods as possible, and nutritionists still hold by this logic today. “A safe rule of thumb,” as it was recently described, “is that the more components there are in a dietary, the greater the probability of balanced intake.” This philosophy, however, was based almost exclusively on studies of deficiency diseases, all of which were induced by diets high in refined carbohydrates and low in meat, fish, eggs, and dairy products. When the Scottish naval surgeon James Lind demonstrated in 1753 that scurvy could be prevented and cured by the consumption of citrus juice, for example, he did so with British sailors who had been eating the typical naval fare “of water gruel sweetened with sugar in the morning, fresh mutton broth, light puddings, boiled biscuit with sugar, barley and raisins, rice and currants.” Pellagra was associated almost exclusively with corn-rich diets, and beriberi with the eating of white rice rather than brown. When beriberi broke out in the Japanese navy in the late 1870s, it was only after the naval fare had been switched from vegetables and fish to vegetables, fish, and white polished rice. The outbreak was brought under control by replacing the white rice with barley and adding meat and evaporated milk. Pellagra, too, could be cured or ameliorated, as Carl Voegtlin demonstrated in 1914, by adding fresh meat, milk, and eggs to a pellagra-causing diet, which in Voegtlin’s experiments constituted primarily wheat bread, cabbage, cornmeal and corn syrup, turnips, potatoes, and sugar. Nutritionists working with lab animals also found that they could induce deficiency diseases by feeding diets rich in refined grains and sugar. Guinea pigs were given scurvy in a series of laboratory experiments in the 1940s when they were fed diets of mostly crushed barley and chickpeas.
What the nutritionists of the 1920s and 1930s didn’t then know is that animal foods contain all of the essential amino acids (the basic structural building blocks of proteins), and they do so in the ratios that maximize their utility to humans.*94 They also contain twelve of the thirteen essential vitamins in large quantities. Meat is a particularly concentrated source of vitamins A, E, and the entire complex of B vitamins. Vitamins D and B12 are found only in animal products (although we can usually get sufficient vitamin D from the effect of sunlight on our skin). The thirteenth vitamin, vitamin C, ascorbic acid, has long been the point of contention. It is contained in animal foods in such small quantities that nutritionists have assumed them to be insufficient; the question is whether these small quantities are in fact sufficient for good health. Once James Lind demonstrated that scurvy could be prevented and cured by eating fresh fruits and vegetables, nutritionists assumed that these foods are an absolutely essential dietary source of vitamin C. What had been demonstrated, they will say, is that scurvy is “a dietary deficiency resulting from lack of fresh fruit and vegetables.” To be technically accurate, however, Lind and the nutritionists who followed him in the study of scurvy demonstrated only that the disease is a dietary deficiency that can be cured by the addition of fresh fruits and vegetables. As a matter of logic, though, this doesn’t necessarily imply that the lack of vitamin C is caused by the lack of fresh fruits and vegetables. Scurvy can be ameliorated by adding these to the diet, but the original lack of vitamin C might be caused by other factors. In fact, given that the Inuit and those Westerners living on the Inuit’s vegetable- and fruit-free diet never suffered from scurvy, as Stefansson observed, other factors must be involved. This suggested another way of defining a balanced diet. It’s possible that eating easily digestible carbohydrates and sugars increases our need for vitamins that we would otherwise derive from animal products in sufficient quantities.
Nutritionists of the era assumed that all-meat diets were unhealthy because (1) excessive meat consumption was alleged to raise blood pressure and cause gout; (2) the monotony of eating only meat—or any other single food—was said to induce a physical sense of revulsion; (3) the absence of fresh fruit and vegetables in these diets would cause scurvy and other deficiency diseases; and (4) protein-rich diets were thought to induce chronic kidney damage, a belief based largely on early research by Louis Newburgh. None of these claims were based on compelling evidence. Newburgh, for instance, had based his conclusions largely on experiments in which he fed excessive quantities of soybean, egg whites, and beef protein to rabbits, which, as critics would later observe, happen to be herbivores. Their natural diet is buds and bark, not their fellow animals, and so there was little scientific value in force-feeding them meat or animal protein. Nonetheless, the dangers of an all-meat diet were considered sufficiently likely that even Francis Benedict, as Stefansson told it, claimed that it was “easier to believe” that Stefansson and all the various members of his expeditions “were lying, than to concede that [they] had remained in good health for several years on an exclusive meat regimen.”
The textbook statements on vitamins would go unrevised, however, despite laboratory research that has confirmed Stefansson’s speculations. Nutritionists would establish by the late 1930s that B vitamins are depleted from the body by the consumption of carbohydrates. “There is an increased need for these vitamins when more carbohydrate in the diet is consumed,” as Theodore Van Itallie of Columbia University testified to McGovern’s Select Committee in 1973. A similar argument can now be made for vitamin C. Type 2 diabetics have roughly 30 percent lower levels of vitamin C in their circulation than do nondiabetics. Metabolic syndrome is also associated with “significantly” reduced levels of circulating vitamin C, which suggests that vitamin-C deficiency might be another disorder of civilization. One explanation for these observations—described in 1997 by the nutritionists Julie Will and Tim Byers, of the Centers for Disease Control and the University of Colorado respectively, as both “biologically plausible and empirically evident”—is that high blood sugar and/or high levels of insulin work to increase the body’s requirements for vitamin C.
The vitamin-C molecule is similar in configuration to glucose and other sugars in the body. It is shuttled from the bloodstream into the cells by the same insulin-dependent transport system used by glucose. Glucose and vitamin C compete in this cellular-uptake process, like strangers trying to flag down the same taxicab simultaneously. Because glucose is greatly favored in the contest, the uptake of vitamin C by cells is “globally inhibited” when blood-sugar levels are elevated. In effect, glucose regulates how much vitamin C is taken up by the cells, according to the University of Massachusetts nutritionist John Cunningham. If we increase blood-sugar levels, the cellular uptake of vitamin C will drop accordingly. Glucose also impairs the reabsorption of vitamin C by the kidney, and so, the higher the blood sugar, the more vitamin C will be lost in the urine. Infusing insulin into experimental subjects has been shown to cause a “marked fall” in vitamin-C levels in the circulation. In other words, there is significant reason to believe that the key factor determining the level of vitamin C in our cells and tissues is not how much or little we happen to be consuming in our diet, but whether the starches and refined carbohydrates in our diet serve to flush vitamin C out of our system, while simultaneously inhibiting the use of what vitamin C we do have. We might get scurvy because we don’t faithfully eat our fruits and vegetables, but it’s not the absence of fruits and vegetables that causes the scurvy; it’s the presence of the refined carbohydrates.*96 This hypothesis has not been proven, but, as Will and Byers suggested, it is both biologically plausible and empirically evident.
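The taxicab analogy can be made concrete with a toy competition model. In the sketch below, the saturating functional form, the affinity ratio, and the concentration values are all invented for illustration; they are not the measured kinetics of the actual transporters. The qualitative behavior, though, matches the description above: raise glucose and the cellular uptake of vitamin C falls, even when the vitamin C supply is unchanged.

```python
# Toy competitive-uptake model for the "same taxicab" analogy. The saturating
# form, the affinity ratio, and the concentrations are invented for illustration;
# they are not the measured kinetics of the actual transporters.

def relative_vitamin_c_uptake(vit_c, glucose, glucose_advantage=10.0):
    """Fraction of maximal vitamin-C uptake when glucose competes for the same carrier.

    glucose_advantage > 1 encodes the text's point that glucose is greatly
    favored in the competition. All quantities are in arbitrary units.
    """
    return vit_c / (vit_c + glucose_advantage * glucose + 1.0)

low_sugar  = relative_vitamin_c_uptake(vit_c=1.0, glucose=5.0)    # lower blood sugar
high_sugar = relative_vitamin_c_uptake(vit_c=1.0, glucose=15.0)   # same vitamin C, elevated blood sugar

print(f"relative uptake at lower blood sugar:  {low_sugar:.3f}")
print(f"relative uptake at higher blood sugar: {high_sugar:.3f}")
# Tripling glucose cuts vitamin-C uptake to roughly a third of its former value
# in this toy model, with no change in the vitamin-C supply itself.
```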
And, once again, there is “little hunger” during prolonged starvation. “In total starvation,” Keys wrote in The Biology of Human Starvation, “the sensation of hunger disappears in a matter of days.” This assessment was confirmed in the early 1960s by Ernst Drenick at UCLA, when he starved eleven obese patients for periods of 12 to 117 days. “The most astonishing aspect of this study,” wrote Drenick and his colleagues in JAMA, “was the ease with which prolonged starvation was tolerated. This experience contrasted most dramatically with the hunger and suffering described by individuals who, over a prolonged period, consume a calorically inadequate diet.” As the editors of JAMA suggested in an accompanying editorial, this absence of hunger made starvation seem to be a viable weight-loss therapy for severely obese patients: “The gratifying weight loss without hunger may bring about the desired immediate results and help establish a normal eating pattern where other dietary restrictions may fail.” The implication is that we will experience no hunger if we eat nothing at all—zero calories—and our cells are fueled by the protein and fat from our muscle and fat tissue. If we break our fast with any amount of dietary protein and fat, we’ll still feel no hunger. But if we add carbohydrates, as Drenick noted, we’ll be overwhelmed with hunger and will now suffer all the symptoms of food deprivation. So why is it that when we add carbohydrates to the diet we get hungry, if not irritable, lethargic, and depressed, but this does not happen when we add only protein and fat? How can the amount of calories possibly be the critical factor?
“To turn what is eaten into fat, to move it and to burn it requires dozens of enzymes and the processes are strongly influenced by a variety of hormones,” he explained. Sex hormones, for instance, determine where fat is stored, as evidenced by the differences in fat distribution between men and women. Thyroid hormones, adrenaline, and growth hormone accelerate the release of fatty acids from fat depots, as does a hormone known as glucagon, secreted by the pancreas. “The reverse process,” Astwood said, “the reincorporation of fat into the depots and the conversion of other food to fat, tends to be reduced by these hormones, but to be strongly promoted by insulin.” All of this demonstrated “what a complex role the endocrine system plays in the regulation of fat.”
The likely explanation is the effect of diet on this regulation of fat metabolism and energy balance. Since insulin, as Astwood noted, is the hormone responsible for promoting the incorporation of fat into our adipose tissue and the conversion of carbohydrates into fat, the obvious suspects are refined carbohydrates and easily digestible starches, which have well-documented effects on insulin. This is what Peter Cleave argued, albeit without understanding the underlying hormonal mechanisms at work, and what the geneticist James Neel, father of the thrifty-gene hypothesis, came to believe as well. And it’s the effect of these carbohydrates on insulin that would explain the dietary observations—the futility of calorie restriction, the relative ease of weight loss when carbohydrates are restricted, and perhaps two centuries of anecdotal observations that sweets, starches, bread, and beer are uniquely fattening.
In this hypothesis, obesity is another variation on the theme of insulin dysfunction and diabetes. In Type 1 diabetes, the cause is a lack of insulin. The result is an inability to use glucose for fuel and to retain fat in the fat tissue, leading to internal starvation, as Astwood put it, excessive hunger, and weight loss. In obesity, the cause is an excess of insulin or an inordinate sensitivity to insulin by the fat cells; the result is an overstock of fuel in the adipose tissue and so, once again, internal starvation. But now the symptoms are weight gain and hunger. In obesity, the weight gain occurs with or without satisfying the hunger; in Type 1 diabetes, the weight loss occurs irrespective of the food consumed.
This alternative hypothesis of obesity constitutes three distinct propositions. The first, as I’ve said, is the basic proposition that obesity is caused by a regulatory defect in fat metabolism, and so a defect in the distribution of energy rather than an imbalance of intake and expenditure. The second is that insulin plays the primary role in this fattening process and in the compensatory behaviors of hunger and lethargy. The third is that carbohydrates, and particularly refined carbohydrates—and perhaps the fructose content as well, and thus the amount of sugars consumed—are the prime suspects in the chronic elevation of insulin; hence, they are the ultimate cause of common obesity. These latter two propositions—that insulin regulates fat deposition and carbohydrates regulate insulin—have never been controversial, but they’ve been dismissed as irrelevant to obesity, given the ubiquitous belief that obesity is caused by overeating. That, I will argue, was a mistake.
The same cause and effect are evident in Type 1 diabetes mellitus. The inability of diabetics to utilize the food they eat, and particularly the carbohydrates, results in a state of starvation and extreme hunger. Diabetics also urinate more, because the body gets rid of the sugar that accumulates in the bloodstream by allowing it to overflow into the urine, and this is why diabetics will be abnormally thirsty as well.
Since the excessive accumulation of fat in the fat tissue is the problem in obesity, we need to understand this primary control mechanism. This means, first of all, that we have to appreciate the difference between triglycerides and free fatty acids. They’re both forms fat takes in the human body, but they play very different roles, and these are tied directly to the way the oxidation and storage of fats and carbohydrates are regulated. When we talk about the fat stored in the adipose tissue or the fats in our food, we’re talking about triglycerides. Oleic acid, the monounsaturated fat of olive oil, is a fatty acid, but it is present in oils and meats in the form of a triglyceride. Each triglyceride molecule is composed of three fatty acids (the “tri”), linked together on a backbone of glycerol (the “glyceride”). Some of the triglycerides in our fat tissue come from fat in our diet. The rest come from carbohydrates, from a process known as de novo lipogenesis, which is Latin for “the new creation of fat,” a process that takes place both in the liver and, to a lesser extent, in the fat tissue itself. The more carbohydrates flooding the circulation after a meal, the more will be converted to triglycerides and stored as fat for future use (perhaps 30 percent of the carbohydrates in any one meal). “This lipogenesis is regulated by the state of nutrition,” explained Wertheimer in an introductory chapter to the Handbook of Physiology: “it is decreased to a minimum in carbohydrate deficiency and accelerated considerably during carbohydrate availability.”*114 A second critical point is that while the fat is stored as triglycerides it enters and exits the fat cells in the form of fatty acids—actually, free fatty acids, to distinguish them from the fatty acids bound up in triglycerides—and it’s these fatty acids that are burned as fuel in the cells. As triglycerides, the fat is locked into the fat cells, because triglycerides are too big to slip through the cell membranes. They have to be broken down into fatty acids—the process technically known as lipolysis—before the fat can escape into the circulation. The triglycerides in the bloodstream must also be broken down into fatty acids before the fat can diffuse into the fat cells. It’s only reconstituted into triglycerides, a process called esterification, once the fatty acids have passed through the walls of the blood vessels and the fat-cell membranes and are safely inside. This is true for all triglycerides, whether they originated as fat in the diet or were converted from carbohydrates in the liver. Inside the fat cells, triglycerides are continuously broken down into their component fatty acids and glycerol (i.e., in lipolysis), and fatty acids and glycerol are continuously reassembled into triglycerides (i.e., esterified)—a process known as the triglyceride/fatty-acid cycle. Any fatty acids that are not immediately repackaged back into triglycerides will slip out of the fat cell and back into the…
Insulin stimulates the transport of glucose into the fat cells, thereby effectively controlling the production of glycerol phosphate, the fixing of free fatty acids as triglycerides, and all that follows. The one fundamental requirement to increase the flow of fatty acids out of adipose tissue—to increase lipolysis—and so decrease the amount of fat in our fat tissue, is to lower the concentration of insulin in the bloodstream. In other words, the release of fatty acids from the fat cells and their diffusion into the circulation require “only the negative stimulus of insulin deficiency,” as Yalow and Berson wrote. By the same token, the one necessary requirement to shut down the release of fat from the fat cells and increase fat accumulation is the presence of insulin. When insulin is secreted, or the level of insulin in the circulation is abnormally elevated, fat accumulates in the fat tissue. When insulin levels are low, fat escapes from the fat tissue, and the fat deposits shrink.
Just a few more details are necessary to understand why we get fat. The first is that the amount of glycerol phosphate available to the fat cells to accumulate fat—to bind the fatty acids together into triglycerides and lock them into the adipose tissue—also depends directly on the carbohydrates in the diet. Dietary glucose is the primary source of glycerol phosphate. The more carbohydrates consumed, the more glycerol phosphate available, and so the more fat can accumulate. For this reason alone, it may be impossible to store excess body fat without at least some carbohydrates in the diet and without the ongoing metabolism of these dietary carbohydrates to provide glucose and the necessary glycerol phosphate.
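The storage logic of the last few paragraphs can be condensed into a short sketch. In the toy function below, the insulin threshold and the units are invented; the two rules it encodes come from the text: fat accumulates only when insulin is elevated and glucose-derived glycerol phosphate is available to fix fatty acids as triglycerides, and otherwise lipolysis dominates and fat is mobilized.

```python
# Minimal sketch of the storage logic described above: insulin sets the direction
# of fat flow, and glucose-derived glycerol phosphate is needed to fix fatty acids
# as triglycerides inside the fat cell. The threshold and the units are invented.

def fat_cell_flux(insulin, dietary_carbohydrate_g):
    """Return the dominant direction of fat flow for a fat cell (toy model)."""
    glycerol_phosphate_available = dietary_carbohydrate_g > 0   # glucose supplies the glycerol phosphate
    if insulin > 1.0 and glycerol_phosphate_available:          # 1.0 = arbitrary "elevated" threshold
        return "esterification dominates: fatty acids are locked in as triglycerides (fat accumulates)"
    return "lipolysis dominates: fatty acids escape into the circulation (fat is mobilized)"

print(fat_cell_flux(insulin=3.0, dietary_carbohydrate_g=300))   # after a starchy, sugary meal
print(fat_cell_flux(insulin=0.4, dietary_carbohydrate_g=20))    # carbohydrate-restricted, insulin low
```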
Regarding the potential dangers of sugar in the diet, it is important to keep in mind that fructose is converted more efficiently into glycerol phosphate than is glucose. This is another reason why fructose stimulates the liver so readily to convert it to triglycerides, and why fructose is considered the most lipogenic carbohydrate. Fructose, however, does not stimulate the pancreas to secrete insulin, so glucose is still needed for that purpose. This suggests that the combination of glucose and fructose—either the 50–50 mixture of table sugar (sucrose) or the 55–45 mixture of high-fructose corn syrup—stimulates fat synthesis and fixes fat in the fat tissue more than does glucose alone, which comes from the digestion of bread and starches.
Moreover, fat cells remain sensitive to insulin long after muscle cells become resistant to it. Once muscle cells become resistant to the insulin in the bloodstream, as Yalow and Berson explained, the fat cells have to remain sensitive to provide a place to store blood sugar, which would otherwise either accumulate to toxic levels or overflow into the urine and be lost to the body. As insulin levels rise, the storage of fat in the fat cells continues, long after the muscles become resistant to taking up any more glucose. Nonetheless, the pancreas may compensate for this insulin resistance, if it can, by secreting still more insulin. This will further elevate the level of insulin in the circulation and serve to increase further the storage of fat in the fat cells and the synthesis of fat from carbohydrates. It will suppress the release of fat from the fat tissue. Under these conditions—lipid trapping, as the geneticist James Neel described it—obesity begins to look preordained. Weights will plateau, as Dennis McGarry suggested in Science in 1992, only when the fat tissue becomes insulin-resistant as well, or when the fat deposits enlarge to the point where the forces working to release the fat and burn it for fuel—such as the increased concentration of fatty acids inside the fat cells—once again balance out the effect of the insulin itself.
A critical enzyme in this fat-distribution process is known technically as lipoprotein lipase, LPL, and any cell that uses fatty acids for fuel or stores fatty acids uses LPL to make this possible. When a triglyceride-rich lipoprotein passes by in the circulation, the LPL will grab on, and then break down the triglycerides inside into their component fatty acids. This increases the local concentration of free fatty acids, which flow into the cells—either to be fixed as triglycerides if these cells are fat cells, or oxidized for fuel if they’re not. The more LPL activity on a particular cell type, the more fatty acids it will absorb, which is why LPL is known as the “gatekeeper” for fat accumulation. Insulin, not surprisingly, is the primary regulator of LPL activity, although not the only one. This regulation functions differently, as is the case with all hormones, from tissue to tissue and site to site. In fat tissue, insulin increases LPL activity; in muscle tissue, it decreases activity. As a result, when insulin is secreted, fat is deposited in the fat tissue, and the muscles have to burn glucose for energy. When insulin levels drop, the LPL activity on the fat cells decreases and the LPL activity on the muscle cells increases—the fat cells release fatty acids, and the muscle cells take them up and burn them.
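The gatekeeper metaphor can be sketched the same way. In the toy model below, the directions of the two effects follow the text (insulin raises LPL activity on fat cells and suppresses it on muscle cells), while the baseline and the magnitudes are invented for illustration.

```python
# Sketch of the tissue-specific LPL regulation described above. The directions of
# the two effects follow the text; the baseline and magnitudes are invented.

def lpl_activity(tissue, insulin):
    baseline = 1.0
    if tissue == "fat":
        return baseline * (1.0 + insulin)       # insulin increases LPL activity on fat cells
    if tissue == "muscle":
        return baseline / (1.0 + insulin)       # insulin decreases LPL activity on muscle cells
    raise ValueError(f"unknown tissue: {tissue}")

for insulin in (0.0, 2.0):                      # fasting vs. fed, in arbitrary units
    fat, muscle = lpl_activity("fat", insulin), lpl_activity("muscle", insulin)
    fate = "fatty acids flow into the fat tissue" if fat > muscle else "muscle takes up and burns the fatty acids"
    print(f"insulin={insulin}: fat LPL={fat:.2f}, muscle LPL={muscle:.2f} -> {fate}")
```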
The probable explanation is that as women of childbearing age get heavier and more of them become diabetic, they pass the metabolic consequences on to their children through what is known technically as the intrauterine environment. The nutrient supply from mother to developing child passes across the placenta in proportion to the nutrient concentration in the mother’s bloodstream. If the mother has high blood sugar, then the developing pancreas in the fetus will respond to this stimulus by overproducing insulin-secreting cells. “The baby is not diabetic,” explains Boyd Metzger, who studies diabetes and pregnancy at Northwestern University, “but the insulin-producing cells in the pancreas are stimulated to function and grow in size and number by the environment they’re in. So they start over functioning. That in turn leads to a baby laying down more fat, which is why the baby of a diabetic mother is typified by being a fat baby.” This is also the most likely explanation for why children born to women who gain excessive weight during pregnancy also tend to be fatter. As Laura Riley, medical director of labor and delivery at Massachusetts General Hospital, told the Boston Globe in response to the Harvard study, she now tells her patients, “If you overdo it during pregnancy, you’re setting yourself up for a bigger baby,” and that, in turn, means “you are setting your baby up for potentially a lifetime of weight problems.” Gillman and his colleagues described the problem this way: “Our observation of a trend of increasing weight among young infants may portend continued increase in childhood and adult obesity.” But if fatter mothers are more likely to make fatter babies, and fatter babies are more likely to make fatter mothers, which is also a well-documented observation, then this is another vicious cycle. It suggests that, once a generation of adolescents and adults start eating the highly refined carbohydrates and sugars now ubiquitous in our diets, even their children will feel the effect, and perhaps their children’s children as well. The extreme instance of this phenomenon today is the Pima Indians, whose incidence of diabetes is among the highest of any population in the world. In 2000, NIH investigators reported that Pima born to mothers who were diabetic have a two- to threefold increased risk themselves of becoming diabetic as adults, and so have a two- to threefold increased risk of passing diabetes on to their own children—of “perpetuating the cycle,” as the NIH investigators explained. The “vicious cycle” of the “diabetic intrauterine environment,” they wrote, can explain much of the post–World War II increase in Type 2 diabetes among the Pima, and may also “be a factor in the alarming rise of this disease nationally.”
Another way to phrase this is that anything that induces fatty acids to escape from the fat tissue and then be burned as fuel will promote satiety by providing fuel to the tissues. Anything that induces lipogenesis, or fat synthesis and storage, will promote hunger by removing the available fuel from the circulation. And so hypophagia and hyperphagia, satiety and hunger, Le Magnen wrote, are “indirect and passive consequences” of “the neuroendocrine pattern of fat mobilization or synthesis.” By the mid-1970s, Le Magnen had demonstrated that insulin is the driver of this diurnal cycle of hunger, satiety, and energy balance. At the beginning of waking hours, the insulin response to glucose—the “insulin secretory responsiveness,” Le Magnen called it—is enhanced, and it’s suppressed during sleep. This pattern is “primarily responsible” for the fat accumulation during the waking hours and the fat mobilization during the sleeping hours. “The hyperinsulin secretion in response to food” during the period when the animals are awake and eating, and the “opposite train” when they are asleep, he explained, produces “a successive fall and elevation” of the level of fatty acids in the blood on a twenty-four-hour cycle—twelve hours during which the fatty acids are depressed and glucose is the primary fuel, and then twelve hours in which they’re elevated and fat is the primary fuel. Both hunger, or the urge to eat, and satiety, or the inhibition of eating, are compensatory responses to these insulin-driven cycles of fat storage followed by fat mobilization. Insulin secretion rises in the morning upon waking and drives us to eat, Le Magnen concluded, and it ebbs after the last meal of the day to allow for prolonged sleep without hunger.
The hypothesis is based on three fundamental propositions. The first, as Friedman and Stricker explained, is that the supply of fuel to all body tissues must always remain “adequate for them to function during all physiological conditions and even during prolonged food deprivation.” The second proposition is Hans Krebs’s revelation from the 1940s that each of the various metabolic fuels—protein, fats, and carbohydrates—is equally capable of supplying energy to meet the demands of the body. The third is that the body has no way of telling the difference between fuels from internal sources—the fat tissue, liver glycogen, muscle protein—and fuels that come from external sources—i.e., whatever we eat that day. With these propositions in mind, the simplest possible explanation for feeding behavior is that we eat to maintain this flow of energy to cells—to maintain “caloric homeostasis”—rather than maintain body fat stores or some preferred weight. If the cells themselves are receiving sufficient fuel to function, the size of the fat reserves is a secondary concern. As Friedman and Stricker explained, “Hunger appears and disappears according to normally occurring fluctuations in the availability of utilizable metabolic fuels, regardless of which fuels they are and how full the storage reserves.” In 1993, the Princeton physiological psychologist Bartley Hoebel described the hypothesis in terms that echoed the origins of the theory in the work of Claude Bernard: “The primitive goal of feeding behavior,” Hoebel explained, “is to maintain constancy of the nutrient concentration of the milieu intérieur.” From this perspective, we’re not much more complicated than insects, which will seek out food and consume it until their guts are full. External taste receptors signal whether they’ve come upon something they can benefit from eating; gut receptors signal when sufficient food has been consumed to inhibit the hunger. The role of the brain is to integrate the sensory signals from the gut and the taste receptors and couple them to motor reflexes to initiate eating behavior or inhibit it. In both flies and mosquitoes, if the neural connection between gut and brain is severed, the insect loses its hunger inhibitor and continues to eat until its gut literally ruptures. As Edward Stricker explained in The New England Journal of Medicine in 1978, hunger is little more than a disturbing stimulus, like an itch, that “feeding behavior removes or attenuates.” Satiety, on the other hand, “is more than the absence of hunger; it is the active suppression of interest in food and of feeding behavior.” The primary difference between humans and insects, by this logic, is that we have two primary fuel tanks (three if we include glycogen stored in the liver, and four if we include protein in the muscles), and they effectively have one. In our case, fuel is stored initially in the gut for the short term, and then in the fat tissue for the medium to longer term. The fat tissue…
The implication of this hypothesis is that both weight gain and hunger will be promoted by factors that work to deposit fatty acids in the fat tissue and inhibit their mobilization—i.e., anything that elevates insulin. Satiety and weight loss will be promoted by factors that increase the release of fatty acids from the fat tissue and direct them to the cells of the tissues and organs to be oxidized—anything that lowers insulin levels. Le Magnen himself demonstrated this in his animal experiments. When he infused insulin into rats, it lengthened the fat-storage phase of their day-night cycle, and it shortened the fat-mobilization-and-oxidation phase accordingly. Their diurnal cycle of energy balance was now out of balance: the rats accumulated more fat during their waking hours than they could mobilize and burn for fuel during their sleeping hours. They no longer balanced their overeating with an equivalent phase of undereating. Not only were their sleep-wake cycles disturbed, but the rats would be hungry during the daytime and continue to eat, when normally they would be living off the fat they had stored at night.*134 Indeed, when Le Magnen infused insulin into sleeping rats, they immediately woke and began eating, and they continued eating as long as the insulin infusion continued. When during their waking hours he infused adrenaline—a hormone that promotes the mobilization of fatty acids from the fat tissue—they stopped eating. If this hypothesis holds for humans, it means we gain weight because our insulin remains elevated for longer than nature or evolution intended, and so we fail to balance the inevitable fat deposition with sufficient fat oxidation. Our periods of satiety are shortened, and we are driven to eat more often than we should. If we think of this system in terms of two fuel supplies, the immediate supply in the gut and the reserve in our fat deposits, both releasing fuel into the circulation for use by the tissues, then insulin renders the fat deposits temporarily invisible to the rest of the body by shutting down the flow of fatty acids out of the fat cells, while signaling the cells to continue burning glucose instead. As long as insulin levels remain elevated and the fat cells remain sensitive to the insulin, the use of fat for fuel is suppressed. We store more calories in this fat reserve than we should, and we hold on to these calories even when they’re required to supply energy to the cells. We can’t use this fat to forestall the return of hunger. “It is not a paradox to say that animals and humans that become obese gain weight because they are no longer able to lose weight,” as Le Magnen wrote.
Le Magnen described this first wave of insulin as increasing “the metabolic background of hunger.” In other words, this wave of insulin shuts down the mobilization of fat from the adipose tissue and stores away blood glucose in preparation for the imminent arrival of more. This leaves the circulation relatively depleted of nutrients. As a result, hunger increases. And this makes the food seem to taste even better. “In man,” suggested Le Magnen, “it is reflected by the increased feeling of hunger at the beginning of a meal expressed in the popular adage in French: L’appétit vient en mangeant”—i.e., “the appetite comes while eating.” As the meal continues and our appetite is satisfied, the metabolic background of hunger ebbs with the flood of nutrients into the circulation, and so the perceived palatability of the food wanes as well. Palatability, by this logic, is a learned response, conditioned largely by hunger, which in turn is a response to the pattern of insulin secretion and the availability of fatty acids and/or glucose in the circulation.
This cephalic secretion of insulin in preparation for the act of eating provides yet another mechanism that may work to induce hunger, weight gain, and obesity in a world of palatable foods, which could mean, of course, simply those foods that induce excessive insulin secretion to handle the unnaturally easy digestibility of their carbohydrates. The idea was suggested in 1977 by the psychologist Terry Powley, who was then at Yale and is now at Purdue University. Powley was discussing the obesity-inducing effect of lesions in the hypothalamus and speculated that the lesions cause the animal to hypersecrete insulin when just thinking about, smelling, or tasting food, and this amplifies its perception of hunger and palatability. The result would be what Powley called a “self-perpetuating situation”—i.e., a vicious cycle. “Rather than secreting quantities of insulin and digestive enzymes appropriate for effective utilization of the ingested material,” Powley wrote, “the lesioned animal over-secretes and must then ingest enough calories to balance the hormonal and metabolic adjustments.”
Sugar (sucrose) is a special case. Just like cocaine, alcohol, nicotine, and other addictive drugs, sugar appears to induce an exaggerated response in that region of the brain known as the reward center—the nucleus accumbens. This suggests that the relatively intense cravings for sugar—a sweet tooth—may be explained by the intensity of the dopamine secretion in the brain when we consume sugar. When the nucleus accumbens “is excessively activated by sweet food or powerful drugs,” says Bartley Hoebel of Princeton, “it can lead to abuse and even addiction. When this system is under-active, signs of depression ensue.” Rats can be easily addicted to sugar, according to Hoebel, and will demonstrate the physical symptoms of opiate withdrawal when forced to abstain. Whether the addiction is in the brain or the body or both, the idea that sugar and other easily digestible carbohydrates are addictive also implies that the addiction can be overcome with sufficient time, effort, and motivation, which is not the case with hunger itself (except perhaps in the chronic condition of anorexia). Avoiding carbohydrates will lower insulin levels even in the obese, and so ameliorate the hyperinsulinemia that causes the carbohydrate craving itself. “After a year to eighteen months, the appetite is normalized and the craving for sweets is lost,” said James Sidbury, Jr., about the effects on children of his carbohydrate-restricted diet. “This change can often be identified within a specific one to two week period by the individual.” If the more easily digestible carbohydrates are indeed addictive, this changes the terms of all discussions about the efficacy of carbohydrate-restricted diets. That someone might find living without starches, flour, and sugar to be difficult, and that there might be physical symptoms accompanying the withdrawal process, does not speak to the possibility that they might be healthier and thinner for the effort. No one would argue that quitting smoking (or any other addictive drug) is not salutary, even though ex-smokers invariably miss their cigarettes, and many will ultimately return to smoking, the addiction eventually getting the better of them. The same may be true for these carbohydrates. It also makes us question the admonitions that carbohydrate restriction cannot “generally be used safely,” as Theodore Van Itallie wrote in 1979, because it has “potential side effects,” including “weakness, apathy, fatigue, nausea, vomiting, dehydration, postural hypotension, and occasional exacerbation of preexisting gout.” The important clinical question is whether these are short-term effects of carbohydrate withdrawal, or chronic effects that might offset the benefits of weight loss. The same is true for the occasional elevation of cholesterol that will occur with fat loss—a condition known as transient hypercholesterolemia—and that is a consequence of the fact that we store cholesterol along with fat in our fat cells. When fatty acids are…
As I emerge from this research, though, certain conclusions seem inescapable to me, based on the existing knowledge:
1. Dietary fat, whether saturated or not, is not a cause of obesity, heart disease, or any other chronic disease of civilization.
2. The problem is the carbohydrates in the diet, their effect on insulin secretion, and thus the hormonal regulation of homeostasis—the entire harmonic ensemble of the human body. The more easily digestible and refined the carbohydrates, the greater the effect on our health, weight, and well-being.
3. Sugars—sucrose and high-fructose corn syrup specifically—are particularly harmful, probably because the combination of fructose and glucose simultaneously elevates insulin levels while overloading the liver with carbohydrates.
4. Through their direct effect on insulin and blood sugar, refined carbohydrates, starches, and sugars are the dietary cause of coronary heart disease and diabetes. They are the most likely dietary causes of cancer, Alzheimer’s disease, and the other chronic diseases of civilization.
5. Obesity is a disorder of excess fat accumulation, not overeating, and not sedentary behavior.
6. Consuming excess calories does not cause us to grow fatter, any more than it causes a child to grow taller. Expending more energy than we consume does not lead to long-term weight loss; it leads to hunger.
7. Fattening and obesity are caused by an imbalance—a disequilibrium—in the hormonal regulation of adipose tissue and fat metabolism. Fat synthesis and storage exceed the mobilization of fat from the adipose tissue and its subsequent oxidation. We become leaner when the hormonal regulation of the fat tissue reverses this balance.
8. Insulin is the primary regulator of fat storage. When insulin levels are elevated—either chronically or after a meal—we accumulate fat in our fat tissue. When insulin levels fall, we release fat from our fat tissue and use it for fuel.
9. By stimulating insulin secretion, carbohydrates make us fat and ultimately cause obesity. The fewer carbohydrates we consume, the leaner we will be.
10. By driving fat accumulation, carbohydrates also increase hunger and decrease the amount of energy we expend in metabolism and physical activity.
The most dramatic alterations in human diets in the past two million years, unequivocally, are (1) the transition from carbohydrate-poor to carbohydrate-rich diets that came with the invention of agriculture—the addition of grains and easily digestible starches to the diets of hunter-gatherers; (2) the increasing refinement of those carbohydrates over the past few hundred years; and (3) the dramatic increases in fructose consumption that came as the per-capita consumption of sugars—sucrose and now high-fructose corn syrup—increased from less than ten or twenty pounds a year in the mid-eighteenth century to the nearly 150 pounds it is today.
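To put those per-capita figures in day-to-day terms, a rough conversion is useful; the poundages below are the approximate numbers just cited, and only the unit conversions are exact.

```python
# Rough conversion of the per-capita figures cited above into grams per day.
# The poundages are the approximate numbers from the text; only the unit
# conversions are exact.

GRAMS_PER_POUND = 453.6
DAYS_PER_YEAR = 365

def grams_per_day(pounds_per_year):
    return pounds_per_year * GRAMS_PER_POUND / DAYS_PER_YEAR

print(f"mid-eighteenth century (~15 lb/year): {grams_per_day(15):.0f} g of sugar a day")
print(f"today (~150 lb/year):                 {grams_per_day(150):.0f} g of sugar a day")
```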
It is not the case, despite public-health recommendations to the contrary, that carbohydrates are required in a healthy human diet. Most nutritionists still insist that a diet requires 120 to 130 grams of carbohydrates a day, because this is the amount of glucose that the brain and central nervous system will metabolize when the diet is carbohydrate-rich. But what the brain uses and what it requires are two different things. Without carbohydrates in the diet, as we discussed earlier (see Chapter 19), the brain and central nervous system will run on ketone bodies, converted from dietary fat and from the fatty acids released by the adipose tissue; on glycerol, also released from the fat tissue with the breakdown of triglycerides into free fatty acids; and on glucose, converted from the protein in the diet. Since a carbohydrate-restricted diet, unrestricted in calories, will, by definition, include considerable fat and protein, there will be no shortage of fuel for the brain. Indeed, this is likely to be the fuel mixture that our brains evolved to use, and our brains seem to run more efficiently on this fuel mixture than they do on glucose alone. (A good discussion of the rationale for a minimal amount of carbohydrates in the diet can be found in the 2002 Institute of Medicine [IOM] report, Dietary Reference Intakes. The IOM sets an “estimated average requirement” of a hundred grams of carbohydrates a day for adults, so that the brain can run exclusively on glucose, “without having to rely on a partial replacement of glucose by [ketone bodies].” It then sets the “recommended dietary allowance” at 130 grams to allow margin for error. But the IOM report also acknowledges that the brain will be fine without these carbohydrates, because it runs perfectly well on ketone bodies, glycerol, and the protein-derived glucose.)
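As a sanity check on the claim that the brain can do without dietary carbohydrate, the sketch below is my own illustration; the protein intake and the gluconeogenesis yield are assumed round numbers, not figures from the book.

```python
# Back-of-the-envelope sketch (illustrative assumptions, not figures from the book):
# how much glucose could gluconeogenesis from dietary protein supply on a
# carbohydrate-restricted but calorie-unrestricted diet?
GLUCOSE_PER_G_PROTEIN = 0.55   # assumed rough yield, commonly quoted around 0.5-0.6
PROTEIN_G_PER_DAY = 150        # hypothetical protein intake on such a diet
BRAIN_GLUCOSE_USE = 125        # the ~120-130 g/day cited above for a carbohydrate-rich diet

glucose_from_protein = PROTEIN_G_PER_DAY * GLUCOSE_PER_G_PROTEIN
print(f"~{glucose_from_protein:.0f} g glucose/day from protein alone, "
      f"versus ~{BRAIN_GLUCOSE_USE} g the brain uses on a high-carbohydrate diet")
# Any remaining gap is covered by ketone bodies and glycerol, which, per the passage
# above, replace much of the brain's glucose use when carbohydrates are restricted.
```

Under these assumptions, protein alone supplies on the order of 80 grams of glucose a day, which is the point of the passage: the 130-gram figure describes what the brain metabolizes on a carbohydrate-rich diet, not what the diet must provide.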
Whether a carbohydrate-restricted diet is deficient in essential vitamins and minerals is another issue. As we also discussed (see Chapter 19), animal products contain all the amino acids, minerals, and vitamins essential for health, with the only point of controversy being vitamin C. And the evidence suggests that the vitamin C content of meat products is more than sufficient for health, as long as the diet is indeed carbohydrate-restricted, with none of the refined and easily digestible…
A calorie-restricted diet that cuts all calories by a third, as John Yudkin noted, will also cut essential nutrients by a third. A diet that prohibits sugar, flour, potatoes, and beer, but allows eating meat, cheese, eggs, and green vegetables to satiety, will still include the…
A direct test of the carbohydrate hypothesis asks the opposite question: not whether the absence of refined and easily digestible carbohydrates and sugars causes weight loss and is safe, but whether the presence of these carbohydrates causes weight gain and chronic disease. Such a trial would ideally be done with lean, healthy individuals, or with a spectrum of subjects from lean through obese, including those with metabolic syndrome and Type 2 diabetes. They would be randomized into two groups, one of which would consume the sugary and starchy elements of food and one of which would not, and then we would see what happens. We might randomly assign a few thousand individuals to eat the typical American diet of today—including its 140 to 150 pounds of sugar and high-fructose corn syrup a year, nearly 200 pounds of flour and grain, 130-plus pounds of potatoes, and 27 pounds of corn—and we could assign an equal number to eat a diet of mostly animal products (meat, fish, fowl, eggs, cheese) and leafy green vegetables. Since the latter diet would be relatively high in fat and saturated fat and calorically dense, the conventional wisdom is that it would cause heart disease and, perhaps, obesity and diabetes. So this would test the dietary-fat/cholesterol hypothesis of heart disease, as well as the carbohydrate hypothesis.
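The random assignment such a trial hinges on is simple to describe; the sketch below is my own minimal illustration (the subject IDs and arm names are hypothetical), not a protocol from the book.

```python
# Minimal sketch of the randomization step for the trial described above.
# A real trial would also stratify by factors such as BMI and metabolic status.
import random

def randomize(subject_ids, seed=42):
    """Shuffle the subjects and assign half to each diet arm."""
    rng = random.Random(seed)
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "typical_american_diet": shuffled[:half],
        "animal_products_and_leafy_greens": shuffled[half:],
    }

arms = randomize(f"subject_{i:04d}" for i in range(2000))
print({arm: len(ids) for arm, ids in arms.items()})
```

The point of the sketch is only that the design is straightforward: two arms, one eating the sugary and starchy elements of the current diet and one not, followed long enough to compare weight, heart disease, and diabetes outcomes.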