Infinite Powers: How Calculus Reveals the Secrets of the Universe
Steven Strogatz

Ended: Dec. 8, 2019

Feynman asked Wouk if he knew calculus. No, Wouk admitted, he didn’t. “You had better learn it,” said Feynman. “It’s the language God talks.”
It’s a mysterious and marvelous fact that our universe obeys laws of nature that always turn out to be expressible in the language of calculus as sentences called differential equations. Such equations describe the difference between something right now and the same thing an instant later or between something right here and the same thing infinitesimally close by.
But why should the universe respect the workings of any kind of logic, let alone the kind of logic that we puny humans can muster? This is what Einstein marveled at when he wrote, “The eternal mystery of the world is its comprehensibility.” And it’s what Eugene Wigner meant in his essay “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” when he wrote, “The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.”
The Infinity Principle: To shed light on any continuous shape, object, motion, process, or phenomenon—no matter how wild and complicated it may appear—reimagine it as an infinite series of simpler parts, analyze those, and then add the results back together to make sense of the original whole.
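A quick sketch of the principle in action (my own illustration, not from the book): estimate the area of a unit circle by slicing it into many thin vertical strips, treating each strip as a simple rectangle, and adding the pieces back up. More slices means a better estimate of pi.

```python
import math

def circle_area(n_slices):
    """Midpoint-rectangle estimate of the area of the unit circle."""
    width = 2.0 / n_slices  # slice the diameter [-1, 1] into n strips
    total = 0.0
    for i in range(n_slices):
        x = -1.0 + (i + 0.5) * width         # midpoint of this strip
        height = 2.0 * math.sqrt(1.0 - x*x)  # full chord of the circle there
        total += height * width              # area of one thin rectangle
    return total

for n in (10, 100, 10_000):
    print(n, circle_area(n))  # approaches math.pi as n grows
```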
Back in the fourth century BCE, the Greek philosopher Aristotle warned that sinning with infinity in this way could lead to all sorts of logical trouble. He railed against what he called completed infinity and argued that only potential infinity made sense. In the context of chopping a line into pieces, potential infinity would mean that the line could be cut into more and more pieces, as many as desired but still always a finite number and all of nonzero length. That’s perfectly permissible and leads to no logical difficulties. What’s verboten is to imagine going all the way to a completed infinity of pieces of zero length. That, Aristotle felt, would lead to nonsense—as it does here, in revealing that zero times infinity can give any answer. And so he forbade the use of completed infinity in mathematics and philosophy. His edict was upheld by mathematicians for the next twenty-two hundred years.
In 1899, the father of quantum theory, a German physicist named Max Planck, realized that there was one and only one way to combine these fundamental constants to produce a scale of length. That unique length, he concluded, was a natural yardstick for the universe. In his honor, it is now called the Planck length. It is given by the algebraic combination Planck length = √(ħG/c^3). When we plug in the measured values of G, ħ, and c, the Planck length comes out to be about 10^-35 meters, a stupendously small distance that’s about a hundred million trillion times smaller than the diameter of a proton. The corresponding Planck time is the time it would take light to traverse this distance, which is about 10^-43 seconds. Space and time would no longer make sense below these scales. They’re the end of the line.
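Checking the arithmetic myself (my own sketch, using the standard CODATA values of the constants):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)
planck_time = planck_length / c  # time for light to cross that length

print(planck_length)  # ~1.6e-35 meters
print(planck_time)    # ~5.4e-44 seconds, i.e. on the order of 10^-43
```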
Yet the geometry of curved shapes takes us only so far. We also need to know how things move in this world—how human tissue shifts after surgery, how blood flows through an artery, how a ball flies through the air. On this, Archimedes was silent. He gave us the science of statics, of bodies balancing on levers and floating stably in water. He was a master of equilibrium. The territory ahead concerned the mysteries of motion.
When Archimedes died, the mathematical study of nature nearly died along with him. Eighteen hundred years passed before a new Archimedes appeared. In Renaissance Italy, a young mathematician named Galileo Galilei picked up where Archimedes had left off.
The word planet means “wanderer.” In antiquity the planets were known as the wandering stars; instead of maintaining their places in the sky, like the fixed stars in Orion’s Belt and the ladle of the Big Dipper, which never moved relative to one another, the planets appeared to drift across the heavens. They progressed from one constellation to another as the weeks and months went by. Most of the time they moved eastward relative to the stars, but occasionally they appeared to slow down, stop, and go backward, westward, in what astronomers called retrograde motion.
In 1962 Brian Josephson, then a twenty-two-year-old graduate student at the University of Cambridge, predicted that at temperatures close to absolute zero, pairs of superconducting electrons could tunnel back and forth through an impenetrable insulating barrier, a nonsensical statement according to classical physics. Yet calculus and quantum mechanics summoned these pendulum-like oscillations into existence—or, to put it less mystically, they revealed the possibility of their occurrence. Two years after Josephson predicted these ghostly oscillations, the conditions needed to conjure them were set up in the laboratory and, indeed, there they were. The resulting device is now called a Josephson junction. Its practical uses are legion. It can detect ultra-faint magnetic fields a hundred billion times weaker than that of the Earth, which helps geophysicists hunt for oil deep underground. Neurosurgeons use arrays of hundreds of Josephson junctions to pinpoint the sites of brain tumors and locate the seizure-causing lesions in patients with epilepsy. The procedures are entirely noninvasive, unlike exploratory surgery. They work by mapping the subtle variations in magnetic field produced by abnormal electrical pathways in the brain. Josephson junctions could also provide the basis for extremely fast chips in the next generation of computers and might even play a role in quantum computation, which will revolutionize computer science if it ever comes to pass.
From a modern perspective, there are two sides to calculus. Differential calculus cuts complicated problems into infinitely many simpler pieces. Integral calculus puts the pieces back together again to solve the original problem.
Although calculus culminated in Europe, its roots lie elsewhere. In particular, algebra came from Asia and the Middle East. Its name derives from the Arabic word al-jabr, meaning “restoration” or “the reunion of broken parts.” These are the kinds of operations needed to balance equations and solve them, such as canceling a number being subtracted from one side of an equation by adding it to both sides, in effect restoring what was broken.
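A tiny sketch of al-jabr as “restoration” (my own illustration, with sympy standing in for the hand calculation): the broken side x − 7 is restored by adding 7 to both sides of x − 7 = 12.

```python
import sympy

x = sympy.symbols("x")
# Restore what was subtracted: adding 7 to both sides gives x = 19.
print(sympy.solve(sympy.Eq(x - 7, 12), x))  # [19]
```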
Remarkably, all other equations involving quadratic terms in x and y but no higher powers give curves of just four possible types: parabolas, ellipses, hyperbolas, or circles. And that’s it. (Except for some degenerate cases that yield lines, points, or no graph at all, but these are rare oddities that we can safely ignore.)
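A minimal sketch (my own illustration, not from the book) of how the four types can be told apart: for a curve Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0, the sign of the discriminant B^2 − 4AC settles the type, ignoring the degenerate cases just as the text does.

```python
def classify_conic(A, B, C):
    """Generic type of Ax^2 + Bxy + Cy^2 + ... = 0, ignoring degenerate cases."""
    disc = B * B - 4 * A * C
    if disc < 0:
        return "circle" if A == C and B == 0 else "ellipse"
    if disc == 0:
        return "parabola"
    return "hyperbola"

print(classify_conic(1, 0, 1))   # x^2 + y^2 = 1            -> circle
print(classify_conic(1, 0, 0))   # y = x^2 (as x^2 - y = 0) -> parabola
print(classify_conic(1, 0, -1))  # x^2 - y^2 = 1            -> hyperbola
```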
In launching his attack, Newton was adhering to a traditional distinction between analysis and synthesis. In analysis, one solves a problem by starting at the end, as if the answer had already been obtained, and then works back wishfully toward the beginning, hoping to find a path to the given assumptions. It’s what kids in school think of as working backward from the answer to figure out how to get there. Synthesis goes in the other direction. It starts with the givens, and then, by stabbing in the dark, trying things, you are somehow supposed to move forward to a solution, step by logical step, and eventually arrive at the desired result. Synthesis tends to be much harder than analysis because you don’t ever know how you’re going to get to the solution until you do. The ancient Greeks regarded synthesis as carrying more logical force, more persuasive power, than analysis. Synthesis was considered the only valid way to prove a result; analysis was a practical way to find the result. If you wanted a rigorous demonstration, you had to do synthesis. That’s why, for example, Archimedes used his analytical method of balancing shapes on seesaws to find his theorems but then switched to the synthetic method of exhaustion to prove them.
But he did make one contribution to applied mathematics of lasting importance: he was the first person to deduce a law of nature from a deeper law by using calculus as a logical engine. Just as Maxwell would do with electricity and magnetism two centuries later, Fermat translated a hypothetical law of nature into the language of calculus, started the engine, and fed the law in, and out popped another law, a consequence of the first one. In so doing, Fermat, the accidental scientist, initiated a style of reasoning that has dominated theoretical science ever since.
Fermat had applied his embryonic version of differential calculus to physics. No one had ever done that before. And in so doing, he showed that light travels in the most efficient way—not the most direct way, but the fastest. Of all the possible paths light can take, it somehow knows, or behaves as if it knows, how to get from here to there as quickly as possible. This was an important early clue that calculus was somehow built into the operating system of the universe. The principle of least time was later generalized to the principle of least action, where action has a technical meaning that we needn’t go into here. This optimization principle—that nature behaves in the most economical way, in a certain precise sense—was found to correctly predict the laws of mechanics. In the twentieth century, the principle of least action was extended to general relativity and quantum mechanics and other parts of modern physics.
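A sketch of the principle of least time at work (my own illustration; the geometry and speeds are assumed values): light travels from (0, 1) in a fast medium to (1, −1) in a slow one, crossing the boundary y = 0 at some point (x, 0). Minimizing the total travel time over x reproduces Snell’s law, sin(θ₁)/v₁ = sin(θ₂)/v₂.

```python
import math

v1, v2 = 1.0, 0.7  # light speeds above and below the boundary (assumed)

def travel_time(x):
    # time = distance / speed, summed over the two legs of the path
    return math.hypot(x, 1.0) / v1 + math.hypot(1.0 - x, 1.0) / v2

# crude minimization: scan candidate crossing points along the boundary
best_x = min((i / 100000 for i in range(100001)), key=travel_time)

sin1 = best_x / math.hypot(best_x, 1.0)            # sine of incidence angle
sin2 = (1 - best_x) / math.hypot(1 - best_x, 1.0)  # sine of refraction angle
print(sin1 / v1, sin2 / v2)  # nearly equal, as Snell's law demands
```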
The log of a product is the sum of the logs.
In other words, when we multiply two numbers together and then take their log, the result is the sum (not the product!) of their individual logs. In that sense, logarithms replace multiplication problems with addition problems, which are much easier. This is why logarithms were invented. They sped up calculations tremendously. Instead of having to deal with Herculean multiplication problems, square roots, cube roots, and the like, such calculations could be turned into addition problems and then solved with the help of a lookup table known as a table of logarithms. The idea of logarithms was in the air in the early seventeenth century, but much of the credit for popularizing them goes to the Scottish mathematician John Napier, who published his Description of the Wonderful Rule of Logarithms in 1614. A decade later, Johannes Kepler enthusiastically used the new calculational tool when he was compiling astronomical tables about the positions of the planets and other heavenly bodies. Logarithms were the supercomputers of their era.
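Seeing the rule for myself (my own sketch):

```python
import math

a, b = 3.7, 42.9
print(math.log(a * b))            # log of the product
print(math.log(a) + math.log(b))  # sum of the logs: the same number
```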
To reiterate the main point, the thing that makes e special is that the rate of change of e^x is e^x. Hence, as the graph of this exponential function soars higher and higher, its slope always tilts to match its current height. The higher it gets, the steeper it climbs. In the jargon of calculus, e^x is its own derivative. No other function can say that (apart from constant multiples of e^x, which inherit the property). It’s the fairest of them all—at least as far as calculus is concerned.
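A numerical check of this (my own sketch): the finite-difference slope of e^x matches its height at every point tried.

```python
import math

h = 1e-8  # a tiny step for the finite-difference slope
for x in (0.0, 1.0, 2.5):
    slope = (math.exp(x + h) - math.exp(x)) / h
    print(x, slope, math.exp(x))  # slope agrees with e^x to many digits
```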
There are three central problems in calculus:
- The forward problem: Given a curve, find its slope everywhere.
- The backward problem: Given a curve’s slope everywhere, find the curve.
- The area problem: Given a curve, find the area under it.
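All three, worked numerically for f(x) = x^2 on [0, 1] (my own sketch, not from the book):

```python
f = lambda x: x * x
slope = lambda x: 2 * x  # the known slope of f, used for the backward problem

n = 1000
h = 1.0 / n

# forward: finite-difference slope at x = 0.5 (exact answer: 1.0)
print((f(0.5 + h) - f(0.5)) / h)

# backward: accumulate the slope to rebuild f(1) from f(0) = 0 (exact: 1.0)
print(sum(slope(i * h) * h for i in range(n)))

# area: add up thin rectangles under the curve (exact: 1/3)
print(sum(f((i + 0.5) * h) * h for i in range(n)))
```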
I’m going into all this because these real-world waves offer a glimpse, as through a glass darkly, of a remarkable property of sine waves, namely, when a variable follows a perfect sine-wave pattern, its rate of change is also a perfect sine wave timed a quarter of a cycle ahead. This self-regeneration property is unique to sine waves. No other kinds of waves have it. It could even be taken as a definition of sine waves.
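Checking the self-regeneration property numerically (my own sketch): the rate of change of sin(t) is another perfect sine wave running a quarter cycle ahead, namely sin(t + π/2), better known as cos(t).

```python
import math

h = 1e-8
for t in (0.0, 1.0, 2.0):
    rate = (math.sin(t + h) - math.sin(t)) / h
    print(rate, math.sin(t + math.pi / 2 if t == 0 else t + math.pi / 2))
    # the derivative equals the same wave shifted a quarter cycle ahead
```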
When Leibniz introduced the word calculus in this context in 1673, he spoke of “a calculus” and sometimes, more affectionately, “my calculus.” He was using the word in its generic sense, a system of rules and algorithms for performing computations. Later, after his system was brought to a high polish, its accompanying article was upgraded to the definite, and the field became known as the calculus. But now, sad to say, its articles and possessives have all gone away. What remains is calculus, humdrum and gray.
In the early days of life, organisms were relatively simple. They were single-celled creatures, something like the bacteria of today. That era of unicellular life continued for about three and a half billion years, dominating most of the Earth’s history. But around half a billion years ago, an astonishing diversity of multicellular life burst forth in what biologists call the Cambrian explosion. In just a few tens of millions of years—an evolutionary split second—many of the major animal phyla suddenly emerged.
More generally, an ordinary differential equation describes how something (the position of a planet, the concentration of a virus) changes infinitesimally as the result of an infinitesimal change in something else (such as an infinitesimal increment of time). What makes such an equation “ordinary” is that there is exactly one something else, one independent variable. Curiously, it doesn’t matter how many dependent variables there are. As long as there is only one independent variable, the differential equation is considered ordinary. For example, it takes three numbers to pinpoint the position of a spacecraft moving in three-dimensional space. Call those numbers x, y, and z. They indicate where the spacecraft is at a given time by locating it left or right, up or down, front or back, and thus telling us how far away it is from some arbitrary reference point called the origin. As the spacecraft moves, its x, y, and z coordinates change from moment to moment. Thus, they’re functions of time. To emphasize their time dependence, we could write them as x(t), y(t), and z(t).
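A sketch of such a system (my own illustration, not a real spacecraft model): one independent variable t, three dependent variables x(t), y(t), z(t). The system dx/dt = −y, dy/dt = x, dz/dt = 0.1 traces a helix, integrated here with crude Euler steps.

```python
def step(x, y, z, dt):
    """One Euler step of dx/dt = -y, dy/dt = x, dz/dt = 0.1."""
    return x - y * dt, y + x * dt, z + 0.1 * dt

x, y, z = 1.0, 0.0, 0.0
dt = 0.001
for _ in range(6283):  # roughly one full turn, since 2*pi/dt ~ 6283 steps
    x, y, z = step(x, y, z, dt)

print(x, y, z)  # back near (1, 0) in the plane, having climbed in z
```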
So why are sine waves so well suited to the solution of the wave equation and the heat equation and other partial differential equations? Their virtue is that they play very nicely with derivatives. Specifically, the derivative of a sine wave is another sine wave, shifted by a quarter cycle. That’s a remarkable property. It’s not true of other kinds of waves. Typically, when we take the derivative of a curve of any kind, that curve will become distorted by being differentiated. It won’t have the same shape before and after. Being differentiated is a traumatic experience for most curves. But not for a sine wave. After its derivative is taken, it dusts itself off and appears unfazed, as sinusoidal as ever. The only injury it suffers—and it isn’t even an injury, really—is that the sine wave shifts in time. It peaks a quarter of a cycle earlier than it used to.
To a physicist, what’s remarkable about sine waves (in the context of the vibration and heat flow problems) is that they form standing waves. They don’t travel along the string or the rod. They remain in place. They oscillate up and down but never propagate. Even more remarkably, standing waves vibrate at a unique frequency. That’s a rarity in the world of waves. Most waves are a combination of many frequencies, just as white light is a combination of all the colors of the rainbow. In that respect, a standing wave is pure, not a mixture.
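A sketch of a standing wave (my own illustration): a fixed spatial shape, sin(2πx), whose amplitude merely oscillates in time as cos(t). Its nodes stay pinned at every instant, and the whole wave beats at a single frequency.

```python
import math

def standing_wave(x, t):
    # space and time factor apart: the shape never travels, only pulses
    return math.sin(2 * math.pi * x) * math.cos(t)

for t in (0.0, 0.7, 1.4):
    print([round(standing_wave(x, t), 3) for x in (0.0, 0.25, 0.5, 0.75, 1.0)])
# the entries at x = 0, 0.5, and 1 are zero at every time: the nodes never move
```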
The C stands for computerized and the T stands for tomography, meaning the process of visualizing something by cutting it into slices. A CT scan uses x-rays to image an organ or a tissue one slice at a time. When a patient is placed in a CT scanner, x-rays are sent through the person’s body at many different angles and recorded by a detector on the other side.
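A toy version of the idea (my own sketch): each x-ray measurement is, in effect, a sum of tissue densities along one line through the body. Here a tiny 3×3 “body” is probed from two angles; a real CT scanner inverts thousands of such line sums to rebuild the slice.

```python
body = [
    [0, 1, 0],
    [1, 5, 1],  # a dense spot in the middle
    [0, 1, 0],
]

row_sums = [sum(row) for row in body]                       # beams at 0 degrees
col_sums = [sum(row[j] for row in body) for j in range(3)]  # beams at 90 degrees
print(row_sums)  # [1, 7, 1]
print(col_sums)  # [1, 7, 1]  -- the dense spot shows up from both angles
```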
Calculus, to me, is defined by its credo: to solve a hard problem about anything continuous, slice it into infinitely many parts and solve them. By putting the answers back together, you can make sense of the original whole. I’ve called this credo the Infinity Principle.
Evolution solved the packaging problem with spools, the same solution we use when we need to store a long piece of thread. The DNA in cells is wound around molecular spools made of specialized proteins called histones. To achieve further compaction, the spools are linked end to end, like beads on a necklace, and then the necklace is coiled into ropelike fibers that are themselves coiled into chromosomes. These coils of coils of coils compact the DNA enough to fit it into the cramped quarters of the nucleus.
But spools were not nature’s original solution to the packaging problem. The earliest creatures on Earth were single-celled organisms that lacked nuclei and chromosomes. They had no spools, just as today’s bacteria and viruses don’t. In such cases, the genetic material is compacted by a mechanism based on geometry and elasticity. Imagine pulling a rubber band tight and then twisting it from one end while holding it between your fingers. At first, each successive turn of the rubber band introduces a twist. The twists accumulate, and the rubber band remains straight until the accumulated torsion crosses a threshold. Then the rubber band suddenly buckles into the third dimension. It begins to coil on itself, as if writhing in pain. These contortions cause the rubber band to bunch up and compact itself. DNA does the same thing. This phenomenon is known as supercoiling. It is prevalent in circular loops of DNA.
The study of the geometry and topology of DNA has been a thriving industry ever since. Mathematicians have used knot theory and tangle calculus to elucidate the mechanisms of certain enzymes that can twist DNA or cut it or introduce knots and links into it. These enzymes alter the topology of DNA and hence are known as topoisomerases. They can break strands of DNA and reseal them, and they are essential for cells to divide and grow. They have proved to be effective targets for cancer-chemotherapy drugs. The mechanism of action is not completely clear, but it is thought that by blocking the action of topoisomerases, the drugs (known as topoisomerase inhibitors) can selectively damage the DNA of cancer cells, which causes them to commit cellular suicide. Good news for the patient, bad news for the tumor.
Chaotic systems are finicky. A little change in how they’re started can make a big difference in where they end up. That’s because small changes in their initial conditions get magnified exponentially fast. Any tiny error or disturbance snowballs so rapidly that in the long term, the system becomes unpredictable. Chaotic systems are not random—they’re deterministic and hence predictable in the short run—but in the long run, they’re so sensitive to tiny disturbances that they look effectively random in many respects. Chaotic systems can be predicted perfectly well up to a time known as the predictability horizon. Before that, the determinism of the system makes it predictable. For example, the horizon of predictability for the entire solar system has been calculated to be about four million years. For times much shorter than that, like the single year it takes our Earth to go around the sun, everything behaves like clockwork. But once we move past a few million years, all bets are off. The subtle gravitational perturbations among all the bodies in the solar system accumulate until we can no longer forecast the system accurately.
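Watching the snowballing happen (my own sketch, using the discrete-time logistic map as a standard stand-in for a chaotic system): two runs started a hair apart agree for a while, then diverge completely once past their predictability horizon.

```python
x, y = 0.2, 0.2 + 1e-10  # two initial conditions differing by one part in 10^10
for step in range(60):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)  # the chaotic logistic map
    if step % 10 == 9:
        print(step + 1, abs(x - y))
# the gap roughly doubles each step until it saturates at order one,
# after which the two trajectories have nothing to do with each other
```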
There’s something astonishing about this, philosophically speaking. The differential equations and integrals of quantum electrodynamics are creations of the human mind. They are based on experiments and observations, certainly, so they have reality built into them to that extent. Yet they are products of the imagination nonetheless. They are not slavish imitations of reality. They are inventions. And what is so astonishing is that by making certain scribbles on paper and doing certain calculations with methods analogous to those developed by Newton and Leibniz but souped up for the twenty-first century, we can predict nature’s innermost properties and get them right to eight decimal places. Nothing that humanity has ever predicted is as accurate as the predictions of quantum electrodynamics. I think this is worth mentioning because it puts the lie to the line you sometimes hear, that science is like faith and other belief systems, that it has no special claim on truth. Come on. Any theory that agrees to one part in a hundred million is not just a matter of faith or somebody’s opinion. It didn’t have to match to eight decimal places. Plenty of theories in physics have turned out to be wrong. Not this one. Not yet, at least. No doubt it’s a little bit off, as every theory always is, but it sure comes close to the truth.
In the years since then, positrons have been put to work saving lives. They underlie PET scans (PET stands for positron emission tomography), a form of medical imaging that allows doctors to see regions of abnormal metabolic activity in soft tissues in the brain or other organs. In a noninvasive fashion that requires no surgery or other dangerous intrusions into the skull, PET scans can help locate brain tumors and detect the amyloid plaques associated with Alzheimer’s disease. So here is another sterling example of calculus as the handmaiden to something marvelously practical and important. Because calculus is the language of the universe as well as the logical engine for extracting its secrets, Dirac was able to write down a differential equation for the electron that told him something new and true and beautiful about nature. It led him to conjure up a new particle and realize that it ought to exist. Logic and beauty demanded it. But not on their own—they had to align with known facts and mesh with known theories. When all of that was stirred into the pot, it was almost as if the symbols themselves brought the positron into existence.