Global Warming and Diversity Heating
From Temperature to Shannon’s Index
Most people think life on Earth runs on the Sun’s heat. In terms of raw energy, however, the Earth is already close to balance with the Sun: it radiates back into space almost as much energy as it receives. What drives life is the flow of low-entropy energy, the difference in quality between the concentrated sunlight arriving and the diffuse radiation leaving. That difference builds forests and oceans, molecules and societies. It also connects directly to how we measure biodiversity.
Galileo’s Thermoscope
In the early seventeenth century, Galileo used a glass bulb with a long thin neck, the end dipped into water. As the air inside expanded or contracted, the water column rose or fell. It was one of the first instruments to register heat, showing changes through movement in the liquid.
Instrument makers refined the idea during the following century. Daniel Fahrenheit introduced mercury thermometers in 1714, choosing his freezing and boiling points from experiments with ice, salt, and brine. Anders Celsius proposed his centigrade scale in 1742, based on the freezing and boiling of water, later reversed by his colleagues. William Thomson, later Lord Kelvin, described absolute zero in 1848, anchoring temperature in physical law rather than in the properties of any particular material.
Engines and Entropy
By the early 1800s, steam engines had become the beating heart of industry. Trains crossed landscapes with plumes of smoke, and factories filled with the hiss of pistons and boilers. Engineers knew these machines wasted much of their fuel as heat, but no one could say why.
In France, a 28-year-old engineer named Sadi Carnot set out to solve the puzzle. In 1824 he published Reflections on the Motive Power of Fire. The book sold poorly at the time, but its argument was revolutionary: the efficiency of an engine had nothing to do with its design or materials. What mattered was the temperature difference between the hot steam driving it and the cooler reservoir where the exhaust was released. Carnot had put his finger on a law that applied to every machine, real or imagined.
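In modern notation, which Carnot himself did not have, his result is usually stated as a hard ceiling on efficiency set only by the absolute temperatures of the hot and cold reservoirs:
\[\eta_{\max} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}}\]
No engine working between the same two reservoirs can do better, whatever its design or working substance.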
A generation later, Rudolf Clausius expanded on this work. He wanted to put the principle into a universal mathematical form, and in doing so, introduced the concept of entropy:
\[dS = \frac{\delta Q}{T}\]
Entropy, he explained, was a property of state, something every physical system carried within it. In 1865 he gave the idea a name, borrowing from the Greek word for “transformation.”
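A quick illustration of why this bookkeeping matters: if an amount of heat $Q$ leaks from a hot reservoir at $T_{\text{hot}}$ to a cold one at $T_{\text{cold}}$, the hot side loses entropy $Q/T_{\text{hot}}$ while the cold side gains $Q/T_{\text{cold}}$, so the total change is
\[\Delta S = \frac{Q}{T_{\text{cold}}} - \frac{Q}{T_{\text{hot}}} > 0 \quad \text{for } T_{\text{hot}} > T_{\text{cold}}.\]
Spontaneous heat flow always creates entropy, which is the second law in miniature.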
Thomson, by then Lord Kelvin, took the next step, constructing the absolute temperature scale with its zero at the unreachable limit of no thermal motion. With this, heat, entropy, and temperature were bound together into the laws of thermodynamics, rules that every engine, every planet, and every living system must obey.
Boltzmann and Gibbs
By the late nineteenth century, physics in Vienna was alive with debate about the microscopic world. Ludwig Boltzmann argued that the laws of thermodynamics could be explained by statistics, if one counted all the ways particles could be arranged. He called these microstates, and showed that entropy measured their multiplicity:
\[S = k \log W\]
The equation, later carved on his tombstone in Vienna’s Zentralfriedhof, captured the bridge between the microscopic and the macroscopic. Temperature could be written as
\[\frac{1}{T} = \frac{\partial S}{\partial E},\]
showing that temperature itself measures how fast the number of accessible microstates grows with energy.
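A toy calculation, not anything Boltzmann wrote down, makes the counting concrete. The sketch below assumes $N$ identical particles distributed between the two halves of a box, so the number of microstates for a given split is a binomial coefficient:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy_of_split(n_total: int, n_left: int) -> float:
    """Boltzmann entropy S = k ln W for the macrostate with
    n_left of n_total particles in the left half of a box."""
    W = math.comb(n_total, n_left)  # number of microstates for this split
    return k_B * math.log(W)

N = 100
for n_left in (50, 70, 90, 100):
    print(f"{n_left:3d} on the left: S = {entropy_of_split(N, n_left):.3e} J/K")

# The even 50/50 split has by far the most microstates and hence the
# highest entropy, which is why an isolated gas spreads out instead of
# piling up on one side.
```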
Yet Boltzmann’s approach faced resistance. How could one link the chaos of countless particles to the smooth laws of thermodynamics? The answer came from across the Atlantic. At Yale, Josiah Willard Gibbs introduced the concept of ensembles, probability distributions that describe all possible microstates consistent with a given macrostate.
Gibbs’s work gave Boltzmann’s ideas a general framework, showing how to treat gases, liquids, and solids under the same mathematics. Though his papers in the 1870s and 1880s attracted little attention in the United States, they were later translated in Europe and recognized as the backbone of statistical mechanics. Together, Boltzmann’s counting and Gibbs’s ensembles built the modern understanding of entropy.
Shannon’s Entropy
By the mid-twentieth century, the puzzles of entropy had leapt from physics into a very different world: communication. Bell Labs in New Jersey was then one of the most innovative research centers in the world. Its engineers were wrestling with telegraph clicks, noisy phone lines, and the growing complexity of electrical signals.
In this environment, Claude Shannon published A Mathematical Theory of Communication in 1948. His idea was that a message, no matter whether it was letters on a page or pulses on a wire, could be described statistically. What mattered was the probability of each symbol and the uncertainty of the next one. Shannon wrote the measure of that uncertainty in a form strikingly similar to Boltzmann’s:
\[H = -\sum_i p_i \log p_i\]
The parallels were unmistakable. In physics, a macrostate was the bulk properties of a gas, and the microstates were the countless arrangements of molecules. In Shannon’s formulation, the macrostate was the overall distribution of letters or signals, while the microstates were the individual sequences consistent with it. Entropy measured the richness of possibilities in both cases.
The theory provided a universal way to quantify information and showed the ultimate limits of communication. It explained why some codes could compress messages and why noise always imposed a cost. Shannon had, in effect, taken the mathematics of disorder from physics and revealed it as the mathematics of information.
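A small sketch shows the calculation in practice; the messages, and the reading of the result as a compression limit in bits per symbol, are chosen here purely for illustration:

```python
from collections import Counter
from math import log2

def shannon_entropy_bits(message: str) -> float:
    """H = -sum p_i log2 p_i, estimated from the symbol
    frequencies of the message itself (bits per symbol)."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A repetitive message carries less information per symbol than a varied
# one, so it can in principle be compressed further.
print(shannon_entropy_bits("aaaaaaab"))  # about 0.54 bits per symbol
print(shannon_entropy_bits("abcdefgh"))  # 3.0 bits per symbol
```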
Shannon himself was an unusual figure. Beyond his equations, he built mechanical mice that could learn to navigate mazes, tinkered with gadgets in his home workshop, and was often seen unicycling through the halls of Bell Labs while juggling. His eccentricity masked a unifying insight: whether in molecules, messages, or machines, entropy set the boundaries of what was possible.
Diversity as Entropy
Shannon’s formula soon found life beyond telegraphs and telephone wires. In ecology, researchers were facing a parallel problem: how to describe the complexity of living communities. Counting species, what ecologists call richness, gave only part of the picture. Two forests might each contain the same number of species, yet one could be evenly balanced while the other was dominated by a single species. The counts alone did not capture the difference in structure.
Shannon’s entropy became the natural tool. It treated individuals as symbols and species as categories, measuring how uncertain one would be in predicting the species of the next individual. The closer the community was to balance, the higher the entropy. The more it was dominated, the lower.
A simple example shows the contrast. Imagine two communities, each with four species:
- Community A: every species has 25 individuals.
- Community B: one species has 85, the other three just 5 each.
Both have richness = 4, yet their entropies are very different:
- $H_A = 1.386$
- $H_B \approx 0.588$
Community A contains many possible arrangements of individuals across species. Community B is overwhelmingly shaped by one dominant form. The mathematics that Shannon had used to measure uncertainty in communication now measured the balance of diversity in ecosystems.
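A few lines of Python, using natural logarithms to match the values above, reproduce the comparison:

```python
from math import log

def shannon_index(abundances: list[int]) -> float:
    """Shannon diversity H = -sum p_i ln p_i over species abundances."""
    total = sum(abundances)
    return -sum((n / total) * log(n / total) for n in abundances if n > 0)

community_a = [25, 25, 25, 25]  # perfectly even
community_b = [85, 5, 5, 5]     # dominated by one species

print(f"H_A = {shannon_index(community_a):.3f}")  # 1.386
print(f"H_B = {shannon_index(community_b):.3f}")  # 0.588
```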
Whittaker’s Scaling
By the 1960s, ecology was shifting from simple species lists to quantitative ways of describing whole landscapes. Robert Whittaker, an American ecologist working in Oregon and later at Cornell, introduced a framework for thinking about diversity at different spatial scales. His idea was that richness inside a single plot did not tell the whole story. To understand ecosystems, one had to measure how diversity accumulated across space.
Whittaker distinguished three levels.
- Alpha (α): diversity within a single plot or community.
- Gamma (γ): diversity across an entire region that contains many plots.
- Beta (β): the turnover that links them, defined as $\beta = \gamma / \alpha$, the factor by which regional diversity exceeds average local diversity.
This framework made it possible to separate local richness from regional richness and to quantify how much communities differ from one another.
A simple example illustrates the point. Imagine three small plots: one with species A and B, one with B and C, and one with C and D. Each plot has $\alpha = 2$. Looking at the region as a whole, there are four species in total, so $\gamma = 4$. The turnover is then $\beta = 2$.
Each plot adds something that the others lack. Diversity at the regional level is greater than diversity at the local level, just as entropy in physics grows when independent subsystems are combined. Whittaker’s scaling showed that the mathematics of difference and accumulation applied not only to gases and signals but also to forests and fields.
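The same bookkeeping can be written out directly. The sketch below assumes Whittaker’s multiplicative form $\beta = \gamma / \alpha$ and represents each hypothetical plot as a set of species labels:

```python
def alpha_beta_gamma(plots: list[set[str]]) -> tuple[float, int, float]:
    """Mean within-plot richness (alpha), regional richness (gamma),
    and Whittaker's multiplicative turnover beta = gamma / alpha."""
    alpha = sum(len(p) for p in plots) / len(plots)
    gamma = len(set.union(*plots))
    return alpha, gamma, gamma / alpha

plots = [{"A", "B"}, {"B", "C"}, {"C", "D"}]
alpha, gamma, beta = alpha_beta_gamma(plots)
print(f"alpha = {alpha}, gamma = {gamma}, beta = {beta}")  # 2.0, 4, 2.0
```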
Global Warming and Diversity Heating
Life on Earth depends on a constant imbalance. High-energy sunlight streams in, and lower-energy infrared radiation escapes back into space. The two flows carry nearly the same amount of energy, but not the same quality: the incoming light is concentrated, the outgoing radiation diffuse. That difference is what keeps the planet alive. It is a flow of low-entropy energy that allows molecules to assemble, cells to function, forests to grow, and societies to persist.
This imbalance drives order into the biosphere. Plants capture photons, store them as sugars, and pass them up the food chain. Ecosystems build on the availability of these concentrated flows, using them to maintain structure against the pull of disorder.
Global warming alters this balance. As greenhouse gases trap infrared radiation, the Earth’s energy budget shifts. The flow of usable energy narrows, reducing the contrast that sustains complexity. At the same time, biological diversity is eroding. Species vanish, communities become more uniform, and the number of possible ecological configurations shrinks.
When diversity collapses, entropy rises. Communities flatten. Complexity gives way to sameness. Global warming is therefore not only about higher temperatures. It is also about the steady loss of differences — the very differences that make life possible.