Afterword
In chapter 1, I asked you to type 3 + 5 + 2 + 8 into a calculator and think about what happens. I told two stories: one where a running total changes step by step, and one where the expression already has a value and evaluation just reveals it. Both stories compute the same answer, and the two models behind them (Turing’s machine, Church’s lambda calculus) turned out to be exactly equivalent. I want to show you that this keeps happening, at every level, all the way down.
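The two stories can be written down in a few lines of R. This is a sketch of the contrast, not anything new: the first version mutates a running total, the second hands the whole expression to Reduce() and lets evaluation reveal the value.

```r
# Story 1: a machine with a running total that changes step by step.
total <- 0
for (x in c(3, 5, 2, 8)) {
  total <- total + x
}

# Story 2: the expression already has a value; evaluation rewrites
# it one reduction at a time: ((3 + 5) + 2) + 8.
value <- Reduce(`+`, c(3, 5, 2, 8))
```

Both routes end at 18, which is the whole point: two different stories, one answer.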
Curry and Howard, working independently and decades apart, noticed that the rules of typed lambda calculus are identical to the rules of natural deduction in logic. A function of type A -> B is a proof that A implies B; a pair of type (A, B) is a proof that both A and B are true; a program that type-checks is a proof of the theorem its type states. The thing you write and the thing it means turn out to be the same formal object, with a mechanical translation between them: a type checker is a proof checker, and a compiler is a proof transformer.
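R will not check these types for us, but the shape of the correspondence still shows through. A sketch, reading each definition twice, once as code and once as logic:

```r
# If g proves A -> B and f proves B -> C, their composite proves
# A -> C: the type of compose() is the transitivity of implication.
compose <- function(f, g) function(x) f(g(x))

# A pair inhabiting (A, B) is a proof of "A and B"; projecting a
# component is the elimination rule for conjunction.
both <- list(a = 3, b = "yes")  # a proof of A and B
just_a <- both$a                # from (A and B), conclude A
```

For example, compose(sqrt, abs) inhabits the implication "negative-or-not implies non-negative implies square root exists", which is why compose(sqrt, abs)(-9) is a perfectly honest 3.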
Programs are instructions for machines, and proofs are arguments about truth. Why would the formal systems describing these two activities be the same system in different notation? Curry published on combinatory logic in the 1930s without knowing that Howard would arrive at the same structure from natural deduction three decades later. Nobody planned the correspondence; it was already there in the mathematics.
If types are propositions and programs are proofs, what is equality? We normally treat it as a yes-or-no judgement, but Voevodsky proposed that equality is a structure: two values are equal if there is a path connecting them, and that path is itself a type, which can have its own paths (equalities between equalities). Types become spaces, equalities become continuous deformations, and the entire framework of homotopy theory (the branch of topology that studies shapes up to continuous deformation) maps onto type theory. A path in a topological space is a geometric object; in homotopy type theory, that same path is a term you can write down, type-check, and compute with. Syntax and semantics collapse into each other again, one level below Curry-Howard.
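In a proof assistant the claim is literal: an equality proof is a term, and the basic moves of homotopy theory, reversing a path and gluing two paths end to end, are ordinary programs. A sketch in Lean 4 (standalone, standard rewriting only):

```lean
-- Reversing a path: from a proof that a = b, compute a proof that b = a.
def rev {A : Type} {a b : A} (p : a = b) : b = a :=
  p ▸ rfl

-- Composing paths: glue a = b and b = c into a = c,
-- exactly like concatenating paths in a topological space.
def comp {A : Type} {a b c : A} (p : a = b) (q : b = c) : a = c :=
  q ▸ p
```

These are not metaphors for Eq.symm and Eq.trans; they are the same functions, written out.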
Each time this happens, something that looked like a metaphor (“programs are like proofs,” “types are like spaces”) turns out to be an identity, and the only thing separating the two sides was notation. Homotopy type theory is young and where it leads is anyone’s guess, but there is one more thing worth seeing.
In 1945, Eilenberg and Mac Lane introduced category theory as bookkeeping for mathematics. They wanted to track the relationships between algebra, topology, and logic, and they found that the objects mattered less than the maps between them. A functor is a map that preserves structure, a monoid is a set with an associative operation and an identity element, and a natural transformation is a map from one functor to another that respects how both act on arrows. Mac Lane called it “abstract nonsense,” and he meant it as a compliment.
You have been using category theory since chapter 5 without the vocabulary. map(x, sqrt) returning a list of the same length is a functor; composing functions with |> is a category where objects are types and arrows are functions; folding a list with Reduce() is a monoid; reshaping data with pivot_longer() and pivot_wider() without losing information is a natural isomorphism.
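The laws behind those names can be checked directly. A sketch in base R (using Map() where the book's map() appeared, and numeric checks rather than proofs):

```r
xs <- c(4, 9, 16)

# Functor law: mapping g after mapping f is the same as mapping
# their composite, g . f, in one pass.
f <- sqrt
g <- log
lhs <- unlist(Map(g, unlist(Map(f, xs))))
rhs <- unlist(Map(function(x) g(f(x)), xs))
stopifnot(isTRUE(all.equal(lhs, rhs)))

# Monoid: `+` is associative with identity 0, so a fold can start
# from the identity and group the elements however it likes.
stopifnot(Reduce(`+`, xs, 0) == sum(xs))
stopifnot(Reduce(`+`, xs, 0, right = TRUE) == sum(xs))
```

The stopifnot() calls are doing for a handful of numbers what the functor and monoid laws guarantee for every input: the bookkeeping always balances.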
If you go looking you will find more of these connections, between R and Haskell, between type theory and topology, between the code you write on a Tuesday afternoon and mathematics that is older than computers. I have been down this rabbit hole for years and I am nowhere near the bottom.
Thank you for reading.