Afterword

Chapter 1 put four numbers on a table and told two stories about adding them. Mr. State held a running total. Mr. Fold pressed pairs together until one number remained. Both got 18. That split between stepping through state and folding over structure ran through every chapter that followed: for loops and Reduce(), Turing machines and lambda calculus. Church and Turing proved in 1936 that the two models compute exactly the same class of functions. Any question you can ask of one, you can ask of the other.
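The two stories can be sketched in a few lines of R. The specific numbers are an assumption here (any four that sum to 18 tell the same tale):

```r
# Four numbers, assumed for illustration -- the afterword only says
# they sum to 18.
xs <- c(3, 5, 6, 4)

# Mr. State: a running total, updated one step at a time.
total <- 0
for (x in xs) total <- total + x

# Mr. Fold: press pairs together until one number remains.
folded <- Reduce(`+`, xs)

total   # 18
folded  # 18
```

Same answer, two shapes of computation: one mutates a variable through time, the other collapses a structure all at once.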

Curry noticed in the 1930s that the types of his combinators mirrored the axioms of propositional logic. Howard extended the observation to natural deduction three decades later. What they both found was that typed lambda calculus and natural deduction follow the same rules. A function of type A -> B is a proof that A implies B; a pair of type (A, B) is a proof that both A and B hold. If a program type-checks, the types constitute a theorem, and the type checker is verifying the proof. The correspondence was not something anyone set out to build. Two formal systems, designed for completely different purposes, turned out to be the same system.
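The correspondence can be acted out even in R, which has no static type checker; a minimal sketch, with the "types" living in comments and the function and variable names invented for illustration:

```r
# Implication is the function arrow. If f proves A -> B and g proves
# B -> C, composing them proves A -> C (hypothetical syllogism).
compose <- function(g, f) function(a) g(f(a))

# A pair is a proof of a conjunction: to hold (A, B) is to hold both.
conj  <- function(a, b) list(first = a, second = b)
proj1 <- function(p) p$first   # from "A and B", conclude A

f <- function(n) n > 0                   # f :: numeric -> logical
g <- function(b) if (b) "yes" else "no"  # g :: logical -> character
h <- compose(g, f)                       # h :: numeric -> character

h(3)              # "yes"
proj1(conj(1, 2)) # 1
```

In a typed language the comments would be enforced; here they are only a sketch of the logic the types would carry.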

Voevodsky pushed the question one level deeper. He proposed that equality is not a yes-or-no judgement but a structure: two values are equal if there is a path connecting them, and that path is itself a type, which can have its own paths (equalities between equalities). The entire framework of homotopy theory (the branch of topology that studies shapes up to continuous deformation) maps onto type theory. A path in a topological space is a geometric object; in homotopy type theory, that same path is a term you can type-check and compute with. What looked like a metaphor (“programs are like proofs,” “types are like spaces”) kept turning out to be an identity. Homotopy type theory is young, and where it leads is open.

In 1945, Eilenberg and Mac Lane introduced category theory to track the relationships between algebra, topology, and logic. What they found was that the objects mattered less than the maps between them. Saunders Mac Lane called it “abstract nonsense.” He meant it as a compliment. The vocabulary (functors, monoids, natural transformations) is forbidding, but you have been using the structures since chapter 5. map(x, sqrt) returning a list of the same length is a functor. Composing functions with |> gives you a category where objects are types and arrows are functions. Reduce() over an associative operation with an identity element is a monoid. pivot_longer() and pivot_wider() without losing information is a natural isomorphism.

If you go looking, you will find more of these connections: between R and Haskell, between type theory and topology, between the code you write on a Tuesday afternoon and mathematics that predates computers by centuries.

The connections are still forming. R 4.1 gave the language a native pipe (|>) and a one-character lambda (\(x)). R 4.4 moved the null-coalescing operator (%||%) into base R and patched the internal dispatch machinery so that S7, a class system built entirely out of functions, could sit on top of it. duckplyr routes dplyr verbs through DuckDB without changing a line of user code, because the verbs were designed as a functional interface: data frame in, data frame out, no side effects. The Polars data frame engine, written in Rust, connects to R through extendr, and Rust’s ownership model maps onto R’s copy-on-modify semantics closely enough that the two languages share data without defensive copying. The tidyverse’s AI toolkit (ellmer for calling language models, ragnar for retrieval-augmented generation, mcptools for exposing R functions as tools that language models can call) is built on the same pattern: pure functions composed into pipelines. Church published the lambda calculus in 1936. Ninety years later, the architecture has not needed replacing.
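The three base-R additions mentioned above fit in a few lines. A sketch, with `config` and its default invented for illustration; `%||%` is defined inline so the snippet also runs on R older than 4.4, where base does not yet provide it:

```r
# %||% ships in base R from 4.4; defining it here keeps the sketch
# self-contained on earlier versions (the definition is the same).
`%||%` <- function(x, y) if (is.null(x)) y else x

# Hypothetical config: fall back to a default when a field is NULL.
config  <- list(retries = NULL)
retries <- config$retries %||% 3   # 3

# R 4.1's native pipe and one-character lambda, composed left to right.
c(1, 4, 9) |>
  sapply(\(x) sqrt(x)) |>
  sum()                            # 6
```

The pipe is pure syntax: `x |> f()` is rewritten to `f(x)` at parse time, which is why pipelines built this way can be handed off wholesale to engines like DuckDB.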

Thank you for reading.