(function(x) x + 1)(5)
#> [1] 6
30 R as mathematics
Take R and remove everything except function() and the ability to call functions. No numbers. No booleans. No if. No list(). No <- for naming recursive calls. Starting from that barren landscape, try to build integers, conditionals, pairs, lists, and recursion.
Surprisingly, it works. Every one of those things can be reconstructed from function() alone. R’s function() keyword is Church’s λ, its function application is beta reduction, and its closures are the scoping discipline that Scheme borrowed from lambda calculus and passed on to R (Chapter 2). The double <- function(x) x * 2 from Chapter 5, the recursive descents in Chapter 22, the closures that captured variables and carried them somewhere new – all of them are expressions in a formal system that predates computers by decades.
This chapter runs the experiment.
30.1 Lambda calculus in one page
The lambda calculus, introduced by Alonzo Church in the early 1930s, has exactly three things:
- Variables: x
- Abstraction (making a function): λx. body
- Application (calling a function): (f a)
That is the entire language. No numbers, no booleans, no if, no loops. Church showed that this is enough to express any computation (the same class of functions Turing’s machine can compute; see Section 9.6).
In R, abstraction is function(x) body and application is f(a). The translation is mechanical:
| Lambda calculus | R |
|---|---|
| λx. x | function(x) x |
| λx. x + 1 | function(x) x + 1 |
| (λx. x + 1)(5) | (function(x) x + 1)(5) |
| λx. λy. x + y | function(x) function(y) x + y |
(function(x) function(y) x + y)(3)(4)
#> [1] 7
This last example is currying: a two-argument function rewritten as a function that returns a function. function(x) function(y) x + y takes x, then returns a new function that takes y and adds them. This is the same pattern from Section 7.4 and Section 18.4. It has a name and a history.
R can do this because Ihaka and Gentleman gave it Scheme’s lexical scoping (Section 2.5). When the inner function function(y) x + y executes, it looks up x in the environment where it was created, not where it was called, which is precisely what lambda calculus requires: a variable refers to the nearest enclosing abstraction that binds it. Java added lambda expressions in 2014, C++ in 2011, and Python’s lambda is restricted to a single expression. R has had full, unrestricted lambdas since 1993 – not as an added feature, but as the foundation.
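A three-line experiment shows the rule in action (make_adder is an illustrative name, not from the chapter):

```r
make_adder <- function(x) function(y) x + y
add5 <- make_adder(5)
x <- 100    # a different x at the call site
add5(2)     # 7, not 102: the closure uses the x captured at creation
#> [1] 7
```

The inner function finds x in the environment where make_adder created it, exactly as the lambda calculus scoping rule demands.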
So what can you build on that foundation, using nothing else?
Exercises
- Write the lambda calculus expression λf. λx. f(f(x)) in R and apply it to sqrt and 256. What do you get?
- Write a curried power function: power <- function(n) function(x) x^n. Create square <- power(2) and cube <- power(3). Test them.
- What happens if you call (function(x) function(y) x)(1)(2)? Why? Think about which variable the inner function captures.
30.2 Church booleans
Suppose all you have is function(). How do you represent TRUE and FALSE?
Church’s answer is elegant to the point of disbelief: a boolean is a function that chooses. Give it two arguments; TRUE picks the first, FALSE picks the second.
ch_true <- function(a) function(b) a
ch_false <- function(a) function(b) b
These are curried two-argument functions. ch_true takes a, returns a function that takes b and ignores it, handing back a. ch_false does the opposite: it ignores the first argument and returns the second.
ch_true("yes")("no")
#> [1] "yes"
ch_false("yes")("no")
#> [1] "no"
That is already if/else. A Church boolean is a conditional: call it with the “then” branch and the “else” branch, and it selects one. No dispatch mechanism, no keyword, no syntax. The boolean itself does the selecting.
ch_if <- function(cond) function(then_val) function(else_val) {
cond(then_val)(else_val)
}
ch_if(ch_true)("heads")("tails")
#> [1] "heads"
ch_if(ch_false)("heads")("tails")
#> [1] "tails"
ch_if does almost nothing; it just applies the boolean to the two branches. The boolean does the work. In a sense, there is no if in Church encoding. There are only functions that choose.
Now the logical connectives:
ch_and <- function(p) function(q) p(q)(p)
ch_or <- function(p) function(q) p(p)(q)
ch_not <- function(p) function(a) function(b) p(b)(a)
Watch ch_and(p)(q): if p is true, it picks its first argument, which is q (so the result depends on whether q is also true); if p is false, it picks its second argument, which is p itself (false). The logic is embedded in the selection behavior of the booleans themselves, not in any external machinery.
# TRUE AND FALSE -> FALSE
ch_and(ch_true)(ch_false)("yes")("no")
#> [1] "no"
# TRUE OR FALSE -> TRUE
ch_or(ch_true)(ch_false)("yes")("no")
#> [1] "yes"
# NOT TRUE -> FALSE
ch_not(ch_true)("yes")("no")
#> [1] "no"
Church booleans make a quiet point about R: if/else is redundant. Conditional logic can be built from functions alone, because R’s built-in if is syntactic sugar for something that function() can already do. A language whose core primitive is powerful enough to subsume its own control flow has a different character from one that added lambdas later.
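A small bridge makes the point concrete: any ordinary R condition can be lifted into a Church boolean, after which selection needs no if at the use site (from_r_bool is an illustrative helper, not from the chapter):

```r
ch_true  <- function(a) function(b) a
ch_false <- function(a) function(b) b

# Lift an ordinary R condition into a Church boolean
from_r_bool <- function(b) if (b) ch_true else ch_false

from_r_bool(3 > 2)("bigger")("smaller")
#> [1] "bigger"
from_r_bool(3 < 2)("bigger")("smaller")
#> [1] "smaller"
```

The single if lives in the bridge; everything downstream is pure function application.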
Exercises
- Implement ch_xor (exclusive or) using only Church booleans. Test it on all four input combinations.
- Implement ch_eq that checks whether two Church booleans are equal. (Hint: xor followed by not works, or find a direct encoding.)
- Write a function to_r_bool that converts a Church boolean back to R’s TRUE/FALSE.
30.3 Church numerals
Numbers are harder to believe. A Church numeral n is a function that takes a function f and returns f composed with itself n times: zero applies f zero times (returns x unchanged), one applies f once, two applies f twice, and so on, each number literally counting the layers of function composition that define it.
zero <- function(f) function(x) x
one <- function(f) function(x) f(x)
two <- function(f) function(x) f(f(x))
three <- function(f) function(x) f(f(f(x)))
To see what a Church numeral “means,” pass it a concrete function and a starting value:
three(function(x) x + 1)(0)
#> [1] 3
Three applications of “add one” to zero gives three. That is the conversion from Church numeral to R integer:
to_int <- function(n) n(function(x) x + 1)(0)
to_int(zero)
#> [1] 0
to_int(two)
#> [1] 2
to_int(three)
#> [1] 3
The successor function adds one more application of f:
succ <- function(n) function(f) function(x) f(n(f)(x))
Read it this way: succ(n) returns a new numeral that, given f and x, first applies f n times (that is n(f)(x)), then applies f one more time on top.
four <- succ(three)
to_int(four)
#> [1] 4
Build higher numbers:
five <- succ(four)
six <- succ(five)
to_int(six)
#> [1] 6
Successor is straightforward. You might expect addition to follow the same logic, just doing succ repeatedly. But look at how it actually works:
add <- function(m) function(n) function(f) function(x) m(f)(n(f)(x))
to_int(add(two)(three))
#> [1] 5
add(m)(n) hands f to n first (n(f)(x) applies f n times starting from x), then lets m apply f that many more times on top: m + n applications in total, with no succ in sight.
Multiplication: mult(m)(n) means “apply f n times, repeated m times.” Composing n copies of f gives a function that applies f n times; applying that composition m times gives m * n.
mult <- function(m) function(n) function(f) m(n(f))
to_int(mult(two)(three))
#> [1] 6
to_int(mult(three)(three))
#> [1] 9
Exponentiation falls out naturally:
power <- function(m) function(n) n(m)
to_int(power(two)(three)) # 2^3 = 8
#> [1] 8
to_int(power(three)(two)) # 3^2 = 9
#> [1] 9
power(m)(n) applies m (itself a function-repeater) n times. Two applied three times: 2^3. Sit with this one. The operation that felt most abstract – exponentiation – has the shortest definition, because “applying a function-repeater repeatedly” is what exponentiation means. But building numbers up is only half the story. Can we inspect them? Can we ask whether a numeral is zero?
The test is short:
is_zero <- function(n) n(function(x) ch_false)(ch_true)
is_zero(zero)("yes")("no")
#> [1] "yes"
is_zero(three)("yes")("no")
#> [1] "no"
zero ignores f and returns x, so it returns ch_true. Any nonzero numeral applies f at least once, and since f always returns ch_false, a single application replaces ch_true with ch_false.
We can also convert R integers to Church numerals:
from_int <- function(n) {
result <- zero
for (i in seq_len(n)) result <- succ(result)
result
}
to_int(from_int(5))
#> [1] 5
to_int(add(from_int(3))(from_int(4)))
#> [1] 7
Exercises
- Compute mult(from_int(4))(from_int(5)) and convert back to an integer.
- What does power(from_int(2))(from_int(10)) give? (This is 2^10 built from nothing but nested function calls.)
- The predecessor function (subtracting one) is famously tricky in Church encoding. Look it up, implement it in R, and test it. This was hard enough that Church himself struggled with it; his student Kleene figured it out, reportedly while getting anesthesia at the dentist.
30.4 Church pairs and lists
A pair is a container that holds two values. In Church encoding, a pair is a function that takes a selector and applies it to both values:
pair <- function(a) function(b) function(f) f(a)(b)
fst <- function(p) p(function(a) function(b) a)
snd <- function(p) p(function(a) function(b) b)
pair(a)(b) stores a and b inside a closure, and fst and snd extract them by passing the appropriate selector (which, if you look closely, is just ch_true and ch_false again):
p <- pair("hello")("world")
fst(p)
#> [1] "hello"
snd(p)
#> [1] "world"
From pairs, you can build linked lists. A list is either empty (nil) or a pair of a head and a tail:
nil <- function(f) function(x) x # same shape as zero
is_nil <- function(l) l(function(h) function(t) function(x) ch_false)(ch_true)
cons <- function(h) function(t) pair(h)(t)
head_of <- function(l) fst(l)
tail_of <- function(l) snd(l)
How does is_nil work? Recall that a Church-encoded pair pair(a)(b) takes a selector f and returns f(a)(b). We pass a selector that ignores both arguments and returns ch_false, then pass ch_true as the default. If l is nil, it behaves like zero: ignores the selector and returns the default (ch_true). If l is a cons cell, the selector runs, ignoring head and tail, returning ch_false. The structure of the data does the dispatching.
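The dispatch is easy to watch in action. Since is_nil returns a Church boolean, it can select between two answers directly (re-listing the definitions so the snippet runs on its own):

```r
ch_true  <- function(a) function(b) a
ch_false <- function(a) function(b) b
pair   <- function(a) function(b) function(f) f(a)(b)
nil    <- function(f) function(x) x
cons   <- function(h) function(t) pair(h)(t)
is_nil <- function(l) l(function(h) function(t) function(x) ch_false)(ch_true)

is_nil(nil)("empty")("not empty")
#> [1] "empty"
is_nil(cons("a")(nil))("empty")("not empty")
#> [1] "not empty"
```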
Build a list and walk it:
my_list <- cons("a")(cons("b")(cons("c")(nil)))
head_of(my_list)
#> [1] "a"
head_of(tail_of(my_list))
#> [1] "b"
head_of(tail_of(tail_of(my_list)))
#> [1] "c"
Three nested function calls, and you have a linked list. No vectors, no list(), no memory allocation beyond closures. But one thing is still missing: none of these structures can refer to themselves.
Exercises
- Write a length_of function that counts the elements in a Church-encoded list. Use to_int and Church numerals, or use R’s integers for simplicity.
- Write map_list that applies a function to every element of a Church list and returns a new Church list.
- Implement reverse_list. (Hint: fold from the left, consing onto an accumulator that starts as nil.)
30.5 The Z combinator
In Chapter 22, we previewed fixed-point combinators. Now the full construction.
A recursive function refers to itself by name, but in lambda calculus, there are no names. There is no <-. So how do you define factorial? You need a way to express recursion using only anonymous functions.
The trick is the fixed-point combinator. A fixed point of a function g is a value x such that g(x) = x. If g is a “template” for a recursive function (a function that takes itself as an argument and returns the recursive body), then finding its fixed point gives you the recursive function you wanted.
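Fixed points are easiest to see with ordinary numbers first. Iterating cos from any starting value converges to its fixed point near 0.739 (a numeric warm-up, not part of the Church construction):

```r
# Repeatedly apply g; where the value stops moving is a fixed point: g(x) == x
g <- function(x) cos(x)
x <- 1
for (i in 1:100) x <- g(x)
round(x, 4)          # the fixed point of cos, to four decimals
#> [1] 0.7391
abs(g(x) - x) < 1e-8 # applying g once more changes (almost) nothing
#> [1] TRUE
```

The fixed-point combinator plays the same game one level up: instead of a number that g maps to itself, it finds a function that the recursion template maps to itself.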
The Y combinator does this in lazy languages like Haskell:
Y = λf. (λx. f(x x))(λx. f(x x))
In a strict language, which evaluates arguments before passing them, the self-application x(x) loops forever. (R is subtler: its arguments are lazily evaluated promises, so the plain Y combinator can in fact run in R, but the construction then leans on that laziness.) The Z combinator works under any evaluation order by wrapping the self-application in a lambda to defer it:
Z <- function(f) {
(function(x) f(function(v) x(x)(v)))(
function(x) f(function(v) x(x)(v))
)
}
Read it like this: Z(f) creates a function x that, when called, applies f to a delayed version of the recursive call. The function(v) x(x)(v) wrapper means “don’t call x(x) yet; wait until v is provided.” That delay breaks the infinite loop.
Use it to define factorial without ever naming the function:
fact <- Z(function(self) function(n) {
if (n == 0) 1 else n * self(n - 1)
})
fact(5)
#> [1] 120
fact(10)
#> [1] 3628800
self is not a name the function gives itself. It is a parameter, injected by Z. The function never refers to fact; it only refers to self, which Z arranges to be the function itself.
Fibonacci:
fib <- Z(function(self) function(n) {
if (n <= 1) n else self(n - 1) + self(n - 2)
})
sapply(0:10, fib)
#> [1] 0 1 1 2 3 5 8 13 21 34 55
A recursive list-sum, using nothing but Church-encoded lists and the Z combinator:
church_sum <- Z(function(self) function(lst) {
is_nil(lst)(
function(dummy) 0
)(
function(dummy) head_of(lst) + self(tail_of(lst))
)("go")
})
nums <- cons(10)(cons(20)(cons(30)(nil)))
church_sum(nums)
#> [1] 60
The function(dummy) wrappers need explaining. is_nil(lst) returns a Church boolean, which selects between two arguments. In a strict language, passing 0 and head_of(lst) + self(tail_of(lst)) directly would evaluate both branches before the boolean chooses, and the recursive branch would run even on the empty list, looping forever. (R’s lazy promises would in fact defer the unselected branch on their own, but the thunks make the deferral explicit and keep the encoding portable to strict languages.) Wrapping each branch in function(dummy) defers evaluation: the Church boolean selects one wrapper, and the ("go") call at the end forces the selected branch to execute. The "go" argument itself is unused; it is just the trigger that makes the chosen function(dummy) run. This is the same thunking trick that distinguishes the Z combinator from the Y combinator: in a strict language, you wrap computations in lambdas to delay them until they are actually needed.
No for, no while, no Recall(), no named recursion. Just function(), application, and the Z combinator. The pieces are starting to accumulate.
The Z combinator is not practical R code. Its value is conceptual: it proves that recursion is not a primitive operation. Self-reference can be derived from anonymous functions and application alone. That R can express this in working, runnable code – not as a thought experiment but as something you can paste into a console and run – is a direct consequence of its Scheme heritage.
Exercises
- Define a recursive length function using Z that counts the elements of a Church list (without converting to R types).
- Use Z to write a recursive map over Church lists.
- Write out the Y combinator and try calling Y(function(self) function(n) if (n == 0) 1 else n * self(n - 1))(5). Does it loop, or does R’s lazy evaluation of arguments save it? Why would the same definition loop in a strict language like Scheme?
30.6 The punchline
Here is the inventory. Starting from nothing but function() and function application:
- Booleans: ch_true, ch_false, ch_and, ch_or, ch_not
- Conditionals: ch_if (which turned out to be trivial, because booleans are conditionals)
- Natural numbers: zero, succ, and arithmetic (add, mult, power)
- Pairs and lists: pair, fst, snd, cons, nil
- Recursion: the Z combinator
No R integers, no R booleans, no if, no list(), no named recursive functions. Every one of those features reconstructed from function() alone.
This is Church’s result from 1936. When Chapter 1 said that R descends from Church’s model of computation, this is what that means concretely: function() is a universal computational primitive, and everything else is convenience built on top. The requirements are specific: first-class functions, closures, and unrestricted anonymous functions. R has all three, inherited from Scheme. Python’s lambda is restricted to a single expression, which makes multi-line Church encodings awkward; Java’s lambdas cannot close over mutable local variables; C has function pointers but no closures.
The connection between R and mathematics extends further, into the type system itself.
30.7 Types as propositions
There is a correspondence between logic and programming that is not merely analogy. In 1969, William Howard noticed that the rules of constructive logic map exactly onto the rules of typed lambda calculus:
| Logic | Programming |
|---|---|
| Proposition | Type |
| Proof | Program |
| A implies B | Function type A -> B |
| A and B | Pair type (A, B) |
| A or B | Sum type (either A or B) |
| True | Unit type (a type with one value) |
| False | Empty type (a type with no values) |
When you write a function with signature A -> B, you are constructing a proof that if you have an A, you can produce a B. A function integer -> character proves that integers can be converted to characters. If no function of type A -> B can be written, the proposition “A implies B” has no proof.
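One concrete instance: function composition is the proof rule for transitivity of implication. If f proves A implies B and g proves B implies C, their composition proves A implies C (compose is an illustrative helper, not a base R function):

```r
# If f : A -> B and g : B -> C, then compose(g, f) : A -> C
compose <- function(g, f) function(a) g(f(a))

int_to_chr <- function(n) as.character(n)  # a proof of integer -> character
chr_len    <- function(s) nchar(s)         # a proof of character -> integer

digits <- compose(chr_len, int_to_chr)     # a proof of integer -> integer
digits(12345)
#> [1] 5
```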
R’s type system is too loose to enforce this correspondence strictly (every R function can error, which is like a proof that cheats). Languages like Haskell, Agda, and Coq take it seriously enough to use programs as machine-checked proofs of mathematical theorems. But the connection is there in R. Every function you write is, in a formal sense, a little proof.
The Curry-Howard correspondence (named for Haskell Curry, who noticed the connection earlier, and Howard, who formalized it) reframes types and functions entirely. A type is not a tag that says “this is a number” – it is a proposition about what values are possible. A function is not a block of code – it is evidence that a transformation exists. What happens when you start looking at the structure of those propositions and transformations?
30.8 Functors, monoids, and friends
Three patterns have appeared repeatedly in this book, and they all have names that sound intimidating. The names are beside the point. What matters is that these patterns are not coincidences. Category theory provides a vocabulary for patterns that recur across different mathematical structures, and once you see the vocabulary, you start noticing that R’s functional design has been using these structures all along.
Functors. A functor is a mapping between categories that preserves structure. In programming terms: if you have a container and a function, a functor lets you apply the function to the contents without changing the container’s shape.
x <- list(1, 4, 9, 16)
lapply(x, sqrt)
#> [[1]]
#> [1] 1
#>
#> [[2]]
#> [1] 2
#>
#> [[3]]
#> [1] 3
#>
#> [[4]]
#> [1] 4
lapply() applied sqrt to each element and gave back a list of the same length. The container’s shape did not change; only the values inside it did. Strictly, the functor is the list construction itself – it sends a type T to “list of T” and sends a function f to “apply f to each element.” lapply() (and purrr’s map()) are the fmap of that functor: the part that lifts a function into the container.
What makes a functor trustworthy is that it obeys two laws. The identity law says mapping a do-nothing function changes nothing:
identical(lapply(x, identity), x)
#> [1] TRUE
The composition law says mapping two functions in sequence gives the same result as mapping their composition in one step:
identical(
lapply(lapply(x, sqrt), log), # map sqrt, then map log
lapply(x, \(v) log(sqrt(v))) # map the composition directly
)
#> [1] TRUE
A function that mapped elements but scrambled their relationships would break pipelines. Chains of lapply() calls work reliably because both laws hold. mutate() looks similar on the surface, but it does not satisfy these laws in general – column expressions can reference other columns, introduce side effects, and break composition. It is a useful tool, not a functor.
Monoids. A monoid is a set with an associative binary operation and an identity element. Addition on numbers: (a + b) + c = a + (b + c), identity is 0. String concatenation: identity is "". Reduce() works on any monoid (Chapter 21), and that is not an accident; Reduce() is the monoid pattern.
Reduce(paste0, c("a", "b", "c"))
#> [1] "abc"
Reduce(`+`, 1:5)
#> [1] 15
It needs associativity so the result is the same regardless of how intermediate computations are grouped, and it needs an identity element to handle the empty case. But not every binary operation qualifies. Subtraction is not associative:
(5 - 3) - 1
#> [1] 1
5 - (3 - 1)
#> [1] 3
Different answers. The operations that fold cleanly are precisely the monoids, and the operations Reduce() fails on are the ones that violate the monoid laws. If you find yourself fighting with Reduce(), check whether your operation is actually a monoid. If it is not, the folding direction will matter and the results will surprise you.
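The identity element is exactly what makes the empty fold well-defined, and Reduce() accepts it as the init argument:

```r
Reduce(`+`, integer(0))            # no elements and no identity: NULL
Reduce(`+`, integer(0), init = 0)  # with the identity, the empty sum is 0
#> [1] 0
Reduce(`+`, 1:5, init = 0)         # init leaves nonempty folds unchanged
#> [1] 15
```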
Natural transformations. A natural transformation converts one functor into another while preserving the relationships between elements. The defining property is the naturality condition: if you apply a function to the contents and then transform the container, you get the same result as transforming the container first and then applying the function. The transformation commutes with the functor’s action.
as.list() on an atomic vector is a natural transformation from the vector functor to the list functor:
x <- c(1, 4, 9)
# map then convert
a <- as.list(sqrt(x))
# convert then map
b <- lapply(as.list(x), sqrt)
identical(a, b)
#> [1] TRUE
The order does not matter. That commutativity is the naturality condition, and it is what makes as.list() well-behaved: it changes the container without disturbing the relationship between the function and the data inside.
pivot_longer() and pivot_wider() (Section 16.2, Section 16.3) go further: they are invertible, which makes them natural isomorphisms. Reversibility is a stronger property than naturality alone; it means no information is lost in the reshaping. You can pivot long and then pivot wide and arrive back where you started, because the transformation preserves everything.
None of this requires learning category theory. But these patterns keep showing up because they reflect mathematical structure, and R’s functional design makes them visible. Church encodings prove this at the foundational level. For day-to-day work, R has practical tools that put mathematical computation within reach.
Exercises
- paste0 with Reduce: Reduce(paste0, c("a", "b", "c")) gives "abc". What is the identity element for paste0 as a monoid? Verify by passing it as the init argument: Reduce(paste0, c("a", "b", "c"), init = "").
- Why is subtraction not a monoid? (Hint: check associativity.) What does this imply about using Reduce with -?
- lapply(list(1:3, 4:6, 7:9), rev) applies rev inside each list element. Verify both functor laws (identity and composition) for this example.
30.9 Practical mathematics in R
Church encodings prove R can do mathematics from nothing. But can it do mathematics that matters? Symbolic algebra, arbitrary precision, number theory: each requires capabilities that most R users never discover, because the packages that provide them sit outside the usual statistical workflow.
Symbolic mathematics. The Ryacas package connects R to the Yacas computer algebra system:
library(Ryacas)
# Symbolic differentiation
yac_str("D(x) x^3 + 2*x") # "3*x^2+2"
# Symbolic integration
yac_str("Integrate(x) x^2") # "x^3/3"
# Simplification
yac_str("Simplify((x^2-1)/(x-1))") # "x+1"
Where R’s floating-point arithmetic gives you 0.1 + 0.2 != 0.3 (Section 6.2), symbolic computation works with exact representations. There is no rounding because there are no decimals; expressions stay symbolic until you ask for a numerical result.
Arbitrary precision. The gmp package provides big integers and big rationals:
library(gmp)
# Big integers
factorial(as.bigz(100)) # all 158 digits, exact
# Big rationals: 1/3 stays 1/3, no floating-point error
as.bigq(1, 3) + as.bigq(1, 6) # 1/2, exactly
Number theory. The numbers package has prime factorization, GCD, modular arithmetic, and related tools:
library(numbers)
primeFactors(2310) # 2, 3, 5, 7, 11
GCD(48, 36) # 12
isPrime(104729) # TRUE
Project Euler. If you want to sharpen your R skills on mathematical puzzles, Project Euler is an excellent source. The first few problems are approachable with base R:
# Euler problem 1: sum of multiples of 3 or 5 below 1000
x <- 1:999
sum(x[x %% 3 == 0 | x %% 5 == 0])
#> [1] 233168
# Euler problem 6: difference between sum-of-squares and square-of-sum
n <- 1:100
sum(n)^2 - sum(n^2)
#> [1] 25164150
Both solutions are one-liners because R’s vectorized operations (Chapter 1) make arithmetic on sequences natural. The expression-based style that R inherited from lambda calculus maps directly onto mathematical notation. That inheritance has a history worth tracing.
Exercises
- Solve Project Euler problem 2: find the sum of even Fibonacci numbers below four million. (Hint: generate Fibonacci numbers with a while loop or Reduce, filter, sum.)
- Install gmp and compute factorial(as.bigz(200)). How many digits does the result have? (Use nchar(as.character(...)).)
- 1/3 + 1/3 + 1/3 == 1 returns TRUE in R, but 0.1 + 0.1 + 0.1 == 0.3 returns FALSE. Why? (Review Section 6.2 if needed.)
30.10 The language is still moving
Base R is still absorbing functional idioms, and each addition shrinks the gap between what the language offers and what the lambda calculus requires.
R 4.1 (2021) introduced two changes. The first was \(x) x + 1 as shorthand for function(x) x + 1. The backslash is not decoration. Church wrote λ; R now writes \, and the one-character syntax makes anonymous functions cheap enough to use anywhere: inside lapply(), inside Reduce(), as arguments to Map(). Before R 4.1, spelling out function(x) was a small tax on every lambda expression. The shorthand removed it.
The second was the native pipe |>. Before R 4.1, the pipe came from the magrittr package (%>%), which means it was a function that manipulated the call, not a language primitive. The native pipe is syntax: x |> f() is parsed by R’s interpreter as f(x) before evaluation begins. Function composition is now part of R’s grammar, not a library feature.
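The two features compose naturally: a pipeline of cheap lambdas, written entirely with post-4.1 syntax (a sketch, assuming R >= 4.1):

```r
# Square each number, then keep only the even squares
(1:10) |>
  sapply(\(x) x^2) |>
  (\(v) v[v %% 2 == 0])()
# c(4, 16, 36, 64, 100)
```

The parenthesized lambda followed by () is the standard way to pipe into an anonymous function, since the right-hand side of |> must be a call.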
R 4.4 (2024) moved %||% from rlang into base R. x %||% y returns x if x is not NULL, otherwise y. This is the null-coalescing operator that C# has had since 2005, Swift since 2014, and Kotlin since 2016. In functional terms, it is a default combinator: given a value that might be absent, provide a fallback. It replaces the verbose if (is.null(x)) y else x with a single infix operator.
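A sketch of the operator in action; on versions before 4.4 the one-line definition below provides the same behavior (config is a made-up example list):

```r
# Same definition base R ships in 4.4+; harmless to redefine on older versions
`%||%` <- function(x, y) if (is.null(x)) y else x

config <- list(width = 80)
config$width  %||% 100   # value present: 80
#> [1] 80
config$height %||% 25    # NULL, so the fallback: 25
#> [1] 25
```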
Anonymous functions, composition, and null handling — three operations that appear in every functional language — moved into R’s core syntax between 2021 and 2024. The language is still compressing toward its functional core.
30.11 The historical thread
Each idea in this chapter passed through specific hands. The chain is worth tracing because each link solved a problem that the previous link could not.
Could you reduce all of reasoning to calculation? In 1679, a man who had already co-invented calculus thought you could. Gottfried Wilhelm Leibniz imagined a calculus ratiocinator, a formal system that would make thinking mechanical. The dream was premature by two and a half centuries. By the 1870s, mathematicians had spent decades making their own foundations rigorous — formalizing limits, infinity, the number line. A German mathematician named Gottlob Frege saw that logic could be next. His Begriffsschrift (1879) introduced quantifiers, variables, and functions as logical primitives, the first formal notation for logic itself. Modern mathematical logic begins here, but Frege’s system was too heavy. Church stripped it to three constructs (variable, abstraction, application) and showed in 1936 that everything could be built back from those alone. The Church numerals and Church booleans in this chapter are his constructions.
That was the theory. The question was whether anyone could make it run. McCarthy answered it in 1960 with LISP, the first programming language where (lambda (x) (+ x 1)) was a direct encoding of Church’s λx. x + 1. But LISP had dynamic scoping, which meant closures did not work the way the theory said they should. Steele and Sussman fixed that in 1975 with Scheme: lexical scoping, first-class closures, and the “Lambda the Ultimate” papers that showed lambda expressions could replace goto, assignment, and most control structures.
In the 1970s, a statistician at Bell Labs who wanted to run a regression still had to write a Fortran program, compile it, and wait for output. Chambers designed S (1976) to change that: a language for statistical computing where you could type an expression and see the result immediately. It had functions and expressions at its core but borrowed its computational style from Fortran. The statistical vocabulary (data frames, formulas, vectorized operations) was new. When Ihaka and Gentleman reimplemented S at the University of Auckland in 1993, they made one decision that changed everything: they replaced S’s scoping rules with Scheme’s lexical scoping. That single choice is why closures work in R, why function factories work, why every Church encoding in this chapter runs without modification.
So the experiment worked. Starting from function() alone, we built integers, booleans, pairs, lists, and recursion. No primitive needed adding; each one assembled from the previous. Church proved it was possible in 1936. McCarthy made it run in 1960. Ihaka and Gentleman, by choosing Scheme’s scoping rules, made it run in R without anyone asking them to. The barren landscape from the opening turned out to have everything buried in it already.
30.12 References and sources
Lambda calculus in R:
- Alonzo Church, “An Unsolvable Problem of Elementary Number Theory” (1936). The paper that started it all.
- Hindley & Seldin, Lambda-Calculus and Combinators: An Introduction (2008). Accessible textbook covering Church encoding, beta reduction, fixed points.
- Michaelson, An Introduction to Functional Programming Through Lambda Calculus (2011). Gentle path from lambda calculus to practical FP.
- Benjamin Pierce, Types and Programming Languages (2002), chapters 5-9. Church booleans, numerals, pairs, recursion via fixed-point combinators.
Curry-Howard correspondence:
- Howard, “The Formulae-as-Types Notion of Construction” (1969/1980). Types are propositions, programs are proofs.
- Wadler, “Propositions as Types” (2015, Communications of the ACM). The most readable introduction.
Category theory connections:
- Bartosz Milewski, Category Theory for Programmers (2019, free online). Accessible introduction with Haskell examples, translatable to R.
- Mac Lane, Categories for the Working Mathematician (1971). The standard reference.
Recreational mathematics in R:
- Project Euler. Mathematical puzzles solvable in any language.
- Ryacas package: symbolic math via Yacas CAS.
- gmp package: arbitrary precision arithmetic.
- numbers package: prime factorization, GCD, modular arithmetic.
Historical:
- Leibniz (1679): the dream of a calculus of reasoning.
- Frege (1879): formal logic, Begriffsschrift.
- Church (1936): lambda calculus.
- Curry (1934 onward): combinatory logic, Curry-Howard correspondence.
- McCarthy (1960): LISP, first practical lambda calculus.
- Steele & Sussman (1975): Scheme, “Lambda the Ultimate” papers.
- Chambers (1976 onward): S language.
- Ihaka & Gentleman (1993): R, S reimplemented with Scheme’s scoping.