If you've been following this blog, you're probably well aware of the Church-Turing thesis. Wikipedia states it as "Every function which would naturally be regarded as computable can be computed by a Turing machine" (or, equivalently, by lambda calculus or any of a large set of formalisms proven equivalent to Turing machines). Of course, this is unprovable, since we can't construct the set of "naturally computable" functions; doing so would require a solution to the halting problem. (No, an oracle for the halting problem wouldn't help here one bit.)

There are a number of variants on this, one of which is the Strong Church-Turing Thesis. This states, in part, that anything which can be efficiently computed (in polynomial time) can be efficiently computed on a Turing machine or, again, in lambda calculus. Quantum computers, if they can be built and if BPP < BQP, may prove this wrong. But in a world of computers run mostly by classical physics, it holds true.

So, jump to Lambda Calculus 101. Look, here's a cool little lambda! It can do everything! See, it can represent pairs, booleans, and even numbers! For numbers, let's use Church numerals; woohoo, they're so simple, just use function composition! As Wikipedia explains:

0 ≡ λf.λx. x

1 ≡ λf.λx. f x

2 ≡ λf.λx. f (f x)

3 ≡ λf.λx. f (f (f x))
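These numerals transcribe directly into Python lambdas, which is a handy way to play with them. A sketch; `church_to_int` is a helper of my own for inspection, not part of the encoding:

```python
# Church numerals: the number n is "apply f to x, n times".
zero  = lambda f: lambda x: x
one   = lambda f: lambda x: f(x)
two   = lambda f: lambda x: f(f(x))
three = lambda f: lambda x: f(f(f(x)))

# Helper (mine, not part of the encoding): recover an ordinary int
# by counting how many times f gets applied.
def church_to_int(n):
    return n(lambda k: k + 1)(0)

print(church_to_int(zero), church_to_int(three))  # 0 3
```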

Etcetera. The basic operations aren't too hard, either (except for the predecessor function, which is really weird). They just take a bit of thought:

plus ≡ λm.λn.λf.λx. m f (n f x)

succ ≡ λn.λf.λx. f (n f x)

mult ≡ λm.λn.λf. n (m f)

exp ≡ λm.λn. n m

pred ≡ λn.λf.λx. n (λg.λh. h (g f)) (λu. x) (λu. u)
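The operations above also translate mechanically into Python, which makes it easy to convince yourself they work. Again a sketch; `church_to_int` and `int_to_church` are my own helpers for checking results:

```python
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult = lambda m: lambda n: lambda f: n(m(f))
exp  = lambda m: lambda n: n(m)
# The famously weird predecessor: threads f through a pair-like gadget.
pred = lambda n: lambda f: lambda x: \
    n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)

def church_to_int(n):
    return n(lambda k: k + 1)(0)

def int_to_church(k):
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

two, three = int_to_church(2), int_to_church(3)
print(church_to_int(plus(two)(three)))  # 5
print(church_to_int(mult(two)(three)))  # 6
print(church_to_int(exp(two)(three)))   # 8, i.e. 2^3
print(church_to_int(pred(three)))       # 2
```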

See the article Church encoding for more explanation. If you haven't seen this before, it probably doesn't make any sense. Another good explanation, outside of Wikipedia's, is available here [PDF] by Robert Cartwright. (Only the first three pages are actually about the numerals, but the rest are also interesting.)

Taking addition as the simplest example, you might notice something a little fishy about the computational complexity. The addition of two Church numerals has complexity O(N+M), where N and M are the *values* of the two numbers. In the computer you're using right now, the complexity of the algorithm in use is actually O(n+m), where n and m are the number of *digits* of each of the input numbers. (In practice, this is thought of as a constant-time operation, but that's appropriate only if the numbers involved are reasonably small, for example in the range 0-2^32.) This difference may sound trivial, but it's not. If the Church numeral addition algorithm were held up to the same standard (where n and m represent the number of digits in the base-2 representation of the number), the complexity is actually O(2^n + 2^m), which is exponential, so with large numbers it will take an extremely large number of steps. Try to write an RSA implementation using Church numerals, and you'll see how slow it runs.
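You can watch the linear-in-*value* cost directly by counting how many times f actually gets applied during a Church-numeral addition. A small experiment; the counting function is my own instrumentation:

```python
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def int_to_church(k):
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

calls = 0
def counting_succ(k):
    global calls
    calls += 1
    return k + 1

m, n = int_to_church(100), int_to_church(23)
result = plus(m)(n)(counting_succ)(0)
print(result, calls)  # 123 123 -- f is applied M+N times, proportional to the values
```

Doubling the inputs doubles the count, even though their binary representations grow by only one digit.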

But what about the Strong Church-Turing Thesis? Shouldn't it be possible to have a polynomial-time algorithm for this operation, since it's possible within another system? Of course it is! Church came before people were even thinking about complexity, and it was mostly irrelevant to the point of constructing Church numerals, so we can't really fault him for not having come up with a more efficient system. Probably the reason no one else (as far as I can tell) has come up with this is that it's too obvious (and useless) once you understand lambda calculus.

Probably the simplest basic strategy for this is to represent numbers as a linked list of true and false (representing 1 and 0), starting at the least significant digit. Without too much difficulty, lists and booleans can be represented in lambda calculus, though it'd be really, really annoying to actually construct the addition operation.

But I was bored in chem and bio today, so here it is, in full untested glory, an efficient implementation of addition in lambda calculus. It probably wouldn't be good enough for implementing RSA, but at least it's asymptotically optimal (I think).

true ≡ λt.λf. t

false ≡ λt.λf. f

or ≡ λa.λb. a true b

and ≡ λa.λb. a b false

not ≡ λa. a false true

cons ≡ λcar.λcdr.λpair.λnone. pair car cdr

nil ≡ λpair.λnone. none

0 ≡ nil

1 ≡ cons true nil

2 ≡ cons false (cons true nil)

3 ≡ cons true (cons true nil)

4 ≡ cons false (cons false (cons true nil))

Y ≡ λf.(λx.f (x x)) (λx.f (x x))
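As an aside, the Y combinator in that form relies on normal-order (lazy) evaluation. In a strict language like Python, the direct definition diverges, so you need the eta-expanded variant, usually called the Z combinator, which delays the self-application until the recursive function is actually called. A sketch:

```python
# Z combinator: Y with the self-application wrapped in "lambda v: ... (v)"
# so it isn't evaluated until needed.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Example: factorial defined without explicit recursion.
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))  # 120
```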

succ ≡ Y (λsu.λn.
  n (λcar.λcdr. car
      (cons false (su cdr))
      (cons true cdr))
    1)

addbool ≡ λnum.λbool. bool (succ num) num

horn' ≡ λa.λb.λc. and a (not (or b c))

horn ≡ λa.λb.λc.
  or (and a (and b c))
     (or (horn' a b c)
         (or (horn' b a c) (horn' c a b)))

anytwo ≡ λa.λb.λc.
  or (and a b)
     (or (and b c) (and a c))
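It's worth sanity-checking that horn really computes the sum bit of a full adder (true for an odd number of true inputs) and anytwo the carry bit (majority). A quick brute-force check over all eight input combinations, transcribed over plain Python booleans rather than Church booleans:

```python
from itertools import product

# Direct transcriptions of horn' / horn / anytwo over plain booleans.
def horn_prime(a, b, c):   # a is true and b, c are both false
    return a and not (b or c)

def horn(a, b, c):         # sum bit: all three true, or exactly one true
    return (a and b and c) or horn_prime(a, b, c) \
        or horn_prime(b, a, c) or horn_prime(c, a, b)

def anytwo(a, b, c):       # carry bit: at least two inputs true
    return (a and b) or (b and c) or (a and c)

for a, b, c in product([False, True], repeat=3):
    total = int(a) + int(b) + int(c)
    assert horn(a, b, c) == (total % 2 == 1)
    assert anytwo(a, b, c) == (total >= 2)
print("full-adder truth table checks out")
```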

add ≡ (Y λadd. λcarry.λa.λb.
  a (λcara.λcdra.
      b (λcarb.λcdrb.
          cons (horn cara carb carry)
               (add (anytwo cara carb carry) cdra cdrb))
        (addbool carry a))
    (addbool carry b)
  ) false

Have fun with this. For more lambda experiments, see Oleg's lambda page. (For those who don't know, Oleg is a crazy genius Haskell/Scheme programmer god. He implemented lambda calculus *in hygienic Scheme macros*, **in continuation-passing style**.) These things have interesting mathematical properties, unlike my code above, which is just confusing. At least it's an example of the Strong Church-Turing Thesis at work, albeit not a very good one. If you have any questions, or for some reason test it and notice it doesn't work, or have anything else to say, please comment.