Sunday, April 22, 2007

Is Factor Turing-complete?

A rational person might respond, "Of course it's Turing complete, what an idiotic question. Why don't you go work on Unicode again?" Ignoring the second complaint, I think it's actually an interesting question. There has been a lot of theoretical writing about Joy, mostly by Manfred von Thun, and now about Cat by Christopher Diggins, but so far no rigorous definition of the semantics. I'd like to change this.

As I see it, there are two ways to look at concatenative semantics. In both, there is a set of words and a set of possible data within the type system, but the first way is more widely theorized about. In most analyses dealing with Joy (perhaps the finest of which is A Theory of Concatenative Combinators by Brent Kerby), quotations are seen as lists of words, data and other quotations, which can freely be consed and unconsed. This exposes the full power of most concatenative languages over applicative languages, or over concatenative languages without quotations, like Forth. But, of course, it doesn't make them any more Turing-complete. Using this strategy, Kerby found that two combinators called k and cake could do everything--though "everything" was left undefined.

The other way of looking at it--the dark side, perhaps--is second-class quotations: quotations without introspection, which can't be included in data structures and can only be passed around on the stack. This, to some degree, resembles the compilable subset of Factor, since it is easier to compile quotations that are known at compile time. The simplest minimal set of combinators for this would be over, dip and drop (called zap in A Theory of Concatenative Combinators and pop in Joy). I'm not sure, but I think that to be Turing-complete, you'd need a little more. Conditionals ([ drop ] is true and [ nip ] is false, as derived below) and recursion (dup call) are possible with these three words, which might make it complete. But if not, then four more words for manipulation of lists could be added: ?, cons, uncons and f. Lists of lists of lists of f can represent anything! But are these needed? I'm not sure. Pi-calculus is equivalent to lambda calculus, as proven by Robin Milner in something I can't fully understand, despite the fact that there's no obvious way to represent data or even conditionals there (is it sending data to different channels? I'm not sure).

Anyway, no matter which you choose, there are some interesting reduction semantics that you can attach to concatenative languages like these that lack side effects. I got the idea from Robbert van Dalen's Enchilada (that's rapido from #concatenative). Traditionally, the semantics of stack languages have been described in terms of each word being a function from [a tuple of] stack[s] to [a tuple of] stack[s]. But who needs stacks? They're an implementation detail! Let's just apply successive reductions to a quotation like all those fancy professors in lambda and pi calculus do! We don't even need to care about evaluation order; we can leave it undefined, or define other orders beyond the traditional front-to-back don't-evaluate-inside-quotations order. I haven't fully thought this through, though. Aside from the pesky questions of halting and side effects, evaluation order doesn't matter because the composition operation is associative (it forms a monoid, as Manfred von Thun explained).

Just to make things interesting, I'm gonna call dip U (for under), drop P (for pop) and over O. Below, I give the reduction rules of the system, then the definition of a word N which gives the not of a Church boolean, and then its reduction. Note that the definitions below are merely abbreviations, not fixed points, which would complicate the semantics.


-- Abstract syntax: --
Word := [Term] | O | P | U
Term := Word*
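
For concreteness, here is the grammar transcribed as a Haskell datatype. This is just a sketch of mine, not part of the system; Q marks a quotation, and I call the word type W to avoid a clash with the Prelude's Word.

-- The same syntax, as a Haskell sketch: --
-- A word is a quotation of a term, or one of the three primitives.
data W = Q Term | O | P | U deriving (Eq, Show)
-- A term is a sequence of words; data are always quotations.
type Term = [W]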

-- Reduction rules: --
Where q, r, s range over quotations (data)
and A ranges over actions (by which I mean sequences of words and quotations)
The following rules may be applied in any order within a term

q r O
------
q r q

q P
---

q [A] U
-------
A q

Apply these rules until no further reduction is possible
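
These rules are easy to animate. Below is a little Haskell interpreter for them, building on the datatype sketch above; the names step and eval are mine. It commits to one particular strategy--leftmost redex, never inside quotations--but by the monoid argument above, any terminating strategy should agree.

-- The reduction rules, as a Haskell sketch: --
-- Apply the leftmost top-level redex, if there is one.
step :: Term -> Maybe Term
step (Q q : Q r : O : rest) = Just (Q q : Q r : Q q : rest) -- q r O -> q r q
step (Q _ : P : rest)       = Just rest                     -- q P   ->
step (Q q : Q a : U : rest) = Just (a ++ Q q : rest)        -- q [A] U -> A q
step (w : rest)             = (w :) <$> step rest
step []                     = Nothing

-- Reduce until no further reduction is possible (which may be never).
eval :: Term -> Term
eval t = maybe t eval (step t)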

-- A few derived combinators: --
D = []O[P]U -- dup
C = DUP -- call (i.e. D, then U, then P)
I = [P]U -- nip
S = O[I]U -- swap
R = [S]US -- rot
T = [P] -- true
F = [I] -- false
N = FTRC -- not

-- Hey, why bother with all of those helpers? All at once now! --
-- For clarity, I'll substitute things incrementally --
D = []O[P]U -- dup
C = []O[P]UUP -- call
I = [P]U -- nip
S = O[[P]U]U -- swap
R = [O[[P]U]U]U O[[P]U]U -- rot
T = [P] -- true
F = [[P]U] -- false
N = [[P]U] [P] [O[[P]U]U]U O[[P]U]U []O[P]UUP -- not
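
For reference, here are the same abbreviations as terms for the Haskell sketch (the primed names are mine):

-- The derived combinators, in the Haskell sketch: --
dup', call', nip', swap', rot', true', false', not' :: Term
dup'   = [Q [], O, Q [P], U]              -- D = []O[P]U
call'  = dup' ++ [U, P]                   -- C = DUP
nip'   = [Q [P], U]                       -- I = [P]U
swap'  = [O, Q nip', U]                   -- S = O[I]U
rot'   = [Q swap', U] ++ swap'            -- R = [S]US
true'  = [Q [P]]                          -- T = [P]
false' = [Q nip']                         -- F = [I]
not'   = false' ++ true' ++ rot' ++ call' -- N = FTRC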


-- A simple example: --
For any two quotations q and r:

q r S
q r O[[P]U]U
q r q [[P]U]U
q r [P]U q
q P r q
r q

Thus we have proven that swap does what we want it to. For rot, we have

q r s R
q r s [S]US
q r S s S
r q s S
r s q

which is the behavior we wanted. Similarly, for C (call) on action A we have

[A]C
[A][]O[P]UUP
[A][][A][P]UUP
[A][]P[A]UP
[A][A]UP
A[A]P
A

These can be used in the reduction below.
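
Before that, the Haskell sketch can confirm all three derivations, taking [O], [P] and [U] as arbitrary data:

ghci> eval ([Q [O], Q [P]] ++ swap')
[Q [P],Q [O]]
ghci> eval ([Q [O], Q [P], Q [U]] ++ rot')
[Q [P],Q [U],Q [O]]
ghci> eval ([Q [Q [O]]] ++ call')
[Q [O]]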

-- The actual reduction: --
(when applied to [P], which is true, the result should be [[P]U], or false)

First, using previously examined words:
[P] N
[P] [[P]U] [P] R C
[[P]U] [P] [P] C
[[P]U] [P] P
[[P]U]

Now, all at once, in just 15 steps!
[P] [[P]U] O[[P]U]U [P] O[[P]U]U []O[P]UUP
[P] [[P]U] [P] [[P]U]U [P] O[[P]U]U []O[P]UUP
[P] [[P]U] [P]U [P] [P] O[[P]U]U []O[P]UUP
[P] P [[P]U] [P] [P] O[[P]U]U []O[P]UUP
[[P]U] [P] [P] O[[P]U]U []O[P]UUP
[[P]U] [P] [P] [P] [[P]U]U []O[P]UUP
[[P]U] [P] [P] [P]U [P] []O[P]UUP
[[P]U] [P] P [P] [P] []O[P]UUP
[[P]U] [P] [P] []O[P]UUP
[[P]U] [P] [P] [] [P] [P]UUP
[[P]U] [P] [P] [] P [P] UP
[[P]U] [P] [P] [P] UP
[[P]U] [P] P [P] P
[[P]U] [P] P
[[P]U]
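
The Haskell sketch reaches the same normal form; in fact, its leftmost strategy happens to reproduce this trace step for step:

ghci> eval (true' ++ not')
[Q [Q [P],U]]
ghci> eval (true' ++ not') == false'
True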


Have fun deciphering all that! There are three things that I want to prove in this system. The first is trivial: for any algorithm, there are infinitely many programs which achieve the same reduction (this can be done by putting []P wherever you feel like, most easily at the end). The second thing is almost trivial: these three combinators can produce not only any permutation but any arbitrary duplication or deletion moved around to anywhere, for example abcd--dcaca. This can be done by first showing that these transformations can be represented as a series of duplications and deletions at particular points, plus rotations between certain elements and the top, and then showing that those operations can be implemented with only O, P and U. The third and final proof would be non-trivial: show that O, P and U are Turing-complete. I'm not even sure whether they are or not, but I suspect they are. An injection from SK-calculus or the lambda calculus into this system would probably be the simplest way to prove it. That is harder than it sounds, as SK-calculus is usually thought of as containing function values created at "runtime" (e.g. K x evaluates to λ_.x), while no such direct equivalent exists in {O,P,U}.
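
The first two claims are easy to sanity-check with the Haskell sketch: appending []P changes nothing, and shuffles with duplication, like ab--bab, fall out of composing the combinators above:

ghci> eval (swap' ++ [Q [], P]) == eval swap'
True
ghci> eval ([Q [O], Q [P]] ++ swap' ++ [O])
[Q [P],Q [O],Q [P]]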

Of course, it'd be interesting to do this with cake and k too, but I haven't thought about that as much. A Theory of Concatenative Combinators gives a translation from a sort of "lambda" defined over quotations, but it's not very well-defined, and it may not be a general enough translation into the base {i, dip, cons, dup, zap} to prove Turing-completeness. I hope Kerby works on this more in the future.

Update: In an earlier version, I had D = O[P]U instead of D = []O[P]U, which was incredibly stupid and caused me all kinds of problems. (It all reduced to nothing.) Now it's fixed, and the explanation is clearer, hopefully. Also, I added the potential problems at the end.
