Wednesday, March 19, 2008

Three open language design problems in Factor

Factor is progressing at a surprisingly rapid pace for its small developer base. The compiler and virtual machine make code run relatively fast, and it keeps getting faster, and there's a large and growing standard library. As far as language design methods go, Factor is fairly unusual. With most languages, once they reach a certain level of maturity, code gets written against them, and backwards compatibility has to be maintained so that that code base keeps working.

In Factor, nearly all code written is distributed in extra/ with Factor itself. We can keep making changes to the language and update the entire code base whenever we want. It's pretty amazing, and allows us to remove unused things and make major changes to the API, like the recent one with encodings. So the resulting language design and standard library can be as clean as some abstract, mostly a priori language standard like Haskell98 or R5RS while being as useful as we want. (Of course, after 1.0, stability will be a major goal.)

The library is progressing amazingly. But within this context, there are a few things that need to be worked out language-design-wise.

Mixing currying with macros

Macros (by which I mean the things you define with MACRO:, described here) are really an optimization. Instead of using MACRO:, you could write a normal word definition ending in call: macros just take some number of things from the stack and generate a quotation, which is then called. The body of a macro is everything but the call. What macros do is move this calculation of the quotation as early as possible: if possible, the quotation is calculated when the optimizing compiler passes over the code; otherwise, it's calculated at runtime and the result is memoized. Either way, it's only calculated once for any given input.

Macros work great as a way to describe things like undo, cond and case. But what if we want to use locals in a cond? Or curry something onto the quotation used by undo? These are two instances of the same problem: locals boil down to currying plus a quotation transformation, and so does undo.

Consider the following code:

:: foo ( bar -- baz )
    {
        { [ bar 1 = ] [ 2 ] }
        { [ t ] [ bar 1 + ] }
    } cond ;

The double colon (::) indicates that this definition uses locals, and that bar is a lexically scoped local variable taken from the stack. The code I wrote won't work, because the locals implementation doesn't know how to deal with cond. When dealing with ordinary quotations, it can curry the locals that get used onto the front, and do the typical locals transformation inside the quotation. But cond takes an array from the stack. What should it do there?
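To see why this is hard, here's a simplified sketch of what the locals transformation does to an ordinary quotation (the word name is hypothetical, and the real implementation is more involved):

```factor
USING: kernel math ;

! Inside foo, a quotation like [ bar 1 + ] is compiled into a
! quotation with the value of bar curried onto the front:
: quot-for-bar ( bar -- quot ) [ 1 + ] curry ;

! 5 quot-for-bar call  computes  5 1 +  just as [ bar 1 + ]
! would when bar is 5. But cond takes an array of pairs from
! the stack, so there is no single quotation to curry bar onto.
```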

Similarly, consider this code:

: second-if-tagged ( pair first-tag -- second )
    [ swap 2array ] curry undo ;

This code will work as expected, but the inverse of the quotation has to be compiled separately for each distinct first-tag. Even worse, there's a memory leak, since macros memoize their inputs: first-tag values would never be freed.
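For reference, here's what the curried quotation computes in the forward direction (the word name is made up, and the example behavior is a sketch of how I'd expect the inverse to work):

```factor
USING: arrays kernel ;

! With first-tag curried on, [ swap 2array ] maps
!     second  to  { first-tag second }
: tag-second ( second first-tag -- pair ) swap 2array ;

! undo runs this in reverse: { "a" 5 } "a" second-if-tagged
! should pattern-match the pair and leave 5, failing if the
! first element isn't "a".
```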

Somehow, undo and cond should specify a way to deal with currying and with the transformation that locals perform. But how should this be structured?

Formalizing protocols

Right now, mixins are used for two things. Some, like plain-writer (in the latest development sources), implement a bunch of generic words. Others, like immutable-sequence, have a bunch of generic words defined around them which instances are expected to implement. Some, like virtual-sequence, do both.

For things like sequence, there's only an implicit relationship with the generic words that are associated with them. In extra/delegate, there's a formally defined set of generic words, but it's not associated with the sequence mixin.

It'd be even nicer if we could build some static verification tools on top of this, to throw some kind of error, some time before runtime, if a necessary method isn't defined. But this is a little harder than it sounds. For example, sequences have default methods for both nth and nth-unsafe. These are mutually recursive, so only one of them has to be implemented; but every sequence has to implement at least one to be valid. How should that be represented? It couldn't be done automatically. There are also words, like like, whose implementation is completely optional, and others, like length, whose implementation is mandatory.
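One possible starting point is the PROTOCOL: syntax from extra/delegate (the word list here is illustrative, and notably there's no way yet to mark a word as optional or as one-of-a-mutually-recursive-pair):

```factor
USING: delegate ;

! A formal protocol for the sequence mixin might look like this.
! Today, nothing ties such a protocol to the sequence mixin or
! checks implementations before runtime.
PROTOCOL: sequence-protocol length nth nth-unsafe set-nth like ;
```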

A soft type system

Factor is dynamically typed, and to some degree, this is a strength. Inherent in any fully static type system is a limitation preventing certain programs from being run. It's much easier to have a dynamic interactive development environment that uses dynamic typing. If Factor had a static type system, it's unlikely that it could have had the gradual development of programming concepts that it did.

But, at the same time, there are some advantages to static typing. For one, it opens up a bunch of optimization opportunities for the compiler. The Factor optimizing compiler already infers some things about types, in two ways. First, the compiler infers the stack effect of arbitrary quotations, that is, how many things a block of code takes from and leaves on the stack. This is exposed to the user through the infer. word. A weakness of this is that stack effects can only be inferred if all quotations can be inlined at compile time; deeper knowledge about quotations could fix this.
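For example, in the listener (the exact output format varies between versions):

```factor
! infer. prints the inferred stack effect of a quotation; for
! [ + ] it reports two inputs and one output, something like
!     ( object object -- object )
[ + ] infer.
```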

The other way type inference happens is that the compiler eliminates type checks and method dispatch where it can prove that something is of a particular class. This happens in a more ad-hoc way and is not exposed to the user because it doesn't infer which types are required. If there were a stronger type system, then it might be possible to remove even more type checks, and it might be possible to automatically insert what's now done by the hints mechanism.

Another advantage is that a user-visible static type system can reject programs that won't work, and can provide useful metadata about programs based on their type. Factor can currently reject programs whose stack effect comment doesn't match the stack effect that's inferred. Usually, once you get the hang of stack programming, you don't need this, but occasionally it still comes in handy. Type checking does the same thing.
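For instance, a definition like the following is rejected, because the declared effect disagrees with what's inferred from the body (a deliberately broken sketch):

```factor
USING: kernel ;

! Declared effect: two inputs, one output. The body only
! consumes one item and leaves nothing, so the inferred effect
! doesn't match and the word fails to compile.
: mismatched ( x y -- z ) drop ;
```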

I'm not sure if Factor will ever have a type system that is this expressive, but I think it's pretty cool that in Haskell, you can have a function which dispatches (at compile time) on its expected return type. Without this, monads would be a bit more awkward.

Anyway, none of these things represents an urgent need, but there's also something to consider: if we make a static type system for Factor, we don't want to restrict the flexibility of Factor at all. There would need to be a number of things. First, a staged approach similar to the way metaprogramming currently works. Second, the type system would have to be a soft typing system, which can infer many things but doesn't reject everything that it can't infer. It can warn the user if it can tell that there'll be a type mismatch, though. Neither of these things has been attempted in a concatenative language yet.

[Update: A commenter pointed out that it's also possible to have explicit declarations and no inference. A version of this is planned to happen before 1.0, in the form of a better syntax on top of multimethods. Basically, you have a generic word with only one method, and that method is for the types of arguments that you specify. This allows for earlier failure and some generic dispatch/type check eliminations. It's still not clear how more general, before-runtime type checking (with or without inference) can be done, though.]


There are a bunch more language design problems to come, but these three seem to be the biggest right now. It's possible that these won't be solved, and for the last one, I'm not sure if that'd be too horrible. I hope that the currying problem can be resolved before Factor 1.0, and I expect everything to have some sort of resolution by Factor 2.0. The sooner we solve the problems, the easier it will be to update the code to conform with the solution. I just wish I had good enough ideas to figure them out right now.


Anonymous said...


There is a middle ground in type checking between static type checking and dynamic type checking. Orthogonal to this is type inference.

What it requires is the ability for the user to specify what types he wants at any point, with the ability to say that what he wants can be a value of any type. Note that this is not saying anything about what he expects, which is a slightly different situation, though in the same vein.

What it does bring to the fore is that for this to be extremely expressive would require that types themselves would be values to be passed around in expressions. This would mean that code can be written to manipulate the "parsing" of the source to generate fully determined code where needed and dynamically dispatched code where required or desired.

I have found that the concept of a type variable as used in some of the functional languages of the ML ilk has actually sabotaged the expressiveness of those languages, because they treat types as something special and not as another domain.

The fundamental weakness of the current static type systems is the underlying assumption that you get into a knot with having types as values, because they "believe" that it means that the type Type must contain itself. They fundamentally believe that "the map is the territory" and fail to realise that "the map represents the territory".

The above is an aside. Factor has the capability of being expressive in this way if a uniform way of dealing with types as values can be incorporated into Factor.


Bruce Rennie
(God's Own Country Downunder)

Anonymous said...

"Factor is progressing at a surprisingly rapid pace for its small developer base."

Yes, I also have the same impression, and it got me thinking about it. I agree that working on one code base reduces the synchronisation overhead to the point that it becomes easy enough to change fundamental things in the language. Having unit tests is also a factor that permits that. But I also believe that Slava and the other participating people are well educated, and that helps to find the best solutions quickly. What's nice is that those facts also permit the community to extract useful idioms and make the language more and more powerful (in terms of abstraction) over time, which itself also helps reduce the time needed to implement new features. What is left as an unknown is at what code-base size those rules won't apply anymore. I hope it won't be too soon :-p

Slava Pestov said...

Dan, resolving the issue of macros applied to curried quotations is on my list for 1.0. Multi-methods and optional type declarations for tuple slots will bring us closer to the third goal. As for formalizing protocols, you can experiment with that if you want: I don't really have any good ideas in this department.

Anonymous said...

Dispatching on expected return type? That sounds very much like Perl's notion of "context" -- functions that do something different in a context that expects an array vs. a scalar. It can make for very confusing code.

Daniel Ehrenberg said...

Slava, that's great, though of course optional type declarations don't form a type system that can tell the user about types before runtime.

Anonymous, Haskell's system is, in my limited experience, not all that confusing if used correctly. You dispatch off a return type within the larger context of a type class, which is well-organized and well-specified. Without this, the return function for monads would somehow have to be annotated with the expected return type, and this would diminish the genericity of programs written for monads (or make it more awkward). Also, the translation to an explicit dictionary is fairly straightforward.