Friday, March 2, 2007

Computers can't program themselves!

Note to coders: If you've done much programming, you'll probably find this blog entry completely pointless and boring. You'd probably do well not to read it. Also, you may have noticed that my blog entries are way too long. This one is no exception.

In 11th grade, I had an excellent economics teacher. He really got me interested in the topic, and helped me take the AP even though it wasn't an AP course. But the one thing that always annoyed me was the claim that, "By 2023, computing power in the world will exceed human brain power. This means that computers will program themselves!" I would always try to tell him, no, that's completely implausible, but I could never convince him. When I mentioned this to a friend, he said, "They'll program themselves, but not until 2050." This view is far too common for me to resist refuting it here.

Computers programming themselves is at once impossible and already happening. It's already happening in the sense of compilers and libraries. Because of compilers, programmers don't write in ones and zeros that computers understand; they write in a language that describes things more directly, and then the computer "programs itself," converting that programmer-intelligible code into computer code. Similarly, libraries (reusable source code components) let programmers write drastically less. It's gotten so advanced that, to a large degree, programming is assembling preexisting components.
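This division of labor can be made concrete. Below is a small sketch in Python (my own illustrative choice of language, not the post's): one line of library-backed, high-level code stands in for all the low-level work the language implementation does on the programmer's behalf.

```python
# One line of high-level code...
names = sorted(["Carol", "alice", "Bob"], key=str.lower)
print(names)  # ['alice', 'Bob', 'Carol']
# ...stands in for the comparison loops, memory management, and
# machine instructions that the compiler/interpreter and the library
# author wrote once, so this programmer never has to.
```

The programmer describes *what* they want (a case-insensitive sort); the "self-programming" is the stack of human-written tools underneath.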

But it can't get much further than this. Text files for source code may disappear (I doubt it), but the programmer never will. An increase in computing power won't do it. All computers are basically Turing-complete*, so they can already perform every operation that any other computer could possibly perform. This may be at a slower rate, but the same set of things is possible. No physical device can do more than what a Turing machine (or an equivalent machine, like the computer you're reading this on) can do. Even if, in 2023 or 2050, there is a trillion times more computing power than there is today, no new computations will be available; they'll just be faster. So if it will ever be possible for computers to program themselves any more than they already do, it's already possible. Someone just has to program it.
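The equivalence claim can be illustrated directly. Here is a minimal Turing-machine simulator in Python (an illustrative sketch of mine, not anything from the original argument): a few lines running on any ordinary computer can emulate any such machine, faster or slower, but never computing anything outside that class.

```python
def run_tm(tape, rules, state="A", halt="H"):
    """Simulate a one-tape Turing machine. `rules` maps
    (state, symbol) -> (symbol_to_write, move L/R, next_state);
    cells not yet visited read as the blank symbol '_'."""
    tape = dict(enumerate(tape))
    pos = 0
    while state != halt:
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A toy machine: replace every 0 with 1, then halt at the first blank.
rules = {
    ("A", "0"): ("1", "R", "A"),
    ("A", "1"): ("1", "R", "A"),
    ("A", "_"): ("_", "L", "H"),
}
print(run_tm("0101", rules))  # 1111
```

A faster computer runs this loop more times per second; it does not add any new entries to the set of functions the loop can compute.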

But advances in artificial intelligence or similar technologies won't make computers program themselves either. No matter what happens, computers still have to be told what to do. What they are told to do could be some complex behavior, such as behaving like a neural net (this isn't as useful as it sounds) or finding rules and making decisions based on statistical analysis of a situation and past responses (this is a bit more useful). But the most useful operations have nothing to do with AI or learning. These include things like sorting a list, multiplying two matrices, and displaying a GUI window. These can only be done by an algorithmic process: directly or indirectly, someone has to tell the computer which element in the list to compare to which, what matrix elements multiply with what, or where to place the pixels in the window on the screen. Furthermore, effective programmers need to understand these algorithms, at least a little bit, in order to combine them in an efficient way. There is no way to avoid this. The best we can do is build detailed, useful libraries, so code only has to be written once, and make a good, high-level programming language so programmers can express their ideas in a simple, logical way.
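The point that someone has to say "what matrix elements multiply with what" can be taken literally. A sketch in Python (the function name and layout are my own):

```python
def mat_mul(a, b):
    """Multiply two matrices given as lists of rows. Someone had to
    decide, once and explicitly, exactly which elements pair up:
    entry (i, j) of the result is the dot product of row i of `a`
    with column j of `b`."""
    n, k, m = len(a), len(b), len(b[0])
    assert len(a[0]) == k, "inner dimensions must match"
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

However the rule gets into the machine, a human specified it; no amount of raw computing power conjures up the pairing on its own.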

This isn't to say that programming will always stay the way it is now. Programming in modern languages like Java, Haskell or Factor today is radically different from programming in C or assembler 25 years ago**, and from punch cards 25 years before that. In 25 years, things will be very different, possibly even with new input devices (such as graphical programming or voice commands; these have problems too, but that's another blog post). But computers won't ever program themselves as long as they exist; there will always be someone who has to directly tell them what to do.

As an author, I'm not very persuasive. So to anyone I haven't convinced, I will extend a bet: for any amount, in any finite time frame, I bet you that computers will not program themselves.

* Actually, only computers with an infinite supply of memory are truly Turing-complete, because there is an infinite number of possible algorithms. But in practice, computers can do pretty much whatever we want them to do, and this ability will not change significantly with time.

** Those languages are still used today, but not usually used as the primary language for constructing an application.


Hypercubed said...

Interesting article. I agree and disagree at the same time. It all boils down to the definition of programming.

How about the day when computer AI is good enough to simulate the human mind (the human mind is simply an advanced Turing machine, is it not?). At that point some intellectual jobs can be assigned to the computer AI, including programming (manual jobs are already being assigned to machines). Ok, yes, somebody needs to instruct the computer to write a program that does X, Y, Z, but that person is no more a programmer than your average manager. When a project manager writes a design document, is he programming?

But let us take it to the next step. Humans can now have their minds simulated inside a computer, and manual labor can be done by machines. At that point, what is the point of having biological bodies? Biological humans could cease to exist, but the AI community inside the machines can continue. These AIs could be programming, building, and maintaining themselves. But will they?

At this point I don't think the computers are programming in the traditional usage of the term. At the very least, they are not coding. The computers will have the ability to manipulate their programming much like biological creatures adapt and evolve, only at a much greater level of self-reflection. Is this programming? Not unless you say we program our own brains.

In the end I think it is a non-question. By the time computers are able to program themselves that activity will no longer be called programming.

Daniel Ehrenberg said...

Call me a pessimist, but even though our brains are probably no more than universal Turing machines, I really doubt we'll ever get to that level of AI where a computer is basically identical to a human brain. The best way to approximate this would be to have a declarative language where, basically, a design document is a program. I don't think this is unachievable. The most realistic way to make perfect, unambiguous design documents is to use a specialized language, even if a perfect brain simulator exists. This may be what programming evolves into in the coming years. But someone will still have to write these specialized specifications.

But you may be right. I, for one, would be very happy if we had artificial human brains within my lifetime. I'm not sure how keen I'd be on the idea of transferring my brain to a computer (I once took a course on philosophy of mind, which basically made me petrified that, somehow, I'll lose my consciousness). It'd still be a great opportunity, but I just don't think it'll happen. Hopefully, I'm just a pessimist.

Anonymous said...

I have a comment about this. You did actually have to learn all kinds of things too (I can't say you got programmed, but...), by gathering information you received from your different senses. You started imitating things you saw other people doing as a child. You learned not to touch fire by putting your hand on it and feeling it hurt...

If you make a decision, you are also going through some basic steps, evaluating the pros and cons of each possible way to take.

A machine could easily learn some of these things by gathering information from the internet. The only problem is that a computer program can only make decisions if it has some basic steps to go through, which it could then alter itself.


Anonymous said...

I know I am commenting on a blog entry that is 4 years old, but it is an interesting topic to me.

What about GP? Granted, a fitness criterion has to be defined, but the computer is still finding a solution automatically. In a sense it is programming itself, and possibly doing it in a totally unique way never done before (the possibility is there, anyway). So the only intervention by a human is defining the fitness function used to judge the performance of each competing program in the pool.
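For readers unfamiliar with the idea, here is a minimal evolutionary-search sketch in Python (a genetic algorithm over bit strings rather than true genetic programming over program trees, but the division of labor is the same): the human supplies only the fitness function, and the machine finds a solution by variation and selection. All parameter values below are illustrative choices, not anything from the comment.

```python
import random

random.seed(0)  # deterministic run for the example
TARGET_LEN = 20

def fitness(bits):
    # The only human-supplied part: a score the search maximizes.
    # Here: count of 1-bits (the classic "OneMax" toy problem).
    return sum(bits)

def evolve(pop_size=30, generations=60, mutation=0.05):
    # Random initial population of bit strings.
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            # Crossover: splice two parents at a random cut point.
            a, b = random.sample(parents, 2)
            cut = random.randrange(TARGET_LEN)
            child = a[:cut] + b[cut:]
            # Mutation: flip each bit with small probability.
            child = [bit ^ (random.random() < mutation) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically near 20 (all ones), not guaranteed
```

Note where the human is in the loop: the fitness function, the representation, and the search operators were all written by a person; the machine only explores within that frame.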

Anonymous said...

All that is missing from a computer programming itself is "purpose." Once you can define a purpose and a set of rules for achieving that purpose, in an environment where the rules and boundaries can change, you then have real learning. What we call wisdom might be more difficult to teach, as it is with humans. I guess it boils down to purpose: am I achieving it, and how successfully?
Isn't that what all human brains do?