Note to coders: If you've done much programming, you'll probably find this blog entry completely pointless and boring. You'd probably be better off not reading it. Also, you may have noticed that my blog entries are way too long. This one is no exception.
In 11th grade, I had an excellent economics teacher. He really got me interested in the topic, and helped me take the AP even though it wasn't an AP course. But the one thing that always annoyed me was the claim that, "By 2023, computing power in the world will exceed human brain power. This means that computers will program themselves!" I would always try to tell him, no, that's completely implausible, but could never convince him. When I mentioned this to a friend, he said, "They'll program themselves, but not until 2050." This view is far too common for me to resist refuting it here.
Computers programming themselves is at once impossible and already happening. It's already happening in the sense of compilers and libraries. Because of compilers, programmers don't write in the ones and zeros that computers understand; they write in a language that describes things more directly, and then the computer "programs itself," converting that programmer-intelligible code into machine code. Similarly, libraries (reusable source code components) let programmers write drastically less. It's gotten so advanced that, to a large degree, programming is assembling preexisting components.
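As a rough illustration (my own made-up example, in Python, but any modern language would do): one library call does work that would once have taken pages of hand-written machine code.

```python
# Sorting a list of names: one call to the standard library.
# The compiler/interpreter and the library supply all the machine-level
# details -- the comparisons, the memory moves, the loop bookkeeping.
names = ["Turing", "Lovelace", "Babbage", "Hopper"]
print(sorted(names))  # ['Babbage', 'Hopper', 'Lovelace', 'Turing']
```

In that sense, the computer already writes most of the low-level program for you; the programmer only assembles the pieces.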
But it can't get much further than this. Text files for source code may disappear (I doubt it), but the programmer never will. An increase in computing power won't do it. All computers are basically Turing-complete*, so they can already perform every operation that any other computer could possibly perform. They may be slower at it, but the same set of computations is possible. No physical device can do more than what a Turing machine (or an equivalent machine, like the computer you're reading this on) can do. Even if, in 2023 or 2050, there is a trillion times more computing power than there is today, no new computations will be available; they'll just be faster. So if it will ever be possible for computers to program themselves any more than they already do, it's already possible. Someone just has to program it.
But advances in artificial intelligence or similar technologies won't make computers program themselves either. No matter what happens, computers still have to be told what to do. What they are told to do could be some complex behavior, such as behaving like a neural net (this isn't as useful as it sounds) or finding rules and making decisions based on statistical analysis of a situation and past responses (this is a bit more useful). But the most useful operations have nothing to do with AI or learning. These include things like sorting a list, multiplying two matrices, and displaying a GUI window. These can only be done by an algorithmic process: directly or indirectly, someone has to tell the computer which element in the list to compare to which, which matrix elements multiply with which, or where to place the pixels in the window on the screen. Furthermore, effective programmers need to understand these algorithms, at least a little bit, in order to combine them in an efficient way. There is no way around this. The best we can do is build detailed, useful libraries, so code only has to be written once, and make a good, high-level programming language so programmers can express their ideas in a simple, logical way.
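To see what I mean by "someone has to tell the computer which element to compare to which," here's a sketch of a simple sorting algorithm (insertion sort, in Python). The details don't matter; the point is that somewhere, at some level, a person had to spell out every comparison and every move.

```python
def insertion_sort(items):
    """Sort a list in place. Every comparison and move below was
    specified by a person; the computer never invented this itself."""
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # Someone has to say exactly which elements get compared...
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]  # ...and exactly where each one moves.
            j -= 1
        items[j + 1] = current
    return items
```

Libraries mean most programmers never write this particular code again, but someone wrote it once, and someone will have to write whatever replaces it.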
This isn't to say that programming will always stay the way it is now. Programming in modern languages like Java, Haskell or Factor today is radically different from programming in C or assembler 25 years ago**, and from punch cards 25 years before that. In 25 years, things will be very different, possibly even with new input devices (such as graphical programming or voice commands; these have problems too, but that's another blog post). But computers won't ever program themselves as long as they exist; there will always be someone who has to directly tell them what to do.
As an author, I'm not very persuasive. So to anyone I haven't convinced, I will extend a bet: for any amount, in any finite time frame, I bet you that computers will not program themselves.
* Actually, only a computer with an infinite supply of memory is truly Turing-complete, because some computations need more memory than any physical machine has. But in practice, computers can do pretty much whatever we want them to do, and this ability will not change significantly with time.
** Those languages are still used today, but usually not as the primary language for constructing an application.