**Q:** How can you make such a bold, stupid claim?

**A:** Because Turing assumed an infinite space to solve problems in. In the real world, there is a finite number of bits we can work with. Let's call this n. If a problem can be solved in the physical universe, it must be solvable using n or fewer bits of memory. There are 2^n possible states the bits can be in, so after 2^n memory writes, either an answer must be produced or the bits are in a state they've been in before, an infinite loop. Since n is a constant, 2^n is a constant, so we know everything runs in worst-case O(1) time. For a more practical simulation, take n to be the number of bits in addressable memory, including virtual memory.
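The pigeonhole argument above can be sketched directly. Below is a minimal Python toy, not anything from the post itself: the `step` transition function is an invented stand-in for "one memory write", and the point is that any deterministic machine over n bits either halts or revisits a state within 2^n steps, and a revisit means it loops forever.

```python
# Sketch: a deterministic machine over N_BITS bits must halt or revisit
# a state within 2**N_BITS steps. `step` is a toy transition function
# standing in for "one memory write"; HALT marks termination.

N_BITS = 4                      # tiny n, so 2**n is inspectable
HALT = None

def step(state):
    """Toy transition: halts on 0, otherwise moves to (state * 3) mod 2**n."""
    if state == 0:
        return HALT
    return (state * 3) % (1 << N_BITS)

def runs_forever(start):
    """Decide non-halting by running at most 2**N_BITS steps."""
    state = start
    for _ in range(1 << N_BITS):
        state = step(state)
        if state is HALT:
            return False        # halted within the state budget
    # 2**n steps without halting means some state repeated; since the
    # machine is deterministic, it is stuck in a cycle forever.
    return True
```

Starting from state 1 the toy machine cycles through 1, 3, 9, 11 and never reaches 0, so `runs_forever(1)` is true; starting from 0 it halts immediately.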

**Q:** How can you check if an algorithm will halt or not in constant time?

**A:** Divide the available memory in two. Half will be used for running the algorithm, and half will be a counter. Increment the counter on each memory write in the algorithm's memory space. If this counter overflows before the algorithm halts, we know there is an infinite loop. (Note: this can only decide halting for algorithms that use at most half of the available memory, which is a smaller class of programs.)
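Here is a hedged sketch of that counter scheme, with invented names: `trace` models a program as a generator that yields once per memory write, so an m-bit program that hasn't halted after 2^m writes must have repeated a state and will loop forever.

```python
# Sketch of the "half for the program, half for a write counter" test.
# A program confined to m bits of memory has at most 2**m states, so if
# an m-bit counter overflows before the program halts, it is looping.

def halts(trace, m_bits):
    """Return True iff the traced program halts before 2**m_bits writes."""
    limit = 1 << m_bits          # counter overflows after 2**m_bits writes
    writes = 0
    for _ in trace:              # each iteration = one memory write
        writes += 1
        if writes >= limit:
            return False         # counter overflowed: infinite loop
    return True                  # trace ended: the program halted

def countdown(n):
    """A program that provably halts: writes n times, then stops."""
    while n > 0:
        n -= 1
        yield n

def spin():
    """A program that never halts."""
    while True:
        yield 0
```

For example, `halts(countdown(5), 4)` is true (5 writes fit under the 16-write budget), while `halts(spin(), 4)` is false.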

Remember the big thing that Turing proved: we can't have a program that tests whether other programs halt. If we did have that program, and ran it on a program which halts if (by the tester's verdict) it doesn't halt, and doesn't halt if it does, then the halting test program couldn't give either answer without a contradiction. This implicitly rests on the idea that program memory is unbounded. Running that second program using the halting test method described above would require an unbounded amount of memory, because it would result in an unbounded recursion. We can just say that the program crashes, since it runs out of memory. In this world, not everything that's computable can be computed, but everything halts (or repeats a previous memory state exactly, which can be caught, as explained above).
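The classic contradiction can be sketched in code. This assumes a hypothetical oracle `halts`, which, as the paragraph explains, cannot actually be written for unbounded-memory programs:

```python
# The classic diagonal contradiction, sketched in Python. Suppose a
# hypothetical oracle `halts(f)` existed for unbounded-memory programs:

def halts(f):
    """Hypothetical halting oracle -- cannot actually exist."""
    raise NotImplementedError("no such oracle")

def contrary():
    """Halts exactly when the oracle says it doesn't: a contradiction."""
    if halts(contrary):
        while True:             # oracle said "halts", so loop forever
            pass
    # oracle said "loops forever", so halt immediately
```

In the bounded world of the post, `halts(contrary)` would instead recurse until memory ran out, so the tester simply crashes rather than contradicting itself.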

**Q:** What do you mean by unbounded? And halt?

**A:** By "unbounded" I mean countable (meaning isomorphic as a set to the naturals). Basically, there is no limit, but no element itself is infinite. By "halt" I mean that the algorithm will stop processing and come to an answer. In this theoretical world of algorithms, note that there's no additional external input once the algorithm begins.

**Q:** But the system you describe isn't Turing-complete!

**A:** Yes. The finite universe isn't strictly Turing-complete either, and I think this is interesting to note. For example, there's no way we could model another universe larger than our own, down to quantum states, from this universe.

**Q:** So my computer science professor was lying to me all along! Computational complexity is useless. Why didn't they tell me this?

**A:** Not quite. Since this constant is so large (even when we shrink it down to 2^(the size of the available memory)), we can actually get a tighter upper bound than O(1) if we use, say, O(m) for the same algorithm. This is why you sometimes have to be careful what your constants are in analysis; sometimes something which looks asymptotically faster is actually slower in not just small but *all* cases. Anyway, your professor probably didn't tell you this because it's both obvious and vacuous.

**Q:** So, if you were just tricking me, does this idea mean anything?

**A:** Actually, it comes up fairly often. Say you're doing arithmetic on integers. Each operation takes constant time, right? Well, it does if you're using machine integers. That's because machine integers are of a certain fixed maximum size. The reason we can say it takes constant time is the same reason that the universe can calculate everything in constant time! In fact, even if we use bignums, if we say "this will only operate on numbers representable in 256 bits," it *still* makes some sense to say that things go on in constant time. It's only when things are unbounded that it makes total sense. Sometimes you have to look at very large data points to find that, say, n^{log_3 4} grows faster than n log^2 n. If the size of the input is bounded at 10 billion, there's no clear winner.
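A quick numeric check in this spirit, taking n^{log_3 4} and n log² n as the two rates (the exact pair is illustrative, chosen so the crossover lands beyond 10 billion):

```python
import math

# Compare two growth rates whose crossover is astronomically large.
# n**log_3(4) (about n**1.26) dominates asymptotically, yet for n up to
# about 10**12, n * log2(n)**2 is the larger of the two.

def f(n):
    return n ** math.log(4, 3)      # ~ n**1.26, the asymptotic winner

def g(n):
    return n * math.log2(n) ** 2

for n in (10**3, 10**6, 10**10, 10**14):
    leader = "f" if f(n) > g(n) else "g"
    print(f"n = {n:>15,}: larger is {leader}")
```

At n = 10 billion the "asymptotically slower" g is still bigger; f only takes the lead somewhere past 10^12.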

Think about sorting algorithms. If we have k possible items in an array of length n, we can sort the array in O(n) time and O(k) space using counting sort. Say we're sorting an array of machine integers, and we have no idea what the distribution is. Great! Just make an array which has one counter for each possible machine integer. Now iterate through the list and increment the array at the integer's index when you encounter that integer. What's the problem? Oh yeah, we just used up more than all of the addressable memory. So just because we can, theoretically, construct something that will solve our problem in linear (or, in the original example, constant) time doesn't mean it'll work.
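A minimal counting sort makes the trade-off concrete; the k-length count array is exactly the memory that blows up for full-range machine integers:

```python
# Counting sort: O(n + k) time and O(k) space for values in range(k).
# Fine for k = 10 or k = 256; hopeless for k = 2**64 machine integers,
# where the count array alone would dwarf all addressable memory.

def counting_sort(values, k):
    """Sort integers drawn from range(k) in O(n + k) time."""
    counts = [0] * k                 # O(k) space -- the catch
    for v in values:
        counts[v] += 1
    out = []
    for v, c in enumerate(counts):
        out.extend([v] * c)          # emit each value count times
    return out

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], k=10))
```

With k = 2^64, the `counts` array would need more memory than any real machine has, even at one bit per counter.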

**Update**: Added another question. Note to readers: The headline and first paragraph aren't to be taken seriously or directly! This is a joke to demonstrate a point, not a claim of mathematical discovery, and the original solution to the halting problem is an eternal mathematical truth.

## 12 comments:

That's a fun post, but not quite true. In particular, your computing device isn't powerful enough to compute the factorial of some extremely big natural number. For the same reason, the `proof' of solving the halting problem in O(1) is wrong: your machine may overflow for terminating computations that just need more states.

To put it more simply, your computing device isn't Turing-complete: a TM must have finitely many states but an unbounded tape, and yours doesn't.

To both commenters: Yes, it's not Turing-complete, that's the point. The universe is not Turing-complete because it is bounded.

First, you are more into PR than into CS :) I mean, you start by debunking Prof. Turing, and then it turns out that you prove (or claim to) statements about something else, which doesn't relate to the Entscheidungsproblem as such... But that's OK, I see the fun value of it ;)

(Maybe you could emphasize more that the essence of your argument operates at the *metalevel*, i.e., you dispute the adequateness of the mathematical model itself.) However, I think you've made a mistake at the technical level. That is, how could you ensure that your arbitrary algorithm, defined for n bits, will work the same way when it can use only n/2 bits? Or am I misunderstanding something about your take on the limited halting problem?

dzsekijo,

Good point! I forgot to say that explicitly about the halting test method, but it's updated now. So, strictly speaking, that method can't handle everything.

You're right to think that this is more a joke than a serious "debunking" of Turing. I hope most people understand that.

Good morning Daniel,

One major flaw is the belief (as stated) that the universe is bounded. This is conjecture, not proven fact. I believe there are, simply put, three possibilities:

1. Bounded finite

2. Unbounded finite

3. Unbounded infinite

In terms of finding out which it is, we cannot at this time prove any of them factually.

Hence, any statement about whether or not the universe is Turing-complete must be a philosophical argument, not a factual one.

Enjoy your day.

Bruce Rennie

(God's Own Country Downunder)

Bruce, regardless of whether the universe is infinite, it's implausible to have a working algorithm use more matter than exists in, say, a square light year. So one question may be philosophical, but for practical purposes things are far from Turing-complete.

Hey Dan.

I just rediscovered your blog.

Damn, you are brilliant.

Tootles!

-Sahar

Daniel,

Going philosophical: being implausible is a viewpoint only. There is no requirement for a Turing machine to finish a computation in our lifetimes. Additionally, you are relying on the assumption that communication is limited to the speed of light. There is a possibility that there are processes that "communicate" at speeds that make c look like it is standing still. Laboratory tests have been proposed, but as far as I know none have been done yet.

All theories that we use in engineering and science are approximations (simplifications) of reality: simple models that, for the most part, work on the principle of "near enough is good enough". Quite different models can give you similar results in the small and very different results in the large.

Don't get caught up in the standard mindset of the theory being reality; it is only a simplified model of what is around us.

So, back to the universe being a Turing machine: unless we have a long enough time, we can only speculate, as you have done in your original article, and have fun at the same time.

Please don't take my comments as denigrating your efforts; it is good to see a young mind speculating on the nature of reality, but don't get closed-minded. University does tend to do this to people: it happened before my days at uni, it happened when I was at uni, it has happened since then, and it is still happening today.

Be prepared to argue your case but also be prepared to alter your perspective if your case is shown to be wrong.

By the way, I have enjoyed your articles on Unicode and Factor; they have been interesting and entertaining. Thank you for your efforts.

Enjoy your day,

or as we say it here

To die is a goodie, so avagreydie

and for any Aussies reading this, speak it.

Bruce Rennie

(God's Own Country Downunder)

Bruce, yes, philosophically, theoretically and mathematically, my position is untenable.

Daniel,

Thank you, I had a good laugh at your last comment. Looks like you've got a good sense of humour.

regards

Bruce Rennie

(God's Own Country Downunder)

Daniel, here is how I reason: actual infinities cannot exist. You cannot point to an infinitely long rod. The infinite exists only as a potential to increase or decrease. The actual is always finite.

So the total number of programs possible in the universe is finite, because the universe is actual.

Since we are dealing with a finite set of programs, the Cantor-style diagonalization proof Turing used no longer works, and therefore there exists a solution to the halting problem for a finite set of programs.

So I agree with you.

What I wish to know is whether we can let a process observe reality, come up with algorithms that simulate that reality (approximately), solve the halting problem for that set of algorithms, and then create a program which contradicts that solution. I think if we do this fast enough, we can generate an appearance of "free will", because then you would have an algorithm which tries (not so efficiently) to predict its own future and is sometimes capable of avoiding the future it predicts by acting differently.
