Alan Turing wasn't quite right: anything that can be executed in the physical universe runs in constant time, and it's simple to design a mechanism that tests whether an algorithm will halt, also in constant time (assuming the algorithm takes less than half of the universe to run).
Q: How can you make such a bold, stupid claim?
A: Because Turing assumed an infinite space to solve problems in. In the real world, there is a finite number of bits we can work with. Let's call this n. If a problem can be solved in the physical universe, it must be solvable using n or fewer bits of memory. There are 2^n possible states those bits can be in, so after 2^n memory writes, either an answer has been produced or the bits have returned to a state they've been in before, which in a deterministic machine means an infinite loop. Since n is a constant, 2^n is a constant, so everything runs in worst-case O(1) time. For a more practical simulation, take n to be the number of bits in addressable memory, including virtual memory.
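Here's a minimal sketch of that argument in Python (the function and the toy machines are purely illustrative): a deterministic machine confined to n bits of state must, within 2^n + 1 steps, either halt or revisit a state, and revisiting a state means it will loop forever.

    def loops_forever(step, initial_state, n_bits):
        """step(state) returns the next state, or None when the machine halts."""
        state = initial_state
        seen = set()
        for _ in range(2 ** n_bits + 1):
            if state is None:
                return False             # the machine halted with an answer
            if state in seen:
                return True              # repeated state: guaranteed infinite loop
            seen.add(state)
            state = step(state)
        return False                     # unreachable if the state really fits in n_bits

    # A 4-bit counter that wraps around never halts:
    print(loops_forever(lambda s: (s + 1) % 16, 0, 4))                # True
    # One that stops when it reaches 10 does halt:
    print(loops_forever(lambda s: None if s == 10 else s + 1, 0, 4))  # False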
Q: How can you check if an algorithm will halt or not in constant time?
A: Divide the available memory in two. Half will be used for running the algorithm, and half will be a counter. Increment the counter on each memory write in the algorithm's memory space. If this counter overflows before the algorithm halts, the algorithm has made more writes than its half of memory has distinct states, so some state has repeated and we know there is an infinite loop. (Note: This can only solve things that take a maximum of half of the available memory, which is a smaller class of programs.)
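A rough sketch of that scheme, with made-up names: an n/2-bit counter can count exactly as high as the number of distinct states the algorithm's n/2 bits can take, so an overflow means some memory state must have recurred.

    def bounded_halts(step, memory_bits):
        """step(memory) returns the algorithm's next memory state (an int that must
        fit in memory_bits // 2 bits), or None when the algorithm halts."""
        algo_bits = memory_bits // 2
        counter_limit = 2 ** algo_bits    # how high the counter half can count
        memory = 0
        writes = 0
        while True:
            memory = step(memory)
            if memory is None:
                return True               # halted with an answer
            writes += 1
            if writes >= counter_limit:
                return False              # counter overflow: it must be looping

    # The wrap-around counter from before never halts:
    print(bounded_halts(lambda m: (m + 1) % 16, 8))                    # False
    # Counting up to 10 halts:
    print(bounded_halts(lambda m: None if m == 10 else m + 1, 8))      # True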
Remember the big thing that Turing proved: we can't have a program that tests whether other programs halt. If we did, we could write a second program that halts when the tester says it doesn't halt, and loops forever when the tester says it halts; whatever the tester answers about that program is wrong, which is a contradiction. This implicitly rests on the idea that program memory is unbounded. Running that second program under the halting-test method described above would require an unbounded amount of memory, because it leads to an unbounded recursion, so we can just say that the program crashes when it runs out of memory. In this world, not everything that's computable can be computed, but everything halts (or repeats a previous memory state exactly, which can be caught, as explained above).
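In code, the troublesome program looks something like this (just a sketch; halts stands for the claimed tester, which is exactly what can't exist):

    def troublemaker(halts):
        """halts(f) is any claimed halting tester for a zero-argument function f.
        Whatever it answers about this very program is wrong."""
        if halts(lambda: troublemaker(halts)):
            while True:                   # the tester said we halt, so loop forever
                pass
        return "done"                     # the tester said we loop, so halt at once

    # Any concrete tester gets it wrong. One that always answers "doesn't halt"
    # makes troublemaker halt immediately, contradicting its own answer:
    print(troublemaker(lambda f: False))  # prints "done"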
Q: What do you mean by unbounded? And halt?
A: By "unbounded" I mean countable (meaning isomorphic as a set to the naturals). Basically, there is no limit, but no element itself is infinite. By "halt" I mean that the algorithm will stop processing and come to an answer. In this theoretical world of algorithms, note that there's no addtional external input once the algorithm begins.
Q: But the system you describe isn't Turing-complete!
A: Yes. The finite universe isn't strictly Turing-complete, either, and I think this is interesting to note. For example, there's no way we could model another universe larger than our own, down to quantum states, from this universe.
Q: So my computer science professor was lying to me all along! Computational complexity is useless. Why didn't they tell me this?
A: Not quite. Since this constant is so large (even when we shrink it down to 2^(the number of bits of addressable memory)), we can actually get a tighter bound than O(1) by using, say, O(m) for the same algorithm, where m is the size of the input. This is why you sometimes have to be careful about what your constants are in analysis; sometimes something that looks asymptotically faster is actually slower in not just small cases but in every case that can physically occur. Anyway, your professor probably didn't tell you this because it's both obvious and vacuous.
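To put a rough number on it (back-of-the-envelope, assuming about a gigabyte of memory): the "constant" in that O(1) bound is 2 raised to the number of memory bits, a number with billions of decimal digits.

    import math

    memory_bits = 2 ** 33                       # roughly 1 GiB of memory, in bits
    # The O(1) "constant" is 2 ** memory_bits steps. We won't compute it, but we
    # can count its decimal digits:
    digits = int(memory_bits * math.log10(2)) + 1
    print(f"{digits:,} digits")                 # about 2.6 billion digits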
Q: So, if you were just tricking me, does this idea mean anything?
A: Actually, it comes up fairly often. Say you're doing arithmetic on integers. Each operation takes constant time, right? Well, it does if you're using machine integers. That's because machine integers have a fixed maximum size. The reason we can say it takes constant time is the same reason that the universe can calculate everything in constant time! In fact, even if we use bignums, if we say "this only involves numbers that can be represented in 256 bits," it still makes some sense to say that operations take constant time. It's only when things are unbounded that the asymptotic picture makes total sense. Sometimes you have to look at very large data points to find that, say, n^(log₃ 4) ≈ n^1.26 grows faster than n log₂ n. If the size of the input is bounded at 10 billion, there's no clear winner.
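A quick, unscientific experiment makes the machine-integer point concrete: Python integers are bignums, so addition only looks constant-time while the operands stay bounded.

    import timeit

    for bits in (64, 256, 4096, 1 << 16, 1 << 20):
        a = (1 << bits) - 1
        b = (1 << bits) - 3
        t = timeit.timeit(lambda: a + b, number=10_000)
        print(f"{bits:>8}-bit operands: {t:.4f} s for 10,000 additions")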
Think about sorting algorithms. If we have k possible items in an array of length n, we can sort the array in O(n + k) time and O(k) space using counting sort. Say we're sorting an array of machine integers, and we have no idea what the distribution is. Great! Just make an array of counters, one for each possible machine integer, then iterate through the list and increment the counter at each integer's index as you encounter it. What's the problem? Oh yeah, we just used up more than all of the addressable memory. So just because we can, theoretically, construct something that will solve our problem in linear (or, as in the original example, constant) time doesn't mean it'll work.
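Here's counting sort as described, just as a sketch; it's perfectly reasonable for small k and hopeless when k has to cover every 64-bit machine integer.

    def counting_sort(values, k):
        """Sort non-negative integers smaller than k in O(n + k) time, O(k) space."""
        counts = [0] * k                  # one counter per possible value
        for v in values:
            counts[v] += 1
        result = []
        for value, count in enumerate(counts):
            result.extend([value] * count)
        return result

    print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], 10))   # [1, 1, 2, 3, 4, 5, 6, 9]
    # For arbitrary 64-bit integers you'd need k = 2**64 counters -- far more
    # than all addressable memory, which is exactly the problem above.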
Update: Added another question. Note to readers: The headline and first paragraph aren't to be taken seriously or directly! This is a joke to demonstrate a point, not a claim of mathematical discovery, and the original solution to the halting problem is an eternal mathematical truth.