
John Fremlin's blog: The halting problem is widely misinterpreted


Posted 2009-07-31 22:00:00 GMT

A few years ago, I was consulting on a failing IT project when someone non-technical suggested using an automated system to determine whether the program as written was correct; a professor from the local university chimed in to say that it was impossible, because it would require solving the halting problem.

The professor was wrong (though, as it wasn't relevant to the discussion at hand, I didn't bring it up).

This illustrates a common misconception about the halting problem, which in fact is not an obstacle in theory (in practice, it's just too much hard work to check a program). The undecidability of the halting problem says that for any halting checker (a program that takes another program as input and outputs a boolean indicating whether or not it terminates), it is possible to find a program that will cause the halting checker either to not terminate or to produce a wrong answer.

Many people, including this professor, misconstrue this to mean that it is impossible to decide (i.e. write a program that outputs) whether or not a given program terminates. It doesn't state that at all. If you have a halting checker and someone gives you a program that causes it to fail, then you can simply improve your halting checker — trivially, by recognising that input program and outputting the correct answer. However, given the new halting checker, even if you improved it with very clever new algorithms, your adversary can find a new input program for which it will fail (the normal proof of the undecidability of the halting problem explicitly constructs this problematic program from the halting checker itself).
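For concreteness, here is a rough Python sketch of that adversarial construction. The names halts and adversary are mine, purely illustrative; nothing here implements a real checker.

    def halts(program_source: str, input_data: str) -> bool:
        """Hypothetical halting checker: True if running program_source on
        input_data would terminate, False otherwise. Any amount of cleverness
        can go here; the construction below defeats it regardless."""
        ...

    def adversary(program_source: str) -> None:
        """Run the checker on a program applied to its own source, then do
        the opposite of whatever it predicts."""
        if halts(program_source, program_source):
            while True:   # checker said "halts", so loop forever
                pass
        # checker said "does not halt", so return (halt) immediately

    # Asking halts() about adversary applied to its own source can never be
    # answered correctly: whichever boolean it returns, adversary does the
    # opposite. Special-casing this input just gives a new checker, and the
    # same construction applied to that one yields a new problematic program.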

Notice that in this definition the program is sometimes specified as a Turing machine together with its input. Since we rarely use Turing machines, most programming languages are equivalently powerful, and a program can include its own initial input, it is clearer just to say program.

But note also that the size of the state (in practice, RAM and disk space) of the machine is not bounded. However, it will be bounded for any given physical computer, by its RAM and hard disk space perhaps (or, for any possible physical computer, by the computational capacity of the universe, if that is finite).

If the state is bounded, then the halting problem can definitely be solved: since the program has a finite number of states and its next state is determined solely by the previous one, we can simply run a simulation of the program until it either repeats a state or halts, which it must do within a number of steps bounded by the number of states plus one. (Of course, even for a small machine with a nanosecond step time, this could take longer than the life expectancy of the sun, and for a machine with just a handful of megabytes it could take astronomically longer.)
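A minimal sketch of that brute-force argument, assuming a toy machine given as a step function over a finite state set (my own illustrative model, not anything from the post):

    def halts_bounded(step, initial_state):
        """Decide halting for a deterministic machine with finitely many states.

        `step` maps a state to its successor, or to None when the machine
        halts. With finitely many states, every run must halt or revisit a
        state within (number of states + 1) steps; a revisit means it loops
        forever, because the next state depends only on the current one."""
        seen = set()
        state = initial_state
        while state is not None:
            if state in seen:
                return False       # repeated state: the machine cycles forever
            seen.add(state)
            state = step(state)
        return True                # reached the halting state

    # A 4-bit counter that stops when it wraps around to zero: halts.
    print(halts_bounded(lambda s: None if (s + 1) % 16 == 0 else s + 1, 0))  # True
    # The same counter with no stop condition: loops.
    print(halts_bounded(lambda s: (s + 1) % 16, 0))                          # False

Note that the set of visited states can grow as large as the machine's entire state space, which is exactly why this approach is hopeless in practice.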

So, in theory, we can make a system to accurately decide whether or not a program running on a given machine will halt, just impractically slowly. As always the problem is performance: in this case, developing an efficient system that gives answers for real programs before the sun cools down.
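To put rough numbers on that (my own back-of-the-envelope figures, not from the post): a machine with a single kilobyte of state already has 2^8192 distinct states.

    # Back-of-the-envelope cost of the brute-force decider for 1 KB of state
    bits = 8 * 1024                        # one kilobyte of machine state
    states = 2 ** bits                     # distinct states: roughly 10^2466
    steps_per_second = 10 ** 9             # simulate one step per nanosecond
    age_of_universe = 4 * 10 ** 17         # seconds, very roughly
    ages_needed = states // (steps_per_second * age_of_universe)
    print(f"~10^{len(str(states)) - 1} states")
    print(f"~10^{len(str(ages_needed)) - 1} universe lifetimes to visit them all")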

All the interesting work in computation theory that is yet to be done is coming up with useful systems that are less powerful than a Turing machine. Type systems are an excellent example of this - they're really just a mechanism that prevents you from expressing certain kinds of non-terminating computations. Research into safe code for distributed systems is a pretty exciting area that is largely overlapping - things like temporal tail recursion in Croquet (limiting resources available to untrusted code) and capability-based security (Google Caja is seriously cool).

Posted 2009-08-02 01:02:25 GMT by Vladimir Sedach

Thanks for the insightful comment, Vladimir.

I've been very intrigued by restricted (non-Turing-complete) languages like Coq's typed lambda calculus, which can be reasoned about in powerful ways but are still useful. However, I think it is somewhat ridiculous that the type-system or compile-time language functionality is more restricted than the runtime language functionality. That's completely backwards, as at compile time a (hopefully) skilled operator is on hand and there's no need to prove things about the results of operations, whereas there is definitely a great deal of use to be had from proving properties of runtime systems. Maybe time for a longer blog post about this.

I'm a bit sceptical about the current fad for trying to retrofit security onto things like x86 machine code, Python or JavaScript. Coming up with a design that is built around security sounds more secure :-)

Posted 2009-08-09 07:32:36 GMT by John Fremlin

I'm glad you've written about this. It's bothered me for a while that computer scientists are generally so eager to throw up their hands in despair when given the task of statically analyzing a fundamentally dynamic program. But you're absolutely right, and backed up by lots of people who have successfully done it (Jeffrey Siskind is a notable example with his Scheme compiler).

Posted 2009-10-13 23:16:55 GMT by Spencer Tipping
