
Simulation of Nondeterministic Machines

source link: https://rjlipton.wpcomstaging.com/2009/05/11/simulation-of-nondeterministic-machines/


May 11, 2009

How efficiently can a deterministic Turing machine simulate a nondeterministic one?


Dana Scott and Michael Rabin shared the 1976 Turing Award for their seminal paper on finite automata, where they introduced the critical concept of nondeterminism. Without the notion of nondeterminism, there would be no P=NP question, there would be no polynomial-time hierarchy, there would be no {\mathsf{NLOG}}; many things would be missing from theory. This was clearly one of the great papers of all time.

I have talked about Michael, one of the greatest theoreticians in the world, before. See my earlier post for more details on his great work.

Dana Scott has also done terrific work, in addition to his Turing Award-winning work with Michael. Dana is probably best known for his deep work on various aspects of set theory and logic. Dana did early work on set theory and helped with the understanding of Paul Cohen's notion of forcing by introducing Boolean-valued models of set theory, along with Bob Solovay. Dana has also done seminal work throughout model theory and set theory; for example, there is the notion of a "Scott sentence" in logic.

One of Scott's great results was a semantic basis for programming languages, the so-called denotational semantics model. This is a wonderful result, and it reminds me of a time years ago.

When I was at Berkeley, as faculty, in 1979, Dana was visiting Xerox Parc near Stanford for a few months. One day I planned to visit him, and, just my luck, that morning my car broke down. I remember calling him to apologize that I would not be able to visit that day, since I had no way to get there.

Ann Yasuhara, who was also visiting Parc and is a good friend of mine, called me and said that I really needed to come: she said, essentially, "failure to appear is not an option." I really wanted to ask Dana a question about non-standard models of arithmetic, since Rich DeMillo and I were working on an approach to showing that P=NP was independent of Peano Arithmetic. So I discovered that Berkeley had "rental-like" cars that a faculty member could sign out for the day. I got one, and off I went to see Scott.

The meeting with Dana was unusual. After saying hello, Dana immediately asked me to explain the basis of his theory of denotational semantics. It was clear I had better have a good answer. Luckily for me I had just read a section from the "Handbook of Mathematical Logic" on his work, and could answer Dana. If it were today, I could not have said anything about the theory, but then I did fine. Dana smiled and asked me what was on my mind. I asked him my specific question on model theory--the reason I was visiting. Dana said it was a nice problem, but he had no idea how to solve it. The rest of the afternoon was fun; we discussed many topics. I never did figure out what would have happened if I had been unable to answer his question.

Today, my question is: what is the best way to simulate a nondeterministic Turing machine that runs in time {t(n)} by a deterministic Turing machine? The classic simulation shows that the deterministic machine will run in time at most

\displaystyle c^{t(n)}

where {c} is a value that depends on the nondeterministic machine {M}. My question is: what is the best value for {c}?

A Conjecture

I tend to believe that P could equal NP, or at least I do not believe that it is “obvious” that P=NP is impossible. My goal is to get you to think about proving the far more modest conjecture:

Conjecture: For any {\epsilon > 0},

\displaystyle \mathsf{NTIME}(t(n)) \subseteq \mathsf{DTIME}\left(2^{\epsilon \cdot t(n)}\right).

I definitely believe that this could be true. Note, it would not have any dire consequences: it would not change the relationship between the major complexity classes, but it would show that nondeterminism is "weaker" than we think. What we can prove today is vastly weaker than this, but I would like to show you a possible approach. Perhaps together we can prove the conjecture: it may be within our grasp.

The Standard Simulation

The following is a classic theorem:

Theorem: Suppose that {M} is a multi-tape nondeterministic Turing machine that runs in time {t(n)}. Then, there is a deterministic machine that simulates {M} in time at most

\displaystyle c^{t(n)}

for a constant {c} that depends on {M}.

The proof of this theorem is well known, but I thought I would sketch it for comparison with a better version that I will present in a moment. This simulation is based on the simple observation that, in a natural way, a nondeterministic machine defines a search tree {T}. The root of the tree is the start configuration of the machine on the current input. The descendants of a node of the tree are the set of all possible next configurations of the machine. The deterministic simulator just searches each path of the tree from the root, looking for an accepting leaf. If one is found it accepts; if none are found it rejects. The time, in the worst case, is the size of the tree, which is

\displaystyle c^{t(n)}.

The problem with this method is that the constant {c} is huge. It depends on the number of total states that can follow a given total state of the machine {M}. This number depends on the number of states of the finite control, the alphabet used on the tapes, and the number of tapes. Thus, the bound is nowhere near the conjecture. But we can do much better.
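To make the tree search concrete, here is a minimal sketch in Python. This is my own illustration, not code from the post: the rule-table encoding, the sparse dictionary tapes, and all the names are hypothetical choices I made for the sketch.

```python
DELTA = {'L': -1, 'R': 1, 'S': 0}  # head motions: left, right, stay

def accepts(rules, state, tapes, heads, accept, steps):
    """Depth-first search of the computation tree, to depth `steps`.

    `rules` maps (state, scanned-symbols) to a *list* of possible moves
    (new_state, symbols_to_write, directions) -- the list is where the
    nondeterminism lives.  `tapes` is a list of dicts (position -> symbol,
    blank 'B' if absent); `heads` is a list of head positions.  Worst-case
    time is c**steps, where c bounds the number of rules applicable to
    any single configuration.
    """
    if state == accept:
        return True
    if steps == 0:
        return False
    scanned = tuple(t.get(h, 'B') for t, h in zip(tapes, heads))
    for q, writes, dirs in rules.get((state, scanned), []):
        new_tapes = [dict(t) for t in tapes]   # copy before writing
        new_heads = list(heads)
        for i, (b, d) in enumerate(zip(writes, dirs)):
            new_tapes[i][new_heads[i]] = b
            new_heads[i] += DELTA[d]
        if accepts(rules, q, new_tapes, new_heads, accept, steps - 1):
            return True
    return False
```

For example, a one-tape machine that, in state 'p' on a blank, can either keep writing zeros or jump to an accepting state exercises exactly one binary nondeterministic choice per step.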

A Better Simulation

Theorem: Suppose that {M} is a two-tape nondeterministic Turing machine with a binary alphabet that runs in time {t(n)}. Then, there is a deterministic machine that simulates {M} in time at most

\displaystyle 36^{t(n)} \cdot t(n)^{O(1)}.

The critical point of this theorem is not the exact constant {36}; it is the fact that the constant is independent of the number of states of {M}. The constant depends only on the number of tapes and the alphabet: it does not depend on the number of states of the finite control of the Turing machine.

I think that the constant {36} is not even close to the best that we can get with the technology that I will outline. But the key insight is this: unlike all the usual simulations, this technology does not depend on the number of states of {M}.

The Simulation Details

I will now sketch how the simulation works. Let {M} be a two-tape nondeterministic Turing machine with a binary alphabet. Initially, the first tape has the input on it and its head is scanning the right-most input symbol, with the rest of the tape all blanks ({B}); the second tape is all blanks with its head scanning a blank--what else? As usual, a Turing machine program consists of a finite set of rules of the form:

\displaystyle (p, a_{1}, a_{2}) \rightarrow (q, b_{1}, b_{2}, d_{1}, d_{2})

where {p, q} are states of the finite state control; {a_{1}, a_{2}} are in {\{0, 1, B\}}; {b_{1}, b_{2}} are in {\{0, 1\}}; and {d_{1}, d_{2}} are in {\{L, R, S\}}.

The meaning of a rule is: if the finite state control is in state {p} and tape head-{i} is currently scanning the symbol {a_{i}} ({i = 1, 2}), then the machine can execute the move to the state {q}, write {b_{i}} where tape head-{i} is, and finally move tape head-{i} according to {d_{i}}. The motion is left for {L}, right for {R}, and stay for {S}.

Note, since the machine is nondeterministic, more than one rule can apply. Also, we are only allowing the machine to write a {0} or a {1}, even though it must be able to read blanks. This could be relaxed, but relaxing it is equivalent to increasing the actual alphabet, so we will insist that only {0}-{1} can be written.

Our plan is to exploit the structure of the Turing tapes, and thereby reduce the cost of the simulation. For this it is useful to have two types of partial traces of the machine's computation. Let {t = t(n)} and fix an input. A computation of the machine is a series of {t} steps, where each step applies one of the rules of the machine. The machine accepts if there is a computation that reaches an accepting state.

A strong trace is a sequence {\alpha_{1}, \dots, \alpha_{t}} of length {t} so that each {\alpha_{k}} is equal to

\displaystyle (a_{1}, a_{2}, b_{1}, b_{2}, d_{1}, d_{2})

where {a_{1}, a_{2}} are in {\{0, 1, B\}}; {b_{1}, b_{2}} are in {\{0, 1\}}; and {d_{1}, d_{2}} are in {\{L, R, S\}}. A strong trace looks like a series of rules except that there are no states. You should think of {\alpha_{k}} as saying: the heads are scanning {a_{1}, a_{2}}, the machine writes {b_{1}, b_{2}} to the current head positions, and then moves them according to {d_{1}, d_{2}}.

A weak trace is a sequence {\beta_{1}, \dots, \beta_{t}} of length {t} so that each {\beta_{k}} is equal to

\displaystyle (b_{1}, b_{2}, d_{1}, d_{2})

where {b_{1}, b_{2}} are in {\{0, 1\}}; and {d_{1}, d_{2}} are in {\{L, R, S\}}. Note, a weak trace looks like a strong trace except that there is no information on the symbols being scanned by the heads. You should think of {\beta_{k}} as saying: while there is no information on what the heads are scanning, in any event the machine writes {b_{1}, b_{2}} to the current head positions, and then moves them according to {d_{1}, d_{2}}.

Note, both strong and weak traces always start from the beginning of the computation. Also, the next lemma uses the following simple property of Turing tapes: the symbol written at a location at time {s} is equal to the symbol read at that location at a later time {s'}, provided {s'} is the time of the first return to that location.

A key lemma is the following:

Lemma: A weak trace uniquely determines exactly one strong trace; moreover, given a weak trace, the strong trace can be constructed in polynomial time.

Proof: The only information missing from a weak trace is the symbols that were scanned at the beginning of each time step {k}. Clearly, the weak trace uniquely determines the motion of both tape heads, so we can determine the last time that a location on the tape was visited before time {k}. There are two cases.

Case 1: This is the first time the location is being visited. In this case the symbol is easily determined by its location: it is either a blank or an input symbol.

Case 2: The location had been visited previously. Then, the symbol seen at the beginning of step {k} is the same as the symbol that was last written there.
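The two cases translate directly into a short procedure. Here is a sketch in Python (my own illustration, with hypothetical names): a tape is kept as a dictionary of cells that have been written or given as input, so a dictionary miss is exactly Case 1 and a hit is exactly Case 2.

```python
def strong_trace(weak, input_tape):
    """Reconstruct the strong trace determined by a weak trace.

    `weak` is a list of ((b1, b2), (d1, d2)) steps; tape 1 initially
    holds `input_tape` (a dict position -> symbol) and tape 2 is blank.
    """
    DELTA = {'L': -1, 'R': 1, 'S': 0}
    tapes = [dict(input_tape), {}]   # cells written so far, plus the input
    heads = [0, 0]
    strong = []
    for writes, moves in weak:
        # Case 2 of the lemma: the dict holds the symbol last written
        # here.  Case 1: a miss means the cell was never visited, so it
        # is a blank or an input symbol, determined by its location.
        scanned = tuple(tapes[i].get(heads[i], 'B') for i in range(2))
        strong.append((scanned, writes, moves))
        for i in range(2):
            tapes[i][heads[i]] = writes[i]
            heads[i] += DELTA[moves[i]]
    return strong
```

Each step does a constant amount of work, so the whole reconstruction runs in time linear in the trace length, comfortably within the polynomial bound the lemma claims.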

The simulation works like this. The deterministic machine will try all possible weak traces. For each weak trace it will first compute the strong trace. Then, it must check whether or not the strong trace is an accepting one. It does this as follows.

Set {S_{0}} to be the set that contains just the initial state of the machine. Inductively, {S_{k}} will be the set of finite state control states that the machine can be in if it follows the strong trace for the first {k} steps. Note, {S_{k}} can become empty, in which case this trace is "inconsistent" and we simulate the next one. Also, if at any time the accepting state is in {S_{k}}, we accept. If we try all traces and never get an accepting state, then we reject.

The only issue is how to compute {S_{k+1}} from {S_{k}}. But this is easy. For any state in {S_{k}}, we just follow the strong trace and see what states can come next, given the symbols read, written, and the motions.

Suppose that {S_{k}} is the set of possible finite state control states that {M} can be in for the strong trace up to step {k}. We will show how to compute {S_{k+1}} efficiently. Let {\alpha_{k+1}} be equal to

\displaystyle (a_{1}, a_{2}, b_{1}, b_{2}, d_{1}, d_{2}).

The key question is: if the machine is in state {p} at time {k}, can it move to state {q} at time {k+1}? But this can be determined just from the above information. If {p} is in {S_{k}}, then {q} is in {S_{k+1}} if and only if there is a rule

\displaystyle (p, a_{1}, a_{2}) \rightarrow (q, b_{1}, b_{2}, d_{1}, d_{2}).
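The induction step can be sketched in a few lines of Python (again my own illustration, not the post's code; the representation of the rule set as pairs ((p, scanned), (q, written, moved)) is an assumption of the sketch):

```python
def next_states(S_k, rules, alpha):
    """Compute S_{k+1} from S_k for one strong-trace entry.

    `alpha` = (scanned, written, moved) is the (k+1)-st entry of the
    strong trace; `rules` is the machine's rule set as a collection of
    pairs ((p, scanned), (q, written, moved)).  A state q belongs to
    S_{k+1} iff some p in S_k has a rule matching the trace entry.
    """
    scanned, written, moved = alpha
    return {q for (p, a), (q, b, d) in rules
            if p in S_k and (a, b, d) == (scanned, written, moved)}
```

Each step costs time linear in the number of rules, so following a whole strong trace stays polynomial; an empty result signals an "inconsistent" trace, as described above.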

That's all there is to the simulation. The time bound is dominated by the number of weak traces, which is easily seen to be {c^{t(n)}} where {c} is {36}. The {36} comes from: there are two symbols to leave on a tape and three motions, giving {6} choices per tape per step, and this is squared because there are two tapes. If there were {k} tapes and {a} symbols it would be

\displaystyle (3a)^{k \cdot t(n)}.
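The count itself is a one-line computation; here is the arithmetic as a sketch, for general {k} and {a}:

```python
def weak_trace_count(a, k, t):
    """Number of weak traces of length t for a k-tape machine with an
    a-symbol alphabet: each step picks, for each tape, one of `a`
    symbols to write and one of 3 head motions, so (3*a)**k options
    per step."""
    return (3 * a) ** (k * t)
```

For two tapes and a binary alphabet this gives {(3 \cdot 2)^{2} = 36} options per step, matching the constant in the theorem above.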

Thus, we can actually prove more generally:

Theorem: Suppose that {M} is a {k}-tape nondeterministic Turing machine with an alphabet of {a} symbols that runs in time {t(n)}. Then, there is a deterministic machine that simulates {M} in time at most

\displaystyle (3a)^{k \cdot t(n)} \cdot t(n)^{O(1)}.

Open Problems

Prove the conjecture. Or, failing that, improve the simulation that I sketched. I think that it should be possible to prove a much better constant than the one I currently get. I think the constant {36} can be improved easily to {6}; that is, we can remove the factor of {2} in the exponent of {36 = 6^{2}}. More on this soon.
