
Computing Very Large Sums


June 9, 2009

Computing exponential size sums is sometimes easy and sometimes hard


Jin-Yi Cai is one of the top researchers in complexity theory. He is one of the greatest pure problem solvers that I have had the pleasure to work with. Cai has solved a number of famous open problems, starting with his thesis work under the great Juris Hartmanis, continuing with his work on several long-standing complexity conjectures, and more recently extending the beautiful work of Leslie Valiant on holographic computation. I will discuss this work in the future.

Today I will talk about “doing sums.” The problem is computing the sum of elements from an exponential size set in polynomial time. These problems arise in many parts of computer science, yet we still do not completely understand when they are easy to compute and when they are hard to compute.

Cai is a joy to work with and do research with. Over the years I have worked with many people, and he is one of the nicest. I am trying to think of a funny or strange story, but I cannot think of anything that unusual about him. I wonder what that says? Is my memory failing, or more likely is he just so solid a person that I cannot think of anything?

There is one story of how we worked hard and solved a problem, and then got completely passed by a much more clever result. Recall that Barrington's Theorem shows that polynomial length bounded width computations can compute all of $NC^1$. David Barrington did not care about the exact polynomial length: he only cared that it was polynomial. Jin-Yi and I tried to improve his construction. The issue is this: suppose that $F$ is a formula of size $s$ with all paths to leaves the same length. Barrington shows that $F$ can be computed by a bounded width computation of length order $s^{2}$.

Cai and I tried to reduce the exponent on $s$. After a great deal of work, we were able to show how to improve Barrington's initial bound from $s^{2}$ to roughly $s^{1.81}$. Our method used a complex "gadget" and some other ideas.

As soon as our result was published, others started to work on the problem. Richard Cleve wiped us out. He proved that the exponent could be made $1 + \epsilon$ for any $\epsilon > 0$. He saw much more deeply into the problem than we did; he set up a pretty framework that avoided any ad hoc gadgets, allowing him to essentially solve the problem. Oh well, at least we raised the question.

Summation Problems

The general summation problem over a field $\mathcal{F}$ is: find the value of

$$\sum_{x \in S} f(x),$$

where $S$ is an arbitrary finite subset of $\mathcal{F}$ and $f$ is an arbitrary function from $S$ to $\mathcal{F}$. When the field $\mathcal{F}$ is a subset of the complex numbers, it makes sense to talk about computing the value with either additive or multiplicative error.
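To make the computational issue concrete, here is a small Python sketch of the naive approach; it is my own illustration, not something from the post. The set is an interval described by its endpoints, so the obvious loop takes time proportional to the number of terms, which is exponential in the size of the input.

```python
from fractions import Fraction

def naive_interval_sum(f, a, b):
    """Sum f(x) over all integers x in [a, b] by brute force.

    The running time is proportional to b - a + 1, which is exponential in
    the number of bits needed to write down a and b; that is exactly the
    regime the post is concerned with.
    """
    total = Fraction(0)
    for x in range(a, b + 1):
        total += f(x)
    return total

# Example: a simple rational function summed exactly over an interval.
total = naive_interval_sum(lambda x: Fraction(1, x * x + 1), 1, 1000)
print(float(total))   # about 1.0757; the exact rational value is also available
```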

I find these problems interesting for several reasons:

Summations arise naturally in many settings. However, the classic approach in mathematics is to get bounds on summations; as computer scientists we want algorithms that can compute them. That does not mean that general bounds are unrelated to computing their values, but general bounds are not enough.

Summations arise as a model of quantum computation. If we could compute certain summations efficiently, then we could simulate quantum computations. This is likely too much to ask, but the question then becomes when can a sum be computed fast?

They yield some interesting mathematics, and I like the questions that arise. For a variety of reasons, summation type questions seem not to have received sufficient attention in complexity theory. Yet I would argue that they are a natural set of questions.

Lastly, they have nothing to do with variance. Sorry I could not avoid saying something today about the “storm” of wonderful comments that the recent post on variance caused. I will get back to that in the near future.

A Hard Family of Summations

Let me start by showing the following “folklore theorem.” I have known it for a while, but cannot find a reference. I am happy to include a reference, if someone knows who first proved it.

Notation: Hereafter, $I$ and $J$ will denote intervals of integers; i.e., $I$ could be $[a,b]$, which is equal to

$$\{ x \in \mathbb{Z} : a \le x \le b \}.$$

When I say $\sum_{x \in I}$, where $I$ is an interval, I mean the summation to be over all integers in the interval $I$. Also, the notation $|I|$ is the number of integers in the interval.

Theorem: Let $f(x,y)$ be a rational function of degree at most two in each variable. Computing the sum

$$\sum_{x \in I} \sum_{y \in J} f(x,y)$$

is at least as hard as integer factoring.

In this and all future results, algorithms can, of course, be polynomial in the number of bits that are needed to describe the given functions. I will not cheat and encode information by using coefficients of more than exponential size.

Lemma: Let $N$ be an $n$-bit number, let $M \ge 2^{2n}$, and let $I, J \subseteq [1, 2^{n}]$ be intervals. Consider the following function defined by,

$$S(I,J) = \sum_{x \in I} \sum_{y \in J} \frac{1}{M^{2}(xy - N)^{2} + 1}.$$

Then, $S(I,J)$ is equal to

$$|\{(x,y) \in I \times J : xy = N\}| + \epsilon, \qquad 0 \le \epsilon \le \frac{|I| \cdot |J|}{M^{2}} \le 2^{-2n}.$$

As usual $|A|$ is the cardinality of the set $A$.

Proof: Define $\Delta$ to be

$$\Delta = xy - N.$$

Then, if $\Delta = 0$, then the term $\frac{1}{M^{2}\Delta^{2}+1}$ equals $1$. I next claim that if $\Delta \neq 0$, then the term is at most $M^{-2}$. Suppose that $\Delta \neq 0$. Since $x$, $y$, and $N$ are all integers, $|\Delta| \ge 1$, so

$$\frac{1}{M^{2}\Delta^{2} + 1} \le \frac{1}{M^{2} + 1} < M^{-2}.$$

This proves the claim.

It remains to calculate the contribution of each pair $(x,y)$ to the sum $S(I,J)$: Each pair with $xy = N$ contributes exactly $1$; each other pair contributes at most $M^{-2}$. The total number of pairs is clearly bounded by $|I| \cdot |J| \le 2^{2n}$.

This proves the lemma.
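As a quick sanity check of the lemma, here is a small brute-force computation in Python; the constants are only illustrative and the code is my own, not from the post.

```python
def S(N, M, I, J):
    """Brute-force the double sum of 1/(M^2 (x*y - N)^2 + 1) over I x J.

    Floating point is fine here: terms with x*y == N contribute exactly 1,
    and all other terms together contribute less than |I|*|J| / M^2.
    """
    return sum(1.0 / (M * M * (x * y - N) ** 2 + 1)
               for x in range(I[0], I[1] + 1)
               for y in range(J[0], J[1] + 1))

N = 91                                   # 7 * 13, a 7-bit number
M = 2 ** 14                              # M >= 2^(2n) for n = 7
I, J = (2, 90), (2, 90)

pairs = sum(1 for x in range(I[0], I[1] + 1)
            for y in range(J[0], J[1] + 1) if x * y == N)
print(pairs, S(N, M, I, J))              # 2 factor pairs; the sum is within ~3e-5 of 2
```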

Lemma: Factoring of $N$ can be reduced to a polynomial number of evaluations of the function $S(\cdot,\cdot)$.

Proof: Let $N$ be an $n$-bit number, and let $J = [2, N-1]$. Define the function $S(I) = S(I, J)$. By the lemma, $S(I)$ is equal to the number of divisors $d$ of $N$ so that $d \in I$ and $1 < d < N$, plus an error term that is at most $2^{-2n}$.

Start with $I = [2, N-1]$. If this is less than $1$, then $N$ is prime. So assume that $S(I) \ge 1$. Then, by a binary search we can find a factor of $N$: split $I$ into two halves, recurse into a half whose sum is at least $1$, and stop when the interval is a single integer, which must divide $N$.
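Here is the whole reduction as a runnable Python sketch. The summation oracle is just the brute-force sum from above, so this only illustrates the logic; a genuinely fast summation algorithm in its place would give a genuinely fast factoring algorithm. The constants and the particular rational function are my own illustrative choices.

```python
def S(N, M, I, J):
    # Brute-force stand-in for the summation oracle over the box I x J.
    return sum(1.0 / (M * M * (x * y - N) ** 2 + 1)
               for x in range(I[0], I[1] + 1)
               for y in range(J[0], J[1] + 1))

def find_factor(N, n_bits):
    """Binary search for a non-trivial factor of N, treating S as an oracle."""
    M, J = 2 ** (2 * n_bits), (2, N - 1)
    lo, hi = 2, N - 1
    if S(N, M, (lo, hi), J) < 1:
        return None                       # no non-trivial divisor: N is prime
    while lo < hi:
        mid = (lo + hi) // 2
        # A divisor of N lies in [lo, mid] exactly when this sub-sum is >= 1.
        if S(N, M, (lo, mid), J) >= 1:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(find_factor(91, 7))                 # 7
print(find_factor(97, 7))                 # None (97 is prime)
```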

Two remarks about this theorem. First, we can weaken the assumption on computing the sum exactly. The argument actually shows that either $S(I) \ge 1$ or $S(I) \le 2^{-2n}$. For $n$ large enough, this shows that we can handle either an additive error of almost $1/2$ or a multiplicative error of exponential size.

Second, the theorem shows that the summation problem for very simple rational functions is hard. Using a result–I believe still unpublished–of Joe Kilian (and perhaps others), we can do much better. Kilian proved the following:

Theorem: Given an integer $N$ and an interval $[a,b]$, the following question is NP-complete: Is there a factor of $N$ in the range $[a,b]$?

The intuition for this result is that, even if one is given the factorization of $N$, it could be hard to find a subset of the prime factors whose product lies in the given interval. Kilian's insight is that the problem is essentially a knapsack type question, where the values to be packed are the logarithms of the prime factors of $N$.

This result and the above methods yield the following stronger theorem:

Theorem: Let $f(x,y)$ be a rational function of degree at most two in each variable. Computing the sum

$$\sum_{x \in I} \sum_{y \in J} f(x,y)$$

is NP-hard. This remains true even for computing the sum within an additive error of $1/4$.

As remarked earlier, we can allow any additive error less than $1/2$, or even an exponential multiplicative error.

This uses a proof similar to that of the previous lemma:

Proof: Let $N$ be an $n$-bit number, and let $J = [2, N-1]$. Define the function $S(I) = S(I, J)$ as before. By the lemma, $S(I)$ is equal to the number of divisors $d$ of $N$ so that $d \in I$, plus an error term that is at most $2^{-2n}$. Thus, determining if there is a factor in an interval reduces to one summation evaluation.

There is some interesting earlier work on related problems. For example, Leonid Gurvits and Warren Smith show that computing the integral of rational functions that are given by straight-line arithmetic circuits is #P-hard. The key to their result is the following pretty formula for the permanent of an $n$ by $n$ matrix $A$,

$$\mathrm{per}(A) = \frac{1}{m^{n}} \sum_{\omega_{1},\dots,\omega_{n}} g(\omega_{1},\dots,\omega_{n}),$$

where $m$ is an integer greater than $n$, each $\omega_{j}$ ranges over the $m$-th roots of unity, and $g$ is equal to

$$g(\omega_{1},\dots,\omega_{n}) = \Big( \prod_{j=1}^{n} \omega_{j}^{-1} \Big) \prod_{i=1}^{n} \Big( \sum_{j=1}^{n} a_{ij}\, \omega_{j} \Big).$$
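A quick numerical check of the displayed identity: the following Python sketch, my own illustration rather than code from the Gurvits-Smith paper, compares the roots-of-unity sum against a direct expansion of the permanent on a small random matrix.

```python
import cmath
import itertools
import random
from functools import reduce
from operator import mul

def prod(xs):
    return reduce(mul, xs, 1)

def permanent(A):
    """Permanent by direct expansion over all permutations (fine for tiny n)."""
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

def permanent_by_roots_of_unity(A):
    """Evaluate the m^n-term sum from the displayed formula, with m = n + 1."""
    n = len(A)
    m = n + 1
    roots = [cmath.exp(2j * cmath.pi * t / m) for t in range(m)]
    total = 0.0
    for w in itertools.product(roots, repeat=n):
        term = prod(sum(A[i][j] * w[j] for j in range(n)) for i in range(n))
        total += term * prod(1 / wj for wj in w)
    return total / m ** n

random.seed(0)
A = [[random.randint(-3, 3) for _ in range(4)] for _ in range(4)]
print(permanent(A))                      # an exact integer
print(permanent_by_roots_of_unity(A))    # the same value, up to floating point error
```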

An Easy Family of Summations

An interesting family of "easy" summations arises from a recent paper by Jin-Yi Cai, Xi Chen, and Pinyan Lu (CCL): "Graph Homomorphisms with Complex Values: A Dichotomy Theorem." László Lovász first defined the notion of graph homomorphism, which generalizes many interesting graph properties such as vertex cover, independent set, and graph colorings–to name a few. Thus, the paper of CCL, by allowing complex valued graph homomorphisms, covers many important graph properties. A natural question is: which properties are easy and which are hard?

They answer this question and prove a dichotomy type theorem. Roughly, such theorems prove that all problems of a certain type can be divided into two classes: the problems in the first class are all solvable in polynomial time, and those in the second class are all NP-hard. The proof that the first class is in polynomial time relies on an interesting family of summations, which we will discuss in a moment.

You probably have seen dichotomy type theorems before. A very simple example is this: consider SAT problems. Those with only 2-clauses are always easy to solve, while in general those with 3-clauses are, of course, NP-complete.

The work of CCL is a much harder, much deeper, and much more powerful result, but it is in the same spirit as the simple example I just gave. However, I will not be able to do justice to their beautiful paper, since it is a mere 111 pages: yes, that's not a typo, nor is it in binary. Read their paper for the details. Or at least browse through the paper–it contains many interesting ideas.

CCL must be able to classify certain summations that arise from finite rings, and decide which are easy to compute and which are not. The main result on easy ones is this beautiful theorem:

Theorem: Let $q$ be any positive integer, and let $\omega = e^{2\pi i/q}$. Let $f(x_{1},\dots,x_{n})$ be a quadratic polynomial in $n$ variables over $\mathbb{Z}_{q}$. Then the following sum

$$Z(f) = \sum_{x_{1},\dots,x_{n} \in \mathbb{Z}_{q}} \omega^{f(x_{1},\dots,x_{n})}$$

can be evaluated in polynomial time in $n$ and $\log q$.

Note, when $q$ is a prime, the result follows from known results–see this. However, the full general case, when $q$ is an arbitrary number with unknown factorization, is new.

These sums are quite interesting, and it is not obvious that they are easy to compute, for several reasons. First, there are, of course, an exponential number of terms in the sum: $q^{n}$. Second, raising a root of unity to the value of a quadratic polynomial yields a term that varies in a complex manner. Finally, the problem is an exact evaluation: no error–the exact value is computed. The bottom line is that it is not at all clear why such a sum should be easy to compute.
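To make the object concrete, here is a brute-force evaluation of such a sum in Python; it is my own illustration and, of course, takes $q^{n}$ steps rather than the polynomial time the theorem guarantees.

```python
import cmath
import itertools

def Z(q, quad, lin, n):
    """Brute-force sum over x in (Z_q)^n of omega^(f(x)), omega = exp(2*pi*i/q),
    where f(x) = sum of quad[(i, j)] * x_i * x_j plus sum of lin[i] * x_i."""
    omega = cmath.exp(2j * cmath.pi / q)
    total = 0.0
    for x in itertools.product(range(q), repeat=n):
        f = sum(c * x[i] * x[j] for (i, j), c in quad.items())
        f += sum(c * x[i] for i, c in lin.items())
        total += omega ** (f % q)
    return total

# f(x1, x2, x3) = 3*x1^2 + 5*x1*x2 + 2*x2*x3 + 4*x3 over Z_9: 9^3 = 729 terms.
print(Z(9, {(0, 0): 3, (0, 1): 5, (1, 2): 2}, {2: 4}, 3))
```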

Sketch of the Proof

The proof of the above theorem is fairly involved. The main idea is to show that it can be reduced to a class of sums that Gauss knew about. I will, with Cai's kind permission, include a sketch of the proof for the special case where $q$ is an odd prime power. This, I believe, gives the flavor of how such theorems are proved.

Define the Gauss sum as

$$G(a; q) = \sum_{x \in \mathbb{Z}_{q}} e^{2\pi i a x^{2}/q}.$$

Gauss knew the value of these sums in closed form. The story of how he did this is quite interesting–see this.
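For reference, the classical closed form in the basic case $a = 1$ splits according to $q \bmod 4$; the following Python check, my own illustration, compares the numerical sum against that closed form.

```python
import cmath
import math

def gauss_sum(q):
    """Numerically evaluate G(1; q) = sum over x in Z_q of exp(2*pi*i*x^2/q)."""
    return sum(cmath.exp(2j * cmath.pi * x * x / q) for x in range(q))

def gauss_closed_form(q):
    """Gauss's classical evaluation of G(1; q), split by q mod 4."""
    r = math.sqrt(q)
    return {0: (1 + 1j) * r, 1: r, 2: 0, 3: 1j * r}[q % 4]

for q in (5, 7, 12, 13, 21, 30):
    print(q, gauss_sum(q), gauss_closed_form(q))
```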

The main idea of the proof is to use induction on the number of variables. Let $p$ be an odd prime, and $q = p^{k}$. We are trying to compute

$$Z(f) = \sum_{x_{1},\dots,x_{n} \in \mathbb{Z}_{q}} \omega^{f(x_{1},\dots,x_{n})}, \qquad \omega = e^{2\pi i/q}.$$

We assume $f$ has the following form:

$$f(x_{1},\dots,x_{n}) = \sum_{i \le j} c_{ij}\, x_{i} x_{j} + \sum_{i} c_{i}\, x_{i},$$

where all the $c_{ij}$ and $c_{i}$ are elements in $\mathbb{Z}_{q}$. We can assume the constant term in $f$ is zero since it can be taken out of $Z(f)$ as a constant factor.

Note, if $f$ were purely linear, then the sum could easily be written as a product of a simple one-variable sum and a sum over $n-1$ variables. Even though $f$ is not linear, the strategy is to achieve this same goal. Thus, we can assume that $f$ has non-linear terms.

For any non-zero $a \in \mathbb{Z}_{q}$, we can write it as $a = p^{t} a'$, where $t$ is a unique non-negative integer such that $p$ does not divide $a'$. We call $t$ the order of $a$ (with respect to $p$). Since $f$ has non-zero quadratic terms, let $t_{0}$ be the smallest order of all the non-zero quadratic coefficients of $f$. We consider the following two cases: there exists at least one square term $c_{ii} x_{i}^{2}$ with coefficient of order $t_{0}$, or not. We call the former the first case (I) and the latter the second case (II). Note, all this complexity is due to the fact that we are working over a ring and not a field.

In the first case (I), without loss of generality, we assume $c_{11} = p^{t_{0}} \alpha$ with $p \nmid \alpha$ (so $\alpha$ is invertible in $\mathbb{Z}_{q}$). Then by the minimality of $t_{0}$, every non-zero coefficient of a quadratic term has a factor $p^{t_{0}}$. Now we factor out $c_{11}$ from every quadratic term involving $x_{1}$, namely from $c_{11} x_{1}^{2} + c_{12} x_{1} x_{2} + \cdots + c_{1n} x_{1} x_{n}$. We can write

$$f = c_{11}\, x_{1} \big( x_{1} + 2\ell(x_{2},\dots,x_{n}) \big) + c_{1} x_{1} + g(x_{2},\dots,x_{n}),$$

where $\ell$ is a linear form over $x_{2},\dots,x_{n}$ and $g$ is a quadratic polynomial in $x_{2},\dots,x_{n}$. By adding and then subtracting $c_{11} \ell^{2}$, we get

$$f = c_{11} (x_{1} + \ell)^{2} + c_{1}(x_{1} + \ell) + f'(x_{2},\dots,x_{n}),$$

where $f'$ is a quadratic polynomial over $x_{2},\dots,x_{n}$.

For any fixed $x_{2},\dots,x_{n}$, when $x_{1}$ takes all the values in $\mathbb{Z}_{q}$, $y = x_{1} + \ell$ also takes all the values in $\mathbb{Z}_{q}$. Thus,

$$Z(f) = \Big( \sum_{y \in \mathbb{Z}_{q}} \omega^{c_{11} y^{2} + c_{1} y} \Big) \cdot \Big( \sum_{x_{2},\dots,x_{n} \in \mathbb{Z}_{q}} \omega^{f'(x_{2},\dots,x_{n})} \Big).$$

The first term can be evaluated in polynomial time using Gaussian sums, and the second term is reduced to $Z(f')$, in which $f'$ has at most $n-1$ variables.

Now we compute the sum $\sum_{y \in \mathbb{Z}_{q}} \omega^{c_{11} y^{2} + c_{1} y}$ using Gauss sums. If $c_{1} = 0$, then this is already a Gauss sum. Suppose $c_{1} \neq 0$; we let $c_{11} = p^{t_{0}} \alpha$ and $c_{1} = p^{s} \beta$, where $p \nmid \alpha\beta$. Then we can show that if $s \ge t_{0}$, we can "complete" the square and the sum becomes (a constant multiple of) a Gauss sum. On the other hand if $s < t_{0}$, we can show that the sum is 0.
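A quick numerical check of the two cases in this claim, in the notation above (my own sanity check, not part of the original sketch):

```python
import cmath

def one_var_sum(p, k, a, b):
    """Sum over y in Z_{p^k} of omega^(a*y^2 + b*y), where omega = exp(2*pi*i/p^k)."""
    q = p ** k
    omega = cmath.exp(2j * cmath.pi / q)
    return sum(omega ** ((a * y * y + b * y) % q) for y in range(q))

p, k = 3, 3                              # q = 27
a = 6                                    # a = 3 * 2, so t0 = 1
print(abs(one_var_sum(p, k, a, b=9)))    # s = 2 >= t0: a non-zero Gauss-sum value
print(abs(one_var_sum(p, k, a, b=3)))    # s = 1 >= t0: also non-zero
print(abs(one_var_sum(p, k, a, b=1)))    # s = 0 <  t0: zero, up to rounding error
```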

Finally, the second case (II) is reduced to the first case by a change of variable transformation: for example, if the cross term $c_{12} x_{1} x_{2}$ has a coefficient of order $t_{0}$ but no square term does, then substituting $x_{1} \mapsto x_{1} + x_{2}$ creates a square term $x_{2}^{2}$ whose coefficient has order exactly $t_{0}$.

Open Problems

I believe that the following should be easy to compute, but I cannot prove it. Compute the sum

$$\sum_{x=a}^{b} \frac{1}{x}$$

within an additive error of $2^{-n}$, where $a$ and $b$ are at most $n$-bit numbers. Note, it is easy to compute the sum

$$\sum_{x=a}^{b} g(x),$$

where $g$ is a polynomial of degree $d$, in time polynomial in $n$ and $d$.
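For the easy direction, here is one standard way to sum a polynomial over a huge interval in time polynomial in the degree and the bit-length of the endpoints, sketched in Python; this is my own illustration, not code from the post.

```python
from fractions import Fraction

def poly_interval_sum(coeffs, a, b):
    """Sum g(x) = coeffs[0] + coeffs[1]*x + ... over all integers x in [a, b].

    Idea: P(t) = sum_{x=0}^{t} g(x) is a polynomial of degree d+1 in t, so it
    is determined by its values at t = 0, 1, ..., d+1; we recover P by Lagrange
    interpolation and return P(b) - P(a-1)."""
    d = len(coeffs) - 1
    g = lambda x: sum(c * x ** i for i, c in enumerate(coeffs))

    # Values of the prefix-sum polynomial at the first d+2 integers.
    pts, running = [], Fraction(0)
    for t in range(d + 2):
        running += g(t)
        pts.append((t, running))

    def P(t):
        # Lagrange interpolation through the d+2 sample points.
        total = Fraction(0)
        for i, (ti, vi) in enumerate(pts):
            term = Fraction(vi)
            for j, (tj, _) in enumerate(pts):
                if i != j:
                    term *= Fraction(t - tj, ti - tj)
            total += term
        return total

    return P(b) - P(a - 1)

# Sum of x^3 + 2x + 1 over the huge interval [10**6, 10**7], done instantly.
print(poly_interval_sum([1, 2, 0, 1], 10**6, 10**7))
# Sanity check against brute force on a small interval: both print 830.
print(poly_interval_sum([1, 2, 0, 1], 3, 7), sum(x**3 + 2*x + 1 for x in range(3, 8)))
```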

Another set of questions comes from the work of CCL. Their algorithm runs in polynomial time. Can we improve it to a smaller complexity class? What about, for instance, the class $\mathsf{NC}$?
