
Is P=NP a Grave Matter?

source link: https://rjlipton.wpcomstaging.com/2023/10/31/is-pnp-a-grave-matter/


October 31, 2023

Our favorite problem moribund?

[Photo: Halloween P vs. NP display]

The photo at right was taken by a friend—with thanks—in San Carlos, California, last weekend. We do not know who put out the Halloween display. My late colleague Alan Selman sported a license plate that declared P NE NP (NE for “not equal to”). Dick and I last gave our thoughts in several posts in 2020 after waffling on whether we believe “equal” or “not equal.”

This Halloween we discuss a different matter: Is work on P vs. NP whistling past the graveyard?

And a second question is suggested by the punning last words of Mercutio in Romeo and Juliet. Always a jokester despite a mortal wound, he says:

“Ask for me tomorrow, and you shall find me a grave man.”

The pun is on “grave” meaning “serious.” Certainly P vs. NP has been central to our field’s formative period. But will it stay that way tomorrow?

Consider, for instance, the career of Adam Tauman Kalai, whom we just featured. His DBLP page shows recent papers on:

  • Algorithmic fairness.
  • Large language models.
  • Loss minimization in neural networks.
  • Automated code generation.
  • Low-rank matrices.

These are all vital areas, and it is wonderful that theory opens a window on them. But they are not going to break through on P vs. NP. Well, low-rank matrices are important to lower bounds, so that is a possibility. We’ve kept our ears to the ground for word of new ideas on P vs. NP. The closest we’ve seen is recent work on the Minimum Circuit Size Problem, which is discussed at the end of a wonderful long recent Quanta article surveying the state of complexity theory.

Has the Specter of NP-Hardness Faded?

Complexity theory used to represent a check on scaling up computers to solve moderately larger cases of important hard computational problems. The famous 1979 book by Michael Garey and David Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, opened with a parable of a project worker needing to explain to the boss why a task is hard.

Well, nothing since has prevented computers from scaling up majorly. Right now Google Chrome is telling me (Ken) that one of my tabs—just one of a dozen tabs I have open—is taking up 419 megabytes of memory. A tab I just opened for tracking tonight’s World Series game grabbed 130 MB.

What’s more, the hard problems themselves have yielded in many significant, sizable instances. We noted the success of SAT solvers already seven years ago and likewise for new Traveling Salesman algorithms. An analogy is that the game of Go is PSPACE-hard on {n \times n} boards, but the standard {19 \times 19} board is plenty complex for us—and conquerable by computers.
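To give a flavor of how routine invoking a modern SAT solver has become, here is a small Python sketch. Everything in it is our own illustration rather than anything from the work cited above: we assume the open-source PySAT package (installed via `pip install python-sat`), and we use a toy pigeonhole formula as the instance.

```python
# A minimal sketch (our illustration, not from the post): calling a SAT solver
# on a small CNF instance with the PySAT package (`pip install python-sat`).
from itertools import combinations
from pysat.solvers import Glucose3

def pigeonhole_cnf(holes):
    """Clauses saying holes+1 pigeons each get a hole, with no hole shared."""
    pigeons = holes + 1
    var = lambda p, h: p * holes + h + 1          # propositional variable x_{p,h}
    clauses = []
    for p in range(pigeons):                      # every pigeon sits in some hole
        clauses.append([var(p, h) for h in range(holes)])
    for h in range(holes):                        # no hole holds two pigeons
        for p1, p2 in combinations(range(pigeons), 2):
            clauses.append([-var(p1, h), -var(p2, h)])
    return clauses

with Glucose3(bootstrap_with=pigeonhole_cnf(8)) as solver:
    print("satisfiable?", solver.solve())         # prints: satisfiable? False
```

The same interface accepts far larger clause lists or DIMACS files, which is part of why sizable industrial instances have become so approachable in practice.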

Other disciplines have found that they can grow around problems that are “Big” but not intractable. Weather forecasting has long regarded kilometer-scale fineness as its “holy grail,” but limitations apart from computing power stand in the way, and progress has instead opened in other areas that our computers are ready for. Deep learning results come from long training computations. The underlying model may be a small number of levels of threshold circuits—but we have not clamped any tight upper bounds on the corresponding complexity class TC⁰ either. Large language models (LLMs), ones that result from months of computation, are able to solve problems in a universal manner—with varying degrees of confidence, to be sure.
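Since we just invoked threshold circuits, here is a toy rendering of what a threshold gate computes, so the mention of TC⁰ is concrete. The code and its little examples are entirely ours, a sketch rather than any standard library.

```python
# A toy sketch (ours, not the post's): threshold gates and a depth-2 circuit.
# TC^0 circuits are constant-depth, polynomial-size circuits built from such gates.

def threshold_gate(weights, theta, inputs):
    """Output 1 iff the weighted sum of 0/1 inputs reaches the threshold theta."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

def majority(bits):
    """MAJORITY, the canonical TC^0 function: a single threshold gate."""
    n = len(bits)
    return threshold_gate([1] * n, (n + 1) // 2, bits)

def exactly_half(bits):
    """A depth-2 example: 1 iff exactly half the bits are 1 (n even),
    the AND of two threshold gates enforcing sum >= n/2 and sum <= n/2."""
    n = len(bits)
    at_least = threshold_gate([1] * n, n // 2, bits)
    at_most = threshold_gate([-1] * n, -(n // 2), bits)   # i.e. sum <= n/2
    return at_least & at_most

print(majority([1, 0, 1, 1]))        # 1
print(exactly_half([1, 0, 1, 0]))    # 1
print(exactly_half([1, 1, 1, 0]))    # 0
```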

President Joe Biden just rolled out a sweeping executive order Monday that aims to monitor and regulate the risks of artificial intelligence while also harnessing its potential. This marks the latest effort to address a rapidly evolving technology that has sparked concern among world leaders. He did not say the same about complexity theory. The AI warnings are quite apart from P vs. NP. Perhaps he should have included complexity? Perhaps marrying LLMs to SAT solvers and game programs will radically change the world even more than AI is already expected to do. Is this possible?

Is P =? NP the Right Question?

The focus on whether SAT is in polynomial time hides a more desperate reality: In over half a century of trying, we really have not proved a super-linear lower bound on the complexity of any natural problem.

  • There are problems in NP that require very slightly above linear time on a standard multitape Turing machine. But the reason leans mightily on the limited nature of Turing tapes, and even then the bounds fail when the Turing machine is allowed planar tapes, let alone some kind of random access.
  • Not even those problems have super-linear lower bounds on Boolean circuit size. No such bound is known even for problems in {2^{O(n)}} exponential time, which are provably not in polynomial time. We covered one recent effort at general circuit lower bounds here.
  • Walter Baur and Volker Strassen proved {\Omega(n\log n)} bounds on the number of arithmetical circuit gates—indeed, of multiplication gates—needed for some simple {n}-variable functions like {f(x_1,\dots,x_n) = x_1^n + \cdots + x_n^n}. But maybe the “size” of an {n}-variable polynomial of total degree {d} should be reckoned as {N = n\log d} anyway. Then nobody knows a super-linear lower bound in terms of {N}.
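To spell out the accounting in the last bullet, here is the one-line calculation we have in mind, written as a LaTeX display. It is just our arithmetic, using only the definitions in the bullet itself.

```latex
% Unwinding the bullet's accounting (our arithmetic, nothing new assumed):
% f(x_1,\dots,x_n) = x_1^n + \cdots + x_n^n has total degree d = n.
\[
  N \;=\; n\log d \;=\; n\log n
  \qquad\Longrightarrow\qquad
  \Omega(n\log n) \;=\; \Omega(N),
\]
% so the Baur--Strassen bound is merely linear in the measure N.
```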

The real question is:

Why do the barriers to proving nontrivial lower bounds slam down right at linear to begin with?

This question involves us in models and “worlds” that are a lot less robust and approachable than the complexity notions by which our field developed.

What Hope to Solve P vs. NP?

We are coming up on the 30th anniversary of the “Natural Proofs” barrier. It was discussed here.

One essence is that natural attempts to prove lower bounds for one hard problem—if they succeed—turn into actual algorithms that give upper bounds for another hard problem. In the case of discrete logarithm, the two problems are the same. Thus the effort to push lower bounds by “natural” means can actually be self-defeating.
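For readers who want the barrier stated rather than paraphrased, here is our own compressed rendering of the Razborov–Rudich theorem; the phrasing is ours, drawn from the standard statement rather than from the post.

```latex
% Our compressed rendering of the Razborov--Rudich "natural proofs" theorem;
% the phrasing is ours, not quoted from the post.
% A property \Phi of n-variable Boolean functions is natural if it is
%   (i)  constructive: decidable in time 2^{O(n)} from the truth table, and
%   (ii) large: satisfied by at least a 2^{-O(n)} fraction of all such functions.
\[
  \Phi~\text{natural and useful against}~\mathrm{P/poly}
  \;\Longrightarrow\;
  \text{no}~2^{n^{\varepsilon}}\text{-hard pseudorandom function family lies in}~\mathrm{P/poly},
\]
% which would contradict, for example, the conjectured hardness of discrete logarithm.
```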

We have noticed this self-defeating phenomenon in other veins. Completeness and hardness results often exploit succinct representations of core predicates. Those predicates embody expansive power in small programs or circuits—power that makes it hard to prove that those small programs or circuits lack general solving power.

We have remarked on particular attempts with self-defeating elements:

  • here in regard to the “polynomial method” for lower bounds;
  • here in regard to circuit-based attempts on P {\neq} NP;
  • here in regard to the MCSP problem;
  • here in regard to permanent versus determinant; and
  • here on the “geometric complexity” effort led by Ketan Mulmuley.

Mulmuley’s program had real novelty and seeming potential to resolve P vs. NP when it was introduced, for reasons I gave in my own 2002 survey of it. But Ketan himself admitted that he didn’t see resolution on any shorter timescale than 150 years. Maybe there is hope in efforts like this by David Zuckerman that relax the problem in a way that may dodge some barriers and apply high-powered combinatorics.

Open Problems

The vital meta-problems are: how long will the P versus NP problem stay open—and how long will it stay inviting?

[some word and link fixes; added mention of Zuckerman’s work at end]
