
Surprises in Mathematics and Theory

source link: https://rjlipton.wpcomstaging.com/2009/09/27/surprises-in-mathematics-and-theory/


September 27, 2009

There are many kinds of surprises in mathematics and theory: here are some examples.


Gil Kalai is one of the great combinatorialists in the world, who has proved terrific results in many aspects of mathematics: from geometry, to set systems, to voting systems, to quantum computation, and on. He also writes one of the most interesting blogs in all of mathematics—Combinatorics and more; I strongly recommend it to you.

Today I want to talk about surprises in mathematics. I claim that there are often surprises in our field—if there were none it would be pretty boring. The field is exciting precisely because there are surprises: guesses that are wrong, proofs that eventually fall apart, and in general enough entropy to keep things interesting.

The geometer Karol Borsuk asked in 1932: Can every convex body in {\mathbb{R}^{d}} be cut into {d + 1} pieces of smaller diameter? This became the Borsuk Conjecture, which was proved in low dimensions and for special classes of convex bodies—for example, Hugo Hadwiger proved in 1946 that it was true for all smooth convex bodies. The intuition seemed to be that the result was going to eventually be proved; it was just a matter of time.

The shock, I think more than a surprise, came when Jeff Kahn and Kalai proved in 1993 that for {d} large enough the answer was not {d+1}. On the contrary, they showed that for {d} large enough, we need at least

\displaystyle  c^{\sqrt d}

pieces. Notice that this is a lower bound, and does not imply that {c^{\sqrt d}} pieces are enough. Here {c>1} is a fixed constant. I think it is fair to say that {c^{\sqrt d}} is really different from {d+1}—no? Their paper is short and brilliant: you definitely should take a look at it.

Surprises In Mathematics

There are many additional surprises in mathematics. I have listed a few that I hope you will find interesting. I tried to break them into “types.”

I also want to thank Subrahmanyam Kalyanasundaram for his tremendous help with this post.

We Guessed The Wrong Way

Sometimes in mathematics and theory, the conventional wisdom is wrong—we believe that X is true, but it is false. The evidence for X could have been wrong for many reasons: it was based on small or special cases, on false analogies, on a view that the world would be simpler if X were true, or on our lack of imagination.

The “we” is used to denote the community of all who work on mathematics and theory: it includes you and me. So this comment, and all later ones, is addressed to me as well as to everyone else. I have guessed wrong many times, which has led me to work extremely hard trying to prove false statements. Perhaps everyone else is a better guesser than I am, but I think we all have our limits.

Here are two examples of this:

{\bullet }Nondeterministic space closed under complement: Neil Immerman and Robert Szelepcsényi independently solved the famous LBA problem. They used a very clever counting method to prove that {\mathsf{NLOG}} was closed under complement. This was, I believe, a pretty good example of a surprise. As I discussed earlier, most had guessed that {\mathsf{NLOG}} would not be closed under complement. A great result.

{\bullet }A Vision Conjecture: In the current issue of the Bulletin of the American Mathematical Society, a conjecture due to Béla Julesz is discussed. He was an expert in human visual perception, and is famous for a conjecture about perception. After many human experiments, over the years, he concluded that if two images had the same first and second order statistics, then a person would find them indistinguishable.

Left is random and right is not: both have the same statistics to third order

David Freedman and Persi Diaconis found a simple rule that generates pictures with the same first, second, and even third order statistics as a random picture. Yet, their pictures are easily distinguished by a person from a random picture. I quote Persi: When we told Julesz, he had a wonderful reaction. “Thank you. For twenty years Bell Labs has paid me to study the Julesz conjecture. Now they will pay me for another twenty years to understand why it is wrong.”
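To see the flavor of such a construction, here is a small sketch of one texture of this kind: fill the first row and column at random, then make every remaining pixel the XOR of its left, upper, and upper-left neighbors, which forces every 2×2 block to contain an even number of ones. Small groups of pixels look random, yet the eye picks up the global constraint. This is an illustration of the idea only—I am not claiming it is the exact Diaconis–Freedman rule.

```python
import random

def even_texture(n, seed=0):
    """n x n binary image in which every 2x2 block has an even number
    of ones: the first row and column are random, and each remaining
    pixel is the XOR of its left, upper, and upper-left neighbors.
    (A sketch of the idea, not necessarily the exact published rule.)"""
    rng = random.Random(seed)
    img = [[0] * n for _ in range(n)]
    for j in range(n):
        img[0][j] = rng.randint(0, 1)
    for i in range(1, n):
        img[i][0] = rng.randint(0, 1)
        for j in range(1, n):
            img[i][j] = img[i - 1][j] ^ img[i][j - 1] ^ img[i - 1][j - 1]
    return img

img = even_texture(8)
# Every 2x2 block has even parity: the XOR of the three generating
# neighbors cancels against the generated pixel.
assert all(
    (img[i][j] + img[i + 1][j] + img[i][j + 1] + img[i + 1][j + 1]) % 2 == 0
    for i in range(7) for j in range(7)
)
```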

We Accepted a False Proof

Sometimes in mathematics and theory, a proof is given that is false—just wrong. We all are human, we all make mistakes; sometimes those mistakes can be believed for long periods of time.

Here are three examples of this:

{\bullet}Four-Color Theorem: The Four-Color Theorem (4CT) dates back to 1852, when it was first proposed as a conjecture. Francis Guthrie was trying to color the map of the counties of England and observed that four colors were enough, so he proposed the 4CT. In 1879, Alfred Kempe provided a “proof” of the 4CT. A year later, Peter Tait proposed another proof. Interestingly, both proofs stood for 11 years before they were shown to be wrong: Percy Heawood found the flaw in Kempe’s proof in 1890, and Julius Petersen showed that Tait’s proof was wrong a year later.

However, Kempe’s and Tait’s proofs, or attempts at a proof, were not fully futile. For instance, Heawood noticed that Kempe’s argument could be adapted into a correct proof of a “Five-Color Theorem.” There were several attempts at proving the 4CT before it was eventually proved in 1976. See this article by Robin Thomas for a historical perspective on the problem.

{\bullet }Hilbert’s {16^{th}} Problem: I will not state what Hilbert’s {16^{th}} problem is—it will not affect this example. The key is that in 1923 Henri Dulac proved a pretty result about the number of limit cycles that a certain family of differential equations could have. He showed that the number was always finite; while the {16^{th}} was not his goal, this result helped solve at least part of the Hilbert problem. Dulac’s result was considered a great theorem.

Almost sixty years later, in 1981, Yulij Ilyashenko found a fatal bug in Dulac’s proof. Seven years later he, and independently Jean Écalle, Jacques Martinet, Robert Moussu, and Jean-Pierre Ramis, found correct proofs showing that Dulac’s theorem was true, even though his proof was not.

{\bullet }The Burnside Conjecture: William Burnside is a famous group theorist who proved many wonderful theorems. Perhaps his most famous is the {pq} theorem that played a major role in starting the quest to classify all finite simple groups:

Theorem: A finite group of order {p^{a}q^{b}} where {p,q} are primes is solvable.

He also made a number of very influential conjectures. Perhaps the most famous was made in 1902, and became known as the Burnside Conjecture: If a group is finitely generated and periodic, then it is finite. Periodic means simply that for any element {x} in the group, {x^{n(x)} = 1} for some {n(x)}. This was eventually shown to be false in 1964.

However, a natural question arose immediately: if {n(x)} were the same for all elements of the group, would the group then be finite? In order to attack this new problem, group theorists broke it into many: {B(m,n)} is the group generated by {m} elements subject only to the relations {x^{n} = 1}. Sergei Adian and Pyotr Novikov proved in 1968, by a long and complex combinatorial argument, that {B(m, n)} is infinite for {n} odd, {n \ge 4381}.

Another group theorist, John Britton, claimed an alternative proof in 1970. Unfortunately, Adian later discovered that Britton’s proof was wrong.

I once looked at Britton’s proof. It is a whole monograph of about {300} pages, with many, many technical lemmas. Britton needed many constants in his proof, so rather than writing statements like “let {v > 191\cdot j^{2}},” he would write “let {v > c_{11}\cdot j^{2}},” where {c_{11}} was a constant to be determined later. Then, in the appendix, he collected all the inequalities that he needed his constants to satisfy; all he had to do was show that the inequalities were consistent. He did this: he went through the cumbersome task of checking the consistency of the inequalities, coming up with constants that satisfied all of them simultaneously.

The bug that Adian discovered was that Britton had made a simple mistake and written down one inequality incorrectly. Unfortunately, the corrected system of inequalities was inconsistent, and the proof was unsalvageable.

We Misjudged a Problem

Sometimes in mathematics and theory, a problem is believed to be very hard. The problem is believed to be so hard that either no one works seriously on it, or people try to create whole new theories for attacking it. Then, an elementary proof is given—one that uses no new technology, one that could have been found many years before.

Here are two examples of this:

{\bullet }Van der Waerden Conjecture: Van der Waerden made a conjecture about the permanent in 1926. He conjectured that

\displaystyle  \mathsf{perm}(A) \ge n!(\frac{1}{n})^{n}

for any doubly stochastic matrix {A}, with equality only for the matrix that has all entries equal to {1/n}. Recall that a doubly stochastic matrix is a non-negative matrix whose row sums and column sums all equal {1}.

This seems like a straightforward inequality. Somehow it stood for decades until it was solved independently by Georgy Egorychev and Dmitry Falikman, in 1979 and 1980. The surprise here is that this “notorious” problem was solved by fairly simple proofs. They were awarded the prestigious Fulkerson Prize for their work, even though the proofs were simple—their result was a breakthrough in the understanding of the permanent.

{\bullet }Approximation for Planar TSP: The classic problem of finding an optimal TSP tour for planar instances is well known to be NP-hard. What Sanjeev Arora did in 1996 was to find an approximation algorithm that had eluded everyone for years. His algorithm’s running time was near-linear, with approximation ratio {1+\epsilon} for any given {\epsilon > 0}. Joseph Mitchell found a similar result at almost the same time—I think there should be a whole post on: why do problems stay open for years, and then get solved by two researchers independently? Another day.

László Lovász told me that these algorithms could have been discovered years before—he thought one of the “amazing” things about them was precisely that they did not use any new tools. They are very clever, and both use important insight(s) into the structure of an optimal tour. Yet, he said, they could have been found before, way before.

We Misjudge an Approach to a Problem

Sometimes in mathematics and theory, someone outlines an approach to solve an open problem, which the rest of the community does not believe is viable. This mistake can lead to an interesting surprise when their method does actually work.

Here are two examples of this:

{\bullet }Hilbert’s {10^{th}}: The famous Tenth asks for a decision procedure that can tell whether or not a Diophantine equation has an integer solution. Such an equation is of the form:

\displaystyle  F(x_{1},\dots,x_{n}) = 0

where {F} is an integer polynomial. The combined work of Martin Davis, Yuri Matiyasevich, Hilary Putnam, and Julia Robinson yielded a proof, completed in 1970, that this problem is undecidable. The last step was taken by Matiyasevich.

This result would probably have surprised the great Hilbert—he likely thought that there was such a procedure. So this result could be in the “we guessed wrong section,” but I included it here for a reason.

Julia Robinson showed in 1950 that proving that a certain number theory predicate could be encoded into a Diophantine equation would suffice to solve the entire problem. What Matiyasevich did was to show that this could be done. Before he did this the eminent logician Georg Kreisel had remarked: “Well, that’s not the way it’s gonna go.” (The quote of Kreisel is part of a film about Hilbert’s Tenth.)

Kreisel’s insight was that finding such a predicate would not only prove that the Tenth was undecidable, but would also prove that it was exactly the same as the Halting problem. He thought that perhaps this was too much to expect. He was wrong.
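One direction of that equivalence is elementary: the solvable Diophantine equations are recursively enumerable, since one can simply search integer tuples in increasing size, so the problem reduces to the Halting problem trivially. The hard direction is the converse. A minimal sketch of the easy search (the example equation is my own illustration):

```python
from itertools import product

def search_solution(F, nvars, max_radius):
    """Enumerate integer tuples of increasing max-norm and return the
    first solution of F(x) = 0 found, or None if none has max-norm at
    most max_radius. This shows solvable instances are recursively
    enumerable; Matiyasevich's theorem implies no computable bound on
    the search radius can exist in general."""
    for r in range(max_radius + 1):
        for x in product(range(-r, r + 1), repeat=nvars):
            if max(map(abs, x), default=0) == r and F(*x) == 0:
                return x
    return None

# Illustrative equation: x^2 + y^2 - 25 = 0 has integer solutions.
sol = search_solution(lambda x, y: x * x + y * y - 25, 2, 10)
assert sol is not None and sol[0] ** 2 + sol[1] ** 2 == 25
```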

{\bullet }Four-Color Theorem (Again): The 4CT was proved by Kenneth Appel and Wolfgang Haken in 1976. The proof is famous, since it not only solved a long-standing conjecture dating from 1852, but made extensive use of computation.

They point out that their result is simply carrying out an attack on the problem that was already known; the method is essentially due to George Birkhoff. Others had used the method to get partial results of the form: the four color theorem is true for all planar maps of size {\le X}. In 1922 Philip Franklin proved 25, and later Walter Stromquist 52—this is not an exhaustive list of all the partial results.

Haken once told me that one reason he felt that they succeeded was that they believed that the current attack could be made to work. That there was no need for an entire new approach. Rather hard work and some computer magic could resolve the problem. Perhaps the surprise is that the attack actually worked.

We Never Thought About That Problem

Sometimes in mathematics and theory, a result is found that is entirely new. Well, perhaps not completely new, but not from a well-studied area. The result can be very surprising precisely because no one had previously dreamed that a result of this type could even be proved.

Here are two examples of this:

{\bullet }Collapse Of The Polynomial Time Hierarchy: Dick Karp and I proved that if SAT has polynomial size circuits, then the polynomial time hierarchy collapses to the second level. The proof is really simple—yet it is one of those basic results about computation. It’s one of my own results, so there is the risk that my opinion is biased, but I think most would agree with this assessment.

I always felt that one of the key reasons we found this result is simple: we were probably the first who ever thought about the problem. Previously, all circuit results went the other way: if there was a good algorithm, then there was a good circuit. Indeed, in general, having a linear size circuit says nothing about the algorithmic complexity of a problem: a problem can have a linear size circuit for all {n} and still be undecidable—for example, any undecidable set of unary strings has trivial circuits, since each input length admits only one string.

One of the things that I always tell my students is this: beware of simple counter-examples. Often there can be a gem hiding inside a trivial counter-example.

{\bullet }Voting Theorems: Kenneth Arrow is a Nobel Prize winner in economics. One of his great results is the proof that any reasonable voting system that tries to select among {3} or more outcomes is flawed. The proof of this result is not too hard—see this—but I believe the brilliant insight was the idea that one could prove anything at all about voting systems. That, it seems to me, is a great surprise.
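A single example cannot prove Arrow’s theorem, but it can show the flavor of what it forbids. Below is a sketch (the voter profiles are my own illustration) showing that the Borda count, a perfectly natural rank-based rule, violates independence of irrelevant alternatives: moving only candidate C in voters’ rankings flips society’s ranking of A versus B, even though no voter changed their A-versus-B preference.

```python
def borda(profile, candidates):
    """Borda scores: with k candidates, a candidate earns k-1 points per
    first place, down to 0 per last place, summed over all voters."""
    k = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking in profile:
        for pos, c in enumerate(ranking):
            scores[c] += k - 1 - pos
    return scores

cands = ["A", "B", "C"]
# Three voters rank A>B>C, two rank B>C>A.
p1 = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2
# Only C moves; every voter's relative order of A and B is unchanged.
p2 = [("A", "C", "B")] * 3 + [("B", "A", "C")] * 2

s1, s2 = borda(p1, cands), borda(p2, cands)
# Society's A-vs-B ranking flips: B beats A in p1, A beats B in p2.
assert s1["B"] > s1["A"] and s2["A"] > s2["B"]
```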

We Never Connected Those Areas

Sometimes in mathematics and theory, the connection of two disparate areas can be quite surprising.

Here are two examples of this:

{\bullet }Prime Numbers and Complex Analysis: Johann Dirichlet’s theorem is one of the first uses of complex analysis to prove a theorem about primes. The theorem states that every arithmetic progression

\displaystyle  a, a+b, a+2b, a+3b, \dots

contains an infinite number of primes, as long as {a} and {b} are relatively prime. Dirichlet introduced certain complex functions that he cleverly connected to the number of primes in a given progression. Then, he proved his theorem by showing that his functions were non-zero at {1}, which implies that there are an infinite number of primes in the progression.
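It is easy to watch the theorem in action numerically, even though the proof needs complex analysis. A small sketch (trial-division primality is my own helper, adequate for tiny numbers) listing the first few primes in a progression with {\gcd(a,b)=1}:

```python
def is_prime(n):
    """Trial division; fine for the small numbers used here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def primes_in_progression(a, b, count):
    """First `count` primes among a, a+b, a+2b, ... Dirichlet's theorem
    guarantees this loop terminates whenever gcd(a, b) = 1."""
    out, n = [], a
    while len(out) < count:
        if is_prime(n):
            out.append(n)
        n += b
    return out

primes_in_progression(3, 4, 5)   # primes congruent to 3 mod 4
```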

{\bullet }Distributed Computing and Topology: I will end with a connection that I know little about, but still feel that it is safe to say that it is surprising. That is the connection between distributed computing and algebraic topology. Maurice Herlihy and Sergio Rajsbaum have a primer that should help explain the connection better than I can.

Open Problems

What is your favorite surprise? What is your favorite type of surprise? Are there other types of surprises? I would love to hear your thoughts.

Finally, could there be major surprises in complexity theory in the future?
