To ‘P’ or ‘NP’, that is the question

The big mathematics news of the week was, of course, Kumar Eswaran’s claim to have proved the Riemann Hypothesis. No doubt his paper supporting the claim will be examined minutely, as it must be. After all, proving the Hypothesis is widely considered one of the most difficult tasks in mathematics. If Eswaran is right, it will be earthshaking news, at least in mathematical circles.

But while we wait to hear one way or another, remember that mathematics is filled with plenty of other unsolved problems and hypotheses and conjectures.

In May 2000, the Clay Mathematics Institute selected seven such problems as its “Millennium Prize Problems”: the Riemann Hypothesis, the Birch and Swinnerton-Dyer Conjecture, the Poincaré Conjecture, the Hodge Conjecture, Navier–Stokes existence and smoothness, Yang–Mills existence and mass gap, and the P versus NP problem. To anyone who solves any of these, the Institute offered a remarkable prize: one million dollars.

In the two decades since, one of the seven has actually been solved: the Poincaré Conjecture. Famously, the man who solved it, a reclusive Russian mathematician called Grigori Perelman, refused the Clay Prize, or even any felicitation for his achievement. The other six, though, remain open questions. So if you can use a million dollars before taxes, this may be your chance.

I don’t know much about most of those six problems. But in my days as a student of computer science, we studied and talked a fair amount about P versus NP. Here’s a broad-brush explanation of what that’s about.

When you programme a computer to tackle a particular problem, one concern is how long it will take. For example, if you have to sort a list of names into alphabetical order, you may not expect it to be done instantaneously, but you also don’t want the computer to take a year, or even a week, to do the job. More to the point, you probably want to know how the time to sort depends on the number of names in the list. Does a list ten times as long take ten times as long to be sorted?
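If you want to feel this for yourself, here’s a minimal sketch in Python (my own illustration, not anything from the column) that times the language’s built-in sort on lists of a few different sizes:

```python
import random
import time

# Time the built-in sort for lists of increasing size. The exact numbers
# will vary by machine; what matters is how the time grows with n.
for n in (10_000, 100_000, 1_000_000):
    data = [random.random() for _ in range(n)]  # stand-ins for names
    start = time.perf_counter()
    data.sort()
    print(f"n = {n:>9,}: {time.perf_counter() - start:.4f} seconds")
```

Run it and you’ll see that a list ten times as long takes a bit more than ten times as long to sort: good sorting algorithms take time proportional to n log n, not n.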

Computer scientists characterize algorithms by measures like these. If there’s a linear relationship between the number of items and the time the algorithm takes, we say it’s an O(n) (“Order of n”) algorithm. Meaning, if n is the number of items we’re looking at, the time is a linear function of n.

Think of a spectacularly bone-headed way of cleaning up after dropping rice all over the floor: pick up one grain at a time and toss it in the trash. As you can imagine, it’s not just that you will take a long time to finish, it’s that the time you take is directly proportional to the number of grains on the floor: 2,000 will need twice as long as 1,000. This stupid algorithm is O(n), because its running time grows linearly with the number of grains you’ve upended.
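Rendered as code, the grain-by-grain method is nothing but a single loop, one pass per grain; the function name below is my own toy rendering of the analogy:

```python
def pick_up_rice(grains):
    """The bone-headed clean-up: one trip to the bin per grain."""
    trips = 0
    for _grain in grains:  # exactly one loop iteration per grain
        trips += 1         # toss it in the trash: one trip each
    return trips

print(pick_up_rice(range(1_000)))  # 1000 trips
print(pick_up_rice(range(2_000)))  # 2000 trips: double the grains, double the work
```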

Naturally, you prefer the far more sensible method of sweeping the whole pile of rice onto a dustpan—a few swipes of the broom will most likely do the trick. Possibly this is an “Order of the square root of n” algorithm.

Researchers who study algorithms have long divided problems into two categories:

• “P" problems, which an algorithm can provide a solution for in what’s called “polynomial time" (thus “P"). These might be O(n), or O(n2), or O(n3- 4n)— that is, a time related to a polynomial involving n. The point is, we know such an algorithm will give us an answer in some finite time. The grain-by-grain rigmarole above is a “P" algorithm.

• “NP" problems, which we may not know of any way to solve quickly, even in polynomial time; but if we have a solution, we can verify its correctness in polynomial time. ("NP" stands for “nondeterministic polynomial time"). Take a map at random and ask: if I want to paint this map so that no countries that share a border have the same colour, are three colours enough? That’s an “NP" problem.

Think about it a bit and you will see that P problems are all NP by definition. For if we can solve a problem in polynomial time, then we can also verify a proposed solution in polynomial time: just solve the problem ourselves and compare answers.

But we don’t know if all NP problems are P. So the great debate in mathematics and computer science circles is, is P the same as NP? (Ask a mathematician “Is P = NP?” and you may impress her.) That is, can we show that all NP problems can be solved in polynomial time? Or, contrariwise, can we prove that at least one NP problem absolutely cannot be solved in polynomial time?
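To feel the asymmetry, contrast the quick colouring check above with actually finding a colouring. Short of a cleverer idea, all we can do is try every assignment of three colours to the countries. A brute-force sketch (again my own, reusing the made-up map), with no claim that this is the best anyone can do:

```python
from itertools import product

def is_valid_three_colouring(borders, colouring):
    return all(colouring[a] != colouring[b] for a, b in borders)

def find_three_colouring(countries, borders):
    """Brute force: try all 3**len(countries) assignments of three colours.
    The search space grows exponentially with the number of countries."""
    for combo in product(("red", "green", "blue"), repeat=len(countries)):
        colouring = dict(zip(countries, combo))
        if is_valid_three_colouring(borders, colouring):
            return colouring
    return None  # no three-colouring exists for this map

borders = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
print(find_three_colouring(["A", "B", "C", "D"], borders))
```

With four countries that’s at most 81 tries; with forty, about 10¹⁹. Whether every NP problem has some way around this exponential blow-up is exactly what the P vs NP question asks.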

There is a general sense that, if we ever prove P = NP, this would be a quite different world. Scott Aaronson of the University of Texas puts it this way: if P = NP, “there would be no special value in ‘creative leaps,’ no fundamental gap between solving a problem and recognizing the solution once it’s found.”

True. Except, we don’t know if P = NP.

Now most mathematicians and CS people actually believe that P is not equal to NP. This is because there are a few thousand known NP problems for which nobody has yet found a polynomial-time solution. Yet even a few thousand examples doth not a proof make. What doth indeed a proof make is a sequence of logical steps that start from some axioms and lead inexorably to an irrefutable conclusion.

For the P vs NP question, we don’t have that.

But it’s not for lack of trying. In my column here two days ago (“Why I’m sceptical about the claim that Riemann Hypothesis has been proven”, bit.ly/3jLp4eT), I mentioned Vinay Deolalikar, a researcher at HP Labs. In August 2010, he posted a paper online claiming he had answered the P vs NP question, by proving that P was, in fact, not the same as NP.

Interested people the world over immediately started poring over Deolalikar’s paper, exactly as he must have expected and wanted. In one such examination, the computer scientist Richard Lipton wrote: “[Deolalikar] uses the beautiful result of Moshe Vardi (1982) and Neil Immerman (1986): On ordered structures, a relation is defined by a first order formula plus the Least Fixed Point (LFP) operator if and only if it is computable in polynomial time.”

Never mind what all that means (but please note the “beautiful”). I quote Lipton only to show how minutely Deolalikar’s result was being studied. Within days, the mathematician Terence Tao left this comment on Lipton’s blog: “it appears that Deolalikar has accidentally (and incorrectly) amplified the power of [certain] tools from something that is too weak to establish [a particular] claim, to something that is far too strong for establishing this claim. As a consequence, he proves statements ... that are too strong to really be believable (and in particular, appears to establish the incorrect claim that k-XORSAT is not in P).”

Again, never mind what all that means, except that this was how Deolalikar’s claim began to unravel. Yet even so, he had raised questions some mathematicians found intriguing. As Milos Hasan wrote: “I wonder if [his] idea of defining random distributions over SAT variables could inspire an interesting complexity class.”

There’s a hint of how science often progresses. Even in failure, you find nuggets to ruminate over and explore.

I think that’s beautiful.

Once a computer scientist, Dilip D’Souza now lives in Mumbai and writes for his dinners. His Twitter handle is @DeathEndsFun
