## The dumb questions thread

1. Can someone explain the P versus NP problem to me? I read the Wikipedia entry and it didn't help me much. The most math I've had is business calculus.

2. Originally Posted by NobleSavage Can someone explain the P versus NP problem to me? I read the Wikipedia entry and it didn't help me much. The most math I've had is business calculus.
One question that arises when studying algorithmic problems is figuring out the efficiency of a given solution. For a given algorithm, we can figure out the worst-case running time (assuming a basic operation that takes 1 unit of time) that is parametrized by the size of the input.

So for example, in order to find the maximum value in a list of n numbers, in the model where comparing two numbers takes 1 unit of time, we check the first number against the second, then check the winner against the third, then the winner against the fourth, and so on, until we have checked all the numbers and found the maximum. This always takes n - 1 comparisons, so the running time is n - 1 in the worst case. That is, it runs in time linear in the size of the input.
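The scan described above can be sketched as follows (a minimal illustration; the function name and the comparison counter are my own additions, not part of the original post):

```python
def find_max(nums):
    """Return the maximum of a non-empty list, using exactly n - 1 comparisons."""
    best = nums[0]
    comparisons = 0
    for x in nums[1:]:
        comparisons += 1      # one comparison per remaining element
        if x > best:
            best = x          # x is the new "winner"
    return best, comparisons

# For a list of 5 numbers, the scan makes 5 - 1 = 4 comparisons.
print(find_max([3, 1, 4, 1, 5]))  # (5, 4)
```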

In general, we are interested in the asymptotic growth rate, and we say that the running time is O(f(n)) (read "Big-Oh of f of n"), meaning its asymptotic growth rate is bounded above by some constant times f(n). Similarly to O(f(n)), we say that an algorithm that needs a worst-case asymptotic running time of at least some constant times f(n) has running time Ω(f(n)) (read "Big-Omega of f of n"). If an algorithm is both O(f(n)) and Ω(f(n)), we say it is Θ(f(n)) ("Big-Theta of f of n").

The algorithm above solves the "find the maximum" problem in Θ(n) time, and since any correct algorithm must examine every number in the list at least once, every possible algorithm needs Ω(n) time; the problem has therefore been solved asymptotically optimally. In fact, it isn't hard to prove that every algorithm needs at least n - 1 comparisons, so no algorithm runs faster than the one above. So the next question to ask is: for which problems is there a gap between the best known algorithm and the best known lower bound? In other words, where are we being inefficient?

It shouldn't be too surprising that there is a hierarchy of running times. Algorithms that run in worst-case time O(n^c) for some constant c > 0 are said to run in polynomial time, as their growth rate is at most some polynomial in the input size n. Algorithms that run in worst-case time Ω(c^n) for some constant c > 1 are said to run in exponential time. When dealing with large instances, exponential-time algorithms take essentially forever (2^100000 is a very big number), while polynomials scale more reasonably (100000^2 is big, but much, much smaller than 2^100000).
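The gap is easy to see even at modest sizes. A quick sanity check (using n = 100 for illustration rather than the 100000 mentioned above, but the point is the same):

```python
# Polynomial vs. exponential growth at a modest input size n = 100.
n = 100
poly = n ** 2    # 10,000 steps: trivial for any computer
expo = 2 ** n    # about 1.27e30 steps: utterly infeasible

print(poly)              # 10000
print(expo > 10 ** 30)   # True -- already past 10^30 at n = 100
```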

Ideally, we would want every problem to have a polynomial-time algorithm. Unfortunately, there are problems that provably require exponential time, so we narrow the question and ask whether every 'reasonable' problem has a polynomial-time algorithm. By reasonable, we mean 'has a succinct yes-certificate': maybe we can't find the answer in polynomial time, but if the answer is yes, there is a hint that lets us verify the answer in polynomial time.

We call the collection of decision problems (where the output is either yes or no) that have polynomial-time solutions the complexity class P (P for 'polynomial time'). These are the problems with 'fast' solutions.

We call the collection of decision problems that have a succinct yes-certificate the complexity class NP (NP for 'nondeterministic polynomial time' - the origin of which I will not explain). These are the problems where we can check that the answer is yes quickly (with a hint).
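As a concrete example (my choice, not from the posts above): Subset-Sum - "does some subset of these numbers add up to a given target?" - is in NP, because the subset itself serves as the yes-certificate. Checking the hint takes only linear time, even though finding it may take exponential time. A sketch of such a verifier, where the certificate is a list of indices into the input list:

```python
def verify_subset_sum(nums, target, certificate):
    """Check a claimed yes-certificate for Subset-Sum in polynomial time.

    certificate is a list of indices into nums; the claim is that the
    values at those indices sum to target.
    """
    if len(set(certificate)) != len(certificate):
        return False   # indices must be distinct
    if any(i < 0 or i >= len(nums) for i in certificate):
        return False   # indices must point into the list
    return sum(nums[i] for i in certificate) == target

# 3 + 8 = 11, so [0, 3] certifies that the answer is yes.
print(verify_subset_sum([3, 7, 1, 8], 11, [0, 3]))  # True
```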

Finally, we get to the question: Is P = NP? That is, is every problem that is efficiently yes-checkable also efficiently solvable? Most experts think no, but we don't have a proof yet.

3. Originally Posted by beero1000 One question that arises when studying algorithmic problems is figuring out the efficiency of a given solution. ...
A nice clear exposition. I wish I had the energy and bandwidth to go into these kinds of topics.

4. Thanks beero1000! That made sense to me.

5. Originally Posted by NobleSavage Thanks beero1000! That made sense to me.
My pleasure. It's a fascinating subject, and I could go on about it for a while. The disappointing reality though, is that we probably won't see a solution in our lifetimes. The problem is just too hard. Prevailing opinion is that Mulmuley's geometric complexity theory, applying algebraic geometry techniques to complexity theory, is the only known idea that has a chance of resolving it, but even so it will probably be a century or more. Here's to hoping that view is too pessimistic.

This is a good overview: http://cacm.acm.org/magazines/2009/9...oblem/fulltext

6. What is an eclipse called if it's the Earth eclipsing the sun as seen from the moon? I'm guessing it's still a solar eclipse since the terms solar eclipse and lunar eclipse refer to what is being eclipsed and not what is doing the eclipsing.

A less dumb question: would that not be the coolest, most badass thing to see??

7. Originally Posted by hylidae What is an eclipse called if it's the Earth eclipsing the sun as seen from the moon? I'm guessing it's still a solar eclipse since the terms solar eclipse and lunar eclipse refer to what is being eclipsed and not what is doing the eclipsing.

A less dumb question: would that not be the coolest, most badass thing to see??
Found an answer: NASA Earth Observatory says it's still called a lunar eclipse even if you're on the moon and the earth is eclipsing the sun from that view.

8. Originally Posted by hylidae Found an answer: NASA Earth Observatory says it's still called a lunar eclipse even if you're on the moon and the earth is eclipsing the sun from that view.
How very geocentric of them. IMO, it is a solar eclipse, when viewed from the Moon, as the Sun is the body being observed. Calling it a lunar eclipse simply because it is the same configuration of bodies suggests that the position of the observer is irrelevant, and that earthbound observers get naming rights.

Since when was NASA an authority on space, anyway? They can't even get an astronaut into low earth orbit, much less to the Moon.

(OK, full disclosure - nor can I).

- - - Updated - - - Originally Posted by beero1000 ...

Finally, we get to the question: Is P = NP? ...
Simple; P = NP if (and only if) N = 0.

Where do I collect my Fields Medal?

9. Originally Posted by hylidae Found an answer: NASA Earth Observatory says it's still called a lunar eclipse even if you're on the moon and the earth is eclipsing the sun from that view.
Perhaps because it is the same phenomenon, just viewed from a different place. That is, the earth cannot eclipse the sun (as viewed from the moon) unless the earth also eclipses the moon (as viewed from the earth) at the same time.

10. Originally Posted by bilby Simple; P=NP if, (and only if) N=0.

Where do I collect my Fields Medal? At the International Congress of Mathematicians, in Seoul, in 2014. But you'll have to fix the mistake in your proof first.
