
Thread: The dumb questions thread

  1. Top | #1
    NobleSavage, Veteran Member

    The dumb questions thread

    Can someone explain to me the P versus NP problem? I read the Wikipedia entry and it didn't help me much. The most math I've had is business calculus.

  2. Top | #2
    beero1000, Veteran Member
    Quote Originally Posted by NobleSavage View Post
    Can someone explain to me the P versus NP problem? I read the Wikipedia entry and it didn't help me much. The most math I've had is business calculus.
    One question that arises when studying algorithmic problems is figuring out the efficiency of a given solution. For a given algorithm, we can figure out the worst-case running time (assuming a basic operation that takes 1 unit of time) that is parametrized by the size of the input.

    So for example, in order to figure out the maximum value in a list of n numbers, in the model where comparing two numbers takes 1 unit of time, we check the first number against the second, then check the winner against the third, and then the winner against the fourth, etc, until we have checked all the numbers and found the maximum. This always takes n - 1 comparisons, so the running time is n - 1 in the worst case. That is, it runs in time linear with respect to the size of the input.
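
    To make the comparison count concrete (a minimal Python sketch I'm adding, not part of the original post), the scan above can be written so that it tallies its own comparisons:

    ```python
    def find_max(numbers):
        """Find the maximum of a non-empty list by a linear scan,
        counting the comparisons made along the way."""
        winner = numbers[0]
        comparisons = 0
        for x in numbers[1:]:
            comparisons += 1      # one comparison per remaining element
            if x > winner:
                winner = x
        return winner, comparisons

    # A list of n numbers always takes exactly n - 1 comparisons:
    maximum, count = find_max([3, 1, 4, 1, 5, 9, 2, 6])
    print(maximum, count)  # prints: 9 7
    ```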

    In general, we are interested in the asymptotic growth rate, and we say that the running time is O(f(n)) (read "Big-Oh of f of n"), meaning its asymptotic growth rate is bounded above by some constant times f(n). Similarly to O(f(n)), we say that an algorithm that needs a worst-case asymptotic running time of at least some constant times f(n) has running time Ω(f(n)) (read "Big-Omega of f of n"). If an algorithm is both O(f(n)) and Ω(f(n)), we say it is Θ(f(n)) ("Big-Theta of f of n").
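
    A concrete instance of the definition (a toy check I'm adding, with the constant c chosen by hand): f(n) = 3n^2 + 10n is O(n^2), because the constant c = 4 bounds it once n >= 10:

    ```python
    # f(n) = 3n^2 + 10n is O(n^2): with c = 4, f(n) <= 4n^2 for all n >= 10,
    # since 3n^2 + 10n <= 4n^2  <=>  10n <= n^2  <=>  n >= 10.
    def f(n):
        return 3 * n**2 + 10 * n

    assert all(f(n) <= 4 * n**2 for n in range(10, 100000))
    ```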

    The algorithm above solves the "find the maximum" problem in Θ(n) time. Since we can argue that every algorithm must examine each number in the list at least once, every possible algorithm needs Ω(n) time, so the "find the maximum" problem has been solved asymptotically optimally. In fact, it isn't hard to prove that every algorithm needs at least n - 1 comparisons, so no algorithm runs faster than the one above. The next question to ask is: for which problems is there a gap between the best known algorithm and the best known lower bound? In other words, where are we being inefficient?

    It shouldn't be too surprising that there is a hierarchy of running times. Algorithms that run in worst-case time O(n^c) for some constant c > 0 are said to run in polynomial time, as their growth rate is at most some polynomial in the input size n. Algorithms that run in worst-case time Ω(c^n) for some constant c > 1 are said to run in exponential time. When dealing with large instances, exponential-time algorithms take essentially forever (2^100000 is a very big number), while polynomials scale more reasonably (100000^2 is big, but much, much smaller than 2^100000).
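
    Python's arbitrary-precision integers make the size gap easy to see directly (a small illustration I'm adding):

    ```python
    # Polynomial vs exponential growth at n = 100000, in plain Python integers.
    poly = 100000 ** 2                    # 100000^2 = ten billion: big but manageable
    expo_digits = len(str(2 ** 100000))   # 2^100000 has over 30,000 decimal digits
    print(poly, expo_digits)              # prints: 10000000000 30103
    ```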

    Ideally, every problem we want to solve would have a polynomial-time algorithm. Unfortunately, there are problems that provably require exponential time, so we have to be more modest and ask whether every 'reasonable' problem has a polynomial-time algorithm. By 'reasonable' we mean 'has a succinct yes-certificate': we may not be able to find the answer in polynomial time, but if the answer is yes, there is a hint we can use to verify the answer in polynomial time.

    We call the collection of decision problems (where the output is either yes or no) that have polynomial-time solutions the complexity class P (P for 'polynomial time'). These are the problems with 'fast' solutions.

    We call the collection of decision problems that have a succinct yes-certificate the complexity class NP (NP for 'nondeterministic polynomial time' - the origin of which I will not explain here). These are the problems for which, given a hint, we can quickly check that the answer is yes.
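
    As an illustration (a Python sketch I'm adding, not part of the original post), take the classic NP problem SUBSET-SUM: given a list of numbers and a target, does some subset sum to the target? Finding such a subset may mean trying many of the 2^n subsets, but checking a proposed subset - the 'hint' - is a single fast pass:

    ```python
    def verify_subset_sum(numbers, target, certificate):
        """Check a claimed yes-certificate for SUBSET-SUM in polynomial time.

        `certificate` is a list of indices into `numbers`; we confirm the
        indices are distinct and in range, and that the chosen numbers
        sum to the target.
        """
        if len(set(certificate)) != len(certificate):
            return False
        if any(i < 0 or i >= len(numbers) for i in certificate):
            return False
        return sum(numbers[i] for i in certificate) == target

    # Verifying the hint [2, 4] means checking 4 + 5 == 9: one linear pass.
    print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # prints: True
    ```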

    Finally, we get to the question: Is P = NP? That is, is every problem that is efficiently yes-checkable also efficiently solvable? Most experts think no, but we don't have a proof yet.

  3. Top | #3
    Senior Member
    Quote Originally Posted by beero1000 View Post
    ...
    A nice clear exposition. I wish I had the energy and bandwidth to go into these kinds of topics.

  4. Top | #4
    NobleSavage, Veteran Member
    Thanks beero1000! That made sense to me.

  5. Top | #5
    beero1000, Veteran Member
    Quote Originally Posted by NobleSavage View Post
    Thanks beero1000! That made sense to me.
    My pleasure. It's a fascinating subject, and I could go on about it for a while. The disappointing reality, though, is that we probably won't see a solution in our lifetimes. The problem is just too hard. Prevailing opinion is that Mulmuley's geometric complexity theory, which applies algebraic geometry techniques to complexity theory, is the only known idea that has a chance of resolving it, but even so it will probably take a century or more. Here's to hoping that view is too pessimistic.

    This is a good overview: http://cacm.acm.org/magazines/2009/9...oblem/fulltext

  6. Top | #6
    Angry Floof, Intergalactic Villainess
    What is an eclipse called if it's the Earth eclipsing the sun as seen from the moon? I'm guessing it's still a solar eclipse since the terms solar eclipse and lunar eclipse refer to what is being eclipsed and not what is doing the eclipsing.

    A less dumb question: would that not be the coolest, most badass thing to see??

  7. Top | #7
    Angry Floof, Intergalactic Villainess
    Quote Originally Posted by hylidae View Post
    What is an eclipse called if it's the Earth eclipsing the sun as seen from the moon? I'm guessing it's still a solar eclipse since the terms solar eclipse and lunar eclipse refer to what is being eclipsed and not what is doing the eclipsing.

    A less dumb question: would that not be the coolest, most badass thing to see??
    Found an answer: NASA Earth Observatory says it's still called a lunar eclipse even if you're on the moon and the earth is eclipsing the sun from that view.

  8. Top | #8
    bilby, Fair dinkum thinkum
    Quote Originally Posted by hylidae View Post
    Found an answer: NASA Earth Observatory says it's still called a lunar eclipse even if you're on the moon and the earth is eclipsing the sun from that view.
    How very geocentric of them. IMO, it is a solar eclipse, when viewed from the Moon, as the Sun is the body being observed. Calling it a lunar eclipse simply because it is the same configuration of bodies suggests that the position of the observer is irrelevant, and that earthbound observers get naming rights.

    Since when was NASA an authority on space, anyway? They can't even get an astronaut into low earth orbit, much less to the Moon.

    (OK, full disclosure - nor can I).


    Quote Originally Posted by beero1000 View Post
    ...

    Finally, we get to the question: Is P = NP? ...
    Simple; P=NP if, (and only if) N=0.

    Where do I collect my Fields Medal?

  9. Top | #9
    Wiploc, Veteran Member
    Quote Originally Posted by hylidae View Post
    Found an answer: NASA Earth Observatory says it's still called a lunar eclipse even if you're on the moon and the earth is eclipsing the sun from that view.
    Perhaps because it is the same phenomenon, just viewed from a different place. That is, the earth cannot eclipse the sun (as viewed from the moon) unless the earth also eclipses the moon (as viewed from the earth) at the same time.

  10. Top | #10
    beero1000, Veteran Member
    Quote Originally Posted by bilby View Post
    Simple; P=NP if, (and only if) N=0.

    Where do I collect my Fields Medal?
    At the International Congress of Mathematicians, in Seoul, in 2014. But you'll have to fix the mistake in your proof first.

