One question that arises when studying algorithmic problems is determining the efficiency of a given solution. For a given algorithm, we can work out the worst-case running time (assuming a basic operation that takes 1 unit of time), parametrized by the size of the input.
So, for example, to find the maximum value in a list of n numbers, in the model where comparing two numbers takes 1 unit of time, we check the first number against the second, then check the winner against the third, then the winner against the fourth, and so on, until we have checked all the numbers and found the maximum. This always takes n - 1 comparisons, so the running time is n - 1 in the worst case. That is, it runs in time linear in the size of the input.
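The scan described above can be sketched in Python (the function name and the comparison counter are mine, added to make the n - 1 count visible):

```python
def find_max(numbers):
    """Find the maximum of a non-empty list by a left-to-right scan.

    Uses exactly len(numbers) - 1 comparisons: the current "winner"
    is compared against each remaining element exactly once.
    """
    best = numbers[0]
    comparisons = 0
    for x in numbers[1:]:
        comparisons += 1
        if x > best:
            best = x
    return best, comparisons

# Example: 5 numbers, so 4 comparisons.
value, count = find_max([3, 7, 1, 9, 4])
print(value, count)  # 9 4
```

Note that the count depends only on the length of the list, not on its contents, which is why the worst case and the best case coincide here.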
In general, we are interested in the asymptotic growth rate, and we say that the running time is O(f(n)) (read "Big-Oh of f of n"), meaning its asymptotic growth rate is bounded above by some constant times f(n). Similarly, we say that an algorithm whose worst-case asymptotic running time is at least some constant times f(n) has running time Ω(f(n)) (read "Big-Omega of f of n"). If an algorithm is both O(f(n)) and Ω(f(n)), we say it is Θ(f(n)) ("Big-Theta of f of n").
The algorithm above solved the "find the maximum" problem in Θ(n) time. We can argue that any algorithm must examine every number in the list at least once, so every possible algorithm needs at least Ω(n) time; thus the "find the maximum" problem has been solved asymptotically optimally. In fact, it isn't hard to prove that every algorithm needs at least n - 1 comparisons, so no algorithm runs faster than the one above. The next question to ask is: for which problems is there a gap between the best known algorithm and the best known lower bound? In other words, where are we being inefficient?
It shouldn't be too surprising that there is a hierarchy of running times. Algorithms that run in worst-case time O(n^c) for some constant c > 0 are said to run in polynomial time, as their growth rate is at most some polynomial in the input size n. Algorithms that run in worst-case time Ω(c^n) for some constant c > 1 are said to run in exponential time. When dealing with large instances, exponential-time algorithms take essentially forever (2^100000 is a very big number), while polynomials scale more reasonably (100000^2 is big, but much, much smaller than 2^100000).
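A quick way to get a feel for the gap is to print both growth rates side by side for a few input sizes (the sizes chosen here are arbitrary; Python's arbitrary-precision integers make the exponential values easy to display):

```python
# Compare quadratic (polynomial) vs exponential growth.
for n in [10, 20, 50, 100]:
    poly = n ** 2   # quadratic: grows modestly
    expo = 2 ** n   # exponential: explodes quickly
    print(f"n={n:>3}: n^2 = {poly:>5}, 2^n = {expo}")
```

Already at n = 100 the exponential value has 31 digits, while the quadratic value is a mere 10000; doubling n squares the exponential term but only quadruples the polynomial one.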
Ideally, we would like every problem we want to solve to have a polynomial-time algorithm. Unfortunately, there are problems that require exponential time, so we need to be more modest and ask whether every 'reasonable' problem we want to solve has a polynomial-time algorithm. By reasonable, we will mean 'has a succinct yes-certificate': we may not be able to find the answer in polynomial time, but if the answer is yes, then there is a hint that we can use to verify the answer in polynomial time.
We call the collection of decision problems (where the output is either yes or no) that have polynomial-time solutions the complexity class P (P for 'polynomial time'). These are the problems with 'fast' solutions.
We call the collection of decision problems that have a succinct yes-certificate the complexity class NP (NP for 'non-deterministic polynomial time', the origin of which I will not explain here). These are the problems for which we can check that the answer is yes quickly (given a hint).
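To make the 'hint' idea concrete, here is a sketch of a verifier for Subset Sum, a standard NP problem: given a list of numbers and a target, is there a subset summing to the target? The certificate format (a list of indices) is my choice for illustration; the point is that checking a certificate is a single polynomial-time pass, even though finding one may be hard:

```python
def verify_subset_sum(numbers, target, certificate):
    """Check a yes-certificate for Subset Sum in polynomial time.

    The certificate is a list of indices into `numbers`; the claim
    is that the selected numbers add up to `target`.
    """
    # Each index must be valid and used at most once.
    if len(set(certificate)) != len(certificate):
        return False
    if not all(0 <= i < len(numbers) for i in certificate):
        return False
    return sum(numbers[i] for i in certificate) == target

print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # True: 4 + 5 = 9
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [0, 1]))  # False: 3 + 34 = 37
```

The verifier runs in time linear in the certificate length, but the obvious way to *find* a valid certificate is to try all 2^n subsets, which is exponential; that gap between checking and finding is exactly what the P vs NP question is about.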
Finally, we get to the question: Is P = NP? That is, is every problem that is efficiently yes-checkable also efficiently solvable? Most experts think no, but we don't have a proof yet.