The **time complexity** of a problem is the number of steps that it takes to solve an instance of the problem, as a function of the size of the input (usually measured in bits), using the most efficient algorithm. To understand this intuitively, consider an instance that is *n* bits long and can be solved in *n*^{2} steps; we then say the problem has a time complexity of *n*^{2}. Of course, the exact number of steps depends on exactly what machine or language is being used. To sidestep that dependence, we generally use Big O notation. If a problem has time complexity O(*n*^{2}) on one typical computer, then it will also have complexity O(*n*^{2}) on most other computers, so this notation allows us to generalize away from the details of a particular computer.
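As an illustrative sketch (the function and its explicit step counter are our own, not a standard API), a naive all-pairs comparison exhibits the quadratic growth just described: doubling the input size quadruples the number of basic steps.

```python
def count_pairs_steps(items):
    """Compare every item with every other item, counting one
    basic step per comparison: roughly n^2 steps for n items."""
    steps = 0
    for a in items:
        for b in items:
            steps += 1  # one basic step
    return steps

# Doubling the input size quadruples the step count -- the O(n^2) signature.
print(count_pairs_steps(range(10)))  # 100
print(count_pairs_steps(range(20)))  # 400
```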

**Example:** Mowing grass has linear complexity because mowing double the area takes double the time.
Looking something up in a dictionary, however, has only logarithmic complexity: for a dictionary twice the size you need to open it only one extra time (say, exactly in the middle), after which the problem is reduced to half its original size.
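The dictionary lookup above is really binary search. The following sketch (our own illustration, not a library routine) counts the worst-case number of "openings" needed and shows that doubling the dictionary adds just one more.

```python
def lookups_needed(n):
    """Worst-case number of times you must 'open the dictionary'
    (binary-search probes) to locate one entry among n entries."""
    probes = 0
    while n > 1:
        n //= 2      # each probe halves the remaining pages
        probes += 1
    return probes

print(lookups_needed(1024))  # 10
print(lookups_needed(2048))  # 11 -- double the dictionary, one extra probe
```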

Much of complexity theory deals with decision problems. A decision problem is a problem where the answer is always YES/NO. For example, the problem *IS-PRIME* is: given an integer written in binary, return whether it is a prime number or not. A decision problem is equivalent to a *language*, which is a set of finite-length strings. For a given decision problem, the equivalent language is the set of all strings for which the answer is YES.
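For instance, *IS-PRIME* can be decided by naive trial division. The sketch below is our own illustration (deliberately simple, not an efficient primality test); it returns the YES/NO answer as a boolean, and the equivalent language is the set of binary strings encoding the integers for which it returns True.

```python
def is_prime(n):
    """Decision problem IS-PRIME: YES iff n is a prime number
    (naive trial division up to sqrt(n))."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # found a divisor: NO
        d += 1
    return True  # no divisor found: YES

print(is_prime(13))  # True  -> '1101' is in the language
print(is_prime(15))  # False -> '1111' is not
```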

Decision problems are often considered because an arbitrary problem can always be reduced to a decision problem. For example, the problem *HAS-FACTOR* is: given integers *n* and *k* written in binary, return whether *n* has any prime factors less than *k*. If we can solve *HAS-FACTOR* with a certain amount of resources, then we can use that solution to solve the function problem *FACTORIZE* (find the prime factorization of *n*) without much more resources: binary-search on *k* until you find the smallest prime factor of *n*, divide out that factor, and repeat until you have found all the factors.
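This reduction can be sketched in code. The `has_factor` oracle below is our own naive stand-in for whatever efficient solver of *HAS-FACTOR* is assumed; the point is that `factorize` solves the full factorization problem using only YES/NO calls to it.

```python
def has_factor(n, k):
    """Decision oracle HAS-FACTOR: does n have a prime factor less than k?
    (Naive stand-in; any divisor d < k implies a prime factor <= d < k.)"""
    return any(n % d == 0 for d in range(2, k))

def factorize(n):
    """Solve FACTORIZE using only calls to the HAS-FACTOR oracle."""
    factors = []
    while n > 1:
        # Binary search for the smallest f such that n has a prime
        # factor <= f; that f is itself the smallest prime factor of n.
        lo, hi = 2, n
        while lo < hi:
            mid = (lo + hi) // 2
            if has_factor(n, mid + 1):  # any prime factor <= mid?
                hi = mid
            else:
                lo = mid + 1
        factors.append(lo)  # smallest prime factor found
        n //= lo            # divide it out and repeat
    return factors

print(factorize(60))  # [2, 2, 3, 5]
```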

Complexity theory often makes a distinction between YES answers and NO answers.
For example, the set NP is defined as the set of problems where the YES instances can be checked quickly. The set Co-NP is the set of problems where the NO instances can be checked quickly. The "Co" in the name stands for "complement". The *complement* of a problem is one where all the YES and NO answers are swapped, such as *IS-COMPOSITE* for *IS-PRIME*.
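To make the swap concrete, here is a hypothetical sketch (the `complement` helper and both function names are our own illustration, not standard API): the complement of a decision problem simply negates every answer.

```python
def is_prime(n):
    """IS-PRIME: YES iff n is a prime number (naive trial division)."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def complement(decision_problem):
    """The complement of a decision problem: every YES answer
    becomes NO and vice versa."""
    return lambda n: not decision_problem(n)

# IS-COMPOSITE is (essentially) the complement of IS-PRIME; note that as
# a pure complement, non-primes such as 0 and 1 also receive a YES answer.
is_composite = complement(is_prime)

print(is_composite(15))  # True
print(is_composite(13))  # False
```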

The set P is the set of decision problems that can be solved by a deterministic machine in polynomial time. The set NP is the set of decision problems that can be solved by a non-deterministic machine in polynomial time. The question of whether P is the same set as NP is the most important open question in theoretical computer science. There is even a $1,000,000 prize for solving it. (See Complexity classes P and NP and oracles).

Questions like this motivate the concepts of *hard* and *complete*. A set of problems *X* is hard for a set of problems *Y* if every problem in *Y* can be transformed easily into some problem in *X* with the same answer. The definition of "easily" is different in different contexts. The most important hard set is NP-hard. Set *X* is complete for *Y* if it is hard for *Y*, and is also a subset of *Y*. The most important complete set is NP-complete. See the articles on those two sets for more detail on the definition of "hard" and "complete".

The following are some of the classes of problems considered in complexity theory, along with rough definitions. See computation for a chart showing which classes are subsets of other classes.

Many of these classes have a "Co" partner (e.g. NP and Co-NP) which consists of the complements of all languages in the original class. For example, if **L** is in **NP** then the **complement of L** is in **Co-NP**. (This doesn't mean that the **complement of NP** is **Co-NP**: there are languages known to be in both, and other languages known to be in neither.)

If you don't see a class listed (such as **Co-UP**) you should look under its partner (such as **UP**).

| Class | Informal definition |
|-------|---------------------|
| P | Solvable in polynomial time (see Complexity classes P and NP) |
| NP | YES answers checkable in polynomial time (see Complexity classes P and NP) |
| Co-NP | NO answers checkable in polynomial time |
| NP-complete | The hardest problems in NP |
| Co-NP-complete | The hardest problems in Co-NP |
| NP-hard | Either NP-complete or harder |
| NP-easy | Non-decision-problem analogue to NP |
| NP-equivalent | Non-decision-problem analogue to NP-complete |
| UP | Solvable in polynomial time by an unambiguous non-deterministic machine |
| #P | Count the solutions to an NP problem |
| #P-complete | The hardest problems in #P |
| NC | Solvable efficiently (in polylogarithmic time) on parallel computers |
| P-complete | The hardest problems in P to solve on parallel computers |
| PSPACE | Solvable with polynomial memory and unlimited time |
| PSPACE-complete | The hardest problems in PSPACE |
| EXPTIME | Solvable with exponential time |
| EXPSPACE | Solvable with exponential memory and unlimited time |
| BQP | Solvable in polynomial time on a quantum computer (answer is probably right) |
| BPP | Solvable in polynomial time by randomized algorithms (answer is probably right) |
| RP | Solvable in polynomial time by randomized algorithms (a NO answer is probably right, a YES answer is certainly right) |
| ZPP | Solvable by randomized algorithms (answer is always right, average running time is polynomial) |
| PCP | Checkable in polynomial time by a randomized algorithm |