Time complexity


In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size. In both cases, the time complexity is generally expressed as a function of the size of the input. Since this function is generally difficult to compute exactly, and the running time for small inputs is usually not consequential, one commonly focuses on the behavior of the complexity when the input size increases—that is, the asymptotic behavior of the complexity. Therefore, the time complexity is commonly expressed using big O notation, typically O(n), O(n log n), O(n^α), O(2^n), etc., where n is the size in units of bits needed to represent the input.
Algorithmic complexities are classified according to the type of function appearing in the big O notation. For example, an algorithm with time complexity O(n) is a linear time algorithm and an algorithm with time complexity O(n^α) for some constant α > 1 is a polynomial time algorithm.
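To make this classification concrete, consider the following minimal Python sketch (the function names and the duplicate-finding task are illustrative, not taken from the text): the first function performs one comparison per element and thus runs in linear time, while the second compares every pair of elements, roughly n(n − 1)/2 comparisons, and thus runs in quadratic time.

```python
def largest(items):
    # One comparison per element after the first: T(n) = O(n), linear time.
    best = items[0]
    for x in items[1:]:
        if x > best:
            best = x
    return best

def has_duplicate(items):
    # Compares every pair of elements: about n*(n-1)/2 comparisons,
    # so T(n) = O(n^2), quadratic time.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```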

Table of common time complexities

The following table summarizes some classes of commonly encountered time complexities. In the table, poly(x) means x^(O(1)), i.e., polynomial in x.

Name | Complexity class | Time complexity (T(n)) | Examples of running times | Example algorithms
constant time | | O(1) | 10 | Finding the median value in a sorted array of numbers. Calculating (−1)^n.
inverse Ackermann time | | O(α(n)) | | Amortized time per operation using a disjoint set
iterated logarithmic time | | O(log* n) | | Distributed coloring of cycles
log-logarithmic | | O(log log n) | | Amortized time per operation using a bounded priority queue
logarithmic time | DLOGTIME | O(log n) | log n, log(n^2) | Binary search
polylogarithmic time | | poly(log n) | (log n)^2 |
fractional power | | O(n^c) where 0 < c < 1 | n^(1/2), n^(2/3) | Range searching in a k-d tree
linear time | | O(n) | n, 2n + 5 | Finding the smallest or largest item in an unsorted array. Kadane's algorithm. Linear search.
"n log-star n" time | | O(n log* n) | | Seidel's polygon triangulation algorithm.
linearithmic time | | O(n log n) | n log n, log n! | Fastest possible comparison sort. Fast Fourier transform.
quasilinear time | | n poly(log n) | n log^2 n | Multipoint polynomial evaluation
quadratic time | | O(n^2) | n^2 | Bubble sort. Insertion sort. Direct convolution
cubic time | | O(n^3) | n^3 | Naive multiplication of two n×n matrices. Calculating partial correlation.
polynomial time | P | 2^(O(log n)) = poly(n) | n^2 + n, n^10 | Karmarkar's algorithm for linear programming. AKS primality test
quasi-polynomial time | QP | 2^(poly(log n)) | n^(log log n), n^(log n) | Best-known O(log^2 n)-approximation algorithm for the directed Steiner tree problem, best-known parity game solver, best-known graph isomorphism algorithm
sub-exponential time (first definition) | SUBEXP | O(2^(n^ε)) for all ε > 0 | | Contains BPP unless EXPTIME equals MA.
sub-exponential time (second definition) | | 2^(o(n)) | 2^(n^(1/3)) | Best classical algorithm for integer factorization; formerly the best-known algorithm for graph isomorphism
exponential time (with linear exponent) | E | 2^(O(n)) | 1.1^n, 10^n | Solving the traveling salesman problem using dynamic programming
factorial time | | O(n!) | n! | Solving the traveling salesman problem via brute-force search
exponential time | EXPTIME | 2^(poly(n)) | 2^n, 2^(n^2) | Solving matrix chain multiplication via brute-force search
double exponential time | 2-EXPTIME | 2^(2^(poly(n))) | 2^(2^n) | Deciding the truth of a given statement in Presburger arithmetic

Constant time

An algorithm is said to be constant time (also written as O(1) time) if the value of T(n) is bounded by a value that does not depend on the size of the input. For example, accessing any single element in an array takes constant time as only one operation has to be performed to locate it. In a similar manner, finding the minimal value in an array sorted in ascending order takes constant time; it is the first element. However, finding the minimal value in an unordered array is not a constant time operation, as scanning over each element in the array is needed in order to determine the minimal value. Hence it is a linear time operation, taking O(n) time. If the number of elements is known in advance and does not change, however, such an algorithm can still be said to run in constant time.
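As a minimal sketch of the two cases (the helper names are hypothetical):

```python
def minimum_of_sorted(a):
    # Constant time: in an array sorted in ascending order,
    # the minimal value is always the first element.
    return a[0]

def minimum_of_unsorted(a):
    # Linear time: every element must be examined at least once,
    # so the running time grows with the length of the array.
    smallest = a[0]
    for x in a[1:]:
        if x < smallest:
            smallest = x
    return smallest
```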
Despite the name "constant time", the running time does not have to be independent of the problem size, but an upper bound for the running time has to be independent of the problem size. For example, the task "exchange the values of a and b if necessary so that a ≤ b" is called constant time even though the time may depend on whether or not it is already true that a ≤ b. However, there is some constant t such that the time required is always at most t.
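A sketch of this task in Python (returning the reordered pair rather than exchanging variables in place):

```python
def ordered_pair(a, b):
    # Constant time: at most one comparison and one swap are performed,
    # whatever the values of a and b, so the time is bounded by a constant.
    if a > b:
        a, b = b, a
    return a, b
```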

Logarithmic time

An algorithm is said to take logarithmic time when T(n) = O(log n). Since log_a n and log_b n are related by a constant multiplier, and such a multiplier is irrelevant to big O classification, the standard usage for logarithmic-time algorithms is O(log n) regardless of the base of the logarithm appearing in the expression of T.
Algorithms taking logarithmic time are commonly found in operations on binary trees or when using binary search.
An O(log n) algorithm is considered highly efficient, as the ratio of the number of operations to the size of the input decreases and tends to zero when n increases. An algorithm that must access all elements of its input cannot take logarithmic time, as the time taken for reading an input of size n is of the order of n.
An example of logarithmic time is given by dictionary search. Consider a dictionary D which contains n entries, sorted in alphabetical order. We suppose that, for 1 ≤ k ≤ n, one may access the kth entry of the dictionary in constant time. Let D(k) denote this kth entry. Under these hypotheses, the test to see if a word w is in the dictionary may be done in logarithmic time: consider D(⌊n/2⌋), where ⌊ ⌋ denotes the floor function. If w = D(⌊n/2⌋), that is to say, the word is exactly in the middle of the dictionary, then we are done. Else, if w < D(⌊n/2⌋), i.e., the word comes earlier in alphabetical order than the middle word of the whole dictionary, we continue the search in the same way in the left half of the dictionary; otherwise, if it comes after the middle word, we continue similarly with the right half. The halving repeats until the word is found or the remaining range is empty. This algorithm is similar to the method often used to find an entry in a paper dictionary. As a result, the search space within the dictionary decreases as the algorithm gets closer to the target word.
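A sketch of this dictionary search in Python, assuming the dictionary is given as a list of words sorted in alphabetical order (the function name is illustrative):

```python
def contains(dictionary, word):
    # Each iteration halves the remaining search range, so at most
    # about log2(n) + 1 comparisons are made: logarithmic time.
    low, high = 0, len(dictionary) - 1
    while low <= high:
        middle = (low + high) // 2
        if dictionary[middle] == word:
            return True
        elif word < dictionary[middle]:
            high = middle - 1  # continue in the left half
        else:
            low = middle + 1   # continue in the right half
    return False
```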

Polylogarithmic time

An algorithm is said to run in polylogarithmic time if its time T(n) is O((log n)^k) for some constant k. Another way to write this is O(log^k n).
For example, matrix chain ordering can be solved in polylogarithmic time on a parallel random-access machine, and a graph can be determined to be planar in a fully dynamic way in O(log^3 n) time per insert/delete operation.

Sub-linear time

An algorithm is said to run in sub-linear time (often spelled sublinear time) if T(n) = o(n). In particular this includes algorithms with the time complexities defined above.
The specific term sublinear time algorithm commonly refers to randomized algorithms that sample a small fraction of their inputs and process them efficiently to approximately infer properties of the entire instance. This type of sublinear time algorithm is closely related to property testing and statistics.
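As an illustration of the sampling idea (the task and sample count here are hypothetical, not a specific algorithm from the literature), one can estimate the fraction of 1s in a long bit array by inspecting only a fixed number of randomly chosen positions, independent of the input length:

```python
import random

def estimate_fraction_of_ones(bits, samples=1000):
    # Sublinear sketch: inspects only `samples` positions rather than
    # all n bits, so the running time does not grow with len(bits).
    # The result is an approximation whose accuracy depends on `samples`.
    hits = sum(bits[random.randrange(len(bits))] for _ in range(samples))
    return hits / samples
```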
Other settings where algorithms can run in sublinear time include:
  • Parallel algorithms that have linear or greater total work, but sub-linear depth.
  • Algorithms that have guaranteed assumptions on the input structure. Important examples are operations on data structures, e.g. binary search in a sorted array.
  • Algorithms that search for local structure in the input, for example finding a local minimum in a 1-D array (a sketch follows this list). A closely related notion is that of Local Computation Algorithms, where the algorithm receives a large input and answers queries about local information of some valid large output.
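Here is a sketch of the local-minimum example (assuming a boundary element counts as a local minimum when it is no larger than its single neighbor): the search always descends toward a smaller neighbor, halving the range each step, so it runs in O(log n) time.

```python
def local_minimum_index(a):
    # Returns the index of some element that is no larger than its
    # neighbors. Invariant: the range [low, high] always contains a
    # local minimum, and it halves on every iteration.
    low, high = 0, len(a) - 1
    while low < high:
        mid = (low + high) // 2
        if a[mid] > a[mid + 1]:
            low = mid + 1   # a smaller neighbor lies to the right
        else:
            high = mid      # a[mid] <= a[mid + 1]: answer in [low, mid]
    return low
```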

Linear time

An algorithm is said to take linear time, or O(n) time, if its time complexity is O(n). Informally, this means that the running time increases at most linearly with the size of the input. More precisely, this means that there is a constant c such that the running time is at most cn for every input of size n. For example, a procedure that adds up all elements of a list requires time proportional to the length of the list, if the adding time is constant, or, at least, bounded by a constant.
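The list-summing example as a sketch (equivalent to Python's built-in sum, written out to make the one-addition-per-element structure explicit):

```python
def total(values):
    # Linear time: one addition per element, so the running time is
    # at most c*n for some constant c and every list of length n.
    s = 0
    for v in values:
        s += v
    return s
```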
Linear time is the best possible time complexity in situations where the algorithm has to sequentially read its entire input. Therefore, much research has been invested into discovering algorithms exhibiting linear time or, at least, nearly linear time. This research includes both software and hardware methods. There are several hardware technologies which exploit parallelism to provide this. An example is content-addressable memory. This concept of linear time is used in string matching algorithms such as the Boyer–Moore string-search algorithm and Ukkonen's algorithm.