Big O notation

Big O notation is a way of describing the performance of an algorithm: it expresses how the running time grows as the size of the input grows. Because it abstracts away constant factors and hardware details, it is often used to compare different algorithms, since it lets you compare the number of steps each one takes to solve a problem of a given size.

In general, the complexity of an algorithm is described using "big O" notation, which gives an upper bound on how the number of steps grows with the size of the input. For example, an algorithm with a complexity of O(n) takes at most a number of steps proportional to n for sufficiently large inputs.
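
As a rough illustration, here is a minimal Python sketch of a linear-time function (the function name and the example input are chosen for this illustration, not taken from any particular library). The loop body runs once per element, so the number of steps grows in direct proportion to the input size n, which is exactly what O(n) captures.

```python
def sum_list(values):
    """Sum a list of numbers.

    The loop body runs once per element, so the number of steps grows
    linearly with len(values): this is an O(n) algorithm.
    """
    total = 0
    for v in values:  # executed n times for an input of size n
        total += v
    return total


print(sum_list([1, 2, 3, 4]))  # 10
```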

There are several common complexities that you might see when analysing algorithms, including the ones below; short Python sketches illustrating them follow the list:

  • O(1): Constant time. This means that the algorithm always takes the same amount of time, no matter how large the input is.

  • O(log n): Logarithmic time. This means that the algorithm's running time increases logarithmically with the size of the input.

  • O(n): Linear time. This means that the algorithm's running time increases linearly with the size of the input.

  • O(n log n): Linearithmic (linear logarithmic) time. This means that the algorithm's running time grows faster than linear time, but much more slowly than quadratic time; efficient comparison-based sorting algorithms such as merge sort run in this time.

  • O(n^2): Quadratic time. This means that the algorithm's running time increases with the square of the size of the input.

  • O(n^3): Cubic time. This means that the algorithm's running time increases with the cube of the size of the input.

  • O(2^n): Exponential time. This means that the algorithm's running time increases exponentially with the size of the input.

  • O(n!): Factorial time. This means that the algorithm's running time increases with the factorial of the size of the input. Algorithms in this class become so slow as the input grows that they are usually not practical for real-world use.
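
To make the fastest classes concrete, here is a hedged Python sketch with one illustrative function per class; the function names are invented for this example. Indexing a list models constant time, binary search models logarithmic time, and a single pass over a list models linear time.

```python
def get_first(items):
    """O(1): indexing a list takes the same time regardless of its length (assumes a non-empty list)."""
    return items[0]


def binary_search(sorted_items, target):
    """O(log n): each comparison halves the remaining search range."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # not found


def find_max(items):
    """O(n): every element is examined exactly once (assumes a non-empty list)."""
    best = items[0]
    for item in items[1:]:
        if item > best:
            best = item
    return best
```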
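
The middle classes can be sketched in the same spirit. Merge sort is a standard example of an O(n log n) algorithm, while the nested-loop functions below (again, illustrative names) show how each extra level of nesting multiplies the running time by another factor of n.

```python
def merge_sort(items):
    """O(n log n): the list is halved about log n times, and each level merges n elements."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]


def has_duplicate(items):
    """O(n^2): the nested loops compare every pair of elements in the worst case."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def count_triples_summing_to_zero(nums):
    """O(n^3): three nested loops examine every triple of elements."""
    count = 0
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            for k in range(j + 1, len(nums)):
                if nums[i] + nums[j] + nums[k] == 0:
                    count += 1
    return count
```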
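
Finally, the two slowest classes can be illustrated with a naive recursive Fibonacci function, a textbook example of exponential-time behaviour, and a brute-force travelling-salesman check that tries every ordering of the cities (both functions are sketches for this illustration); each is only practical for very small inputs.

```python
from itertools import permutations


def fib(n):
    """O(2^n): each call spawns two more calls, so the work roughly doubles per level of recursion."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)


def shortest_tour_length(distances):
    """O(n!): tries every ordering of the n cities (brute-force travelling salesman).

    `distances` is an n x n matrix of pairwise distances.
    """
    cities = range(len(distances))
    best = float("inf")
    for order in permutations(cities):
        total = sum(distances[order[i]][order[i + 1]] for i in range(len(order) - 1))
        best = min(best, total)
    return best
```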
