### The Big O Notation - The Coding Shala


In this post, we will learn what the Big O notation is and where we use it.

## The Big O Notation

We use the Big O notation to classify algorithms according to how their running time or space (e.g. in memory or on disk) requirements grow as the input size grows.

It describes the upper bound of the growth rate of a function and could be thought of as the worst-case scenario.

It is most useful for describing an algorithm's behaviour on large inputs; it says nothing about how well the algorithm performs on small inputs.

## Common notations used in Big O

Constant time: O(1). The number of required operations is not dependent on the input size. Examples: accessing an element of an array by its index, calculating the sum of arithmetic progression using the formula, printing a single value.
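The examples above can be sketched in Python; each of these functions runs in constant time no matter how large its input is:

```python
# A minimal sketch of O(1) operations: the work done does not
# depend on the size of the input.
def first_element(items):
    # Index access into a list takes the same time regardless of len(items).
    return items[0]

def arithmetic_series_sum(n):
    # Sum 1 + 2 + ... + n using the closed-form formula: no loop needed.
    return n * (n + 1) // 2

print(first_element([7, 3, 9]))    # 7
print(arithmetic_series_sum(100))  # 5050
```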

Logarithmic time: O(log n). The number of required operations is proportional to the logarithm of the input size. The base of the logarithm can be any; that's why it's not written. Example: the binary search in a sorted array.
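A minimal sketch of binary search in Python; each iteration halves the search range, so at most O(log n) iterations are needed:

```python
def binary_search(sorted_items, target):
    # Works only on a sorted list: each comparison discards half
    # of the remaining search range.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```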

Square-root time: O(sqrt(n)). The number of required operations is proportional to the square root of the input size.
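A common example is trial-division primality testing: checking divisors only up to sqrt(n) is enough, because any factor larger than sqrt(n) pairs with one smaller than sqrt(n). A minimal sketch:

```python
def is_prime(n):
    # Try divisors i while i * i <= n, i.e. up to sqrt(n),
    # so the loop runs O(sqrt(n)) times.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

print(is_prime(97))  # True
print(is_prime(91))  # False (91 = 7 * 13)
```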

Linear time: O(n). The number of required operations is proportional to the input size, i.e. time grows linearly as the input size increases. Often, such algorithms iterate over a collection once. Examples: sequential search, finding the max/min item in an array.
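Finding the maximum item is a typical single-pass example; a minimal sketch:

```python
def find_max(items):
    # One pass over the collection: the number of comparisons
    # grows linearly with len(items).
    largest = items[0]
    for value in items[1:]:
        if value > largest:
            largest = value
    return largest

print(find_max([4, 11, 2, 8]))  # 11
```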

Log-linear time: O(n log n). The running time grows in proportion to n log n of the input size. As a rule, the base of the logarithm does not matter; thus, we skip it. Example: efficient comparison sorts such as merge sort.
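Merge sort is the classic O(n log n) example: the list is halved log n times, and each level of recursion merges all n elements. A minimal sketch:

```python
def merge_sort(items):
    # Split the list in half, sort each half recursively,
    # then merge the two sorted halves.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge step: O(n) work per recursion level.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```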

Quadratic time: O(n^2). The number of required operations is proportional to the square of the input size. Examples: simple sorting algorithms such as bubble sort, selection sort, insertion sort.
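Bubble sort shows where the quadratic cost comes from: two nested loops over the input give roughly n * n comparisons. A minimal sketch:

```python
def bubble_sort(items):
    # Nested loops: the outer runs n times, the inner up to n - 1
    # times, so the total number of comparisons is O(n^2).
    items = list(items)  # work on a copy, leave the input unchanged
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                # Swap adjacent out-of-order elements.
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```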

Exponential time: O(2^n). The number of required operations grows exponentially with the input size, so running time increases very quickly. Example: generating all subsets of a set.
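Subset generation makes the exponential growth concrete: each element is either in or out of a subset, so a set of n elements has 2^n subsets. A minimal sketch:

```python
def subsets(items):
    # For each element, every existing subset is kept once without it
    # and once with it, doubling the count: 2^n subsets in total.
    if not items:
        return [[]]
    rest = subsets(items[1:])
    return rest + [[items[0]] + s for s in rest]

print(len(subsets([1, 2, 3])))  # 8, i.e. 2^3
```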
