# Analysis of Algorithms | Big-O analysis

In our previous articles on Analysis of Algorithms, we briefly discussed asymptotic notations and their worst- and best-case performance. In this article, we discuss the analysis of algorithms using the Big-O asymptotic notation in detail.

**Big-O Analysis of Algorithms**

The Big-O notation defines an upper bound of an algorithm: it bounds a function only from above. For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case, so we can safely say that the time complexity of Insertion Sort is O(n^2). Note that O(n^2) also covers linear time.
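
As a quick illustration (a minimal sketch, not from the original article), the Python snippet below counts the comparisons Insertion Sort makes on an already-sorted input versus a reversed one, showing the linear best case and the quadratic worst case that the O(n^2) bound covers:

```python
def insertion_sort(arr):
    """Sort a list in place, returning the number of comparisons made."""
    comparisons = 0
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger elements one slot to the right until key fits.
        while j >= 0:
            comparisons += 1
            if arr[j] > key:
                arr[j + 1] = arr[j]
                j -= 1
            else:
                break
        arr[j + 1] = key
    return comparisons

n = 100
best = insertion_sort(list(range(n)))           # already sorted: ~n comparisons
worst = insertion_sort(list(range(n, 0, -1)))   # reversed: ~n^2/2 comparisons
```

For n = 100 the sorted input needs n − 1 = 99 comparisons, while the reversed input needs n(n − 1)/2 = 4950; both counts are below c·n^2 for a suitable constant c, which is exactly what O(n^2) asserts.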

The Big-O Asymptotic Notation gives us the Upper Bound Idea, mathematically described below:

f(n) = O(g(n)) if there exist a positive integer n₀ and a positive constant c such that f(n) ≤ c·g(n) for all n ≥ n₀.

The general step-wise procedure for Big-O runtime analysis is as follows:

- Figure out what the input is and what n represents.
- Express the maximum number of operations the algorithm performs in terms of n.
- Eliminate all but the highest-order term.
- Remove all constant factors.
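
The steps above can be sketched in code. In this hypothetical example, a function performs one single loop and one nested loop; counting operations in terms of n gives n^2 + n, and dropping the lower-order term and constant factors yields O(n^2):

```python
def count_ops(n):
    """Count the basic operations performed for an input of size n."""
    ops = 0
    for i in range(n):        # single loop: n operations
        ops += 1
    for i in range(n):        # nested loop: n^2 operations
        for j in range(n):
            ops += 1
    return ops                # exactly n^2 + n

# Step 2: maximum operations in terms of n  -> n^2 + n
# Step 3: eliminate all but the highest-order term -> n^2
# Step 4: remove constant factors -> O(n^2)
assert count_ops(100) == 100**2 + 100
```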

Some useful properties of Big-O notation analysis are as follows:

▪ Constant Multiplication:

If f(n) = c.g(n), then O(f(n)) = O(g(n)) ; where c is a nonzero constant.

▪ Polynomial Function:

If f(n) = a_0 + a_1·n + a_2·n^2 + … + a_m·n^m, then O(f(n)) = O(n^m).

▪ Summation Function:

If f(n) = f_1(n) + f_2(n) + … + f_m(n) and f_i(n) ≤ f_{i+1}(n) for all i = 1, 2, …, m−1,

then O(f(n)) = O(max(f_1(n), f_2(n), …, f_m(n))).

▪ Logarithmic Function:

If f(n) = log_a n and g(n) = log_b n, then O(f(n)) = O(g(n)); all log functions grow in the same manner in terms of Big-O.
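
A quick numerical check of this property, using the change-of-base rule log_a n = log_b n / log_b a (a small sketch with Python's standard math module):

```python
import math

# For any n, log_2(n) / log_10(n) equals the constant log_2(10),
# so the two log functions differ only by a constant factor --
# which Big-O discards.
ratios = [math.log(n, 2) / math.log(n, 10) for n in (10, 100, 10**6)]
```

Every entry of `ratios` is the same constant (≈ 3.32), confirming that O(log_2 n) = O(log_10 n).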

Basically, this asymptotic notation is used to measure and compare the worst-case scenarios of algorithms theoretically. For any algorithm, the Big-O analysis should be straightforward as long as we correctly identify the operations that are dependent on n, the input size.

**Runtime Analysis of Algorithms**

In general, we measure and compare the worst-case theoretical running-time complexities of algorithms for performance analysis.

The fastest possible running time for any algorithm is O(1), commonly referred to as *Constant Running Time*. In this case, the algorithm always takes the same amount of time to execute, regardless of the input size. This is the ideal runtime for an algorithm, but it’s rarely achievable.

In practice, the performance (runtime) of an algorithm depends on n, the size of the input, and the number of operations required for each input item.

Algorithms can be classified as follows, from best to worst performance (running-time complexity):

▪ A logarithmic algorithm – O(logn)

Runtime grows logarithmically in proportion to n.

▪ A linear algorithm – O(n)

Runtime grows directly in proportion to n.

▪ A superlinear algorithm – O(nlogn)

Runtime grows in proportion to n log n.

▪ A polynomial algorithm – O(n^{c})

Runtime grows faster than all of the above as n increases.

▪ An exponential algorithm – O(c^n)

Runtime grows even faster than a polynomial algorithm as n increases.

▪ A factorial algorithm – O(n!)

Runtime grows the fastest and quickly becomes unusable for even small values of n.

Here, n is the input size and c is a positive constant.

**Algorithmic Examples of Runtime Analysis**:

Some of the examples of all those types of algorithms (in worst-case scenarios) are mentioned below:

▪ Logarithmic algorithm – O(logn) – Binary Search.

▪ Linear algorithm – O(n) – Linear Search.

▪ Superlinear algorithm – O(nlogn) – Heap Sort, Merge Sort.

▪ Polynomial algorithm – O(n^c) – Strassen’s Matrix Multiplication, Bubble Sort, Selection Sort, Insertion Sort, Bucket Sort.

▪ Exponential algorithm – O(c^n) – Tower of Hanoi.

▪ Factorial algorithm – O(n!) – Determinant Expansion by Minors, Brute force Search algorithm for Traveling Salesman Problem.
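
As a concrete instance of the O(log n) class listed above, here is a standard iterative Binary Search, which halves the remaining search range on every step and therefore needs at most about log₂ n iterations:

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # midpoint of the remaining range
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1          # discard the lower half
        else:
            hi = mid - 1          # discard the upper half
    return -1
```

On a sorted list of a million elements this loop runs at most about 20 times, versus up to a million probes for the O(n) Linear Search.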

**Mathematical Examples of Runtime Analysis**:

The performances (Runtimes) of different orders of algorithms separate rapidly as n (the input size) gets larger. Let’s consider the mathematical example:

Using base-2 logarithms:

| f(n)    | n = 10    | n = 20         |
|---------|-----------|----------------|
| log n   | 3.32      | 4.32           |
| n       | 10        | 20             |
| n log n | 33.2      | 86.4           |
| n^2     | 100       | 400            |
| 2^n     | 1,024     | 1,048,576      |
| n!      | 3,628,800 | ≈ 2.43 × 10^18 |
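
The same figures can be reproduced programmatically (a small sketch using Python's math module, with base-2 logarithms):

```python
import math

def growth_row(n):
    """Evaluate the common growth functions at a given input size n."""
    return {
        "log2(n)": round(math.log2(n), 2),
        "n": n,
        "n*log2(n)": round(n * math.log2(n), 1),
        "n^2": n ** 2,
        "2^n": 2 ** n,
        "n!": math.factorial(n),
    }

for n in (10, 20):
    print(n, growth_row(n))
```

Doubling n from 10 to 20 barely moves log n, quadruples n^2, multiplies 2^n by about a thousand, and pushes n! from millions to quintillions.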

**Memory Footprint Analysis of Algorithms**

For performance analysis of an algorithm, runtime is not the only relevant metric; we also need to consider how much memory the program uses. This is referred to as the memory footprint of the algorithm, also known as space complexity.

Here too, we measure and compare the worst-case theoretical space complexities of algorithms for performance analysis.

It basically depends on two major aspects described below:

- Firstly, the implementation of the program is responsible for memory usage. For example, a recursive implementation typically reserves more memory (for the call stack) than the corresponding iterative implementation of the same problem.
- The other is n, the input size, together with the amount of storage required for each item. For example, a simple algorithm run on a large input can consume more memory than a complex algorithm run on a small input.
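
The first point can be illustrated with a small sketch: summing 1..n recursively needs one call-stack frame per level (O(n) extra memory, and it can hit the interpreter's recursion limit), while the iterative version needs only a single accumulator (O(1) extra memory):

```python
def sum_recursive(n):
    # Each call adds a stack frame: O(n) extra memory, and large n
    # can exceed Python's default recursion limit.
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)

def sum_iterative(n):
    # A single loop variable: O(1) extra memory.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total
```

Both compute the same result; only their memory footprints differ.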

Algorithmic Examples of Memory Footprint Analysis: the algorithms below are classified from best to worst performance (space complexity), based on worst-case scenarios:

▪ Ideal algorithm – O(1) – Linear Search, Binary Search, Bubble Sort, Selection Sort, Insertion Sort, Heap Sort, Shell Sort.

▪ Logarithmic algorithm – O(log n) – Quick Sort (average-case recursion stack).

▪ Linear algorithm – O(n) – Merge Sort.

▪ Linear algorithm – O(n + k) – Radix Sort, where k is the range of key values.

**Space-Time Tradeoff and Efficiency**

There is usually a trade-off between optimal memory use and runtime performance.

In general, space efficiency and time efficiency sit at two opposite ends of a spectrum, and each point between them trades some of one for the other. Often, the more time-efficient an algorithm is, the less space-efficient it is, and vice versa.

For example, Merge Sort is exceedingly fast but requires a lot of auxiliary space for its merge operations. On the other hand, Bubble Sort is exceedingly slow but sorts in place with minimal extra space.
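
A minimal sketch of that trade-off: this textbook Merge Sort allocates O(n) auxiliary lists while merging, whereas Bubble Sort swaps elements in place with O(1) extra memory:

```python
def merge_sort(arr):
    """O(n log n) time, but the merge step allocates O(n) auxiliary lists."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    # Merge the two sorted halves into a freshly allocated list.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def bubble_sort(arr):
    """O(n^2) time, but sorts in place with O(1) extra memory."""
    n = len(arr)
    for i in range(n):
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr
```

Both produce the same sorted output; they differ only in how they spend time versus memory.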

To conclude, finding an algorithm that runs in less time while also requiring less memory can make a huge difference in how well it performs.

