
Analyzing Complexity of Code through Python

Get introduced to asymptotic analysis. Learn about the complexity of algorithms and asymptotic notation, such as Big O, Big θ, and Big Ω notation, along with examples of the complexity of different algorithms.
Aug 2018  · 11 min read

The complexity of an algorithm is a measure of the amount of time and/or space required by an algorithm for an input of a given size (n). The complexity of an algorithm depends on specific factors such as:

  • The architecture of the computer, i.e., the hardware platform
  • The representation of the Abstract Data Type (ADT)
  • Compiler efficiency
  • The complexity of the underlying algorithm
  • The size of the input

As you will see, though, the most significant factors are the complexity of the underlying algorithm and the size of the input.

DataCamp's blog post Python Data Structures Tutorial gives a basic overview of data structures and their implementation in Python. In it, you'll learn about Abstract Data Types and Data Structures, as well as primitive and non-primitive data structures.

Asymptotic Analysis

Asymptotic analysis refers to computing the running time of any piece of code or operation in terms of mathematical units of computation, expressed as a function such as f(n). In mathematical analysis, asymptotic analysis, also known as asymptotics, is a method of describing limiting behavior.

The time required by an algorithm falls under three types:

  • Worst case - the maximum time required by an algorithm; this is the case most often used when analyzing an algorithm.
  • Best case - the minimum time required by an algorithm or piece of code; it is not normally calculated when analyzing an algorithm.
  • Average case - the average time required by an algorithm or portion of code; it is sometimes computed when analyzing an algorithm.

Asymptotic notation

The commonly used notation for calculating the running time complexity of the algorithm is as follows:

  • Big O notation
  • Big θ notation
  • Big Ω notation

Big Oh Notation, Ο

Big O is used to measure the performance or complexity of an algorithm. In more mathematical terms, it is the upper bound of the growth rate of a function: if a function g(x) grows no faster than a function f(x), then g is said to be a member of O(f). In general, it is used to express the upper bound of an algorithm, which gives a measure of the worst-case time complexity, or the longest time an algorithm could possibly take to complete.
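As a small illustrative check (not part of the original tutorial), you can verify this definition numerically: g(n) = 3n + 5 grows no faster than n, because it is bounded by c*n with c = 4 once n ≥ 5, so g is a member of O(n).

```python
# Numerical check of the Big O definition: g(n) = 3n + 5 is in O(n)
# because g(n) <= c*n for every n >= n0 (here c = 4 and n0 = 5).

def g(n):
    return 3 * n + 5

c, n0 = 4, 5
print(all(g(n) <= c * n for n in range(n0, 10000)))   # True
```

The constants c and n0 are not unique; any pair that makes the bound hold for all large n witnesses membership in O(n).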

Big Omega Notation, Ω

The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best case time complexity or the best amount of time an algorithm can possibly take to complete.

Big Theta Notation, θ

The notation θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time.

Notation used to determine the complexity of various algorithms

Big O notation is the most widely used; it is applied almost all the time to find the upper bound of an algorithm. The Big θ notation is sometimes used to describe the average case, and the Ω notation is the least used of the three.

Next, you will see examples of these notations used to determine the complexity of particular algorithms.

First, consider Quick sort:

Quick sort is a Divide and Conquer algorithm used for sorting. It is a systematic method of placing elements in order, for example arranging the numbers in an array in ascending or descending order. The algorithm picks a pivot element from the given array, and the pivot can be selected in different ways. In the example implemented below, the last element is selected as the pivot.

The main crux of Quick sort is the partition step. A partition element (the pivot) is chosen from the array and moved to its correct position; elements greater than the pivot are placed to its right, and smaller elements to its left.

#The last element will be taken as a pivot by the use of the function
#The smaller element is placed left to the pivot
#The greater element is placed to the right of the pivot
def partition(array, low, high):
    i = low - 1             # index of smaller element
    pivot = array[high]     # pivot is chosen as the last element

    for j in range(low, high):

        # Is the element less than or equal to the pivot?
        if array[j] <= pivot:

            # increment index of smaller element
            i = i + 1
            array[i], array[j] = array[j], array[i]

    array[i+1], array[high] = array[high], array[i+1]
    return i + 1

# Quick sort driver:
# array[] is the array to be sorted
# low is the starting index
# high is the ending index

# Function to do Quick sort
def quickSort(array, low, high):
    if low < high:

        # pit is the partitioning index
        pit = partition(array, low, high)

        # Elements are sorted before and after the partition
        quickSort(array, low, pit-1)
        quickSort(array, pit+1, high)

array = [10, 2, 12, 6, 8, 4]
n = len(array)
quickSort(array, 0, n-1)
print("The Sorted array is:")
for i in range(n):
    print("%d" % array[i], end=" ")

You will get the output:

The Sorted array is: 2 4 6 8 10 12

Now it is time to analyze the time complexity. For Quick sort:

  • The best case is: Ω(n log(n))
  • The average case is: Θ(n log(n))
  • The worst case is: O(n^2)

Now, let's analyze the above code.

Best Case: The best case occurs when the partition always picks the middle element as the pivot, so the algorithm recurses on two halves of equal size. The total number of steps needed is then the number of times you must divide n by 2 to reach 1: n/2/2/2/.../2 = 1, with k divisions. Note that this equation is really n / 2^k = 1, and since 2^(log n) = n, we get k = log n. So the number of levels of recursion the algorithm requires is O(log n), which makes the algorithm O(n log n), since each level does O(n) work.
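A quick sketch (not part of the original code) confirms that repeatedly halving n takes about log2(n) steps:

```python
def halvings(n):
    """Count how many times n can be halved before reaching 1."""
    k = 0
    while n > 1:
        n //= 2
        k += 1
    return k

print(halvings(1024))      # 10, and log2(1024) is exactly 10
print(halvings(1000000))   # 19, close to log2(1000000) ~ 19.93
```

This count of halvings is exactly the recursion depth of the best-case Quick sort, which is why the overall cost is n times log n.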

Average case: To analyze the average case, you need to consider all permutations of the array and calculate the time taken by each permutation. The average case also works out to O(n log(n)); for a comparable analysis, you can read more about Merge sort.

Worst case: The worst case occurs when the input to be sorted is already in increasing or decreasing order, so that (with the pivot chosen as the first or last element) one partition has size 1 after partitioning and the other has size n-1. Let T(n) be the time to quick sort n elements. Then:

Time to quick sort n elements, T(n) = time to partition n elements, O(n), + time to quick sort the remaining n-1 elements, T(n-1)

So, T(n) = T(n-1) + O(n) => T(n) = O(n^2)
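To see the quadratic blow-up concretely, you can count the comparisons a last-element-pivot Quick sort makes on an already sorted array (a small instrumented sketch, separate from the code above):

```python
comparisons = 0

def partition(array, low, high):
    global comparisons
    i = low - 1
    pivot = array[high]
    for j in range(low, high):
        comparisons += 1          # one comparison per loop iteration
        if array[j] <= pivot:
            i += 1
            array[i], array[j] = array[j], array[i]
    array[i+1], array[high] = array[high], array[i+1]
    return i + 1

def quick_sort(array, low, high):
    if low < high:
        pit = partition(array, low, high)
        quick_sort(array, low, pit-1)
        quick_sort(array, pit+1, high)

n = 100
data = list(range(n))             # already sorted: the worst case
quick_sort(data, 0, n-1)
print(comparisons, n*(n-1)//2)    # both 4950: about n^2/2 comparisons
```

Each partition peels off only one element, so the comparison count is (n-1) + (n-2) + ... + 1 = n(n-1)/2, which is O(n^2).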


The code below may not be fancy, and you may not want to call it an algorithm, but you can: any code that does anything at all is technically an algorithm, a way of solving a particular problem. The example below is just a single print statement.

print('I love Python')

I love Python

The time complexity of the above algorithm is O(1) because it always takes one step; it runs in constant time.

stuffs = ['eggs', 'toothbrush', 'kittens', 'mugs']
for stuff in stuffs:
    print("Here's a stuff: {}".format(stuff))

Here's a stuff: eggs
Here's a stuff: toothbrush
Here's a stuff: kittens
Here's a stuff: mugs

How would you describe the efficiency of the above algorithm in Big O notation?

To analyze the above algorithm, you need to consider how many steps it takes. In this case, there are four items in the list, and each is printed once. But what if the list had more than 4 items, say 15? Would the for loop take the same number of steps? No: the for loop takes as many steps as there are elements, so the algorithm has an efficiency of O(N) rather than O(1).
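You can make the step count explicit with a small counter (an illustrative sketch, not from the original tutorial):

```python
def count_steps(items):
    """Count how many loop iterations a simple for loop performs."""
    steps = 0
    for _ in items:
        steps += 1      # one step per element
    return steps

for size in (4, 15, 100):
    print(size, count_steps(list(range(size))))   # steps grow linearly
```

The step count always equals the number of elements, which is the defining property of an O(N) loop.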

The next example is a simple Python-based algorithm for determining whether a number is prime:

def is_prime(number):
    for i in range(2, number):
        # If number is divisible by i, it is not prime
        if number % i == 0:
            return False
    return True

The preceding code accepts a number as an argument and begins a for loop in which you divide the number by every integer from 2 up to that number and check whether there's a remainder. If there's no remainder, you know that the number is not prime, and you immediately return False. If you make it all the way up to the number and always find a remainder, then you know that the number is prime, and you return True.

You can determine the efficiency of the above algorithm to be O(N). This example does not take its data in the form of an array or list; instead, the number is passed in as an argument. If you pass some arbitrary number like 11, the for loop runs for about eleven steps. (It actually runs nine steps, since the loop starts at two and ends right before the number itself.) For the number 101, the loop runs about 101 steps. Since the number of steps increases in lockstep with the number passed into the function, this is a classic example of O(N).
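You can verify the step count with an instrumented variant (is_prime_counted is an illustrative helper, not part of the tutorial):

```python
def is_prime_counted(number):
    """Like is_prime, but also reports how many loop iterations ran."""
    steps = 0
    for i in range(2, number):
        steps += 1
        if number % i == 0:
            return False, steps
    return True, steps

print(is_prime_counted(11))    # (True, 9): a prime checks every candidate
print(is_prime_counted(101))   # (True, 99): steps grow linearly with n
```

Primes are the worst case here; composite numbers often exit early, but Big O describes the upper bound, which stays O(N).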

def twoForLoops(n):
    for i in range(1, n):       # runs n-1 times: O(n)
        print(i)
    for i in range(1, 100):     # runs a constant 99 times: O(1)
        print(i)

In the above code, the complexity of the algorithm is O(N). The second loop runs a constant 100 times, which can be ignored, because you express complexity by assuming N to be very large.

def twoConditionalLoops(m, n):
    for i in range(0, m):       # runs m times
        print(i)
    for i in range(0, n):       # runs n times
        print(i)

There are two loops, where the length of one loop is m and the length of the other is n. Again, assume m and n to be large; the complexity of the operation is then O(n+m). Since the loops are separate and take different inputs, the complexities are additive.
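A small step-counting sketch (the function name and counts are illustrative, not from the tutorial) shows the additive behavior:

```python
def sequential_steps(m, n):
    """Count iterations of two back-to-back (non-nested) loops."""
    steps = 0
    for _ in range(m):
        steps += 1
    for _ in range(n):
        steps += 1
    return steps

print(sequential_steps(300, 500))   # 800 = m + n: additive, O(m + n)
```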

def twoNestedForLoops(m, n):
    for i in range(0, n):
        for j in range(0, m):   # inner loop runs m times for each of the n outer iterations
            print(i, j)

There is a nested for loop; again, assume n and m to be large, and the complexity of the operation is O(n*m). Since one loop is nested inside the other, the complexities are multiplicative.
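The same counting trick (an illustrative sketch, not from the tutorial) shows the multiplicative behavior of nesting:

```python
def nested_steps(m, n):
    """Count iterations of a loop nested inside another loop."""
    steps = 0
    for _ in range(n):
        for _ in range(m):
            steps += 1
    return steps

print(nested_steps(30, 50))   # 1500 = n * m: multiplicative, O(n * m)
```

Compare this with the sequential case: nesting multiplies the loop lengths, while placing loops one after another only adds them.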


You have made it to the end of this tutorial! Along the way, you learned about asymptotic notation, a basic tool used by programmers and data scientists. You have just learned a simple way of analyzing complexity, explained in plain English without heavy mathematical rigor. Data structures and algorithms are often read and taught only by computer science undergraduates or those in related fields, but it is equally valuable for everyone to have at least a working knowledge of the topic, without needing to be an expert. To take a deeper dive into the subject, you can refer to the MIT OpenCourseWare algorithms course.

If you would like to learn more about Python, check out DataCamp's Introduction to Python course.
