Big O Notation is a metric for determining an algorithm's efficiency. Put simply, it gives an estimate of how long it takes your code to run on different sets of inputs. You can also see it as a way to measure how effectively your code scales as your input size increases. This Big O Notation cheat sheet is here to make these concepts easier for you.
Now, time complexity and space complexity are two specific techniques to measure efficiency.
A function's time complexity measures how long it takes to execute in terms of computational steps. The space complexity of a function measures the amount of memory your code uses.
For a quick refresher on everything around Big O notation, keep reading this cheat sheet!
BASIC LEVEL - Big O Notation Cheat Sheet
What is Big O?
Big O is also known as the algorithm's upper bound since it analyzes the worst-case situation.
The best-case scenario usually tells us nothing: we might solve the problem on the first try. That's why we use the worst case to get meaningful information; it tells us that the algorithm will always perform equal to or better than the worst-case scenario.
Now, the algorithm & data structure you employ while programming code is critical. Big O notation makes it easier to compare the performance of different algorithms and figure out which one is best for your code.
In computer science, Big O Notation is a mathematical notation used to describe the difficulty of an algorithm. It characterizes the time an algorithm takes to execute as a function of the input size, and it helps you determine how your algorithm's performance will change as the input grows.
How do you measure the efficiency of an algorithm?
Efficiency is measured in two ways: time complexity and space complexity.
A function's time complexity measures how long it takes to execute in terms of computational steps. The space complexity of a function is determined by the amount of memory it uses.
The big O notation, O(g(n)), is a collection of functions. A function f(n) is a member of that collection only if there exist positive constants c and n₀ such that:
0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀
So, when an algorithm performs a computation on each item in an array of size n, it takes O(n) time and performs O(1) work on each item.
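For instance, f(n) = 3n + 2 belongs to O(n): choosing c = 4 and n₀ = 2 satisfies the inequality for every n ≥ 2. Here is a minimal sketch that checks this numerically (the function name is just for illustration):

def f(n):
    return 3 * n + 2

# f(n) = 3n + 2 is in O(n): with c = 4 and n0 = 2,
# 0 <= f(n) <= 4n holds for every n >= 2.
for n in range(2, 10001):
    assert 0 <= f(n) <= 4 * n
print("0 <= 3n + 2 <= 4n holds for every tested n >= 2")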
But why do we need Big O?
The world we live in today consists of complicated apps and software, each running on various devices and each having different capabilities. Some devices like desktops can run heavy machine learning software, but others like phones can only run apps. So when you create an application, you’ll need to optimize your code so that it runs smoothly across devices to give you an edge over your competitors.
As a result, programmers should inspect and evaluate their code thoroughly. This Big O Notation cheat sheet (a time complexity cheat sheet across data structures and algorithms) will help you understand the range of complexities you'll encounter.
What is Time Complexity?
Time complexity (also called computational or temporal complexity) describes the amount of time necessary to execute an algorithm. It is not a measure of the actual wall-clock time taken to run an algorithm; instead, it measures how the running time scales as the input length changes. As a result, the size of the processed data has a significant impact. Time complexity also helps define an algorithm's usefulness and evaluate its performance.
What is Space Complexity?
The overall amount of memory or space utilized by an algorithm/program, including the space of input values for execution, is called space complexity. To determine space complexity, simply compute how much space the variables in an algorithm or program take up.
People usually confuse auxiliary space with space complexity. Auxiliary space is not the equivalent of space complexity, but it’s a part of it. Auxiliary space is just the temporary or extra space, whereas space complexity also includes space used by input values.
Put simply,
Space Complexity = Auxiliary space + Space used by input values.
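A minimal sketch to make the distinction concrete (the function name is illustrative): the input list takes O(n) space, and the function allocates a second list of n elements as auxiliary space, so the total space complexity is O(n).

def squares(lst):
    # The input lst already occupies O(n) space.
    result = []               # auxiliary space: grows to n elements, O(n)
    for x in lst:
        result.append(x * x)
    return result
# Space complexity = O(n) input + O(n) auxiliary = O(n) overall.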
The best algorithms/programs keep space complexity to a minimum; all else being equal, using less memory generally helps code execute faster.
In practice, space and time complexities depend on various factors, such as the underlying hardware, operating system, and processor. But to keep things simple, we typically don't consider these factors when analyzing an algorithm's performance.
Following are the key time and space complexities:
- Constant time: O(1)
- Logarithmic time: O(log n)
- Linear time: O(n)
- Linearithmic time: O(n log n)
- Quadratic time: O(n^2)
- Exponential time: O(2^n)
- Factorial time: O(n!)
INTERMEDIATE LEVEL - Big O Notation Cheat Sheet
The Big O chart
Big O is an asymptotic notation that lets you express the performance or complexity of an algorithm based on its input. It assists programmers in understanding the worst-case situation, as well as the execution time or memory requirements of an algorithm. Plotted on a chart, these complexity classes range from the flat line of O(1) to the steep curves of O(2^n) and O(n!).
When writing Big O notation, we look for the fastest-growing term as the input grows larger and larger. We can simplify the equation by removing any non-dominant terms and constants.
So,
- O(2n) simplifies to O(n), and
- O(n^2 + n + 1000) simplifies to O(n^2)
The following is a detailed explanation of different types of complexities with examples:
Constant Time: O(1)
When there is no dependence on the input size n, an algorithm is said to have a constant time of order O(1).
Example
def example_function(lst):
    print("First element of list: ", lst[0])
The function above requires only one execution step whether the list contains 1, 100, or 1,000 elements. As a result, the function runs in constant time, with time complexity O(1).
Linear Time: O(n)
Linear time is achieved when the running time of an algorithm increases linearly with the length of the input. This means that when a function loops over an input of size n, it is said to have a time complexity of order O(n).
Example
def example_function(lst, size):
    for i in range(size):
        print("Element at index", i, " has value: ", lst[i])
The above function will take O(n) time (or "linear time") to complete, where n is the number of entries in the array. The function will print 10 times if the given array has 10 entries, and 100 times if the array has 100 entries.
Note: Even if you iterate over half the array, the runtime still depends on the input size, so it will be considered O(n).
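For instance, here is a sketch of that half-iteration case (illustrative):

def print_first_half(lst, size):
    # Loops size // 2 times: n/2 steps is still O(n) once the constant is dropped.
    for i in range(size // 2):
        print("Element at index", i, " has value: ", lst[i])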
Logarithm Time: O(log n)
When the size of the input data decreases in each step by a certain factor, an algorithm will have logarithmic time complexity. This means as the input size grows, the number of operations that need to be executed grows comparatively much slower.
To better understand log n, let's think of finding a word in a dictionary. If you want to find a word starting with the letter "p", you could scan through every page, which is linear time O(n). Another route is to open the book to the exact center page. If the word on the center page comes before "p", you look for the word in the right half; otherwise, you look in the left half. You are reducing your input size by half every time, so the number of operations you need to perform shrinks significantly compared with scanning every page. Thus you have a time complexity of O(log n).
Example
Binary search and finding the largest/smallest element in a binary tree are both examples of algorithms having logarithmic time complexity.
Binary search involves searching a sorted array for a specified value by repeatedly splitting the array into two halves and searching for the element in only one of them. This ensures that the operation is not performed on every element of the input data.
def binarySearch(lst, x):
    low = 0
    high = len(lst) - 1
    # Repeat until the pointers low and high meet each other
    while low <= high:
        mid = low + (high - low) // 2
        if lst[mid] == x:
            return mid
        elif lst[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1
The Binary Search method takes a sorted list of elements and searches through it for the element x. This is how the algorithm works:
- Find the list's midpoint.
- Compare the middle value to the target.
- If the middle value matches the target, we've located our goal.
- If the middle value is less than the target, we focus on the sublist from the midpoint plus one to the highest index.
- If the middle value is greater than the target, we focus on the sublist from the lowest index to the midpoint minus one.
- Continue until we locate the target or run out of elements, which indicates that the element is not present in the list.
With every iteration, the size of our search list shrinks by half, so traversing and finding an entry in the list takes O(log n) time.
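For example, calling the function above on a sorted list:

numbers = [2, 5, 8, 12, 16, 23, 38]
print(binarySearch(numbers, 23))   # prints 5, the index of 23
print(binarySearch(numbers, 7))    # prints -1: 7 is not in the list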
Quadratic Time: O(n^2)
The performance of a quadratic time complexity algorithm is proportional to the square of the size of the input data collection. You will encounter such time complexity in programs that run nested iterations over a data set.
Example
def quadratic_function(lst, size):
    for i in range(size):
        for j in range(size):
            print("Iteration :", i, "Element of list at", j, "is", lst[j])
We have two nested loops in the example above. If the array has n items, the outer loop executes n times, and the inner loop executes n times for each iteration of the outer loop, resulting in n^2 prints. If the size of the array is 10, the loops run 10 x 10 times, so the function will print 100 times. As a result, this function takes O(n^2) time to complete.
Exponential Time: O(2^n)
With each addition to the input size n, the number of operations executed doubles; algorithms with this growth rate typically iterate across all subsets of the input elements.
Example
def fibonacci(n):
    if n <= 1:
        return 1
    else:
        return fibonacci(n - 2) + fibonacci(n - 1)
In the above example, we use recursion to calculate the Fibonacci sequence. Each call branches into two more calls, so the number of operations roughly doubles as n grows. An O(2^n) function's exponential growth curve starts shallow and then rises rapidly.
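To watch this growth in practice, here is a small sketch (the call counter is purely illustrative) that tallies how many recursive calls the function makes as n increases:

def fibonacci_counted(n, counter):
    counter[0] += 1        # count this call
    if n <= 1:
        return 1
    return fibonacci_counted(n - 2, counter) + fibonacci_counted(n - 1, counter)

for n in (5, 10, 15, 20):
    counter = [0]
    fibonacci_counted(n, counter)
    print("n =", n, "->", counter[0], "calls")   # the call count grows exponentially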
Data Structure Complexity Chart
(All time complexities below are average case.)

Data Structure | Space Complexity | Access | Search | Insertion | Deletion
Array | O(n) | O(1) | O(n) | O(n) | O(n)
Stack | O(n) | O(n) | O(n) | O(1) | O(1)
Queue | O(n) | O(n) | O(n) | O(1) | O(1)
Singly Linked List | O(n) | O(n) | O(n) | O(1) | O(1)
Doubly Linked List | O(n) | O(n) | O(n) | O(1) | O(1)
Hash Table | O(n) | N/A | O(1) | O(1) | O(1)
Binary Search Tree | O(n) | O(log n) | O(log n) | O(log n) | O(log n)
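For example, Python's built-in dict is a hash table, which is where the O(1) average-case rows above show up in everyday code:

ages = {}
ages["alice"] = 30        # insertion: O(1) on average
ages["bob"] = 25
print("alice" in ages)    # search by key: O(1) on average -> True
del ages["bob"]           # deletion: O(1) on average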
Algorithm Complexity Chart
Algorithm complexity is a metric that determines the order of an algorithm's count of operations as a function of the size of the input data.
When we talk about complexity, we're talking about the order of operation count, not the exact total number of operations. In simple terms, complexity estimates an approximate number of steps/operations required to complete an algorithm.
Here are some important complexities of well-known algorithms that you need to know:
Search Algorithms
Search Algorithm | Space Complexity | Best Case Time | Average Case Time | Worst Case Time
Linear Search | O(1) | O(1) | O(n) | O(n)
Binary Search | O(1) | O(1) | O(log n) | O(log n)
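For contrast with the binary search shown earlier, here is a minimal linear search sketch: it scans every element in turn, so it needs no sorted input but costs O(n) in the worst case.

def linearSearch(lst, x):
    # Best case O(1) (match at index 0), average and worst case O(n).
    for i in range(len(lst)):
        if lst[i] == x:
            return i
    return -1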
Sorting Algorithms
Sorting Algorithm | Space Complexity | Best Case Time | Average Case Time | Worst Case Time
Selection Sort | O(1) | O(n^2) | O(n^2) | O(n^2)
Insertion Sort | O(1) | O(n) | O(n^2) | O(n^2)
Bubble Sort | O(1) | O(n) | O(n^2) | O(n^2)
Quick Sort | O(log n) | O(n log n) | O(n log n) | O(n^2)
Merge Sort | O(n) | O(n log n) | O(n log n) | O(n log n)
Heap Sort | O(1) | O(n log n) | O(n log n) | O(n log n)
ADVANCED LEVEL - Big O Notation Cheat Sheet
Calculating Complexity
To determine the time complexity of our code, we must examine it line by line, taking note of the following factors (a worked example follows the list):
- Assignments, bit operations, and arithmetic operators are all basic operations.
- Loops and nested loops
- Recursions and function invocations
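Here is a short worked example of that line-by-line counting (the function is illustrative):

def sum_list(lst, size):
    total = 0                # 1 assignment
    for i in range(size):    # the loop body runs n times
        total += lst[i]      # 1 addition + 1 assignment per iteration
    return total             # 1 operation
# Roughly 2n + 2 operations in total, which simplifies to O(n).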
Dropping the constants
Whenever you calculate the Big O complexity of any algorithm, you can throw out or ignore the constants.
For example:
def print_twice(lst, size):
    print("First Print")
    for i in range(size):
        print("Element at index", i, " has value: ", lst[i])
    print("Second Print")
    for i in range(size):
        print("Element at index", i, " has value: ", lst[i])
This is O(2n), which simplifies to just O(n).
Another example is as follows:
def example_printing(lst, size):
    print("First Element is: ", lst[0])
    print("Printing first half of list")
    for i in range(size // 2):
        print("Element at index", i, " has value: ", lst[i])
    print("Printing Entire list")
    for i in range(size):
        print("Element at index", i, " has value: ", lst[i])
This is O(1 + n/2 + n), which simplifies to just O(n).
Big O describes what happens as n grows arbitrarily large: the effect of adding a constant or dividing a term by two shrinks into insignificance as n increases.
Base of Logarithm in Big O
It makes no difference what the logarithm base is in Big O complexity analysis; logarithms of different bases are asymptotically the same, differing only by a constant factor (by the change-of-base rule, log_a n = log_b n / log_b a).
Thus, O(log2 n) = O(log10 n) = O(log n).
Most of the time, when mathematicians speak of logs, they are referring to base e. Computer scientists prefer base 2, yet in Big O terms it makes no difference.
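A quick numerical check of that constant factor (a minimal sketch using Python's math module):

import math

for n in (10, 1000, 10**6):
    ratio = math.log2(n) / math.log10(n)
    print("n =", n, "-> log2(n)/log10(n) =", round(ratio, 4))   # always ~3.3219, a constant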
Complexity for Advanced Data Structures
In the intermediate section, we saw a complexity chart for common data structures. Let's look at some more complex data structures and their complexities.
(All time complexities below are average case.)

Data Structure | Space Complexity | Access | Search | Insertion | Deletion
Skip List | O(n log n) | O(log n) | O(log n) | O(log n) | O(log n)
Cartesian Tree | O(n) | N/A | O(log n) | O(log n) | O(log n)
B-Tree | O(n) | O(log n) | O(log n) | O(log n) | O(log n)
Red-Black Tree | O(n) | O(log n) | O(log n) | O(log n) | O(log n)
Splay Tree | O(n) | N/A | O(log n) | O(log n) | O(log n)
AVL Tree | O(n) | O(log n) | O(log n) | O(log n) | O(log n)
KD Tree | O(n) | O(log n) | O(log n) | O(log n) | O(log n)
Wrapping Up
That’s all for this Big O cheat sheet! We’ve covered a variety of topics around Big O notation as well as complexities for data structures and algorithms.
As a programmer, you should assess the complexity of your code using a variety of data sets to estimate how long it will take to execute. Though calculating Big O can seem a daunting task, the ability to analyze the runtime of your algorithm is a handy tool that will help you write better, optimized code.
We hope that this cheat sheet helped you better understand Big O and provided you with the knowledge you need to write better code for your software and applications.