# Understanding Time Complexity: Big O Notation Explained
A beginner-friendly guide to analyzing algorithm efficiency and understanding Big O, Omega, and Theta notation.
Time complexity analysis helps you understand how algorithms scale. Big O notation is the standard way to express this.
## What is Big O?
Big O describes an upper bound on the growth rate of an algorithm's runtime as input size increases; in practice it is usually quoted for the worst case. Its companions are Big Omega (a lower bound) and Big Theta (a tight bound, used when the upper and lower bounds match).
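To make "worst case" concrete, here is a small sketch using linear search: if the target is the last element (or absent), the loop examines all n elements, so the worst case is O(n) even though the best case is a single comparison.

```python
def linear_search(arr, target):
    # Worst case: target is last or missing, so all n elements are checked
    for i, val in enumerate(arr):
        if val == target:
            return i
    return -1

print(linear_search([3, 1, 4, 1, 5], 5))  # → 4 (worst case: target is last)
print(linear_search([3, 1, 4, 1, 5], 9))  # → -1 (worst case: target absent)
```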
## Common Complexities
### O(1) - Constant Time
Operations that take the same time regardless of input size:
```python
def get_first(arr):
    return arr[0]  # Always takes the same time
```

### O(log n) - Logarithmic
Operations that halve the problem size each step:
```python
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2  # Problem size halves each iteration
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1  # Target not found
```

### O(n) - Linear
Operations that process each element once:
```python
def find_max(arr):
    max_val = arr[0]
    for num in arr:  # Process each element once
        if num > max_val:
            max_val = num
    return max_val
```

### O(n log n) - Linearithmic
Common in efficient sorting algorithms:
```python
# Merge sort is O(n log n); quick sort averages O(n log n)
sorted_arr = sorted(arr)  # Python's built-in Timsort: O(n log n)
```

### O(n²) - Quadratic
Nested loops over the same data:
```python
def bubble_sort(arr):
    for i in range(len(arr)):
        for j in range(len(arr) - 1):  # Nested loop over the same data
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
```

## Space Complexity
Space complexity follows similar rules but measures memory usage instead of time.
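As a sketch of the difference, here are two ways to sum the squares of a list: the first builds an intermediate list and uses O(n) extra memory, while the second keeps a single accumulator and uses O(1) extra memory. Both run in O(n) time.

```python
def sum_squares_list(nums):
    squares = [n * n for n in nums]  # O(n) extra space for the new list
    return sum(squares)

def sum_squares_stream(nums):
    total = 0  # O(1) extra space: just one accumulator
    for n in nums:
        total += n * n
    return total

print(sum_squares_list([1, 2, 3]))    # → 14
print(sum_squares_stream([1, 2, 3]))  # → 14
```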
## Quick Reference
- **O(1):** Hash map lookup
- **O(log n):** Binary search
- **O(n):** Linear search, array iteration
- **O(n log n):** Efficient sorting
- **O(n²):** Nested loops, bubble sort
- **O(2ⁿ):** Recursive Fibonacci (naive)
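The last entry is worth seeing in code: naive recursive Fibonacci recomputes the same subproblems over and over, giving roughly O(2ⁿ) calls, while caching results (memoization) reduces it to O(n). A minimal sketch using the standard library's `lru_cache`:

```python
from functools import lru_cache

def fib_naive(n):
    # O(2^n): each call branches into two more recursive calls
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # O(n): each subproblem is computed once, then served from the cache
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(10))  # → 55
print(fib_memo(50))   # → 12586269025 (fib_naive(50) would take far too long)
```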
Understanding complexity helps you choose the right algorithm for your problem!