Introduction
You build production systems. You’ve optimized database queries, reduced API response times, and debugged performance bottlenecks. But algorithmic complexity analysis is different—it’s a formal mathematical language that Google uses to evaluate candidates.
The good news: it’s not complicated. It’s just a different vocabulary for something you already understand: how code behaves as the problem gets bigger.
Big O is about growth rate. Not actual time. Not milliseconds. Growth rate.
Part 1: What Big O Actually Measures
Beyond "Time" and "Space"
When we say an algorithm has O(n) time complexity, we’re not talking about how many milliseconds it takes. We’re talking about the number of basic operations (comparisons, assignments, arithmetic) as a function of input size.
This distinction matters because:
- A simple operation on a modern CPU takes nanoseconds, but a complex operation (like a cryptographic hash) takes microseconds
- Constant factors don’t matter for Big O: an algorithm that does 100n operations and one that does n operations are both O(n)
- Big O tells you what happens when the input is 10x bigger, or 100x bigger
Time Complexity measures the number of operations. Space Complexity measures memory used.
Within space complexity, distinguish:
- Auxiliary space: extra memory you allocate (hash maps, temporary arrays)
- Total space: input size + auxiliary space
When someone asks "what’s the space complexity?" they usually mean auxiliary space. For instance:
function findDuplicate(arr) {
  const seen = new Set(); // Auxiliary space: O(n) in worst case
  for (const num of arr) {
    if (seen.has(num)) return num;
    seen.add(num);
  }
  return null; // No duplicate found
}
This is O(n) space (auxiliary), not O(2n): constants are dropped.
Why Big O Ignores Constants
Consider these two functions:
// Function A: O(n) with coefficient 1
function iterate1(arr) {
let count = 0;
for (let i = 0; i < arr.length; i++) {
count++;
}
return count;
}
// Function B: O(n) with coefficient 3
function iterate3(arr) {
let count = 0;
for (let i = 0; i < arr.length; i++) {
count++;
count++;
count++;
}
return count;
}
Function B does 3 times as many operations. But both are O(n), because as n grows, the factor of 3 becomes irrelevant to the growth pattern.
At n=1,000: B is 3,000 operations, A is 1,000. Ratio: 3x. At n=1,000,000: B is 3,000,000, A is 1,000,000. Ratio: still 3x.
But the difference between O(n) and O(n²) grows unbounded:
At n=1,000: O(n) is 1,000 operations, O(n²) is 1,000,000. Ratio: 1,000x. At n=1,000,000: O(n) is 1,000,000, O(n²) is 10^12. Ratio: 1,000,000x.
That’s why Big O only cares about the dominant term.
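A quick sketch makes this concrete. The cost function n² + 3n + 100 below is hypothetical, chosen only to show how lower-order terms fade:

```javascript
// Hypothetical exact cost of some algorithm: n^2 + 3n + 100 operations.
// As n grows, the lower-order terms become negligible next to n^2.
function exactOps(n) {
  return n * n + 3 * n + 100;
}

for (const n of [10, 100, 10000]) {
  const ratio = exactOps(n) / (n * n);
  console.log(`n=${n}: exact / n^2 = ${ratio.toFixed(4)}`);
}
// The ratio approaches 1, which is why O(n^2 + 3n + 100) is just O(n^2)
```

At n=10 the lower-order terms more than double the cost; at n=10,000 they add 0.03%.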
Part 2: The Common Complexities with Real Intuition
Below is a comprehensive table showing what each complexity looks like in practice:
| Complexity | n=10 | n=100 | n=1,000 | n=10,000 | n=1,000,000 |
|---|---|---|---|---|---|
| O(1) | 1 | 1 | 1 | 1 | 1 |
| O(log n) | 3.3 | 6.6 | 10 | 13.3 | 20 |
| O(n) | 10 | 100 | 1,000 | 10,000 | 1,000,000 |
| O(n log n) | 33 | 664 | 9,965 | 132,877 | 19,931,568 |
| O(n²) | 100 | 10,000 | 1,000,000 | 100,000,000 | 10^12 |
| O(n³) | 1,000 | 1,000,000 | 10^9 | 10^12 | 10^18 |
| O(2^n) | 1,024 | 1.27e+30 | 1.07e+301 | — | — |
| O(n!) | 3,628,800 | 10^157 | — | — | — |
Notice the exponential row: at n=100, we’re already at about 10^30 operations. Your CPU does about 10^9 operations per second, so that’s 10^21 seconds, far longer than the age of the universe.
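This arithmetic can be packaged into a quick feasibility check. The 10^8 operations-per-second budget below is a rough rule of thumb, not a measurement; adjust it for your language and hardware:

```javascript
// Rough feasibility check. OPS_PER_SECOND is a rule-of-thumb budget,
// not a measured constant.
const OPS_PER_SECOND = 1e8;

function estimatedSeconds(n, complexity) {
  const ops = {
    'log n':   Math.log2(n),
    'n':       n,
    'n log n': n * Math.log2(n),
    'n^2':     n * n,
    '2^n':     2 ** n,
  }[complexity];
  return ops / OPS_PER_SECOND;
}

console.log(estimatedSeconds(1e6, 'n'));   // 0.01 seconds: fine
console.log(estimatedSeconds(1e5, 'n^2')); // 100 seconds: far too slow
```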
O(1): Constant Time
What it feels like: Instant. Always the same time, no matter how big the input.
Real-world analogy: Looking up a person’s phone number in your contacts by name. You don’t read through all contacts; you jump directly to the entry.
Code pattern:
// Hash map/object lookup
const users = { alice: 25, bob: 30, charlie: 35 };
const age = users['alice']; // O(1)
// Array access by index
const arr = [10, 20, 30, 40];
const first = arr[0]; // O(1)
// Fixed-size operation
function sum(a, b) {
return a + b; // O(1)
}
Google’s verdict: Always acceptable. You can’t do better than O(1).
When it’s wrong: Be careful with hash map collisions (see "Common Traps").
O(log n): Logarithmic Time
What it feels like: You eliminate half the problem with each step. Incredibly efficient even for massive inputs.
Real-world analogy: Binary search in a phone book. Each page you open cuts your search space in half.
Code pattern:
// Binary search
function binarySearch(arr, target) {
let left = 0, right = arr.length - 1;
while (left <= right) {
const mid = Math.floor((left + right) / 2);
if (arr[mid] === target) return mid;
if (arr[mid] < target) left = mid + 1; // Eliminate left half
else right = mid - 1; // Eliminate right half
}
return -1;
}
// Balanced tree operations (insertion, search, deletion)
class AVLTree {
search(node, value) {
if (!node) return false;
if (node.val === value) return true;
if (value < node.val) return this.search(node.left, value);
return this.search(node.right, value);
}
}
Google’s verdict: Excellent. O(log n) is almost always acceptable, even for n=10^6.
O(n): Linear Time
What it feels like: You visit each element once. For a million elements, you do roughly a million operations.
Real-world analogy: Reading through an unsorted list of names to find one person.
Code pattern:
// Simple loop
function findMax(arr) {
let max = arr[0];
for (let i = 1; i < arr.length; i++) {
if (arr[i] > max) max = arr[i];
}
return max;
}
// Linear search
function contains(arr, target) {
for (const item of arr) {
if (item === target) return true;
}
return false;
}
Google’s verdict: Good for n up to 10^6. For n=10^7 or more, might be tight on time.
O(n log n): Linearithmic Time
What it feels like: You process each element, and for each one you do something that takes log time (like inserting into a balanced tree).
Real-world analogy: Sorting a stack of exams with merge sort: split the stack in half repeatedly (log n levels), then merge sorted halves back together (n work per level).
Code pattern:
// Efficient sort (merge sort, heap sort, built-in sort)
function mergeSort(arr) {
if (arr.length <= 1) return arr;
const mid = Math.floor(arr.length / 2);
const left = mergeSort(arr.slice(0, mid));
const right = mergeSort(arr.slice(mid));
return merge(left, right);
}
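The merge helper used by mergeSort above is not shown. A standard two-pointer implementation (a sketch, assuming plain arrays of comparable values) is:

```javascript
// Merge two sorted arrays into one sorted array: O(left.length + right.length)
function merge(left, right) {
  const result = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    // Take the smaller head element; <= keeps the sort stable
    if (left[i] <= right[j]) result.push(left[i++]);
    else result.push(right[j++]);
  }
  // At most one of the two arrays has leftovers; copy them over
  while (i < left.length) result.push(left[i++]);
  while (j < right.length) result.push(right[j++]);
  return result;
}
```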
// Inserting n items into a balanced tree
function buildBalancedBST(arr) {
const tree = new AVLTree();
for (const item of arr) {
tree.insert(item); // O(log n) per insertion
}
return tree;
}
Google’s verdict: Excellent. Almost always acceptable for competitive programming.
O(n²): Quadratic Time
What it feels like: You have nested loops. For every element, you examine every other element.
Real-world analogy: Checking every pair of people in a room to see if they know each other.
Code pattern:
// Nested loops: a simple O(n²) exchange sort
// (with j starting at i+1, this is closer to selection sort than classic bubble sort)
function simpleSort(arr) {
const n = arr.length;
for (let i = 0; i < n; i++) {
for (let j = i + 1; j < n; j++) {
if (arr[i] > arr[j]) {
[arr[i], arr[j]] = [arr[j], arr[i]];
}
}
}
return arr;
}
// Generating all pairs: there are n(n-1)/2 of them, so O(n²) is unavoidable
function findAllPairs(arr) {
const result = [];
for (let i = 0; i < arr.length; i++) {
for (let j = i + 1; j < arr.length; j++) {
result.push([arr[i], arr[j]]);
}
}
return result;
}
Google’s verdict: Acceptable for n up to ~1,000. For n=10^4, it’s 10^8 operations (risky). For n=10^5, it’s 10^10 operations (too slow).
O(n³): Cubic Time
What it feels like: Three nested loops. Really slow.
Real-world analogy: Checking all possible triangles formed by people in a room.
Code pattern:
// Three nested loops
function allTriplets(arr) {
const result = [];
for (let i = 0; i < arr.length; i++) {
for (let j = i + 1; j < arr.length; j++) {
for (let k = j + 1; k < arr.length; k++) {
result.push([arr[i], arr[j], arr[k]]);
}
}
}
return result;
}
// Matrix multiplication
function matrixMultiply(A, B) {
const n = A.length;
const C = Array(n).fill(0).map(() => Array(n).fill(0));
for (let i = 0; i < n; i++) {
for (let j = 0; j < n; j++) {
for (let k = 0; k < n; k++) {
C[i][j] += A[i][k] * B[k][j];
}
}
}
return C;
}
Google’s verdict: Only acceptable for small inputs (n ≤ 500). At n=1,000, you have 10^9 operations.
O(2^n): Exponential Time
What it feels like: Your algorithm branches in two for each element. The problem space doubles with each additional input.
Real-world analogy: Enumerating all possible subsets of a set.
Code pattern:
// Recursive without memoization (brute force subsets)
function generateSubsets(arr) {
if (arr.length === 0) return [[]];
const smaller = generateSubsets(arr.slice(1));
const withFirst = smaller.map(subset => [arr[0], ...subset]);
return [...smaller, ...withFirst];
}
// Fibonacci without memoization
function fib(n) {
if (n <= 1) return n;
return fib(n - 1) + fib(n - 2);
}
// Backtracking: trying all combinations
function solveNQueens(n) {
const result = [];
function backtrack(row, board) {
if (row === n) {
result.push([...board]);
return;
}
for (let col = 0; col < n; col++) {
if (isSafe(board, row, col)) {
board[row] = col;
backtrack(row + 1, board);
board[row] = -1;
}
}
}
backtrack(0, Array(n).fill(-1));
return result;
}
Google’s verdict: Only acceptable if combined with optimization (memoization, pruning). Without it, fails for n > 20.
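To show what that optimization buys, here is the fib above with memoization added. Each value is computed once, so the 2^n call tree collapses to O(n):

```javascript
// Memoized Fibonacci: O(n) time, O(n) space for the cache and call stack
function fibMemo(n, memo = new Map()) {
  if (n <= 1) return n;
  if (memo.has(n)) return memo.get(n);
  const result = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  memo.set(n, result);
  return result;
}

console.log(fibMemo(50)); // 12586269025; the naive version would take hours
```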
O(n!): Factorial Time
What it feels like: You enumerate all permutations. Catastrophically slow.
Real-world analogy: Finding the best order to visit n cities (travelling salesman, brute force).
Code pattern:
// All permutations
function permute(arr) {
if (arr.length <= 1) return [arr];
const result = [];
for (let i = 0; i < arr.length; i++) {
const current = arr[i];
const remaining = arr.slice(0, i).concat(arr.slice(i + 1));
const perms = permute(remaining);
for (const perm of perms) {
result.push([current, ...perm]);
}
}
return result;
}
Google’s verdict: Almost never acceptable unless n is guaranteed to be tiny (n ≤ 8).
Part 3: How to Analyze Code
This is the practical skill. Given a piece of code, determine its time complexity.
Step 1: Identify the Loops
Count nested loops and understand their ranges.
// Single loop: O(n)
for (let i = 0; i < arr.length; i++) {
// O(1) work
}
// Two nested loops: O(n²)
for (let i = 0; i < arr.length; i++) {
for (let j = 0; j < arr.length; j++) {
// O(1) work
}
}
// Nested loop with dependent bounds: still O(n²)
for (let i = 0; i < arr.length; i++) {
for (let j = i + 1; j < arr.length; j++) {
// O(1) work
}
}
// The inner loop runs n-1, n-2, ..., 1 times: about n²/2 total, still O(n²)
// Sequential loops: O(n)
for (let i = 0; i < arr.length; i++) {
// O(1) work
}
for (let i = 0; i < arr.length; i++) {
// O(1) work
}
// We add them: O(n) + O(n) = O(n)
Step 2: Account for Built-in Operations
Every operation has a cost. Some are deceptively expensive:
// Array.slice() is O(n): it copies elements
function bad(arr) {
for (let i = 0; i < arr.length; i++) {
const rest = arr.slice(i); // O(n) in each iteration
}
}
// Total: O(n) iterations * O(n) per slice = O(n²)
// Array.includes() is O(n): it scans the array
function inefficient(arr, targets) {
  const found = [];
  for (const target of targets) {
    if (arr.includes(target)) found.push(target); // O(n) search per target
  }
  return found;
}
// Total: O(n) targets * O(n) per includes = O(n²)
// Hash set lookup is O(1) amortized
function efficient(arr, targets) {
  const seen = new Set(arr); // O(n) to build
  const found = [];
  for (const target of targets) {
    if (seen.has(target)) found.push(target); // O(1) amortized per target
  }
  return found;
}
// Total: O(n)
Step 3: Recognize Recursion Trees
For recursive functions, visualize the call tree:
// Fibonacci: fib(5) branches into fib(4) + fib(3), and so on
function fib(n) {
if (n <= 1) return n;
return fib(n - 1) + fib(n - 2); // Two branches each level
}
// At level 0: 1 call
// At level 1: 2 calls
// At level 2: 4 calls
// ...
// At level n: 2^n calls
// Total: O(2^n)
// Binary search: only one branch at each level
function binarySearch(arr, target, left = 0, right = arr.length - 1) {
if (left > right) return -1;
const mid = Math.floor((left + right) / 2);
if (arr[mid] === target) return mid;
if (arr[mid] < target) return binarySearch(arr, target, mid + 1, right);
return binarySearch(arr, target, left, mid - 1);
}
// At level 0: 1 call, searches n elements
// At level 1: 1 call, searches n/2 elements
// At level 2: 1 call, searches n/4 elements
// ...
// Depth: log n
// Total: O(log n)
// Merge sort: two branches per call, but the subproblems shrink by half
function mergeSort(arr) {
if (arr.length <= 1) return arr;
const mid = Math.floor(arr.length / 2);
const left = mergeSort(arr.slice(0, mid)); // O(n/2) work
const right = mergeSort(arr.slice(mid)); // O(n/2) work
return merge(left, right); // O(n) work
}
// At each level of recursion: O(n) total work
// Depth: log n levels
// Total: O(n log n)
Step 4: Amortized Analysis
Amortized analysis accounts for expensive operations that happen rarely.
// Dynamic array (like JavaScript Array)
// Most insertions are O(1), but occasionally we need to resize
class DynamicArray {
constructor() {
this.data = [];
this.size = 0;
this.capacity = 1;
}
push(value) {
if (this.size === this.capacity) {
// Resize: O(n) operation
const newData = Array(this.capacity * 2);
for (let i = 0; i < this.capacity; i++) {
newData[i] = this.data[i];
}
this.data = newData;
this.capacity *= 2;
}
this.data[this.size++] = value;
}
}
// If we push n items:
// - Most pushes are O(1); a resize happens only when size hits 1, 2, 4, 8, ...
// - Each resize copies the current capacity: 1 + 2 + 4 + ... elements in total
// - That geometric series sums to less than 2n
// - Total work: n pushes + at most 2n copy operations = O(n)
// - Amortized: at most ~3 operations per push, i.e. O(1) amortized
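You can check the geometric-series claim empirically. This sketch simulates n pushes and counts only the element copies caused by resizing:

```javascript
// Simulate n pushes into a doubling array and count resize copy work
function resizeCopyWork(n) {
  let capacity = 1, size = 0, copies = 0;
  for (let i = 0; i < n; i++) {
    if (size === capacity) {
      copies += capacity; // a resize copies every existing element
      capacity *= 2;
    }
    size++;
  }
  return copies;
}

console.log(resizeCopyWork(1024)); // 1023: total copy work stays under 2n
```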
Part 4: Space Complexity
Space complexity measures memory usage. It includes:
-
Call stack space (for recursion)
-
Auxiliary data structures (hash maps, arrays you create)
Recursion and Call Stack
Every function call adds a frame to the call stack. The stack space is the maximum depth × space per frame.
// O(n) space due to call stack depth
function sum(arr, index = 0) {
if (index === arr.length) return 0;
return arr[index] + sum(arr, index + 1);
}
// At max depth: n function calls on stack
// Each frame is small (constant), so total is O(n)
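For contrast, an iterative version of the same sum uses no recursion, so its auxiliary space is O(1):

```javascript
// Iterative sum: one loop, a single accumulator, O(1) auxiliary space
function sumIterative(arr) {
  let total = 0;
  for (const num of arr) total += num;
  return total;
}
```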
// O(log n) space
function binarySearch(arr, target, left = 0, right = arr.length - 1) {
if (left > right) return -1;
const mid = Math.floor((left + right) / 2);
if (arr[mid] === target) return mid;
if (arr[mid] < target) return binarySearch(arr, target, mid + 1, right);
return binarySearch(arr, target, left, mid - 1);
}
// Max recursion depth is log n, so O(log n) space
Auxiliary Data Structures
When you allocate memory, count it:
// O(n) space
function findDuplicate(arr) {
const seen = new Set(); // Allocate set
for (const num of arr) {
if (seen.has(num)) return num;
seen.add(num);
}
return null;
}
// Set can store up to n elements
// O(k) space where k is the number of unique elements
function countFrequency(arr) {
const freq = new Map(); // Allocate map
for (const num of arr) {
freq.set(num, (freq.get(num) || 0) + 1);
}
return freq;
}
// Map stores at most k unique keys, where k <= n
// O(1) space (if we count the input as given)
function findMax(arr) {
let max = arr[0];
for (const num of arr) {
if (num > max) max = num;
}
return max;
}
// Only use a constant amount of extra memory
In-Place vs Out-of-Place
In-place: You modify the input and use O(1) auxiliary space.
// In-place: O(1) space
function reverseInPlace(arr) {
let left = 0, right = arr.length - 1;
while (left < right) {
[arr[left], arr[right]] = [arr[right], arr[left]];
left++;
right--;
}
}
Out-of-place: You create a new data structure.
// Out-of-place: O(n) space
function reverse(arr) {
return [...arr].reverse(); // Copy first; Array.prototype.reverse() mutates in place
}
Google prefers in-place solutions when possible, but doesn’t penalize you for out-of-place if it’s simpler.
Part 5: Common Traps
Trap 1: "Hash Map Lookup is O(1)"
True, but with a caveat. Hash map operations are O(1) amortized average case.
In the worst case, if many keys hash to the same slot, lookups can be O(n).
// In practice, JavaScript's Map handles this well
// But conceptually:
const map = new Map();
map.set('key1', 'value1'); // O(1) amortized
map.get('key1'); // O(1) amortized
map.has('key1'); // O(1) amortized
// If you're using a custom hash function with bad collision handling:
// You could hit O(n) worst case
// But Google expects you to assume O(1) unless told otherwise
What to do: Assume hash maps are O(1) for complexity analysis. If collision handling comes up, mention that it’s amortized O(1).
Trap 2: String Concatenation in a Loop is O(n²)
In languages like Java or C#, strings are immutable. Concatenating creates a new string.
// BAD: O(n²) in some languages (Java, C#)
let result = '';
for (let i = 0; i < n; i++) {
result += 'a'; // Creates a new string each time
}
// Modern JavaScript engines usually optimize this to O(n),
// but in languages with immutable strings it's O(n²)
// GOOD: O(n)
const chars = [];
for (let i = 0; i < n; i++) {
chars.push('a');
}
const result = chars.join(''); // O(n) to join
What to do: In Java/C#, use StringBuilder. In JavaScript, string concatenation is usually optimized, but arrays + join() are safer.
Trap 3: Copying an Array is O(n), Not O(1)
// Copying is O(n)
function process(arr) {
const copy = arr.slice(); // O(n) work
copy.sort();
return copy;
}
// If you do this in a loop:
function processMany(arrays) {
for (const arr of arrays) {
const copy = arr.slice(); // O(n) per copy
// ...
}
}
// Total: O(sum of all lengths), which is O(m * n) for m arrays of size n
What to do: Be aware of when you’re copying. Sometimes you can avoid it.
Trap 4: Sorting is Not Free
// If you sort inside the main loop:
function inefficient(arr) {
for (let i = 0; i < arr.length; i++) {
const sorted = [...arr].sort(); // O(n log n) per iteration
// ...
}
}
// Total: O(n) iterations * O(n log n) per sort = O(n² log n)
// Better: sort once outside
function efficient(arr) {
const sorted = [...arr].sort(); // O(n log n)
for (let i = 0; i < arr.length; i++) {
// Use the sorted array
}
}
// Total: O(n log n)
What to do: Sort before loops, not inside them.
Summary Table: When to Use What
| Complexity | Use When | Google Says |
|---|---|---|
| O(1) | Direct lookup, arithmetic | Always good |
| O(log n) | Binary search, balanced trees | Excellent |
| O(n) | Linear scan, simple iteration | Good for n ≤ 10^6 |
| O(n log n) | Sorting, tree operations | Excellent |
| O(n²) | Nested loops, many pairs | OK for n ≤ 1,000 |
| O(n³) | Triple nested loops | Only for n ≤ 500 |
| O(2^n) | Subsets, with memoization | Only if memoized |
| O(n!) | Permutations | Only if n ≤ 8 |