Algorithms & Data Structures

The working toolkit every engineer reaches for. A tour of the containers, traversals, and recipes that turn "I could solve this in a week of loops" into "I solved it in ten lines and it runs on a billion inputs."

Prereq: discrete math, basic Python · Read time: ~45 min · Interactive figures: 1 (sort animator) · Code: Python
How to read this page. The sections are ordered so each one only uses things you've seen above it. If you're rusty, read top to bottom. If you're here for a specific topic, jump — every section is self-contained. The interactive sort animator in §12 is worth a few minutes on its own: watching mergesort and quicksort side by side is the fastest way to internalize how they differ.

Vocabulary cheat-sheet

Ten symbols you'll meet over and over. Bookmark this block; the rest of the page assumes them without re-defining.

Symbol | Means | Where it shows up
$n$ | Input size — the thing that grows | Everywhere
$O(f(n))$ | Upper bound on running time | Every complexity claim
$\Theta(f(n))$ | Tight bound — both upper and lower | §13 lower bound
$\Omega(f(n))$ | Lower bound | §13 sorting limit
$\log n$ | Base-2 logarithm unless noted | Binary search, tree depth
$V, E$ | Vertices and edges of a graph | §11, §15, §16
$\alpha$ (load factor) | $n / m$ in a hash table | §6
$h$ (height) | Longest root-to-leaf path in a tree | §7, §8, §9
$w(u, v)$ | Weight of edge from $u$ to $v$ | §16 Dijkstra
$T(n)$ | Running time as a function of $n$ | Recurrences in §7, §13

1. Why algorithms — the working toolkit

There's a meme that algorithms are what you cram for interviews and then forget. That's half true. You will forget the implementation details of red-black tree rotations. Nobody remembers those. What you won't forget, if you learn it once properly, is which tool fits which problem — and that is what separates a great engineer from a merely productive one.

Here's the distinction in one line. A productive engineer solves the problem in front of them with whatever's at hand. A great engineer recognizes that the problem in front of them is an instance of a class they've seen before, reaches for the right data structure, and ends up with code that is shorter, faster, and easier to reason about. The recognition is the whole game.

Consider three realistic examples: deduplicating a stream of log lines (a hash set turns a nested-loop $O(n^2)$ scan into a single $O(n)$ pass), keeping the top ten scores out of millions (a heap does it in $O(n \log k)$ without sorting everything), and routing between two points on a map (Dijkstra's algorithm instead of enumerating paths).

In every case, the win comes from recognizing the problem type and picking the matching data structure. This page is about building that recognition. We'll walk through the core containers (arrays, lists, hash tables, trees, heaps, graphs), the standard algorithms that operate on them (sorting, searching, traversal, shortest path, dynamic programming), and — most importantly — the conditions under which each one is the right tool.

Related tracks. You'll lean on discrete math for graph terminology and proof by induction, and on linear algebra when algorithms bump into matrix problems (PageRank, spectral clustering, Strassen's multiplication). The companion pages complexity and theoretical CS make the formal case for why some of these bounds are tight.

2. Big-O preview — just enough to talk about cost

We'll use Big-O notation throughout, so here's the one-minute version. The formal treatment (including $\Omega$, $\Theta$, amortized analysis, and the master theorem) lives in the complexity page.

When we say an algorithm takes $O(n)$ time, we mean its running time grows at most linearly in the input size $n$, ignoring constant factors and lower-order terms. Formally:

$$f(n) = O(g(n)) \iff \exists\, c>0,\, n_0 \ge 0 : \forall n \ge n_0,\; f(n) \le c \cdot g(n)$$

Big-O — upper bound on growth

$f(n)$
The actual running time (or space, or number of comparisons) of the algorithm as a function of input size $n$.
$g(n)$
A simpler "yardstick" function — typically $1$, $\log n$, $n$, $n \log n$, $n^2$, $2^n$.
$n$
The size of the input. For a list, the number of elements. For a graph, vertices and/or edges.
$c$
Any positive constant. Big-O deliberately hides constants so the notation survives being ported to a different machine.
$n_0$
The point past which the bound holds. Big-O is about behavior as $n$ grows large; small-$n$ shenanigans are ignored.
$\exists, \forall, \iff$
"There exists", "for all", "if and only if" — standard logic shorthand.

Intuition. If your algorithm is $O(n)$ and you double the input, you roughly double the running time. If it's $O(n^2)$, doubling makes it 4x slower. If it's $O(\log n)$, doubling costs you one extra step. That's the whole game — you can eyeball algorithmic cost without ever computing a constant.
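The doubling rule is easy to check with a few lines. The sketch below uses an idealized step model of my own (binary-search probes for $O(\log n)$, a scan for $O(n)$, all pairs for $O(n^2)$) — it counts abstract steps, not wall-clock time:

```python
import math

# Idealized step counts for three growth classes at input size n.
def steps(n):
    return {
        "O(log n)": math.ceil(math.log2(n)),  # e.g. binary search probes
        "O(n)":     n,                        # e.g. a linear scan
        "O(n^2)":   n * n,                    # e.g. comparing all pairs
    }

a, b = steps(1_000), steps(2_000)
print(b["O(log n)"] - a["O(log n)"])  # doubling n costs one extra step
print(b["O(n)"] / a["O(n)"])          # 2.0 — twice the work
print(b["O(n^2)"] / a["O(n^2)"])      # 4.0 — four times the work
```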

The common growth classes, from cheapest to most expensive, are $O(1)$, $O(\log n)$, $O(n)$, $O(n \log n)$, $O(n^2)$, and $O(2^n)$.

When you see a running time like "amortized $O(1)$", that means the average cost per operation is constant, even if individual operations occasionally take longer. Dynamic arrays and hash tables are the canonical examples — we'll see both below.

3. Arrays and dynamic arrays

An array is a contiguous block of memory holding $n$ elements of the same type. Each element has the same size, so the address of element $i$ is just start + i × element_size. That's why array indexing is $O(1)$: you do one multiplication and one memory load. Nothing to search, nothing to follow.

Contiguity isn't just a convenience. It's also what makes arrays cache-friendly: when you read element $i$, your CPU pulls the neighboring elements into cache "for free", so the next access is nearly instant. On a modern machine the difference between cache-hit and cache-miss access is roughly 100x. This is why, in practice, a simple array often beats a fancier structure even when Big-O says otherwise.

Fixed vs. dynamic array. A fixed array has a size chosen at creation time and cannot grow — this is what C's int arr[100] gives you. A dynamic array hides a fixed array behind a wrapper that automatically reallocates to a larger buffer when it fills up. Python's list, C++'s std::vector, Java's ArrayList, and Go's slice are all dynamic arrays.

The trick behind dynamic arrays is geometric resizing. When the buffer is full and you try to append, the array allocates a new buffer of size $2n$ (or $1.5n$), copies the old elements over, and frees the old one. The copy is $O(n)$, but it only happens every $n$ appends, so the amortized cost per append is still $O(1)$.

Concretely: starting empty, to get to $n$ elements, you do copies of size $1, 2, 4, 8, \dots, n$. The total work is

$$1 + 2 + 4 + \cdots + n = 2n - 1 = O(n)$$

Amortized append

$n$
The final number of elements in the dynamic array after a sequence of appends.
$1 + 2 + 4 + \cdots + n$
A geometric sum — the cost of each doubling-copy. The $k$-th copy is of size $2^k$.
$2n - 1$
The closed form of the geometric sum $\sum_{k=0}^{\log_2 n} 2^k$. It's linear in $n$, not $n \log n$ as you might fear.
$O(n)$ total, $O(1)$ amortized
Total work to do $n$ appends is linear, so the average cost per append is constant.

Why 2x and not 1.1x? The growth factor is a tradeoff. Bigger factor = fewer copies but more wasted space. Smaller factor = less waste but more copies. CPython's list grows by roughly 1.125x plus a small constant; Java's ArrayList grows by 1.5x; GCC's std::vector doubles. All give amortized $O(1)$; the constants differ.
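You can sanity-check the geometric-sum argument empirically. This sketch simulates a doubling buffer and totals every element copied over $n$ appends; the count stays below $2n$, matching the $2n - 1$ closed form:

```python
# Simulate a doubling dynamic array and total up the copy work.
def total_copy_work(n):
    cap, size, copies = 1, 0, 0
    for _ in range(n):
        if size == cap:        # buffer full: double and copy everything
            copies += size
            cap *= 2
        size += 1
    return copies

print(total_copy_work(8))           # 1 + 2 + 4 = 7 copies for 8 appends
print(total_copy_work(1_000_000))   # still linear in n, well under 2n
```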

Array trade-offs:

Operation | Fixed array | Dynamic array
Index $a[i]$ | $O(1)$ | $O(1)$
Append at end | n/a | $O(1)$ amortized
Insert at front | n/a | $O(n)$ — shifts everything
Insert in middle | n/a | $O(n)$
Delete from middle | n/a | $O(n)$
Search (unsorted) | $O(n)$ | $O(n)$
Search (sorted) | $O(\log n)$ | $O(\log n)$

Rule of thumb: if you're going to read a lot and append at the end, use a dynamic array. If you need frequent inserts in the middle, reach for a linked list or a balanced tree instead.

Source — Dynamic Array
DynamicArray:
  data = fixed array of capacity 1
  size = 0

  append(value):
    if size == capacity:
      allocate new array of size capacity * 2
      copy old elements into new array
      replace data with new array
    data[size] = value
    size += 1

  get(i):         return data[i]        // O(1)
  set(i, value):  data[i] = value       // O(1)
class DynamicArray:
    def __init__(self):
        self._cap = 1
        self._data = [None] * self._cap
        self._size = 0

    def append(self, val):
        if self._size == self._cap:
            self._resize(self._cap * 2)
        self._data[self._size] = val
        self._size += 1

    def _resize(self, new_cap):
        new_data = [None] * new_cap
        for i in range(self._size):
            new_data[i] = self._data[i]
        self._data = new_data
        self._cap = new_cap

    def get(self, i):           # O(1)
        return self._data[i]

    def set(self, i, val):      # O(1)
        self._data[i] = val

    def __len__(self):
        return self._size
class DynamicArray {
  constructor() {
    this._data = new Array(1);
    this._size = 0;
    this._cap  = 1;
  }

  append(val) {
    if (this._size === this._cap) {
      const next = new Array(this._cap * 2);
      for (let i = 0; i < this._size; i++) next[i] = this._data[i];
      this._data = next;
      this._cap *= 2;
    }
    this._data[this._size++] = val;
  }

  get(i)        { return this._data[i]; }   // O(1)
  set(i, val)   { this._data[i] = val; }    // O(1)
  get size()    { return this._size; }
}
#include <stdlib.h>
#include <string.h>

typedef struct {
    int  *data;
    int   size;
    int   cap;
} DynArr;

void da_init(DynArr *a) {
    a->data = malloc(sizeof(int));
    a->size = 0;
    a->cap  = 1;
}

void da_append(DynArr *a, int val) {
    if (a->size == a->cap) {
        a->cap *= 2;
        a->data = realloc(a->data, a->cap * sizeof(int));
    }
    a->data[a->size++] = val;
}

int  da_get(DynArr *a, int i)         { return a->data[i]; }
void da_set(DynArr *a, int i, int v)  { a->data[i] = v; }
void da_free(DynArr *a)               { free(a->data); }
#include <vector>
#include <stdexcept>

// std::vector IS the dynamic array — shown here for illustration.
template<typename T>
class DynArray {
    std::vector<T> buf;
public:
    void append(const T& v)  { buf.push_back(v); }   // amortized O(1)
    T&   get(int i)          { return buf[i]; }        // O(1)
    void set(int i, const T& v) { buf[i] = v; }       // O(1)
    int  size() const        { return (int)buf.size(); }
};

// Usage:
// DynArray<int> a;
// a.append(1); a.append(2);
// a.get(0);    // 1
import java.util.Arrays;

public class DynamicArray<T> {
    private Object[] data;
    private int size;

    public DynamicArray() { data = new Object[1]; size = 0; }

    public void append(T val) {
        if (size == data.length)
            data = Arrays.copyOf(data, data.length * 2);
        data[size++] = val;
    }

    @SuppressWarnings("unchecked")
    public T get(int i)           { return (T) data[i]; }  // O(1)
    public void set(int i, T val) { data[i] = val; }       // O(1)
    public int size()             { return size; }
}
package main

// Go slices are dynamic arrays; this shows the manual version.
type DynArr struct {
    data []int
}

func (a *DynArr) Append(v int) {
    // built-in append reallocates geometrically when capacity runs out
    a.data = append(a.data, v)
}

func (a *DynArr) Get(i int) int      { return a.data[i] }
func (a *DynArr) Set(i, v int)       { a.data[i] = v }
func (a *DynArr) Len() int           { return len(a.data) }

4. Linked lists

A linked list is a chain of nodes, where each node holds a value and a pointer to the next node. There's no contiguous memory, no index-to-address math. To find the $i$-th element, you start at the head and walk $i$ steps. That's $O(i)$, which in the worst case is $O(n)$.

That sounds bad. Why does this structure exist at all? Because it trades indexing speed for cheap insertion and deletion anywhere. Given a pointer to a node, you can splice in a new node or remove it in $O(1)$ — just rewrite two pointers. An array can't do that; it has to shift every element after the insertion point.

Singly vs. doubly linked. A singly linked node has one pointer, to the next node. A doubly linked node has two — to next and previous. Doubly linked lists can be traversed in both directions and can delete a node in $O(1)$ given only a pointer to it, because the node itself carries the prev pointer needed to rewire. Singly linked lists are smaller (one pointer per node) but less flexible.

In 2025 you will almost never write a linked list by hand. The reason is cache locality: walking a linked list means chasing pointers to unpredictable addresses, and each pointer chase is likely to miss cache. A plain array — despite its $O(n)$ mid-list inserts — often beats a linked list at sequential traversal by a factor of 10 or more.

You will still see linked lists in a few places: the chaining buckets of a hash table (§6), intrusive lists inside OS kernels and memory allocators, and deque-style structures where you always splice at a node you already hold.

The rule. Start with an array. Switch to a linked list only when you've profiled and found that insert/delete in the middle is dominating. Most of the time you won't.
Source — Linked List
Node: value, next = null

LinkedList: head = null

insert_at_head(value):
  node = new Node(value)
  node.next = head
  head = node               // O(1)

delete(value):
  if head is null: return
  if head.value == value: head = head.next; return
  cur = head
  while cur.next != null:
    if cur.next.value == value:
      cur.next = cur.next.next   // splice out
      return
    cur = cur.next

traverse():
  cur = head
  while cur != null:
    visit(cur.value)
    cur = cur.next            // O(n)
class Node:
    def __init__(self, val):
        self.val  = val
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def insert_at_head(self, val):  # O(1)
        node = Node(val)
        node.next = self.head
        self.head = node

    def delete(self, val):          # O(n)
        if self.head is None:
            return
        if self.head.val == val:
            self.head = self.head.next
            return
        cur = self.head
        while cur.next:
            if cur.next.val == val:
                cur.next = cur.next.next
                return
            cur = cur.next

    def traverse(self):             # O(n)
        cur = self.head
        while cur:
            print(cur.val, end=' -> ')
            cur = cur.next
        print('None')
class Node {
  constructor(val) { this.val = val; this.next = null; }
}

class LinkedList {
  constructor() { this.head = null; }

  insertAtHead(val) {              // O(1)
    const n = new Node(val);
    n.next = this.head;
    this.head = n;
  }

  delete(val) {                    // O(n)
    if (!this.head) return;
    if (this.head.val === val) { this.head = this.head.next; return; }
    let cur = this.head;
    while (cur.next) {
      if (cur.next.val === val) { cur.next = cur.next.next; return; }
      cur = cur.next;
    }
  }

  traverse() {                     // O(n)
    const out = [];
    let cur = this.head;
    while (cur) { out.push(cur.val); cur = cur.next; }
    return out;
  }
}
#include <stdlib.h>

typedef struct Node { int val; struct Node *next; } Node;

Node* ll_insert_head(Node *head, int val) {
    Node *n = malloc(sizeof(Node));
    n->val  = val;
    n->next = head;
    return n;       // new head
}

Node* ll_delete(Node *head, int val) {
    if (!head) return NULL;
    if (head->val == val) { Node *tmp = head->next; free(head); return tmp; }
    Node *cur = head;
    while (cur->next) {
        if (cur->next->val == val) {
            Node *tmp = cur->next;
            cur->next = tmp->next;
            free(tmp);
            return head;
        }
        cur = cur->next;
    }
    return head;
}
#include <iostream>

struct Node { int val; Node* next; Node(int v): val(v), next(nullptr){} };

struct LinkedList {
    Node* head = nullptr;

    void insertAtHead(int val) {      // O(1)
        Node* n = new Node(val);
        n->next = head;
        head = n;
    }

    void deleteVal(int val) {         // O(n)
        if (!head) return;
        if (head->val == val) { Node* t = head; head = head->next; delete t; return; }
        Node* cur = head;
        while (cur->next) {
            if (cur->next->val == val) {
                Node* t = cur->next; cur->next = t->next; delete t; return;
            }
            cur = cur->next;
        }
    }

    void traverse() const {
        for (Node* c = head; c; c = c->next) std::cout << c->val << " -> ";
        std::cout << "null\n";
    }
};
public class LinkedList<T> {
    private static class Node<T> { T val; Node<T> next; Node(T v){val=v;} }
    private Node<T> head;

    public void insertAtHead(T val) {   // O(1)
        Node<T> n = new Node<>(val);
        n.next = head;
        head = n;
    }

    public void delete(T val) {         // O(n)
        if (head == null) return;
        if (head.val.equals(val)) { head = head.next; return; }
        Node<T> cur = head;
        while (cur.next != null) {
            if (cur.next.val.equals(val)) { cur.next = cur.next.next; return; }
            cur = cur.next;
        }
    }

    public void traverse() {            // O(n)
        for (Node<T> c = head; c != null; c = c.next)
            System.out.print(c.val + " -> ");
        System.out.println("null");
    }
}
package main

type Node struct { Val int; Next *Node }

type LinkedList struct { Head *Node }

func (l *LinkedList) InsertAtHead(val int) {  // O(1)
    l.Head = &Node{Val: val, Next: l.Head}
}

func (l *LinkedList) Delete(val int) {        // O(n)
    if l.Head == nil { return }
    if l.Head.Val == val { l.Head = l.Head.Next; return }
    cur := l.Head
    for cur.Next != nil {
        if cur.Next.Val == val { cur.Next = cur.Next.Next; return }
        cur = cur.Next
    }
}

func (l *LinkedList) Traverse() []int {       // O(n)
    var out []int
    for c := l.Head; c != nil; c = c.Next { out = append(out, c.Val) }
    return out
}

5. Stacks and queues

A stack is a container with two operations: push (add to the top) and pop (remove from the top). Last in, first out — LIFO. Think of a stack of plates.

A queue is a container with two operations: enqueue (add to the back) and dequeue (remove from the front). First in, first out — FIFO. Think of a line at a coffee shop.

These aren't really separate data structures; they're interfaces. A stack is usually an array with pushes and pops at the end; a queue is usually a ring buffer (a circular array with head and tail indices) or a doubly linked deque.

Where you'll meet them: the function call stack, DFS, undo history, and bracket matching (stacks); BFS, task schedulers, and producer–consumer buffers (queues).

If you only remember one thing: DFS uses a stack, BFS uses a queue. That single fact lets you convert one traversal into the other by swapping one container.
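Here is that swap in code: one traversal skeleton over a small example graph, where the only difference between depth-first and breadth-first is which end of the container you remove from. The `graph`, `traverse`, and `use_stack` names are mine, for illustration:

```python
from collections import deque

# A small example graph as an adjacency dict.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}

def traverse(start, use_stack):
    frontier = deque([start])
    seen = {start}
    order = []
    while frontier:
        # pop() takes the newest item (LIFO -> DFS);
        # popleft() takes the oldest (FIFO -> BFS).
        node = frontier.pop() if use_stack else frontier.popleft()
        order.append(node)
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return order

print(traverse('A', use_stack=True))    # ['A', 'C', 'D', 'B'] — depth-first
print(traverse('A', use_stack=False))   # ['A', 'B', 'C', 'D'] — breadth-first
```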

Source — Stack & Queue
-- Stack (array-backed, LIFO) --
Stack: data=[], top=-1
push(v):  data[++top] = v
pop():    return data[top--]
peek():   return data[top]

-- Circular Queue (ring buffer, FIFO) --
Queue: data=array[cap], head=0, tail=0, size=0
enqueue(v):
  data[tail] = v
  tail = (tail + 1) mod cap
  size += 1
dequeue():
  v = data[head]
  head = (head + 1) mod cap
  size -= 1
  return v
# Stack — list is O(1) amortized for push/pop at end
class Stack:
    def __init__(self):   self._d = []
    def push(self, v):    self._d.append(v)
    def pop(self):        return self._d.pop()
    def peek(self):       return self._d[-1]
    def empty(self):      return len(self._d) == 0

# Circular queue using collections.deque (O(1) both ends)
from collections import deque

class Queue:
    def __init__(self):   self._d = deque()
    def enqueue(self, v): self._d.append(v)
    def dequeue(self):    return self._d.popleft()
    def empty(self):      return len(self._d) == 0

# Manual ring buffer
class RingQueue:
    def __init__(self, cap):
        self._data = [None] * cap
        self._cap  = cap
        self._head = self._tail = self._size = 0

    def enqueue(self, v):
        self._data[self._tail] = v
        self._tail = (self._tail + 1) % self._cap
        self._size += 1

    def dequeue(self):
        v = self._data[self._head]
        self._head = (self._head + 1) % self._cap
        self._size -= 1
        return v
// Stack backed by an array
class Stack {
  #d = [];
  push(v)  { this.#d.push(v); }
  pop()    { return this.#d.pop(); }
  peek()   { return this.#d[this.#d.length - 1]; }
  empty()  { return this.#d.length === 0; }
}

// Circular queue (ring buffer)
class RingQueue {
  constructor(cap) {
    this._d    = new Array(cap);
    this._cap  = cap;
    this._head = this._tail = this._size = 0;
  }
  enqueue(v) {
    this._d[this._tail] = v;
    this._tail = (this._tail + 1) % this._cap;
    this._size++;
  }
  dequeue() {
    const v = this._d[this._head];
    this._head = (this._head + 1) % this._cap;
    this._size--;
    return v;
  }
  empty() { return this._size === 0; }
}
#include <stdlib.h>
#define CAP 1024

// Stack — call s_init before use (top starts at -1)
typedef struct { int d[CAP]; int top; } Stack;
void s_init(Stack *s)        { s->top = -1; }
void s_push(Stack *s, int v) { s->d[++s->top] = v; }
int  s_pop (Stack *s)        { return s->d[s->top--]; }
int  s_peek(Stack *s)        { return s->d[s->top]; }

// Ring-buffer queue — zero-initialize (head = tail = size = 0)
typedef struct {
    int d[CAP]; int head, tail, size;
} Queue;
void q_enq(Queue *q, int v) {
    q->d[q->tail] = v;
    q->tail = (q->tail + 1) % CAP;
    q->size++;
}
int q_deq(Queue *q) {
    int v = q->d[q->head];
    q->head = (q->head + 1) % CAP;
    q->size--;
    return v;
}
#include <stack>
#include <queue>

// std::stack and std::queue wrap a deque by default.

std::stack<int> stk;
stk.push(1); stk.push(2);
stk.top();   // 2
stk.pop();

std::queue<int> q;
q.push(1); q.push(2);
q.front();   // 1
q.pop();

// Manual ring buffer (same logic as C version):
template<typename T, int CAP>
struct RingQueue {
    T d[CAP]; int head=0, tail=0, size=0;
    void enqueue(T v) { d[tail]=(v); tail=(tail+1)%CAP; size++; }
    T    dequeue()    { T v=d[head]; head=(head+1)%CAP; size--; return v; }
};
import java.util.ArrayDeque;
import java.util.Deque;

// Stack
Deque<Integer> stack = new ArrayDeque<>();
stack.push(1); stack.push(2);
stack.peek();  // 2
stack.pop();

// Queue (FIFO)
Deque<Integer> queue = new ArrayDeque<>();
queue.offer(1); queue.offer(2);
queue.peek();   // 1
queue.poll();

// Manual ring buffer
public class RingQueue {
    private int[] d; private int head, tail, size, cap;
    public RingQueue(int cap) { d = new int[cap]; this.cap = cap; }
    public void enqueue(int v) { d[tail] = v; tail = (tail+1)%cap; size++; }
    public int  dequeue()      { int v=d[head]; head=(head+1)%cap; size--; return v; }
}
package main

// Stack
type Stack []int

func (s *Stack) Push(v int)  { *s = append(*s, v) }
func (s *Stack) Pop() int    { n := len(*s)-1; v := (*s)[n]; *s = (*s)[:n]; return v }
func (s *Stack) Peek() int   { return (*s)[len(*s)-1] }
func (s *Stack) Empty() bool { return len(*s) == 0 }

// Ring-buffer queue
type RingQueue struct {
    d          []int
    head, tail, size, cap int
}

func NewRingQueue(cap int) *RingQueue { return &RingQueue{d: make([]int, cap), cap: cap} }

func (q *RingQueue) Enqueue(v int) {
    q.d[q.tail] = v
    q.tail = (q.tail + 1) % q.cap
    q.size++
}

func (q *RingQueue) Dequeue() int {
    v := q.d[q.head]
    q.head = (q.head + 1) % q.cap
    q.size--
    return v
}

6. Hash tables — the most important data structure in practice

If you had to pick one data structure to take to a desert island, it would be the hash table. It gives you a map from arbitrary keys (strings, tuples, objects) to values with expected $O(1)$ insert, lookup, and delete. No ordering, no tree rotations, no comparisons — just a function and an array.

Here's the core idea. You have an array of size $m$ ("buckets"). You have a hash function $h$ that turns a key $k$ into an integer, and you use $h(k) \bmod m$ as the bucket index:

$$\text{bucket}(k) = h(k) \bmod m$$

Hashing to a bucket

$k$
The key — what you're looking up by. Can be any type that has a defined hash.
$h(k)$
The hash function applied to $k$. Returns a fixed-width integer, typically 32 or 64 bits.
$m$
The size of the bucket array. Usually a power of 2 or a prime.
$\bmod$
Modulo — the remainder after dividing. $h(k) \bmod m$ maps the integer hash into $\{0, 1, \dots, m-1\}$.

Why is this $O(1)$? Because the steps — compute the hash, modulo, array lookup — don't depend on $n$. They depend only on how fast you can hash a single key. If the hash function is uniformly distributing keys and the array is big enough, each bucket has about one element, so the lookup is genuinely constant-time.

Two keys will occasionally land in the same bucket. That's a collision. Birthday paradox says it happens surprisingly often — with $\sqrt{m}$ random keys you already expect one. So every hash table needs a story for collisions. There are two standard stories.

6.1 Chaining

Each bucket holds a linked list (or a small dynamic array) of all the key–value pairs that hashed there. To look up $k$, you compute bucket(k), then walk the list comparing keys. As long as the list stays short, this is fast.

The expected list length is the load factor:

$$\alpha = \frac{n}{m}$$

Load factor

$\alpha$
Load factor — the average number of entries per bucket.
$n$
Total number of key–value pairs currently in the table.
$m$
Number of buckets in the underlying array.

Rule of thumb. Most chaining hash tables keep $\alpha < 1$, typically around $0.7$. When $\alpha$ gets too large, the table rehashes — doubles $m$ and reinserts every element. Like dynamic array resizing, rehashing is $O(n)$ but only happens every $\Theta(n)$ inserts, so the amortized cost per insert stays $O(1)$.

6.2 Open addressing

No linked lists. Everything lives directly in the array. If your bucket is taken, you probe the next one, then the next, until you find an empty slot. The simplest probe is linear probing: try $(h(k)+1) \bmod m$, $(h(k)+2) \bmod m$, and so on. More sophisticated schemes: quadratic probing (step $i^2$) and double hashing (step = a second hash of $k$).

Open addressing uses memory more densely and avoids pointer chasing, so it's cache-friendly. The tradeoff is that it needs a smaller load factor ($\alpha \le 0.7$ or so) to stay fast, because probes start clustering badly as the table fills up. Python's dict, Rust's HashMap, and most modern hash tables use open addressing.
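For contrast with the chaining implementations later in this section, here is a deliberately minimal linear-probing sketch — a toy of my own, not any library's internals. It assumes the table never fills up (no resizing) and skips deletion (which needs tombstones):

```python
# Toy open-addressing map with linear probing. Assumes the table stays
# sparse (no resize) and keys are never deleted (no tombstones).
class ProbingMap:
    def __init__(self, cap=16):
        self._keys = [None] * cap
        self._vals = [None] * cap
        self._cap = cap

    def _slot(self, key):
        i = hash(key) % self._cap
        # Probe forward until we find the key or an empty slot.
        while self._keys[i] is not None and self._keys[i] != key:
            i = (i + 1) % self._cap
        return i

    def put(self, key, val):
        i = self._slot(key)
        self._keys[i], self._vals[i] = key, val

    def get(self, key, default=None):
        i = self._slot(key)
        return self._vals[i] if self._keys[i] == key else default

m = ProbingMap()
m.put("a", 1); m.put("b", 2); m.put("a", 3)   # second put("a") updates
print(m.get("a"), m.get("b"), m.get("zzz"))   # 3 2 None
```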

6.3 What can go wrong

Why it matters for you. Every time you write d = {} in Python, or unordered_map<string, int> in C++, you're using a hash table. (C++'s map<string, int> is a balanced tree, not a hash table.) It's the single most used data structure in real software after the array. If you ever have to implement one from scratch — which you shouldn't, but it's educational — try chaining first, then open addressing, and profile both on realistic data.
Source — Hash Map
HashMap (chaining):
  buckets = array of M empty linked lists
  size = 0

  put(key, value):
    i = hash(key) mod M
    for each (k,v) in buckets[i]:
      if k == key: update v; return
    prepend (key, value) to buckets[i]
    size += 1
    if size / M > 0.75: rehash(M * 2)

  get(key):
    i = hash(key) mod M
    for each (k,v) in buckets[i]:
      if k == key: return v
    return NOT_FOUND

  delete(key):
    i = hash(key) mod M
    if (key, *) is in buckets[i]:
      remove it; size -= 1
class HashMap:
    def __init__(self, cap=8):
        self._cap     = cap
        self._buckets = [[] for _ in range(cap)]  # each bucket = list of (k,v)
        self._size    = 0

    def _idx(self, key):
        return hash(key) % self._cap

    def put(self, key, val):
        b = self._buckets[self._idx(key)]
        for i, (k, _) in enumerate(b):
            if k == key:
                b[i] = (key, val)   # update
                return
        b.append((key, val))
        self._size += 1
        if self._size / self._cap > 0.75:
            self._rehash()

    def get(self, key, default=None):
        for k, v in self._buckets[self._idx(key)]:
            if k == key:
                return v
        return default

    def delete(self, key):
        b = self._buckets[self._idx(key)]
        for i, (k, _) in enumerate(b):
            if k == key:
                b.pop(i)
                self._size -= 1
                return

    def _rehash(self):
        old = self._buckets
        self._cap    *= 2
        self._buckets = [[] for _ in range(self._cap)]
        self._size    = 0
        for bucket in old:
            for k, v in bucket:
                self.put(k, v)
class HashMap {
  constructor(cap = 8) {
    this._cap     = cap;
    this._buckets = Array.from({length: cap}, () => []);
    this._size    = 0;
  }

  _idx(key) {
    let h = 0;
    for (const c of String(key)) h = (h * 31 + c.charCodeAt(0)) | 0;
    return Math.abs(h) % this._cap;
  }

  put(key, val) {
    const b = this._buckets[this._idx(key)];
    const entry = b.find(e => e[0] === key);
    if (entry) { entry[1] = val; return; }
    b.push([key, val]);
    this._size++;
    if (this._size / this._cap > 0.75) this._rehash();
  }

  get(key) {
    const entry = this._buckets[this._idx(key)].find(e => e[0] === key);
    return entry ? entry[1] : undefined;
  }

  delete(key) {
    const i = this._idx(key);
    const kept = this._buckets[i].filter(e => e[0] !== key);
    this._size -= this._buckets[i].length - kept.length;
    this._buckets[i] = kept;
  }

  _rehash() {
    const old = this._buckets;
    this._cap *= 2;
    this._buckets = Array.from({length: this._cap}, () => []);
    this._size = 0;
    old.flat().forEach(([k, v]) => this.put(k, v));
  }
}
#include <stdlib.h>
#include <string.h>

#define INIT_CAP 8

typedef struct Entry { char *key; int val; struct Entry *next; } Entry;
typedef struct { Entry **buckets; int cap, size; } HashMap;

static unsigned hash_str(const char *s) {
    unsigned h = 5381;
    while (*s) h = h * 33 ^ (unsigned char)*s++;
    return h;
}

void hm_init(HashMap *m) {
    m->cap     = INIT_CAP;
    m->size    = 0;
    m->buckets = calloc(m->cap, sizeof(Entry*));
}

void hm_put(HashMap *m, const char *key, int val) {
    unsigned i = hash_str(key) % m->cap;
    for (Entry *e = m->buckets[i]; e; e = e->next)
        if (!strcmp(e->key, key)) { e->val = val; return; }
    Entry *e = malloc(sizeof(Entry));
    e->key  = strdup(key);
    e->val  = val;
    e->next = m->buckets[i];
    m->buckets[i] = e;
    m->size++;
}

int hm_get(HashMap *m, const char *key, int *out) {
    for (Entry *e = m->buckets[hash_str(key) % m->cap]; e; e = e->next)
        if (!strcmp(e->key, key)) { *out = e->val; return 1; }
    return 0;
}
#include <vector>
#include <optional>
#include <functional>  // std::hash

// std::unordered_map is the production hash map.
// Below is a minimal chaining implementation for illustration.

template<typename K, typename V>
class HashMap {
    using Bucket = std::vector<std::pair<K,V>>;
    std::vector<Bucket> _b;
    int _size = 0, _cap;

    int idx(const K& k) const { return std::hash<K>{}(k) % _cap; }
public:
    HashMap(int cap=8): _b(cap), _cap(cap) {}

    void put(const K& k, const V& v) {
        auto& bkt = _b[idx(k)];
        for (auto& p : bkt) if (p.first == k) { p.second = v; return; }
        bkt.push_back({k, v}); _size++;
    }

    std::optional<V> get(const K& k) const {
        for (auto& p : _b[idx(k)]) if (p.first == k) return p.second;
        return std::nullopt;
    }
};
// A minimal chaining hash map (java.util.HashMap is the production one).

public class HashMap<K, V> {
    private static class Node<K,V> { K key; V val; Node<K,V> next; }
    private Object[] buckets;
    private int cap, size;

    @SuppressWarnings("unchecked")
    public HashMap(int cap) { this.cap = cap; buckets = new Object[cap]; }

    private int idx(K key) { return Math.abs(key.hashCode()) % cap; }

    @SuppressWarnings("unchecked")
    public void put(K key, V val) {
        int i = idx(key);
        for (Node<K,V> n=(Node<K,V>)buckets[i]; n!=null; n=n.next)
            if (n.key.equals(key)) { n.val = val; return; }
        Node<K,V> n = new Node<>();
        n.key = key; n.val = val; n.next = (Node<K,V>)buckets[i];
        buckets[i] = n; size++;
    }

    @SuppressWarnings("unchecked")
    public V get(K key) {
        for (Node<K,V> n=(Node<K,V>)buckets[idx(key)]; n!=null; n=n.next)
            if (n.key.equals(key)) return n.val;
        return null;
    }
}
package main

// Go's built-in map is a hash map. Shown here for illustration.
type HashMap struct {
    buckets [][]entry
    cap     int
    size    int
}
type entry struct{ key, val string }

func NewHashMap(cap int) *HashMap {
    return &HashMap{buckets: make([][]entry, cap), cap: cap}
}

func (m *HashMap) idx(key string) int {
    h := 0
    for _, c := range key { h = h*31 + int(c) }
    if h < 0 { h = -h }
    return h % m.cap
}

func (m *HashMap) Put(key, val string) {
    i := m.idx(key)
    for j, e := range m.buckets[i] {
        if e.key == key { m.buckets[i][j].val = val; return }
    }
    m.buckets[i] = append(m.buckets[i], entry{key, val})
    m.size++
}

func (m *HashMap) Get(key string) (string, bool) {
    for _, e := range m.buckets[m.idx(key)] {
        if e.key == key { return e.val, true }
    }
    return "", false
}

7. Trees and binary search trees

A tree is a collection of nodes where each node has a value and zero or more children, with exactly one node designated as the root and no cycles. Every node except the root has exactly one parent. This is the structure of file systems, HTML documents, expression parse trees, and the recursion call stack.

A binary tree restricts each node to at most two children, conventionally called left and right. A binary tree's height is the length of the longest root-to-leaf path. A tree of $n$ nodes has height between $\log_2 n$ (perfectly balanced) and $n-1$ (a straight line, worst case).

7.1 Traversals

There are three canonical ways to visit every node in a binary tree — pre-order (node, then left subtree, then right), in-order (left, node, right), and post-order (left, right, node). All are $O(n)$: each node is visited exactly once.

A fourth traversal is level order, which visits nodes level by level — this is just BFS on the tree.
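A minimal sketch of all four traversals in Python (the `Node` class and the example tree here are illustrative):

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def preorder(n):   # node, left, right
    return [n.val] + preorder(n.left) + preorder(n.right) if n else []

def inorder(n):    # left, node, right
    return inorder(n.left) + [n.val] + inorder(n.right) if n else []

def postorder(n):  # left, right, node
    return postorder(n.left) + postorder(n.right) + [n.val] if n else []

def level_order(n):            # BFS with a queue
    out, q = [], deque([n])
    while q:
        cur = q.popleft()
        if cur:
            out.append(cur.val)
            q.append(cur.left)
            q.append(cur.right)
    return out

#       1
#      / \
#     2   3
#    / \
#   4   5
root = Node(1, Node(2, Node(4), Node(5)), Node(3))
print(preorder(root))     # [1, 2, 4, 5, 3]
print(inorder(root))      # [4, 2, 5, 1, 3]
print(postorder(root))    # [4, 5, 2, 3, 1]
print(level_order(root))  # [1, 2, 3, 4, 5]
```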

7.2 Binary search trees

A binary search tree (BST) is a binary tree that maintains the BST property: for every node $v$, all values in the left subtree are less than $v$'s value, and all values in the right subtree are greater. That invariant turns lookup into a recursive halving:

$$T(n) = T(n/2) + O(1) \implies T(n) = O(\log n)$$

BST lookup recurrence

$T(n)$
The number of comparisons needed to find a value in a BST of $n$ nodes.
$T(n/2)$
Looking up in the relevant half of the tree — we recursed into one subtree.
$O(1)$
The single comparison at the current node before descending.
$\implies$
"Implies". Solving the recurrence gives the logarithmic bound.

But only when balanced. The $O(\log n)$ only holds if the tree is balanced. If you insert $1, 2, 3, 4, 5$ into a vanilla BST in order, you get a single right-leaning chain of length $n$ — a linked list in disguise. Lookup is now $O(n)$. This is the reason nobody uses plain BSTs in production.

So vanilla BSTs are pedagogical. They teach the invariant and the in-order traversal trick. Real code uses balanced BSTs, which we'll meet next.
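To see the degeneration concretely, a quick sketch: the same insert routine fed sorted versus shuffled keys (the helper names here are mine):

```python
# Worst-case demo: sorted inserts turn a vanilla BST into a chain.
class Node:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def insert(node, val):
    if node is None:
        return Node(val)
    if val < node.val:
        node.left = insert(node.left, val)
    elif val > node.val:
        node.right = insert(node.right, val)
    return node

def height(node):   # edges on the longest root-to-leaf path; empty tree = -1
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))

def build(vals):
    root = None
    for v in vals:
        root = insert(root, v)
    return root

print(height(build([1, 2, 3, 4, 5])))   # 4 — a chain: lookup is O(n)
print(height(build([3, 1, 4, 2, 5])))   # 2 — roughly balanced
```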

Source — Binary Search Tree
Node: val, left=null, right=null

insert(node, val):
  if node is null: return new Node(val)
  if val < node.val:  node.left  = insert(node.left,  val)
  elif val > node.val: node.right = insert(node.right, val)
  return node   // duplicates ignored

search(node, val):
  if node is null: return false
  if val == node.val: return true
  if val < node.val:  return search(node.left,  val)
  else:               return search(node.right, val)

inorder(node):
  if node is null: return
  inorder(node.left)
  visit(node.val)       // values come out sorted
  inorder(node.right)
class Node:
    def __init__(self, val):
        self.val   = val
        self.left  = None
        self.right = None

class BST:
    def __init__(self): self.root = None

    def insert(self, val):
        self.root = self._ins(self.root, val)

    def _ins(self, node, val):
        if node is None:       return Node(val)
        if val < node.val:     node.left  = self._ins(node.left,  val)
        elif val > node.val:   node.right = self._ins(node.right, val)
        return node

    def search(self, val):
        cur = self.root
        while cur:
            if val == cur.val:   return True
            cur = cur.left if val < cur.val else cur.right
        return False

    def inorder(self):
        result = []
        def _go(n):
            if n: _go(n.left); result.append(n.val); _go(n.right)
        _go(self.root)
        return result   # sorted
class Node { constructor(v){this.val=v; this.left=this.right=null;} }

class BST {
  constructor() { this.root = null; }

  insert(val) { this.root = this._ins(this.root, val); }

  _ins(node, val) {
    if (!node) return new Node(val);
    if (val < node.val)      node.left  = this._ins(node.left,  val);
    else if (val > node.val) node.right = this._ins(node.right, val);
    return node;
  }

  search(val) {
    let cur = this.root;
    while (cur) {
      if (val === cur.val) return true;
      cur = val < cur.val ? cur.left : cur.right;
    }
    return false;
  }

  inorder(node = this.root, out = []) {
    if (node) { this.inorder(node.left, out); out.push(node.val); this.inorder(node.right, out); }
    return out;
  }
}
#include <stdio.h>
#include <stdlib.h>

typedef struct Node { int val; struct Node *left, *right; } Node;

Node* bst_insert(Node *root, int val) {
    if (!root) { Node *n = malloc(sizeof(Node)); n->val=val; n->left=n->right=NULL; return n; }
    if (val < root->val)       root->left  = bst_insert(root->left,  val);
    else if (val > root->val)  root->right = bst_insert(root->right, val);
    return root;
}

int bst_search(Node *root, int val) {
    while (root) {
        if (val == root->val) return 1;
        root = (val < root->val) ? root->left : root->right;
    }
    return 0;
}

void bst_inorder(Node *root) {      // prints sorted
    if (!root) return;
    bst_inorder(root->left);
    printf("%d ", root->val);
    bst_inorder(root->right);
}
#include <iostream>

struct Node { int val; Node *left=nullptr, *right=nullptr; Node(int v):val(v){} };

struct BST {
    Node* root = nullptr;

    Node* insert(Node* n, int val) {
        if (!n) return new Node(val);
        if (val < n->val)       n->left  = insert(n->left,  val);
        else if (val > n->val)  n->right = insert(n->right, val);
        return n;
    }
    void insert(int val) { root = insert(root, val); }

    bool search(int val) const {
        Node* cur = root;
        while (cur) {
            if (val == cur->val) return true;
            cur = (val < cur->val) ? cur->left : cur->right;
        }
        return false;
    }

    void inorder(Node* n) const {
        if (!n) return;
        inorder(n->left); std::cout << n->val << ' '; inorder(n->right);
    }
    void inorder() const { inorder(root); std::cout << '\n'; }
};
public class BST {
    private static class Node { int val; Node left, right; Node(int v){val=v;} }
    private Node root;

    public void insert(int val) { root = insert(root, val); }
    private Node insert(Node n, int val) {
        if (n == null) return new Node(val);
        if      (val < n.val) n.left  = insert(n.left,  val);
        else if (val > n.val) n.right = insert(n.right, val);
        return n;
    }

    public boolean search(int val) {
        Node cur = root;
        while (cur != null) {
            if (val == cur.val) return true;
            cur = (val < cur.val) ? cur.left : cur.right;
        }
        return false;
    }

    public void inorder() { inorder(root); System.out.println(); }
    private void inorder(Node n) {
        if (n == null) return;
        inorder(n.left); System.out.print(n.val + " "); inorder(n.right);
    }
}
package main

import "fmt"

type Node struct { Val int; Left, Right *Node }

func Insert(root *Node, val int) *Node {
    if root == nil { return &Node{Val: val} }
    if val < root.Val {
        root.Left = Insert(root.Left, val)
    } else if val > root.Val {
        root.Right = Insert(root.Right, val)
    }
    return root
}

func Search(root *Node, val int) bool {
    for root != nil {
        if val == root.Val { return true }
        if val < root.Val  { root = root.Left } else { root = root.Right }
    }
    return false
}

func Inorder(root *Node) {   // prints sorted values
    if root == nil { return }
    Inorder(root.Left)
    fmt.Print(root.Val, " ")
    Inorder(root.Right)
}

8. Balanced trees and B-trees

A self-balancing BST is a BST that does extra bookkeeping on every insert and delete to guarantee the tree stays roughly balanced — height $O(\log n)$ no matter what. The bookkeeping happens via local operations called rotations.

A rotation is a structural move that rebalances two adjacent nodes without breaking the BST property. A right rotation, for example:

     Y              X
    / \            / \
   X   C    →     A   Y
  / \                / \
 A   B              B   C

Before the rotation, $X$ was $Y$'s left child and $B$ was $X$'s right child. After, $X$ is on top with $A$ and $Y$ below; $B$ has moved over to be $Y$'s left child. The in-order sequence $A, X, B, Y, C$ is preserved. Rotations are $O(1)$ — a handful of pointer updates.
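The in-order preservation is easy to check in a few lines — a sketch of a right rotation on throwaway dict nodes (names are illustrative):

```python
# Sanity check: a right rotation leaves the in-order sequence unchanged.
def node(val, left=None, right=None):
    return {'val': val, 'left': left, 'right': right}

def rotate_right(y):            # mirrors the diagram above
    x = y['left']
    y['left'] = x['right']      # B moves over to become Y's left child
    x['right'] = y              # X takes Y's place on top
    return x                    # caller re-links the new subtree root

def inorder(n, out):
    if n:
        inorder(n['left'], out)
        out.append(n['val'])
        inorder(n['right'], out)
    return out

before = node('Y', node('X', node('A'), node('B')), node('C'))
assert inorder(before, []) == ['A', 'X', 'B', 'Y', 'C']
after = rotate_right(before)
assert inorder(after, []) == ['A', 'X', 'B', 'Y', 'C']   # preserved
```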

8.1 Red-black trees

A red-black tree stores a color (red or black) on every node and enforces four invariants — "no two red nodes in a row", "every root-to-leaf path has the same number of black nodes", and so on. These invariants together guarantee the tree's height is at most $2 \log_2(n+1)$. Inserts and deletes fix violations by recoloring and rotating, which takes $O(\log n)$.

Red-black trees are what power Java's TreeMap, C++'s std::map, and the Linux kernel's memory management. The implementation is famously finicky — there are about a dozen cases — but you almost never have to write one yourself.

8.2 AVL trees

An AVL tree (Adelson-Velsky and Landis, 1962 — the first self-balancing BST) keeps each node's left and right subtree heights differing by at most 1. This is a stricter invariant than red-black, so AVL trees are more tightly balanced and lookups are slightly faster — but insertions do more rotations on average. You still meet AVL trees in the wild, but red-black has mostly won.

8.3 Treaps and skip lists

Two other balanced-ish structures you'll sometimes meet. A treap is a BST where each node also gets a random priority, and the tree is kept heap-ordered on the priorities as well as BST-ordered on the keys. The randomness keeps the expected height at $O(\log n)$, and rebalancing is a single rule — rotate a child up whenever its priority beats its parent's — instead of red-black's dozen cases. A skip list (Pugh, 1990) isn't a tree at all — it's a sorted linked list with extra "express lanes" at decreasing levels of granularity. Redis uses skip lists for its sorted-set type because they're simple to implement concurrently. Both give the same expected $O(\log n)$ bounds with much less code than red-black.
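A minimal treap sketch in Python, using the rotate-up-on-priority rule (all names here are illustrative):

```python
import random

# Treap sketch: BST-ordered on keys, max-heap-ordered on random priorities.
class TNode:
    def __init__(self, key):
        self.key = key
        self.pri = random.random()          # the only source of balance
        self.left = self.right = None

def rot_right(y):
    x = y.left; y.left = x.right; x.right = y; return x

def rot_left(x):
    y = x.right; x.right = y.left; y.left = x; return y

def insert(node, key):
    if node is None:
        return TNode(key)
    if key < node.key:
        node.left = insert(node.left, key)
        if node.left.pri > node.pri:        # child outranks parent: rotate up
            node = rot_right(node)
    elif key > node.key:
        node.right = insert(node.right, key)
        if node.right.pri > node.pri:
            node = rot_left(node)
    return node

def height(n):
    return -1 if n is None else 1 + max(height(n.left), height(n.right))

random.seed(1)
root = None
for k in range(1024):       # sorted inserts — the plain-BST worst case
    root = insert(root, k)
print(height(root))         # stays a small multiple of log2(1024)=10, not 1023
```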

8.4 B-trees

A B-tree is a generalization of a BST to higher fanout — each node holds multiple keys and has multiple children. A B-tree of order $t$ has between $t$ and $2t$ children per internal node, so a B-tree of $n$ elements has height $O(\log_t n)$.

Why would you want higher fanout? Because the cost model changes when your data lives on disk, not in RAM. Reading one block from an SSD takes tens of microseconds; reading one from spinning disk takes milliseconds — thousands to millions of times slower than a RAM access. So if you can pack, say, 100 keys into one disk block and do one disk read per level of the tree, a B-tree of 100-way fanout can index a billion keys in $\log_{100}(10^9) \approx 4.5$ disk reads. A red-black tree would do about 30.
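The arithmetic is worth checking yourself — tree depth is just a logarithm in the fanout:

```python
import math

n = 10**9
print(math.log(n, 100))   # ≈ 4.5 levels with 100-way fanout
print(math.log2(n))       # ≈ 29.9 levels with a binary tree
```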

This is why every production database — PostgreSQL, SQLite, MySQL (InnoDB), Oracle, LMDB, BoltDB — stores its indexes as B-trees or B+ trees. The logarithm with a huge base is exactly the shape you want when every "step" is expensive.

Taxonomy cheat sheet. In-memory ordered map: red-black or AVL tree. On-disk ordered index: B-tree or B+ tree. In-memory unordered map: hash table. On-disk unordered map: LSM tree (the write-optimized cousin, used by RocksDB and Cassandra). You rarely pick wrong if you match the structure to the storage medium.

One more note on B+ trees specifically: the variant actually used by databases. In a B+ tree, internal nodes only hold keys (for routing), and all the real data lives in the leaves, which are linked together in a doubly linked list. This makes range scans — "give me every row with id between 100 and 200" — blazingly fast: you binary-search down to the first leaf, then walk the leaf list. If you've ever wondered why a database index makes WHERE id BETWEEN ... queries fast, that's the mechanism.
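A toy sketch of that leaf-level mechanism — sorted leaf blocks walked in order after a binary search for the starting point (a real B+ tree descends internal routing nodes instead of bisecting a flat list of leaves):

```python
import bisect

# Each leaf is a sorted block of keys; list order stands in for the
# next-leaf links of a real B+ tree.
leaves = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]

def range_scan(lo, hi):
    # Find the first leaf that could contain lo...
    firsts = [leaf[0] for leaf in leaves]
    i = max(bisect.bisect_right(firsts, lo) - 1, 0)
    out = []
    while i < len(leaves):              # ...then walk the leaf chain
        for k in leaves[i]:
            if k > hi:
                return out
            if k >= lo:
                out.append(k)
        i += 1
    return out

print(range_scan(25, 65))   # [30, 40, 50, 60]
```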

Source — Red-Black Tree (rotation)
-- Red-Black Tree invariants:
-- 1. Every node is RED or BLACK
-- 2. Root is BLACK
-- 3. No two adjacent RED nodes (red node's parent is BLACK)
-- 4. Every root-to-null path has the same number of BLACK nodes

Node: val, color=RED, left=null, right=null, parent=null

rotate_left(T, x):
  y = x.right          // y becomes new subtree root
  x.right = y.left
  if y.left != null: y.left.parent = x
  y.parent = x.parent
  // splice y in where x was
  if x.parent is null:   T.root = y
  elif x == x.parent.left:  x.parent.left  = y
  else:                     x.parent.right = y
  y.left   = x
  x.parent = y

rotate_right(T, x):     // mirror image of rotate_left

insert(T, val):
  z = new Node(val, color=RED)
  bst_insert(T, z)            // standard BST insert
  insert_fixup(T, z)          // restore RB invariants

insert_fixup(T, z):
  while z.parent != null and z.parent.color == RED:
    // parent is red — need to fix
    if z.parent == z.parent.parent.left:
      y = z.parent.parent.right  // uncle
      if y.color == RED:         // Case 1: recolor
        z.parent.color        = BLACK
        y.color               = BLACK
        z.parent.parent.color = RED
        z = z.parent.parent
      else:                      // Cases 2&3: rotate
        if z == z.parent.right:
          z = z.parent; rotate_left(T, z)
        z.parent.color        = BLACK
        z.parent.parent.color = RED
        rotate_right(T, z.parent.parent)
    // mirror for right side
  T.root.color = BLACK
RED, BLACK = True, False

class Node:
    def __init__(self, val):
        self.val    = val
        self.color  = RED
        self.left   = self.right = self.parent = None

class RBTree:
    def __init__(self):
        self.NIL  = Node(0)           # sentinel leaf
        self.NIL.color = BLACK
        self.root = self.NIL

    def _rotate_left(self, x):
        y         = x.right
        x.right   = y.left
        if y.left is not self.NIL:
            y.left.parent = x
        y.parent  = x.parent
        if x.parent is None:           self.root    = y
        elif x is x.parent.left:       x.parent.left  = y
        else:                          x.parent.right = y
        y.left    = x
        x.parent  = y

    def _rotate_right(self, x):       # mirror of _rotate_left
        y         = x.left
        x.left    = y.right
        if y.right is not self.NIL:
            y.right.parent = x
        y.parent  = x.parent
        if x.parent is None:           self.root    = y
        elif x is x.parent.right:      x.parent.right = y
        else:                          x.parent.left  = y
        y.right   = x
        x.parent  = y

    def insert(self, val):
        z = Node(val)
        z.left = z.right = self.NIL
        # standard BST insert
        y, x = None, self.root
        while x is not self.NIL:
            y = x
            x = x.left if z.val < x.val else x.right
        z.parent = y
        if y is None:                  self.root    = z
        elif z.val < y.val:            y.left       = z
        else:                          y.right      = z
        self._insert_fixup(z)

    def _insert_fixup(self, z):
        while z.parent and z.parent.color == RED:
            if z.parent is z.parent.parent.left:
                y = z.parent.parent.right
                if y.color == RED:             # Case 1
                    z.parent.color         = BLACK
                    y.color                = BLACK
                    z.parent.parent.color  = RED
                    z = z.parent.parent
                else:
                    if z is z.parent.right:    # Case 2
                        z = z.parent; self._rotate_left(z)
                    z.parent.color         = BLACK  # Case 3
                    z.parent.parent.color  = RED
                    self._rotate_right(z.parent.parent)
            else:                              # mirror
                y = z.parent.parent.left
                if y.color == RED:
                    z.parent.color = BLACK; y.color = BLACK
                    z.parent.parent.color = RED; z = z.parent.parent
                else:
                    if z is z.parent.left:
                        z = z.parent; self._rotate_right(z)
                    z.parent.color = BLACK
                    z.parent.parent.color = RED
                    self._rotate_left(z.parent.parent)
        self.root.color = BLACK
const RED = true, BLACK = false;

class RBNode {
  constructor(v) { this.val=v; this.color=RED; this.left=this.right=this.parent=null; }
}

class RBTree {
  constructor() {
    this.NIL = new RBNode(0); this.NIL.color = BLACK;
    this.root = this.NIL;
  }

  _rotateLeft(x) {
    const y = x.right; x.right = y.left;
    if (y.left !== this.NIL) y.left.parent = x;
    y.parent = x.parent;
    if (!x.parent)               this.root = y;
    else if (x === x.parent.left) x.parent.left  = y;
    else                          x.parent.right = y;
    y.left = x; x.parent = y;
  }

  _rotateRight(x) {   // mirror of _rotateLeft
    const y = x.left; x.left = y.right;
    if (y.right !== this.NIL) y.right.parent = x;
    y.parent = x.parent;
    if (!x.parent)                this.root = y;
    else if (x === x.parent.right) x.parent.right = y;
    else                           x.parent.left  = y;
    y.right = x; x.parent = y;
  }

  insert(val) {
    const z = new RBNode(val); z.left = z.right = this.NIL;
    let y = null, x = this.root;
    while (x !== this.NIL) { y = x; x = z.val < x.val ? x.left : x.right; }
    z.parent = y;
    if (!y)                  this.root = z;
    else if (z.val < y.val)  y.left    = z;
    else                     y.right   = z;
    this._fixup(z);
  }

  _fixup(z) {
    while (z.parent && z.parent.color === RED) {
      if (z.parent === z.parent.parent.left) {
        const u = z.parent.parent.right;          // uncle
        if (u.color === RED) {                    // Case 1: recolor
          z.parent.color = u.color = BLACK;
          z.parent.parent.color = RED;
          z = z.parent.parent;
        } else {
          if (z === z.parent.right) { z = z.parent; this._rotateLeft(z); }  // Case 2
          z.parent.color = BLACK;                 // Case 3
          z.parent.parent.color = RED;
          this._rotateRight(z.parent.parent);
        }
      } else {                                    // mirror image
        const u = z.parent.parent.left;
        if (u.color === RED) {
          z.parent.color = u.color = BLACK;
          z.parent.parent.color = RED;
          z = z.parent.parent;
        } else {
          if (z === z.parent.left) { z = z.parent; this._rotateRight(z); }
          z.parent.color = BLACK;
          z.parent.parent.color = RED;
          this._rotateLeft(z.parent.parent);
        }
      }
    }
    this.root.color = BLACK;
  }
}
/* Skeleton: structure + left-rotate only (full insert_fixup mirrors Python) */
#include <stdlib.h>
#define RED   1
#define BLACK 0

typedef struct RBNode {
    int val, color;
    struct RBNode *left, *right, *parent;
} RBNode;

typedef struct { RBNode *root, *NIL; } RBTree;

void rb_init(RBTree *t) {
    t->NIL = calloc(1, sizeof(RBNode));
    t->NIL->color = BLACK;
    t->root = t->NIL;
}

void rotate_left(RBTree *t, RBNode *x) {
    RBNode *y = x->right;
    x->right  = y->left;
    if (y->left != t->NIL) y->left->parent = x;
    y->parent = x->parent;
    if (!x->parent)              t->root = y;
    else if (x == x->parent->left) x->parent->left  = y;
    else                           x->parent->right = y;
    y->left   = x;
    x->parent = y;
}
/* rotate_right is the mirror image */
// C++ uses std::map / std::set which are red-black trees.
// Skeleton showing structure and rotation:
#include <memory>
enum Color { RED, BLACK };

struct RBNode {
    int val; Color color = RED;
    std::shared_ptr<RBNode> left, right;
    RBNode* parent = nullptr;
    explicit RBNode(int v) : val(v) {}
};

struct RBTree {
    std::shared_ptr<RBNode> root;

    void rotateLeft(std::shared_ptr<RBNode> x) {
        auto y = x->right;
        x->right = y->left;
        if (y->left) y->left->parent = x.get();
        y->parent = x->parent;
        if (!x->parent)                 root            = y;
        else if (x.get()==x->parent->left.get()) x->parent->left  = y;
        else                            x->parent->right = y;
        y->left   = x;
        x->parent = y.get();
    }
    // rotateRight is the mirror; insert_fixup handles the 6 cases
};
// Java's TreeMap is a red-black tree. Skeleton for illustration:
public class RBTree {
    static final boolean RED = true, BLACK = false;

    static class Node {
        int val; boolean color = RED;
        Node left, right, parent;
        Node(int v) { val = v; }
    }

    Node root;

    void rotateLeft(Node x) {
        Node y = x.right;
        x.right = y.left;
        if (y.left != null) y.left.parent = x;
        y.parent = x.parent;
        if (x.parent == null)            root           = y;
        else if (x == x.parent.left)     x.parent.left  = y;
        else                             x.parent.right = y;
        y.left = x; x.parent = y;
    }

    void rotateRight(Node x) {   // mirror of rotateLeft
        Node y = x.left;
        x.left = y.right;
        if (y.right != null) y.right.parent = x;
        y.parent = x.parent;
        if (x.parent == null)            root           = y;
        else if (x == x.parent.right)    x.parent.right = y;
        else                             x.parent.left  = y;
        y.right = x; x.parent = y;
    }
    // insert() calls bstInsert() then insertFixup() — mirrors Python
}
package main

const (RED = iota; BLACK)

type RBNode struct {
    Val, Color int
    Left, Right, Parent *RBNode
}

type RBTree struct{ Root, NIL *RBNode }

func NewRBTree() *RBTree {
    nil_node := &RBNode{Color: BLACK}
    return &RBTree{Root: nil_node, NIL: nil_node}
}

func (t *RBTree) RotateLeft(x *RBNode) {
    y := x.Right
    x.Right = y.Left
    if y.Left != t.NIL { y.Left.Parent = x }
    y.Parent = x.Parent
    if x.Parent == nil {
        t.Root = y
    } else if x == x.Parent.Left {
        x.Parent.Left = y
    } else {
        x.Parent.Right = y
    }
    y.Left = x
    x.Parent = y
}

// RotateRight is the mirror image.
// Insert calls BST insert then fixup (6 cases, mirrors Python version).

9. Heaps and priority queues

A priority queue is a container where the operation you care about is "give me the smallest (or largest) element". Insert any element, peek at the minimum, remove the minimum. You don't need everything sorted — just the top one, fast.

A binary heap is the standard implementation. It's a binary tree with two properties:

  1. Shape. The tree is complete — filled level by level, left to right. This lets it be stored in a flat array with no pointers: the children of node $i$ live at $2i+1$ and $2i+2$, and the parent of $i$ is $\lfloor (i-1)/2 \rfloor$.
  2. Heap property. Every parent is $\le$ both of its children (for a min-heap). The minimum is always at index 0.

Insert and extract-min both take $O(\log n)$:

There's also a clever operation called heapify: given an arbitrary array, turn it into a heap in $O(n)$ — not $O(n \log n)$. You do it by sifting down starting from the last non-leaf and working backward. The trick is that most nodes are near the bottom and don't sift very far, so the total work is bounded by a geometric sum that collapses to $O(n)$:

$$\sum_{h=0}^{\log n} \frac{n}{2^{h+1}} \cdot h = O(n)$$

Heapify cost

$h$
The height of a node in the heap — leaves are at height 0, root at height $\log n$.
$\frac{n}{2^{h+1}}$
The approximate number of nodes at height $h$. Half the tree is at height 0, a quarter at height 1, and so on.
$h$ (multiplied)
The worst-case sift-down cost for a node at height $h$.
$\sum_{h=0}^{\log n}$
Summing over all heights from 0 to the root.

The surprise. You'd expect $O(n \log n)$ because you're processing $n$ nodes and each could sift $\log n$ levels. The saving comes from the exponential distribution of nodes: most of them are leaves and do zero work, so the total collapses to $O(n)$. This is why heapsort's build phase is linear.
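You can check the sum numerically — the total sift-down work divided by $n$ stays bounded as $n$ grows:

```python
import math

# Evaluate the heapify sum for growing n: total work / n stays below 1
# and approaches it — linear, not n log n.
for n in (2**10, 2**15, 2**20):
    total = sum(n / 2**(h + 1) * h for h in range(int(math.log2(n)) + 1))
    print(n, round(total / n, 4))   # ratio stays under 1.0
```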

9.1 Heap vs. sorted array

When does a heap beat a sorted array? When you need the minimum fast but don't care about the rest of the order. Example: "top 100 out of a billion". With a sorted array you'd have to sort everything ($O(n \log n)$) and look at the first 100. With a heap of size 100 you scan the billion and keep only the best-so-far — $O(n \log k)$ where $k = 100$. That's roughly $\log n / \log k \approx 30/7 \approx 4\times$ fewer comparisons, and — often more important — you hold only 100 elements in memory instead of a billion.
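A sketch of the bounded-heap pattern with Python's stdlib `heapq` (in practice `heapq.nlargest(k, stream)` packages the same idea):

```python
import heapq, random

def top_k(stream, k):
    heap = []                            # min-heap of the k best seen so far
    for x in stream:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:                # beats the current k-th best
            heapq.heapreplace(heap, x)   # pop min, push x: one O(log k) op
    return sorted(heap, reverse=True)

random.seed(0)
data = [random.randrange(10**6) for _ in range(10**4)]
print(top_k(data, 5) == sorted(data, reverse=True)[:5])   # True
```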

9.2 Decrease-key and the indexed heap

Dijkstra's algorithm, as textbooks state it, wants a heap that supports decrease-key: "here's a vertex already in the heap; its priority just got lower; please move it up". A plain binary heap can't do this in $O(\log n)$ without an auxiliary index from vertex to heap position. The workaround used in real code is different: just push the updated $(\text{new distance}, v)$ onto the heap and ignore the old entry when it later pops out with a stale distance. The heap can then hold up to $E$ entries instead of $V$, but $\log E \le 2 \log V$ (since $E \le V^2$), so the asymptotic cost is unchanged and the code is much simpler.
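A sketch of the lazy-deletion pattern inside a minimal Dijkstra loop (a preview of §16; the adjacency-dict format here is illustrative):

```python
import heapq

def dijkstra(adj, src):
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale entry — skip instead of decrease-key
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))   # "decrease-key" by re-pushing
    return dist

adj = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)]}
print(dijkstra(adj, 'a'))   # {'a': 0, 'b': 1, 'c': 3}
```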

9.3 Use cases

Heaps show up anywhere you repeatedly need the current best of a changing set: task schedulers (next job by priority), timer queues and event loops (next deadline), graph-search frontiers like Dijkstra's (§16), top-$k$ queries, and k-way merges of sorted streams.

Source — Min-Heap
MinHeap: data=[], size=0

parent(i)      = (i-1) / 2
left_child(i)  = 2*i + 1
right_child(i) = 2*i + 2

insert(val):
  data[size] = val; size += 1
  sift_up(size - 1)

sift_up(i):
  while i > 0 and data[i] < data[parent(i)]:
    swap data[i] with data[parent(i)]
    i = parent(i)

extract_min():
  min = data[0]
  data[0] = data[size-1]; size -= 1
  sift_down(0)
  return min

sift_down(i):
  while left_child(i) < size:
    smallest = i
    l = left_child(i); r = right_child(i)
    if l < size and data[l] < data[smallest]: smallest = l
    if r < size and data[r] < data[smallest]: smallest = r
    if smallest == i: break
    swap data[i] with data[smallest]
    i = smallest

build_heap(array):        // O(n) heapify
  data = array; size = len(array)
  for i from (size/2 - 1) down to 0:
    sift_down(i)
class MinHeap:
    def __init__(self): self._d = []

    def _par(self, i):   return (i - 1) // 2
    def _left(self, i):  return 2 * i + 1
    def _right(self, i): return 2 * i + 2

    def insert(self, val):
        self._d.append(val)
        self._sift_up(len(self._d) - 1)

    def _sift_up(self, i):
        while i > 0 and self._d[i] < self._d[self._par(i)]:
            p = self._par(i)
            self._d[i], self._d[p] = self._d[p], self._d[i]
            i = p

    def extract_min(self):
        if not self._d: raise IndexError("empty")
        self._d[0], self._d[-1] = self._d[-1], self._d[0]
        val = self._d.pop()
        self._sift_down(0)
        return val

    def _sift_down(self, i):
        n = len(self._d)
        while self._left(i) < n:
            s = i
            l, r = self._left(i), self._right(i)
            if l < n and self._d[l] < self._d[s]: s = l
            if r < n and self._d[r] < self._d[s]: s = r
            if s == i: break
            self._d[i], self._d[s] = self._d[s], self._d[i]
            i = s

    def peek(self): return self._d[0]

    @classmethod
    def from_list(cls, arr):    # O(n) build-heap
        h = cls()
        h._d = arr[:]
        for i in range(len(h._d) // 2 - 1, -1, -1):
            h._sift_down(i)
        return h

# Python's built-in heapq does all this:
# import heapq; h=[]; heapq.heappush(h, 3); heapq.heappop(h)
class MinHeap {
  #d = [];
  #par(i)   { return (i - 1) >> 1; }
  #left(i)  { return 2 * i + 1; }
  #right(i) { return 2 * i + 2; }
  #swap(a, b){ [this.#d[a], this.#d[b]] = [this.#d[b], this.#d[a]]; }

  insert(val) {
    this.#d.push(val);
    let i = this.#d.length - 1;
    while (i > 0 && this.#d[i] < this.#d[this.#par(i)]) {
      this.#swap(i, this.#par(i)); i = this.#par(i);
    }
  }

  extractMin() {
    const min = this.#d[0];
    const last = this.#d.pop();
    if (this.#d.length > 0) {   // don't re-insert `last` when the heap empties
      this.#d[0] = last;
      this.#siftDown(0);
    }
    return min;
  }

  #siftDown(i) {
    const n = this.#d.length;
    while (this.#left(i) < n) {
      let s = i, l = this.#left(i), r = this.#right(i);
      if (l < n && this.#d[l] < this.#d[s]) s = l;
      if (r < n && this.#d[r] < this.#d[s]) s = r;
      if (s === i) break;
      this.#swap(i, s); i = s;
    }
  }

  peek()    { return this.#d[0]; }
  get size(){ return this.#d.length; }
}
#include <stdlib.h>
#define MAX 1024   /* fixed capacity, no overflow check — illustration only */

typedef struct { int d[MAX]; int size; } MinHeap;

static void swap(MinHeap *h, int a, int b)
  { int t=h->d[a]; h->d[a]=h->d[b]; h->d[b]=t; }

void hp_insert(MinHeap *h, int val) {
    h->d[h->size++] = val;
    int i = h->size - 1;
    while (i > 0 && h->d[i] < h->d[(i-1)/2])
        { swap(h, i, (i-1)/2); i = (i-1)/2; }
}

int hp_extract_min(MinHeap *h) {
    int min = h->d[0];
    h->d[0] = h->d[--h->size];
    int i = 0;
    for (;;) {
        int s=i, l=2*i+1, r=2*i+2;
        if (l < h->size && h->d[l] < h->d[s]) s=l;
        if (r < h->size && h->d[r] < h->d[s]) s=r;
        if (s==i) break;
        swap(h, i, s); i=s;
    }
    return min;
}
#include <vector>
#include <algorithm>
#include <queue>

// std::priority_queue is a max-heap by default.
// For min-heap: priority_queue<int, vector<int>, greater<int>>

class MinHeap {
    std::vector<int> d;
    static int par(int i)   { return (i-1)/2; }
    static int left(int i)  { return 2*i+1; }
    static int right(int i) { return 2*i+2; }
public:
    void insert(int v) {
        d.push_back(v);
        int i = (int)d.size()-1;
        while (i>0 && d[i]<d[par(i)]) { std::swap(d[i],d[par(i)]); i=par(i); }
    }
    int extractMin() {
        int m = d[0]; d[0] = d.back(); d.pop_back();
        for (int i=0;;) {
            int s=i, l=left(i), r=right(i), n=(int)d.size();
            if (l<n && d[l]<d[s]) s=l;
            if (r<n && d[r]<d[s]) s=r;
            if (s==i) break;
            std::swap(d[i],d[s]); i=s;
        }
        return m;
    }
    int peek() const { return d[0]; }
};
import java.util.PriorityQueue;  // built-in min-heap

// Built-in usage:
//   PriorityQueue<Integer> pq = new PriorityQueue<>();
//   pq.offer(5); pq.offer(1); pq.offer(3);
//   pq.peek();   // 1
//   pq.poll();   // extracts 1

// Manual implementation:
public class MinHeap {
    private int[] d = new int[256]; private int size = 0;

    public void insert(int v) {
        d[size++] = v;
        int i = size - 1;
        while (i > 0 && d[i] < d[(i-1)/2]) { swap(i, (i-1)/2); i=(i-1)/2; }
    }

    public int extractMin() {
        int m = d[0]; d[0] = d[--size];
        for (int i=0;;) {
            int s=i, l=2*i+1, r=2*i+2;
            if (l<size && d[l]<d[s]) s=l;
            if (r<size && d[r]<d[s]) s=r;
            if (s==i) break; swap(i,s); i=s;
        }
        return m;
    }

    private void swap(int a, int b) { int t=d[a]; d[a]=d[b]; d[b]=t; }
}
package main

import "container/heap"

// Go's heap package provides heap operations on any type
// that implements heap.Interface. Example min-heap of ints:

type IntHeap []int
func (h IntHeap) Len() int           { return len(h) }
func (h IntHeap) Less(i, j int) bool { return h[i] < h[j] }  // min-heap
func (h IntHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *IntHeap) Push(x any)        { *h = append(*h, x.(int)) }
func (h *IntHeap) Pop() any {
    old := *h; n := len(old); x := old[n-1]; *h = old[:n-1]; return x
}

// Usage:
//   h := &IntHeap{5, 1, 3}
//   heap.Init(h)                  // O(n) build-heap
//   heap.Push(h, 2)               // insert
//   heap.Pop(h)                   // extract-min

10. Tries — prefix trees

A trie (pronounced "try", from retrieval) is a tree where each edge is labeled with one character and each root-to-node path spells a string. To check if a word is in the trie, you walk the tree one character at a time. Lookup costs $O(L)$ where $L$ is the length of the word — not the number of words. That's the magic.

Even better, all words sharing a prefix share a path. That's why tries are the right structure for autocomplete: once you've walked to the node for "alg", every word in that subtree starts with "alg". You just do a DFS from there.

Trie use cases in 2025: autocomplete and search-as-you-type, spell checkers and word games, and longest-prefix matching in IP routing tables.

The big downside of a plain trie is memory: one node per character per word. In practice you use a compressed trie (Patricia trie, radix tree) that merges chains of single-child nodes into a single labeled edge. Linux's fib_trie for routing and Go's net package both do this.

Source — Trie
TrieNode: children = map[char -> TrieNode], is_end = false

insert(word):
  cur = root
  for each char c in word:
    if c not in cur.children:
      cur.children[c] = new TrieNode()
    cur = cur.children[c]
  cur.is_end = true

search(word):
  cur = root
  for each char c in word:
    if c not in cur.children: return false
    cur = cur.children[c]
  return cur.is_end

starts_with(prefix):
  cur = root
  for each char c in prefix:
    if c not in cur.children: return false
    cur = cur.children[c]
  return true          // any subtree exists
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end   = False

class Trie:
    def __init__(self): self.root = TrieNode()

    def insert(self, word: str) -> None:
        cur = self.root
        for c in word:
            if c not in cur.children:
                cur.children[c] = TrieNode()
            cur = cur.children[c]
        cur.is_end = True

    def search(self, word: str) -> bool:
        cur = self.root
        for c in word:
            if c not in cur.children:
                return False
            cur = cur.children[c]
        return cur.is_end

    def starts_with(self, prefix: str) -> bool:
        cur = self.root
        for c in prefix:
            if c not in cur.children:
                return False
            cur = cur.children[c]
        return True
class TrieNode {
  constructor() { this.ch = {}; this.end = false; }
}

class Trie {
  constructor() { this.root = new TrieNode(); }

  insert(word) {
    let cur = this.root;
    for (const c of word) {
      if (!cur.ch[c]) cur.ch[c] = new TrieNode();
      cur = cur.ch[c];
    }
    cur.end = true;
  }

  search(word) {
    let cur = this.root;
    for (const c of word) {
      if (!cur.ch[c]) return false;
      cur = cur.ch[c];
    }
    return cur.end;
  }

  startsWith(prefix) {
    let cur = this.root;
    for (const c of prefix) {
      if (!cur.ch[c]) return false;
      cur = cur.ch[c];
    }
    return true;
  }
}
#include <stdlib.h>
#include <string.h>
#define ALPHA 26

typedef struct TrieNode {
    struct TrieNode *ch[ALPHA];
    int is_end;
} TrieNode;

TrieNode* tn_new() { return calloc(1, sizeof(TrieNode)); }

void trie_insert(TrieNode *root, const char *word) {
    for (; *word; word++) {
        int i = *word - 'a';
        if (!root->ch[i]) root->ch[i] = tn_new();
        root = root->ch[i];
    }
    root->is_end = 1;
}

int trie_search(TrieNode *root, const char *word) {
    for (; *word; word++) {
        int i = *word - 'a';
        if (!root->ch[i]) return 0;
        root = root->ch[i];
    }
    return root->is_end;
}

int trie_starts_with(TrieNode *root, const char *pre) {
    for (; *pre; pre++) {
        int i = *pre - 'a';
        if (!root->ch[i]) return 0;
        root = root->ch[i];
    }
    return 1;
}
#include <unordered_map>
#include <string>

struct TrieNode {
    std::unordered_map<char, TrieNode*> ch;
    bool end = false;
};

class Trie {
    TrieNode* root = new TrieNode();
public:
    void insert(const std::string& w) {
        TrieNode* cur = root;
        for (char c : w) {
            if (!cur->ch.count(c)) cur->ch[c] = new TrieNode();
            cur = cur->ch[c];
        }
        cur->end = true;
    }

    bool search(const std::string& w) const {
        TrieNode* cur = root;
        for (char c : w) {
            auto it = cur->ch.find(c);
            if (it == cur->ch.end()) return false;
            cur = it->second;
        }
        return cur->end;
    }

    bool startsWith(const std::string& p) const {
        TrieNode* cur = root;
        for (char c : p) {
            auto it = cur->ch.find(c);
            if (it == cur->ch.end()) return false;
            cur = it->second;
        }
        return true;
    }
};
import java.util.HashMap;

public class Trie {
    private static class Node {
        HashMap<Character, Node> ch = new HashMap<>();
        boolean end = false;
    }
    private final Node root = new Node();

    public void insert(String word) {
        Node cur = root;
        for (char c : word.toCharArray()) {
            cur.ch.putIfAbsent(c, new Node());
            cur = cur.ch.get(c);
        }
        cur.end = true;
    }

    public boolean search(String word) {
        Node cur = root;
        for (char c : word.toCharArray()) {
            if (!cur.ch.containsKey(c)) return false;
            cur = cur.ch.get(c);
        }
        return cur.end;
    }

    public boolean startsWith(String prefix) {
        Node cur = root;
        for (char c : prefix.toCharArray()) {
            if (!cur.ch.containsKey(c)) return false;
            cur = cur.ch.get(c);
        }
        return true;
    }
}
package main

type TrieNode struct {
    Ch  [26]*TrieNode
    End bool
}

type Trie struct{ Root TrieNode }

func (t *Trie) Insert(word string) {
    cur := &t.Root
    for _, c := range word {
        i := c - 'a'
        if cur.Ch[i] == nil { cur.Ch[i] = &TrieNode{} }
        cur = cur.Ch[i]
    }
    cur.End = true
}

func (t *Trie) Search(word string) bool {
    cur := &t.Root
    for _, c := range word {
        i := c - 'a'
        if cur.Ch[i] == nil { return false }
        cur = cur.Ch[i]
    }
    return cur.End
}

func (t *Trie) StartsWith(prefix string) bool {
    cur := &t.Root
    for _, c := range prefix {
        i := c - 'a'
        if cur.Ch[i] == nil { return false }
        cur = cur.Ch[i]
    }
    return true
}

11. Graphs

A graph is a set of vertices (also called nodes) connected by edges. If edges have a direction ($u \to v$ is different from $v \to u$), the graph is directed. If they don't, it's undirected. Edges can also carry weights (distance, cost, capacity).

Every non-trivial system in computing is a graph. The web is a graph (pages linked to pages). Your code is a graph (functions calling functions). Your dependencies are a graph. Your subway is a graph. Social networks are graphs. Your neural network is a graph. Graph algorithms are what you reach for when the problem has relationships, not just values.

Basic terminology. Vertex $v$ has degree equal to the number of edges incident to it. A path is a sequence of vertices connected by edges. A cycle is a path that starts and ends at the same vertex. A graph is connected (undirected) if there's a path between every pair of vertices. A connected graph with no cycles is a tree. See discrete math for the formal development.

11.1 Two representations

You have two standard ways to store a graph with $V$ vertices and $E$ edges: an adjacency list (each vertex keeps a list of its neighbors; $O(V + E)$ space) and an adjacency matrix (a $V \times V$ grid whose $(u, v)$ entry holds the edge's weight, or 0 for no edge; $O(V^2)$ space).

For a graph that's sparse — $E \ll V^2$, which describes nearly every real-world graph — you use an adjacency list. The subway graph has a few hundred stations and a few hundred edges; an adjacency matrix would waste 99.9% of its memory on zeros.

For a graph that's dense — $E \approx V^2$ — the matrix is competitive, and matrix operations (spectral methods, transitive closure) become natural. The quantitative crossover depends on your constant factors, but a rough rule: if you have more than $V^2 / 32$ edges, the matrix pays off.

11.2 A third representation — edge list

Sometimes you just want a flat list of $(u, v, w)$ tuples. An edge list is the simplest possible graph representation and it's what Kruskal's minimum-spanning-tree algorithm wants (sort edges by weight, walk them, add if they don't form a cycle). It's useless for "what are the neighbors of $v$?" queries but fine for algorithms that only want to iterate every edge once. Most graph-processing file formats (.mtx, .edges, SNAP datasets) are edge lists on disk, then loaded into an adjacency list in memory.
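The Kruskal recipe in the paragraph above — sort edges by weight, walk them, add each edge iff it doesn't close a cycle — can be sketched directly over an edge list. This is a minimal sketch, assuming vertices are numbered $0..n-1$; the cycle check uses a tiny union-find, which is the standard companion to Kruskal.

```python
# Kruskal's MST over a flat edge list: sort by weight, greedily keep
# any edge that joins two different components (tracked via union-find).
def kruskal(n, edges):                    # edges: list of (u, v, w)
    parent = list(range(n))
    def find(x):                          # path-halving find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst, total = [], 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:                      # different components: edge is safe
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(0, 1, 4), (0, 2, 1), (1, 2, 2), (1, 3, 5), (2, 3, 8)]
mst, total = kruskal(4, edges)
print(total)  # 8 — edges (0,2,1), (1,2,2), (1,3,5)
```

Notice that the algorithm never asks "who are $v$'s neighbors?" — it only iterates all edges once, which is exactly what the edge list is good at.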

11.3 Compressed sparse row (CSR)

For static graphs with billions of edges — think the web graph — the adjacency list is still too cache-unfriendly. The CSR format packs all neighbor arrays into one big flat array and uses a separate offset array to say "vertex $v$'s neighbors live at positions $\text{off}[v]$ through $\text{off}[v+1]$". Two arrays, zero pointer chasing, perfect cache locality. This is how production graph libraries (igraph, SNAP, GraphBLAS) store immutable graphs, and it's identical to how sparse matrices are stored in SciPy and Eigen — a graph is a sparse matrix.
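The two-arrays description above translates into a classic two-pass build — count out-degrees, prefix-sum them into offsets, then scatter neighbors. A minimal sketch, assuming directed edges over vertices $0..n-1$:

```python
# Two-pass CSR construction: off[v]..off[v+1] brackets v's neighbors
# inside one flat array nbrs — no per-vertex allocations, no pointers.
def build_csr(n, edges):                  # edges: list of (u, v), directed
    off = [0] * (n + 1)
    for u, _ in edges:                    # pass 1: out-degree histogram
        off[u + 1] += 1
    for v in range(n):                    # prefix sums turn counts into offsets
        off[v + 1] += off[v]
    nbrs = [0] * len(edges)
    cursor = off[:]                       # next free slot for each vertex
    for u, v in edges:                    # pass 2: scatter neighbors into place
        nbrs[cursor[u]] = v
        cursor[u] += 1
    return off, nbrs

off, nbrs = build_csr(4, [(0, 1), (0, 2), (2, 3), (1, 3)])
print(nbrs[off[0]:off[1]])  # [1, 2] — the neighbors of vertex 0
```

The offset array is exactly SciPy's `indptr` and the flat neighbor array is its `indices` — the same layout under different names.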

Source — Graph Representations
-- Adjacency List (sparse graphs, O(V+E) space) --
Graph: adj = map[vertex -> list of (neighbor, weight)]

add_edge(u, v, w=1):
  adj[u].append((v, w))
  adj[v].append((u, w))   // omit for directed graph

neighbors(u): return adj[u]

-- Adjacency Matrix (dense graphs, O(V^2) space) --
Graph: mat = V x V matrix, all zeros

add_edge(u, v, w=1):
  mat[u][v] = w
  mat[v][u] = w           // omit for directed graph

has_edge(u, v): return mat[u][v] != 0
from collections import defaultdict

class AdjList:
    """Adjacency list — best for sparse graphs."""
    def __init__(self):
        self.adj = defaultdict(list)

    def add_edge(self, u, v, w=1, directed=False):
        self.adj[u].append((v, w))
        if not directed:
            self.adj[v].append((u, w))

    def neighbors(self, u):
        return self.adj[u]


class AdjMatrix:
    """Adjacency matrix — best for dense graphs."""
    def __init__(self, n):
        self.n   = n
        self.mat = [[0] * n for _ in range(n)]

    def add_edge(self, u, v, w=1, directed=False):
        self.mat[u][v] = w
        if not directed:
            self.mat[v][u] = w

    def has_edge(self, u, v) -> bool:
        return self.mat[u][v] != 0

    def weight(self, u, v) -> int:
        return self.mat[u][v]
// Adjacency list (Map of arrays)
class Graph {
  constructor(directed = false) {
    this.adj      = new Map();
    this.directed = directed;
  }

  addEdge(u, v, w = 1) {
    if (!this.adj.has(u)) this.adj.set(u, []);
    if (!this.adj.has(v)) this.adj.set(v, []);
    this.adj.get(u).push({ to: v, w });
    if (!this.directed) this.adj.get(v).push({ to: u, w });
  }

  neighbors(u) { return this.adj.get(u) || []; }
}

// Adjacency matrix (2-D array)
class GraphMatrix {
  constructor(n) {
    this.n   = n;
    this.mat = Array.from({length: n}, () => new Array(n).fill(0));
  }
  addEdge(u, v, w = 1, directed = false) {
    this.mat[u][v] = w;
    if (!directed) this.mat[v][u] = w;
  }
  hasEdge(u, v) { return this.mat[u][v] !== 0; }
}
#include <stdlib.h>
#define V_MAX 512

/* Adjacency list with a fixed-size array of edge lists */
typedef struct Edge { int to, w; struct Edge *next; } Edge;
Edge *adj[V_MAX];   // adj[u] = linked list of edges from u

void add_edge(int u, int v, int w) {
    Edge *e = malloc(sizeof(Edge)); e->to=v; e->w=w; e->next=adj[u]; adj[u]=e;
    Edge *f = malloc(sizeof(Edge)); f->to=u; f->w=w; f->next=adj[v]; adj[v]=f;
}

/* Adjacency matrix */
int mat[V_MAX][V_MAX];
void mat_add_edge(int u, int v, int w)
    { mat[u][v] = mat[v][u] = w; }
int  mat_has_edge(int u, int v)
    { return mat[u][v] != 0; }
#include <vector>
#include <utility>

// Adjacency list (standard in competitive programming)
using AdjList = std::vector<std::vector<std::pair<int,int>>>;  // adj[u] = {(v,w),...}

AdjList make_graph(int n) { return AdjList(n); }

void add_edge(AdjList& g, int u, int v, int w=1, bool directed=false) {
    g[u].push_back({v, w});
    if (!directed) g[v].push_back({u, w});
}

// Adjacency matrix
using AdjMat = std::vector<std::vector<int>>;
AdjMat make_matrix(int n) { return AdjMat(n, std::vector<int>(n, 0)); }

void mat_add_edge(AdjMat& m, int u, int v, int w=1, bool directed=false) {
    m[u][v] = w;
    if (!directed) m[v][u] = w;
}
import java.util.*;

public class Graph {
    private final int n;
    private final List<List<int[]>> adj;   // adj.get(u) = list of {v, w}
    private final boolean directed;

    public Graph(int n, boolean directed) {
        this.n = n; this.directed = directed;
        adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
    }

    public void addEdge(int u, int v, int w) {
        adj.get(u).add(new int[]{v, w});
        if (!directed) adj.get(v).add(new int[]{u, w});
    }

    public List<int[]> neighbors(int u) { return adj.get(u); }

    // Adjacency matrix variant:
    int[][] mat = new int[512][512];
    public void matAddEdge(int u, int v, int w)
        { mat[u][v] = mat[v][u] = w; }
}
package main

// Adjacency list
type Graph struct {
    Adj      [][]Edge
    Directed bool
}
type Edge struct{ To, W int }

func NewGraph(n int, directed bool) *Graph {
    return &Graph{Adj: make([][]Edge, n), Directed: directed}
}

func (g *Graph) AddEdge(u, v, w int) {
    g.Adj[u] = append(g.Adj[u], Edge{v, w})
    if !g.Directed {
        g.Adj[v] = append(g.Adj[v], Edge{u, w})
    }
}

// Adjacency matrix
type GraphMat struct{ Mat [][]int }

func NewGraphMat(n int) *GraphMat {
    m := make([][]int, n)
    for i := range m { m[i] = make([]int, n) }
    return &GraphMat{m}
}

func (g *GraphMat) AddEdge(u, v, w int, directed bool) {
    g.Mat[u][v] = w
    if !directed { g.Mat[v][u] = w }
}

12. Interactive figure — the sort animator

Pick an algorithm, hit Play, and watch the array being sorted one comparison at a time. Bars are color-coded: blue is being compared, green is the pivot (quicksort only), orange is "already placed". The step counter shows how many comparisons have happened so far, and the length slider changes the number of elements.


Pick an algorithm, press Play. Step counter shows comparisons so far — compare insertion's $O(n^2)$ to quicksort's $O(n \log n)$ on the same array.

A few things worth doing with this: run insertion sort and quicksort at the same size and compare the final step counts; drag the size slider up and watch the gap between $O(n^2)$ and $O(n \log n)$ widen; and play mergesort and quicksort side by side to see how differently they move elements.

13. Sorting

Sorting is the most studied problem in computing. Forty-plus algorithms, each with a subculture. For practical purposes you need to know three: mergesort (§13.1), quicksort (§13.2), and heapsort (§13.3).

13.1 Mergesort

Divide the array in half, recursively sort each half, then merge the two sorted halves into one. The merge step is a linear walk with two indices. The whole thing is $O(n \log n)$ in every case — worst, average, and best. It's stable (equal elements keep their relative order) and uses $O(n)$ extra space for the merge buffer.

The recurrence:

$$T(n) = 2 T(n/2) + O(n)$$

Mergesort recurrence

$T(n)$
Running time on an input of size $n$.
$2 T(n/2)$
Two recursive calls, each on half the input.
$O(n)$
The merge step — a linear pass over the two sorted halves.

Master theorem. Recurrences of the form $T(n) = aT(n/b) + f(n)$ have standard solutions. For mergesort, $a = b = 2$ and $f(n) = n$, which puts it in the case where $T(n) = O(n \log n)$. You'll meet the master theorem properly in complexity.html.
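You can sanity-check the master-theorem claim numerically: unroll $T(n) = 2T(n/2) + n$ with $T(1) = 0$ and compare against $n \log_2 n$. For powers of two the recurrence solves exactly to $T(2^k) = k \cdot 2^k$, so the ratio is exactly 1. A quick sketch:

```python
# Unrolling the mergesort recurrence T(n) = 2 T(n/2) + n, T(1) = 0,
# and comparing it to n log2 n. For powers of two they match exactly.
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 0
    return 2 * T(n // 2) + n              # two half-size calls + linear merge

for n in [2**10, 2**15, 2**20]:
    print(n, T(n) / (n * math.log2(n)))   # 1.0 for each power of two
```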

13.2 Quicksort

Pick a pivot element. Partition the array so everything less than the pivot is on the left, everything greater on the right. Recursively sort each side. That's it — the sort is "in place" (no merge buffer) and in practice faster than mergesort by a constant factor because of cache locality.

The catch: if your pivots are terrible (e.g., always the smallest element, which happens if you always pick the first element of an already-sorted array), each partition is completely lopsided and the recursion becomes $n \to n-1 \to n-2 \to \dots$. Running time degenerates to $O(n^2)$.

The fix: pick the pivot randomly, or use median-of-three (the median of the first, middle, and last elements). With a random pivot, the expected running time is $O(n \log n)$ and the probability of hitting worst-case behavior is vanishingly small. Production sorts built on quicksort (GCC's std::sort, .NET's Array.Sort) use introspective sort: quicksort that falls back to heapsort if the recursion gets too deep, guaranteeing $O(n \log n)$ worst-case.

13.3 Heapsort

Build a max-heap from the array in $O(n)$. Then repeatedly extract the maximum and place it at the end of the array. After $n$ extractions, the array is sorted. $O(n \log n)$ in the worst case, no extra space beyond the array itself. Not stable.

Heapsort is cache-unfriendly (the sift-down jumps around), so in practice it's slower than quicksort and mergesort. Its main role is as the worst-case backstop for introspective sort: when you need a hard $O(n \log n)$ guarantee in-place, heapsort is it.
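The source block below covers mergesort and quicksort; here is a matching heapsort sketch of the procedure just described — $O(n)$ build-heap by sifting down from the last internal node, then $n-1$ extract-max steps that fill the array from the back.

```python
# Heapsort: build a max-heap in place, then repeatedly swap the root
# (current max) to the end and shrink the heap by one.
def heapsort(a):
    n = len(a)
    def sift_down(i, end):                # restore the max-heap property below i
        while True:
            l, r, big = 2*i + 1, 2*i + 2, i
            if l < end and a[l] > a[big]: big = l
            if r < end and a[r] > a[big]: big = r
            if big == i:
                return
            a[i], a[big] = a[big], a[i]
            i = big
    for i in range(n // 2 - 1, -1, -1):   # build-heap: O(n) total work
        sift_down(i, n)
    for end in range(n - 1, 0, -1):       # n-1 extractions, O(log n) each
        a[0], a[end] = a[end], a[0]       # max lands in its final slot
        sift_down(0, end)
    return a

print(heapsort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```

The jumpy index arithmetic in `sift_down` (`i` to `2i+1`) is exactly the cache-unfriendliness the text mentions.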

13.4 The $\Omega(n \log n)$ lower bound

You might wonder if you can do better than $n \log n$. For comparison-based sorting (where the only operation on elements is "compare two of them"), the answer is no. The proof is a beautiful one-line argument:

$$n! \le 2^d \implies d \ge \log_2(n!) = \Theta(n \log n)$$

Comparison-sort lower bound

$n!$
The number of possible orderings (permutations) of $n$ distinct items. Any sorting algorithm must be able to distinguish all of them.
$d$
The depth of the decision tree — the worst-case number of comparisons the algorithm makes.
$2^d$
The maximum number of leaves in a binary decision tree of depth $d$. Each leaf corresponds to one identified permutation.
$\log_2(n!) = \Theta(n \log n)$
Stirling's approximation: $\log_2(n!) = n \log_2 n - n \log_2 e + O(\log n) = \Theta(n \log n)$.

The argument. A sorting algorithm is a binary decision tree: at each internal node you ask "is $a_i < a_j$?" and branch. The leaves are the possible sorted outputs — there are $n!$ of them. A binary tree with $n!$ leaves has depth at least $\log_2(n!)$. So some input will require at least $\log_2(n!) \approx n \log_2 n$ comparisons. You cannot do better, period — as long as you only compare.
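You can put numbers on the bound with `math.lgamma` (which gives $\ln n!$ without computing the factorial). The ratio $\log_2(n!) / (n \log_2 n)$ climbs toward 1 as $n$ grows, just as Stirling's approximation predicts:

```python
# log2(n!) via lgamma(n+1) = ln(n!), compared against n log2 n.
import math

def log2_factorial(n):
    return math.lgamma(n + 1) / math.log(2)

for n in [100, 10_000, 1_000_000]:
    # ratio = 1 - log2(e)/log2(n) + o(1), slowly approaching 1
    print(n, log2_factorial(n) / (n * math.log2(n)))
```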

13.5 Non-comparison sorts

If you're allowed to look at the bits or digits of the keys directly, you can beat $n \log n$. Counting sort handles integer keys in a known range $[0, k)$ in $O(n + k)$. Radix sort sorts fixed-width keys digit by digit with stable counting-sort passes, $O(d(n + k))$ for $d$ digits. Bucket sort spreads roughly-uniform keys across $n$ buckets for expected $O(n)$. None of them compares elements, so the decision-tree bound from §13.4 simply doesn't apply.
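A counting-sort sketch makes the "no comparisons" point concrete — the algorithm only ever indexes by key value. Assumes integer keys in $[0, k)$; the prefix-sum variant shown here is the stable one that radix sort builds on.

```python
# Counting sort: histogram the keys, prefix-sum the counts into start
# positions, then scatter. O(n + k), stable, zero comparisons.
def counting_sort(a, k):                  # keys assumed in range [0, k)
    count = [0] * (k + 1)
    for x in a:
        count[x + 1] += 1                 # shifted histogram
    for i in range(k):
        count[i + 1] += count[i]          # count[x] = first slot for key x
    out = [0] * len(a)
    for x in a:                           # scatter in input order => stable
        out[count[x]] = x
        count[x] += 1
    return out

print(counting_sort([4, 1, 3, 1, 0], 5))  # [0, 1, 1, 3, 4]
```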

13.6 Timsort — what Python and Java actually use

The sort in your standard library is almost certainly not textbook mergesort or quicksort. It's Timsort (Tim Peters, 2002) — a hybrid that scans the input for already-sorted "runs", then merges runs together. On input that's already mostly sorted (which is a lot of real data), Timsort finishes in $O(n)$. On random input it degrades gracefully to $O(n \log n)$. It's stable, adaptive, and was designed by careful benchmarking on realistic data rather than on adversarial worst cases. Python, Java (for objects), Android, V8, and Swift all use some variant. If you ever wonder "why is Python's sort so fast on real workloads?" — that's why.
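The run-scanning idea at Timsort's core is easy to sketch. This is only the first half of the story — real Timsort also reverses strictly descending runs and enforces a minimum run length before merging — but it shows why sorted input is the best case: the whole array is one run, so the merge phase has nothing to do.

```python
# Split an array into maximal non-decreasing runs — the "run detection"
# step that lets Timsort finish in O(n) on already-sorted input.
def find_runs(a):
    runs, start = [], 0
    for i in range(1, len(a) + 1):
        if i == len(a) or a[i] < a[i - 1]:   # the run ends just before i
            runs.append(a[start:i])
            start = i
    return runs

print(find_runs([1, 2, 5, 3, 4, 0]))      # [[1, 2, 5], [3, 4], [0]]
print(len(find_runs(list(range(100)))))   # 1 — sorted input is a single run
```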

13.7 Stability

A sort is stable if elements that compare equal keep their relative order from the input. Mergesort is stable. Quicksort and heapsort are not (without extra bookkeeping). Stability matters when you sort by a secondary key and want the primary-key order preserved — e.g., "sort employees by department, keeping alphabetical order within each department". Python's sorted() is stable by default, which is one of the reasons it's so pleasant to use.
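The employees example above takes three lines to demonstrate: sort alphabetically first, then by department — stability guarantees the alphabetical order survives inside each department.

```python
# Stability in action: the second (stable) sort by department preserves
# the alphabetical order established by the first sort.
employees = [("Bob", "eng"), ("Alice", "sales"), ("Carol", "eng"), ("Dan", "sales")]
by_name = sorted(employees)                      # alphabetical
by_dept = sorted(by_name, key=lambda e: e[1])    # by department, order kept within
print(by_dept)
# [('Bob', 'eng'), ('Carol', 'eng'), ('Alice', 'sales'), ('Dan', 'sales')]
```

This "sort by secondary key, then stably by primary key" trick generalizes to any number of keys — apply the sorts from least to most significant.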

Source — Merge Sort & Quick Sort
-- Merge Sort: O(n log n) always, stable, O(n) extra space --
mergesort(a, lo, hi):
  if lo >= hi: return
  mid = (lo + hi) / 2
  mergesort(a, lo, mid)
  mergesort(a, mid+1, hi)
  merge(a, lo, mid, hi)

merge(a, lo, mid, hi):
  left  = a[lo..mid]
  right = a[mid+1..hi]
  i = j = 0; k = lo
  while i < len(left) and j < len(right):
    if left[i] <= right[j]: a[k++] = left[i++]
    else:                    a[k++] = right[j++]
  copy remaining left/right into a[k..]

-- Quick Sort: O(n log n) expected, in-place --
quicksort(a, lo, hi):
  if lo >= hi: return
  p = partition(a, lo, hi)   // random pivot
  quicksort(a, lo, p-1)
  quicksort(a, p+1, hi)

partition(a, lo, hi):
  swap a[random(lo,hi)] with a[hi]  // random pivot to end
  pivot = a[hi]; i = lo - 1
  for j = lo to hi-1:
    if a[j] <= pivot: i++; swap a[i] and a[j]
  swap a[i+1] and a[hi]
  return i+1
import random

# Merge sort — stable, O(n log n) always
def mergesort(a: list) -> list:
    if len(a) <= 1:
        return a
    mid   = len(a) // 2
    left  = mergesort(a[:mid])
    right = mergesort(a[mid:])
    return _merge(left, right)

def _merge(l, r):
    out = []
    i = j = 0
    while i < len(l) and j < len(r):
        if l[i] <= r[j]: out.append(l[i]); i += 1
        else:             out.append(r[j]); j += 1
    return out + l[i:] + r[j:]

# Quick sort — in-place, O(n log n) expected
def quicksort(a: list, lo: int = 0, hi: int | None = None) -> None:
    if hi is None: hi = len(a) - 1   # None sentinel: -1 collides with a real bound
    if lo >= hi: return
    p = _partition(a, lo, hi)
    quicksort(a, lo, p - 1)
    quicksort(a, p + 1, hi)

def _partition(a, lo, hi):
    k = random.randint(lo, hi)
    a[k], a[hi] = a[hi], a[k]      # move random pivot to end
    pivot, i = a[hi], lo - 1
    for j in range(lo, hi):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i+1], a[hi] = a[hi], a[i+1]
    return i + 1
// Merge sort
function mergeSort(a) {
  if (a.length <= 1) return a;
  const mid = a.length >> 1;
  const L = mergeSort(a.slice(0, mid));
  const R = mergeSort(a.slice(mid));
  const out = [];
  let i = 0, j = 0;
  while (i < L.length && j < R.length)
    out.push(L[i] <= R[j] ? L[i++] : R[j++]);
  return out.concat(L.slice(i), R.slice(j));
}

// Quick sort (in-place)
function quickSort(a, lo = 0, hi = a.length - 1) {
  if (lo >= hi) return;
  const p = partition(a, lo, hi);
  quickSort(a, lo, p - 1);
  quickSort(a, p + 1, hi);
}

function partition(a, lo, hi) {
  const k = lo + Math.floor(Math.random() * (hi - lo + 1));
  [a[k], a[hi]] = [a[hi], a[k]];
  const pivot = a[hi]; let i = lo - 1;
  for (let j = lo; j < hi; j++)
    if (a[j] <= pivot) { i++; [a[i], a[j]] = [a[j], a[i]]; }
  [a[i+1], a[hi]] = [a[hi], a[i+1]];
  return i + 1;
}
#include <stdlib.h>
#include <string.h>

/* Merge sort */
static void merge(int *a, int lo, int mid, int hi) {
    int n = hi - lo + 1;
    int *tmp = malloc(n * sizeof(int));
    int i=lo, j=mid+1, k=0;
    while (i<=mid && j<=hi) tmp[k++] = a[i]<=a[j] ? a[i++] : a[j++];
    while (i<=mid) tmp[k++]=a[i++];
    while (j<=hi)  tmp[k++]=a[j++];
    memcpy(a+lo, tmp, n*sizeof(int));
    free(tmp);
}
void mergesort(int *a, int lo, int hi) {
    if (lo>=hi) return;
    int mid=(lo+hi)/2;
    mergesort(a,lo,mid); mergesort(a,mid+1,hi);
    merge(a,lo,mid,hi);
}

/* Quick sort (random pivot) */
static int partition(int *a, int lo, int hi) {
    int k = lo + rand()%(hi-lo+1); int t=a[k]; a[k]=a[hi]; a[hi]=t;
    int pivot=a[hi], i=lo-1;
    for (int j=lo;j<hi;j++) if(a[j]<=pivot){i++;t=a[i];a[i]=a[j];a[j]=t;}
    t=a[i+1];a[i+1]=a[hi];a[hi]=t; return i+1;
}
void quicksort(int *a, int lo, int hi)
    { if(lo<hi){int p=partition(a,lo,hi);quicksort(a,lo,p-1);quicksort(a,p+1,hi);} }
#include <vector>
#include <algorithm>
#include <cstdlib>

// Merge sort
std::vector<int> mergeSort(std::vector<int> a) {
    if (a.size() <= 1) return a;
    auto mid = a.begin() + a.size()/2;
    auto L = mergeSort({a.begin(), mid});
    auto R = mergeSort({mid, a.end()});
    std::vector<int> out; out.reserve(a.size());
    std::merge(L.begin(),L.end(), R.begin(),R.end(), std::back_inserter(out));
    return out;
}

// Quick sort in-place (std::sort uses introsort — same idea)
void quickSort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int k = lo + rand()%(hi-lo+1); std::swap(a[k], a[hi]);
    int pivot=a[hi], i=lo-1;
    for (int j=lo; j<hi; j++) if(a[j]<=pivot) std::swap(a[++i],a[j]);
    std::swap(a[i+1],a[hi]);
    int p=i+1; quickSort(a,lo,p-1); quickSort(a,p+1,hi);
}
import java.util.*;

public class Sorts {
    // Merge sort (returns new sorted list)
    public static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a;
        int mid = a.length / 2;
        int[] L = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] R = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        return merge(L, R);
    }
    private static int[] merge(int[] l, int[] r) {
        int[] out = new int[l.length + r.length]; int i=0,j=0,k=0;
        while (i<l.length && j<r.length) out[k++] = l[i]<=r[j]?l[i++]:r[j++];
        while (i<l.length) out[k++]=l[i++]; while(j<r.length) out[k++]=r[j++];
        return out;
    }

    // Quick sort (in-place)
    public static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int p = partition(a, lo, hi);
        quickSort(a, lo, p-1); quickSort(a, p+1, hi);
    }
    private static int partition(int[] a, int lo, int hi) {
        int k = lo + (int)(Math.random()*(hi-lo+1));
        int t=a[k]; a[k]=a[hi]; a[hi]=t;
        int pivot=a[hi], i=lo-1;
        for (int j=lo;j<hi;j++) if(a[j]<=pivot){t=a[++i];a[i]=a[j];a[j]=t;}
        t=a[i+1];a[i+1]=a[hi];a[hi]=t; return i+1;
    }
}
package main

import "math/rand"

// Merge sort (returns new slice)
func MergeSort(a []int) []int {
    if len(a) <= 1 { return a }
    mid := len(a) / 2
    L, R := MergeSort(a[:mid]), MergeSort(a[mid:])
    out := make([]int, 0, len(a))
    i, j := 0, 0
    for i < len(L) && j < len(R) {
        if L[i] <= R[j] { out = append(out, L[i]); i++ } else { out = append(out, R[j]); j++ }
    }
    return append(append(out, L[i:]...), R[j:]...)
}

// Quick sort (in-place)
func QuickSort(a []int, lo, hi int) {
    if lo >= hi { return }
    p := partition(a, lo, hi)
    QuickSort(a, lo, p-1); QuickSort(a, p+1, hi)
}

func partition(a []int, lo, hi int) int {
    k := lo + rand.Intn(hi-lo+1)
    a[k], a[hi] = a[hi], a[k]
    pivot, i := a[hi], lo-1
    for j := lo; j < hi; j++ {
        if a[j] <= pivot { i++; a[i], a[j] = a[j], a[i] }
    }
    a[i+1], a[hi] = a[hi], a[i+1]
    return i + 1
}

14. Searching

14.1 Linear search

Walk the array, return the first match. $O(n)$. Works on unsorted data, and for small $n$ — a couple dozen elements or fewer — it's what you want: at that scale, constant factors beat asymptotics.

14.2 Binary search

On a sorted array, you can find any element in $O(\log n)$. Keep two pointers, $\text{lo}$ and $\text{hi}$, initially spanning the array. Look at the middle. If the target equals the middle, done. If less, recurse on the left half; if greater, on the right. Each step halves the remaining range.

Binary search looks trivial on the blackboard and is famously hard to get exactly right. The classic traps: computing the midpoint as (lo + hi) / 2, which can overflow in fixed-width integer languages (write lo + (hi - lo) / 2 instead); off-by-one confusion between the inclusive (lo <= hi) and half-open (lo < hi) loop conventions; and updating the wrong boundary so the range never shrinks and the loop spins forever.

The good news is that you don't write binary search from scratch — Python has bisect, C++ has std::lower_bound, Go has sort.Search. Use them.

Binary search isn't just for arrays. Any time you have a monotone predicate on a range — "the answer goes from false to true at some point as I increase $x$" — you can binary-search the transition. This is the key to problems like "find the smallest capacity such that all packages ship within $d$ days": you binary-search on capacity.
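The ship-capacity problem mentioned above can be sketched directly in the lower-bound pattern: `feasible(cap)` is monotone (false, then true, as capacity grows), so a lower-bound search over capacities finds the smallest workable one. This is a sketch of the standard formulation — packages ship in order, one batch per day.

```python
# "Binary search the answer": search over capacities, not array indices.
def min_capacity(weights, days):
    def feasible(cap):                    # monotone: true stays true as cap grows
        used_days, load = 1, 0
        for w in weights:
            if load + w > cap:            # today's batch is full — start a new day
                used_days += 1
                load = 0
            load += w
        return used_days <= days
    lo, hi = max(weights), sum(weights)   # the answer must lie in this range
    while lo < hi:                        # lower_bound over the predicate
        mid = lo + (hi - lo) // 2
        if feasible(mid): hi = mid
        else:             lo = mid + 1
    return lo

print(min_capacity([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 5))  # 15
```

Note the structure is exactly `lower_bound` from the source block below, with "compare to target" replaced by "evaluate the predicate" — $O(n \log S)$ total, where $S$ is the capacity range.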

14.3 Interpolation search

If your array is sorted and the keys are roughly uniformly distributed, you can do better than binary search. Instead of always checking the middle, interpolation search guesses where the target should be based on its value: if you're looking for 90 in a sorted array from 0 to 100, start your probe near the end, not at the middle. The expected running time on uniform data drops to $O(\log \log n)$ — which is staggeringly fast (for $n = 10^9$, that's about 5 comparisons). On worst-case or skewed distributions it degenerates to $O(n)$, which is why you don't see it much.
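A sketch of the probe rule: instead of the midpoint, linearly interpolate where the target *should* sit if keys are uniform. The fallback to $-1$ and the flat-run guard are the fiddly parts.

```python
# Interpolation search: probe position estimated from the target's value
# relative to a[lo]..a[hi]. O(log log n) expected on uniform keys.
def interpolation_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= target <= a[hi]:
        if a[hi] == a[lo]:                # flat run: check directly, avoid /0
            break
        # where the target "should" be under a uniformity assumption
        mid = lo + (target - a[lo]) * (hi - lo) // (a[hi] - a[lo])
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return lo if lo <= hi and a[lo] == target else -1

a = list(range(0, 200, 2))                # uniform keys: the ideal case
print(interpolation_search(a, 90))        # 45 — found in one probe
```

On this perfectly uniform array the very first probe lands on the answer; skewed data pushes the probes toward one end, which is how the $O(n)$ degeneration happens.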

14.4 Hash lookup

The third way to find things is to not search at all — hash the key and go directly to its bucket. We already covered that in §6. When you don't need ordering, this is almost always the winner.

Source — Binary Search
binary_search(a, target):
  lo = 0; hi = len(a) - 1
  while lo <= hi:
    mid = lo + (hi - lo) / 2    // avoids overflow vs (lo+hi)/2
    if a[mid] == target: return mid
    if a[mid] < target:  lo = mid + 1
    else:                hi = mid - 1
  return -1   // not found

-- Lower bound (first index where a[i] >= target) --
lower_bound(a, target):
  lo = 0; hi = len(a)
  while lo < hi:
    mid = lo + (hi - lo) / 2
    if a[mid] < target: lo = mid + 1
    else:               hi = mid
  return lo
def binary_search(a: list, target: int) -> int:
    """Return index of target in sorted array a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = lo + (hi - lo) // 2   # avoids integer overflow
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def lower_bound(a: list, target: int) -> int:
    """First index i such that a[i] >= target (insertion point)."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = lo + (hi - lo) // 2
        if a[mid] < target: lo = mid + 1
        else:                hi = mid
    return lo

# Python standard library:
# import bisect
# bisect.bisect_left(a, target)  == lower_bound
# bisect.bisect_right(a, target) == upper_bound
function binarySearch(a, target) {
  let lo = 0, hi = a.length - 1;
  while (lo <= hi) {
    const mid = lo + ((hi - lo) >> 1);
    if      (a[mid] === target) return mid;
    else if (a[mid] <  target)  lo = mid + 1;
    else                        hi = mid - 1;
  }
  return -1;
}

// Lower bound: first index where a[i] >= target
function lowerBound(a, target) {
  let lo = 0, hi = a.length;
  while (lo < hi) {
    const mid = lo + ((hi - lo) >> 1);
    if (a[mid] < target) lo = mid + 1;
    else                  hi = mid;
  }
  return lo;
}

// Upper bound: first index where a[i] > target
function upperBound(a, target) {
  let lo = 0, hi = a.length;
  while (lo < hi) {
    const mid = lo + ((hi - lo) >> 1);
    if (a[mid] <= target) lo = mid + 1;
    else                   hi = mid;
  }
  return lo;
}
#include <stddef.h>

/* Returns index of target in sorted array a[0..n-1], or -1 */
int binary_search(const int *a, int n, int target) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == target) return mid;
        if (a[mid]  < target) lo = mid + 1;
        else                  hi = mid - 1;
    }
    return -1;
}

/* Lower bound: first i where a[i] >= target */
int lower_bound(const int *a, int n, int target) {
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < target) lo = mid + 1;
        else                 hi = mid;
    }
    return lo;
}
/* C stdlib: bsearch() for exact match */
#include <algorithm>
#include <vector>

// Exact match
int binarySearch(const std::vector<int>& a, int target) {
    int lo = 0, hi = (int)a.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if      (a[mid] == target) return mid;
        else if (a[mid] <  target) lo = mid + 1;
        else                       hi = mid - 1;
    }
    return -1;
}

// Standard library equivalents (prefer these in real code):
// std::lower_bound(a.begin(), a.end(), target)  — O(log n)
// std::upper_bound(a.begin(), a.end(), target)  — O(log n)
// std::binary_search(a.begin(), a.end(), target) — returns bool
public class BinarySearch {
    public static int search(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids overflow
            if      (a[mid] == target) return mid;
            else if (a[mid]  < target) lo = mid + 1;
            else                       hi = mid - 1;
        }
        return -1;
    }

    // Lower bound: first index where a[i] >= target
    public static int lowerBound(int[] a, int target) {
        int lo = 0, hi = a.length;
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] < target) lo = mid + 1;
            else                 hi = mid;
        }
        return lo;
    }
    // Java standard: Arrays.binarySearch(a, target)
}
package main

// Exact match — returns index or -1
func BinarySearch(a []int, target int) int {
    lo, hi := 0, len(a)-1
    for lo <= hi {
        mid := lo + (hi-lo)/2
        if a[mid] == target   { return mid }
        if a[mid] < target    { lo = mid + 1 } else { hi = mid - 1 }
    }
    return -1
}

// Lower bound — first i where a[i] >= target
func LowerBound(a []int, target int) int {
    lo, hi := 0, len(a)
    for lo < hi {
        mid := lo + (hi-lo)/2
        if a[mid] < target { lo = mid + 1 } else { hi = mid }
    }
    return lo
}

// Go standard library: sort.SearchInts(a, target) == lower_bound

15. BFS and DFS — the two core graph traversals

Breadth-first and depth-first search are the two ways to systematically visit every vertex in a graph. They differ only in the container they use for the frontier: BFS uses a queue, DFS uses a stack (or equivalently, recursion). Both are $O(V + E)$.

15.1 BFS — breadth-first search

Start with the source vertex in a queue, marked visited. Repeatedly: pop the front of the queue, then for each unvisited neighbor, mark it visited and enqueue it. (Marking on enqueue rather than on dequeue is what keeps a vertex from entering the queue twice.) This visits vertices in increasing distance from the source, layer by layer. That's why BFS on an unweighted graph gives you shortest paths "for free" — the first time you reach a vertex is along a shortest path.

BFS use cases:

  - Shortest paths and distances in unweighted graphs.
  - Level-order (layer-by-layer) traversal.
  - Finding connected components.
  - Testing bipartiteness: 2-color by layer parity; an edge within a layer means an odd cycle.

15.2 DFS — depth-first search

Start with the source, recurse into the first unvisited neighbor, recurse into its first unvisited neighbor, and so on — you plunge as deep as possible before backtracking. Implemented with recursion (or an explicit stack).

DFS use cases:

  - Topological sort of a DAG (reverse order of finish times).
  - Cycle detection (a back edge to a vertex still on the recursion stack).
  - Connected components, and strongly connected components via Tarjan's or Kosaraju's algorithm.
  - Backtracking search in general — mazes, puzzles, constraint problems.

When to pick which. Use BFS when you need shortest paths, layers, or when the graph is very deep (DFS might blow the stack). Use DFS when you need to reason about subtrees, when memory is tight (BFS's queue can get huge), or when the problem has a recursive structure that matches.
Source — BFS & DFS
-- BFS (queue, O(V+E)) --
bfs(graph, start):
  visited = {start}
  dist    = {start: 0}
  queue   = [start]
  while queue not empty:
    u = dequeue(queue)
    for each neighbor v of u:
      if v not in visited:
        visited.add(v)
        dist[v] = dist[u] + 1
        enqueue(queue, v)
  return dist

-- DFS recursive (stack via call stack, O(V+E)) --
dfs_recursive(graph, u, visited):
  visited.add(u)
  for each neighbor v of u:
    if v not in visited:
      dfs_recursive(graph, v, visited)

-- DFS iterative (explicit stack) --
dfs_iterative(graph, start):
  visited = {}
  stack   = [start]
  while stack not empty:
    u = pop(stack)
    if u in visited: continue
    visited.add(u)
    for each neighbor v of u:
      if v not in visited: push(stack, v)
from collections import deque

# BFS — returns shortest distances from start
def bfs(graph: dict, start) -> dict:
    dist = {start: 0}
    q    = deque([start])
    while q:
        u = q.popleft()
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# DFS recursive
def dfs_recursive(graph: dict, u, visited: set = None):
    if visited is None: visited = set()
    visited.add(u)
    for v in graph.get(u, []):
        if v not in visited:
            dfs_recursive(graph, v, visited)
    return visited

# DFS iterative
def dfs_iterative(graph: dict, start) -> set:
    visited = set()
    stack   = [start]
    while stack:
        u = stack.pop()
        if u in visited: continue
        visited.add(u)
        for v in graph.get(u, []):
            if v not in visited:
                stack.append(v)
    return visited
// BFS — returns Map of vertex -> shortest distance
function bfs(graph, start) {
  const dist = new Map([[start, 0]]);
  const q    = [start];
  let head   = 0;
  while (head < q.length) {
    const u = q[head++];
    for (const v of (graph.get(u) || [])) {
      if (!dist.has(v)) {
        dist.set(v, dist.get(u) + 1);
        q.push(v);
      }
    }
  }
  return dist;
}

// DFS recursive
function dfsRecursive(graph, u, visited = new Set()) {
  visited.add(u);
  for (const v of (graph.get(u) || []))
    if (!visited.has(v)) dfsRecursive(graph, v, visited);
  return visited;
}

// DFS iterative
function dfsIterative(graph, start) {
  const visited = new Set(), stack = [start];
  while (stack.length) {
    const u = stack.pop();
    if (visited.has(u)) continue;
    visited.add(u);
    for (const v of (graph.get(u) || []))
      if (!visited.has(v)) stack.push(v);
  }
  return visited;
}
#include <string.h>
#include <stdlib.h>
#define V 512

int adj[V][V], visited[V];

/* BFS — dist[] filled with shortest distances from src */
void bfs(int src, int n, int dist[]) {
    memset(visited, 0, sizeof(visited));
    memset(dist, -1, n * sizeof(int));
    int q[V], head=0, tail=0;
    visited[src]=1; dist[src]=0; q[tail++]=src;
    while (head < tail) {
        int u = q[head++];
        for (int v=0; v<n; v++)
            if (adj[u][v] && !visited[v])
                { visited[v]=1; dist[v]=dist[u]+1; q[tail++]=v; }
    }
}

/* DFS recursive */
void dfs(int u, int n) {
    visited[u] = 1;
    for (int v=0; v<n; v++)
        if (adj[u][v] && !visited[v]) dfs(v, n);
}

/* DFS iterative */
void dfs_iter(int src, int n) {
    memset(visited, 0, sizeof(visited));
    int stk[V], top=0; stk[top++]=src;
    while (top) {
        int u=stk[--top];
        if (visited[u]) continue;
        visited[u]=1;
        for (int v=n-1;v>=0;v--)
            if (adj[u][v] && !visited[v]) stk[top++]=v;
    }
}
#include <vector>
#include <queue>
#include <stack>
#include <unordered_map>

using Graph = std::vector<std::vector<int>>;

// BFS — returns dist vector (-1 = unreachable)
std::vector<int> bfs(const Graph& g, int src) {
    int n = g.size();
    std::vector<int> dist(n, -1);
    dist[src] = 0;
    std::queue<int> q; q.push(src);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : g[u]) if (dist[v]==-1) { dist[v]=dist[u]+1; q.push(v); }
    }
    return dist;
}

// DFS recursive
void dfsRec(const Graph& g, int u, std::vector<bool>& vis) {
    vis[u] = true;
    for (int v : g[u]) if (!vis[v]) dfsRec(g, v, vis);
}

// DFS iterative
void dfsIter(const Graph& g, int src) {
    std::vector<bool> vis(g.size(), false);
    std::stack<int> stk; stk.push(src);
    while (!stk.empty()) {
        int u = stk.top(); stk.pop();
        if (vis[u]) continue; vis[u]=true;
        for (int v : g[u]) if (!vis[v]) stk.push(v);
    }
}
import java.util.*;

public class Traversal {
    // BFS — returns dist array
    public static int[] bfs(List<List<Integer>> g, int src) {
        int n = g.size();
        int[] dist = new int[n]; Arrays.fill(dist, -1);
        dist[src] = 0;
        Queue<Integer> q = new LinkedList<>(); q.offer(src);
        while (!q.isEmpty()) {
            int u = q.poll();
            for (int v : g.get(u))
                if (dist[v] == -1) { dist[v] = dist[u]+1; q.offer(v); }
        }
        return dist;
    }

    // DFS recursive
    public static void dfsRec(List<List<Integer>> g, int u, boolean[] vis) {
        vis[u] = true;
        for (int v : g.get(u)) if (!vis[v]) dfsRec(g, v, vis);
    }

    // DFS iterative
    public static void dfsIter(List<List<Integer>> g, int src) {
        boolean[] vis = new boolean[g.size()];
        Deque<Integer> stk = new ArrayDeque<>(); stk.push(src);
        while (!stk.isEmpty()) {
            int u = stk.pop();
            if (vis[u]) continue; vis[u] = true;
            for (int v : g.get(u)) if (!vis[v]) stk.push(v);
        }
    }
}
package main

// BFS — returns slice of shortest distances (-1 = unreachable)
func BFS(adj [][]int, src int) []int {
    n := len(adj)
    dist := make([]int, n)
    for i := range dist { dist[i] = -1 }
    dist[src] = 0
    q := []int{src}
    for len(q) > 0 {
        u := q[0]; q = q[1:]
        for _, v := range adj[u] {
            if dist[v] == -1 { dist[v] = dist[u] + 1; q = append(q, v) }
        }
    }
    return dist
}

// DFS recursive
func DFSRec(adj [][]int, u int, vis []bool) {
    vis[u] = true
    for _, v := range adj[u] { if !vis[v] { DFSRec(adj, v, vis) } }
}

// DFS iterative
func DFSIter(adj [][]int, src int) []bool {
    vis := make([]bool, len(adj))
    stk := []int{src}
    for len(stk) > 0 {
        u := stk[len(stk)-1]; stk = stk[:len(stk)-1]
        if vis[u] { continue }
        vis[u] = true
        for _, v := range adj[u] { if !vis[v] { stk = append(stk, v) } }
    }
    return vis
}

16. Dijkstra — single-source shortest path

BFS gives you shortest paths when every edge has weight 1. When edges have different weights — think road networks, network latency, fare maps — you need Dijkstra's algorithm. Dijkstra computes the shortest distance from a source $s$ to every other vertex in a graph with non-negative edge weights.

The recipe:

  1. Initialize $\text{dist}[s] = 0$ and $\text{dist}[v] = \infty$ for every other vertex. Put $(0, s)$ in a min-heap.
  2. Extract the vertex $u$ with smallest current distance from the heap. If $u$ was already finalized, skip it (see §9.2 on stale entries).
  3. For each neighbor $v$ of $u$ with edge weight $w(u, v)$, check if $\text{dist}[u] + w(u, v) < \text{dist}[v]$. If so, update $\text{dist}[v]$ and push $(\text{dist}[v], v)$ onto the heap. This step is called relaxation.
  4. Mark $u$ as finalized — its distance is now settled forever.
  5. Repeat until the heap is empty (or until the target vertex has been finalized, if you only want one).

The update rule in one formula:

$$\text{dist}[v] \leftarrow \min\big(\text{dist}[v],\; \text{dist}[u] + w(u, v)\big)$$

Edge relaxation

$\text{dist}[v]$
The current best-known distance from the source $s$ to vertex $v$. Starts at $\infty$; shrinks as we find shorter paths.
$\text{dist}[u]$
The final distance to $u$ at the moment we extract it from the heap — Dijkstra's invariant says this is already correct.
$w(u, v)$
The weight of the edge from $u$ to $v$. Must be non-negative.
$\leftarrow$
Assignment. Set the left side to the new value.
$\min$
Take the smaller of the two — we only update if the new path is shorter.

Why it works (sketch). The invariant is: when you extract a vertex $u$ from the min-heap, $\text{dist}[u]$ is already its true shortest distance. Suppose not — then there's a shorter path to $u$ that passes through some other unextracted vertex $x$ with $\text{dist}[x] < \text{dist}[u]$. But then $x$ would have been extracted before $u$, contradiction. The non-negative edge weights are essential: they ensure that extending a path never shortens it, so a distance can't improve after its vertex is extracted.

With a binary heap, Dijkstra runs in $O((V + E) \log V)$. With a Fibonacci heap you can get $O(E + V \log V)$, which is asymptotically better but the constants are bad — nobody uses it. In practice, a binary heap is the right choice.

Negative weights? Dijkstra fails on graphs with negative-weight edges — you can't assume extracted distances are final. Use the Bellman-Ford algorithm instead. It's slower ($O(VE)$) but handles negative weights and even detects negative cycles.
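A minimal sketch of Bellman-Ford, assuming the same `{u: [(v, w), ...]}` adjacency format as the Python Dijkstra in this section (the function name and return convention are illustrative):

```python
def bellman_ford(graph: dict, src):
    """Single-source shortest paths, negative weights allowed.
    graph: {u: [(v, w), ...]}. Returns (dist, has_negative_cycle). O(VE)."""
    vertices = set(graph)
    for u in graph:
        for v, _ in graph[u]:
            vertices.add(v)
    INF = float('inf')
    dist = {v: INF for v in vertices}
    dist[src] = 0
    edges = [(u, v, w) for u in graph for v, w in graph[u]]
    # Relax every edge V-1 times; after pass k, every shortest path
    # that uses at most k edges has been found.
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # If a V-th pass could still improve anything, there is a negative cycle.
    has_neg_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_neg_cycle
```

Note the contrast with Dijkstra: no heap, no finalization order, just blind repeated relaxation — which is exactly why negative edges are safe here.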

Dijkstra is the core of every routing system you use: GPS driving directions, network routers (OSPF is Dijkstra under the hood), flight search, game AI pathfinding. The variant A* adds a heuristic estimate of remaining distance to guide the search toward the target faster — same skeleton, different priority key. You can think of it this way: Dijkstra expands vertices in order of $\text{dist}[u]$; A* expands them in order of $\text{dist}[u] + h(u, t)$, where $h$ is a lower bound on the remaining distance to the target $t$. As long as $h$ never overestimates (an admissible heuristic), A* still returns the true shortest path but usually explores far fewer vertices.
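To make the "same skeleton, different priority key" point concrete, here is a hedged sketch of A* on a unit-cost grid with a Manhattan-distance heuristic (the grid format and function name are illustrative, not from the text):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 2D grid of 0 (free) / 1 (wall); 4-directional moves.
    Priority = g (cost so far) + h (Manhattan lower bound to goal).
    Returns path length, or -1 if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    g = {start: 0}
    heap = [(h(start), start)]          # (f = g + h, vertex)
    while heap:
        f, u = heapq.heappop(heap)
        if u == goal:
            return g[u]
        if f > g[u] + h(u):
            continue                    # stale entry, same trick as Dijkstra
        r, c = u
        for v in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            vr, vc = v
            if 0 <= vr < rows and 0 <= vc < cols and grid[vr][vc] == 0:
                if g[u] + 1 < g.get(v, float('inf')):
                    g[v] = g[u] + 1
                    heapq.heappush(heap, (g[v] + h(v), v))
    return -1
```

Setting `h = lambda p: 0` turns this back into plain Dijkstra, which is the cleanest way to see that the two share a skeleton.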

Source — Dijkstra
dijkstra(graph, src):
  dist[src] = 0; dist[v] = INF for all other v
  heap = min-heap; push (0, src)

  while heap not empty:
    (d, u) = pop_min(heap)
    if d > dist[u]: continue    // stale entry, skip

    for each (v, w) in neighbors(u):
      new_d = dist[u] + w
      if new_d < dist[v]:
        dist[v] = new_d
        push (new_d, v) onto heap

  return dist
import heapq

def dijkstra(graph: dict, src) -> dict:
    """
    graph: {u: [(v, weight), ...]}
    Returns: {vertex: shortest distance from src}
    """
    dist = {src: 0}
    heap = [(0, src)]   # (distance, vertex)

    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                     # stale entry
        for v, w in graph.get(u, []):
            new_d = d + w
            if new_d < dist.get(v, float('inf')):
                dist[v] = new_d
                heapq.heappush(heap, (new_d, v))
    return dist

# Example:
# graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2)], 'C': []}
# dijkstra(graph, 'A')  # {'A': 0, 'B': 1, 'C': 3}
// MinHeap from §9 (or use a priority queue library)
function dijkstra(graph, src) {
  const dist = new Map();
  dist.set(src, 0);
  // Simple array-based min-heap via sort (fine for small graphs)
  const heap = [[0, src]];   // [dist, vertex]

  while (heap.length) {
    heap.sort((a, b) => a[0] - b[0]);
    const [d, u] = heap.shift();
    if (d > (dist.get(u) ?? Infinity)) continue;  // stale

    for (const [v, w] of (graph.get(u) || [])) {
      const nd = d + w;
      if (nd < (dist.get(v) ?? Infinity)) {
        dist.set(v, nd);
        heap.push([nd, v]);
      }
    }
  }
  return dist;
}
#include <stdlib.h>
#include <string.h>
#define V_MAX 512
#define INF   0x7fffffff

/* Adjacency list (edge list per vertex) */
typedef struct { int to, w; } Edge;
Edge adj[V_MAX][V_MAX]; int deg[V_MAX];

/* Simple O(V^2) Dijkstra (adequate for dense/small graphs) */
int dist[V_MAX]; int vis[V_MAX];

void dijkstra(int src, int n) {
    for (int i=0;i<n;i++) dist[i]=INF, vis[i]=0;
    dist[src]=0;
    for (int iter=0;iter<n;iter++) {
        int u=-1;
        for (int v=0;v<n;v++) if (!vis[v] && (u==-1||dist[v]<dist[u])) u=v;
        if (u==-1 || dist[u]==INF) break;
        vis[u]=1;
        for (int i=0;i<deg[u];i++) {
            int v=adj[u][i].to, w=adj[u][i].w;
            if (dist[u]+w < dist[v]) dist[v]=dist[u]+w;
        }
    }
}
#include <vector>
#include <queue>
#include <climits>

using Graph = std::vector<std::vector<std::pair<int,int>>>;  // adj[u]={(v,w)}

std::vector<int> dijkstra(const Graph& g, int src) {
    int n = g.size();
    std::vector<int> dist(n, INT_MAX);
    dist[src] = 0;
    std::priority_queue<std::pair<int,int>,
                        std::vector<std::pair<int,int>>,
                        std::greater<>> pq;  // min-heap
    pq.push({0, src});

    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;           // stale entry
        for (auto [v, w] : g[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
        }
    }
    return dist;
}
import java.util.*;

public class Dijkstra {
    public static int[] dijkstra(List<List<int[]>> g, int src) {
        // g.get(u) = list of {v, weight}
        int n = g.size();
        int[] dist = new int[n]; Arrays.fill(dist, Integer.MAX_VALUE);
        dist[src] = 0;
        // PriorityQueue: min-heap on distance
        PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt(a -> a[0]));
        pq.offer(new int[]{0, src});

        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int d = top[0], u = top[1];
            if (d > dist[u]) continue;  // stale
            for (int[] e : g.get(u)) {
                int v = e[0], w = e[1];
                if (dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;
                    pq.offer(new int[]{dist[v], v});
                }
            }
        }
        return dist;
    }
}
package main

import "container/heap"

type Edge struct{ To, W int }

// Min-heap of (dist, vertex)
type Item struct{ d, v int }
type PQ []Item
func (h PQ) Len() int            { return len(h) }
func (h PQ) Less(i, j int) bool  { return h[i].d < h[j].d }
func (h PQ) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *PQ) Push(x any)         { *h = append(*h, x.(Item)) }
func (h *PQ) Pop() any           { old:=*h; n:=len(old); x:=old[n-1]; *h=old[:n-1]; return x }

func Dijkstra(adj [][]Edge, src int) []int {
    const inf = 1 << 30
    n := len(adj)
    dist := make([]int, n)
    for i := range dist { dist[i] = inf }
    dist[src] = 0
    pq := &PQ{{0, src}}
    heap.Init(pq)
    for pq.Len() > 0 {
        cur := heap.Pop(pq).(Item)
        if cur.d > dist[cur.v] { continue }  // stale
        for _, e := range adj[cur.v] {
            if nd := dist[cur.v] + e.W; nd < dist[e.To] {
                dist[e.To] = nd
                heap.Push(pq, Item{nd, e.To})
            }
        }
    }
    return dist
}

17. Dynamic programming

Dynamic programming (DP) is the technique that cracks problems where a naive recursion would repeat work exponentially. The trick is to memoize: store the answer to each subproblem the first time you see it, and on subsequent encounters just look it up. When every subproblem is solved exactly once, the total work is (number of subproblems) × (cost per subproblem).

Two conditions make a problem a DP problem:

  1. Optimal substructure. The optimal solution to the full problem can be expressed in terms of optimal solutions to smaller subproblems.
  2. Overlapping subproblems. The same subproblem shows up many times in the naive recursion — otherwise memoization doesn't help.

17.1 The canonical example — Fibonacci

The naive recursion is:

$$F(n) = F(n-1) + F(n-2), \quad F(0) = 0, F(1) = 1$$

Fibonacci recurrence

$F(n)$
The $n$-th Fibonacci number. $F(0) = 0$, $F(1) = 1$, $F(2) = 1$, $F(3) = 2$, $F(4) = 3$, $F(5) = 5$, ...
$F(n-1), F(n-2)$
The previous two Fibonacci numbers — each is a subproblem.
$F(0) = 0, F(1) = 1$
Base cases. Every recursive definition needs them or it doesn't terminate.

The naive cost. Implemented as plain recursion, $F(n)$ makes $\Theta(\varphi^n)$ calls where $\varphi = (1+\sqrt{5})/2 \approx 1.618$ is the golden ratio. $F(40)$ takes seconds; $F(60)$ takes hours. The problem is that $F(n-2)$ is computed, thrown away, and then computed again as a subtree of $F(n-1)$ — overlapping subproblems in action.

Memoize it (store $F(k)$ for each $k$ as you go), and the running time drops from $O(\varphi^n)$ to $O(n)$ — you compute each of $n$ values exactly once. This is DP in its simplest form.
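The difference is easy to see in code. A small sketch of all three versions — naive, memoized via `functools.lru_cache`, and bottom-up:

```python
from functools import lru_cache

def fib_naive(n):
    """Theta(phi^n) calls: the same subproblems are recomputed over and over."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Same recurrence plus a cache: each F(k) is computed once. O(n)."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_bottom_up(n):
    """Iterative version: O(n) time, O(1) space — no table, no recursion."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

`fib_memo(90)` returns instantly; `fib_naive(90)` would not finish in your lifetime.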

17.2 Longest common subsequence (LCS)

Given two strings $X$ and $Y$, find the longest sequence of characters that appears in both (in the same order, not necessarily contiguous). Used by diff, spell-checkers, and DNA alignment.

Let $L(i, j)$ be the length of the LCS of the first $i$ characters of $X$ and the first $j$ characters of $Y$. The recurrence:

$$L(i, j) = \begin{cases} 0 & i = 0 \text{ or } j = 0 \\ L(i-1, j-1) + 1 & X_i = Y_j \\ \max\big(L(i-1, j),\, L(i, j-1)\big) & X_i \ne Y_j \end{cases}$$

LCS recurrence

$L(i, j)$
Length of the LCS of $X[1..i]$ and $Y[1..j]$.
$X_i, Y_j$
The $i$-th character of $X$ and $j$-th of $Y$.
Case 1 ($i=0$ or $j=0$)
An empty prefix has LCS of length 0 with anything.
Case 2 ($X_i = Y_j$)
We can extend an LCS of the shorter prefixes by one character.
Case 3 ($X_i \ne Y_j$)
The last characters don't match, so the LCS must drop one of them. We try both options and keep the longer.

Filling the table. You build an $(|X|+1) \times (|Y|+1)$ table and fill it in, either top-down with memoization or bottom-up with nested loops. Time and space are both $O(|X| \cdot |Y|)$. Space can be reduced to $O(\min(|X|, |Y|))$ by keeping only two rows at a time if you only need the length, not the actual subsequence.
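The recurrence translates directly into a bottom-up table. A sketch, including reconstruction of one actual subsequence by walking the finished table (the walk stands in for explicit back-pointers):

```python
def lcs(x: str, y: str) -> str:
    """Longest common subsequence of x and y; returns one actual LCS.
    O(|x|*|y|) time and space."""
    n, m = len(x), len(y)
    # L[i][j] = length of LCS of x[:i] and y[:j]
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1      # match: extend
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])

    # Walk back from L[n][m], retracing which case produced each cell
    out = []
    i, j = n, m
    while i and j:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i -= 1
            j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))
```

The length alone needs only two rows of the table; the reconstruction walk is what forces you to keep the whole thing.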

17.3 0/1 Knapsack

You have $n$ items, each with a weight $w_i$ and value $v_i$. You have a knapsack with capacity $W$. Pick a subset of items to put in the knapsack (each item either in or out — the "0/1") to maximize total value, subject to total weight $\le W$.

Let $K(i, c)$ be the max value achievable using the first $i$ items with remaining capacity $c$. Then:

$$K(i, c) = \max\big(K(i-1, c),\; v_i + K(i-1, c - w_i)\big)$$

Knapsack recurrence

$K(i, c)$
Max total value using only the first $i$ items, with capacity budget $c$.
$K(i-1, c)$
Skip item $i$. Value is whatever the first $i-1$ items can achieve with the same capacity.
$v_i + K(i-1, c - w_i)$
Take item $i$. We get value $v_i$ plus whatever the first $i-1$ items can achieve with the reduced capacity. Only valid if $w_i \le c$.
$\max$
Pick whichever choice gives more value.

Pseudo-polynomial. This runs in $O(nW)$, which looks polynomial but isn't — $W$ can be exponentially large in the number of bits used to write it. Knapsack is NP-hard, and the DP only wins when $W$ is small. See complexity.html for the distinction between "polynomial in input size" and "polynomial in input magnitude".
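A sketch of the bottom-up version, using the standard one-row trick: since row $i$ of the table only reads row $i-1$, a single capacity array suffices, provided capacity is swept downward so each item is taken at most once:

```python
def knapsack(weights: list, values: list, W: int) -> int:
    """0/1 knapsack: max value with total weight <= W.
    O(nW) time, O(W) space."""
    # best[c] = max value achievable with capacity c (items seen so far)
    best = [0] * (W + 1)
    for w, v in zip(weights, values):
        # Sweep capacity downward: best[c - w] still refers to the
        # previous item's row, so item (w, v) is used at most once.
        for c in range(W, w - 1, -1):
            best[c] = max(best[c], v + best[c - w])
    return best[W]
```

Sweeping capacity upward instead would let an item be reused, which solves the different (unbounded) knapsack problem.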

17.4 Edit distance (Levenshtein)

How many single-character insertions, deletions, or substitutions does it take to turn $X$ into $Y$? This is the measure your spell-checker uses. Let $E(i, j)$ be the edit distance between $X[1..i]$ and $Y[1..j]$. Then:

$$E(i, j) = \begin{cases} j & i = 0 \\ i & j = 0 \\ E(i-1, j-1) & X_i = Y_j \\ 1 + \min\big(E(i-1, j),\, E(i, j-1),\, E(i-1, j-1)\big) & X_i \ne Y_j \end{cases}$$

Edit distance

$E(i, j)$
Minimum edits to turn $X[1..i]$ into $Y[1..j]$.
$E(i-1, j) + 1$
Delete $X_i$.
$E(i, j-1) + 1$
Insert $Y_j$.
$E(i-1, j-1) + 1$
Substitute $X_i$ for $Y_j$.
$X_i = Y_j$
Free match — no edit needed.

Used everywhere. Spell check ("did you mean"), approximate string matching, bioinformatics alignment (Needleman-Wunsch is this DP), and the "diff" between two files. $O(|X| \cdot |Y|)$ time, same space (reducible).

17.5 Top-down vs. bottom-up

There are two ways to actually implement any DP. Top-down is the straight translation of the recurrence into a recursive function plus a cache:

cache = {}
def F(state):
    if state in cache: return cache[state]
    ...compute from smaller states...
    cache[state] = answer
    return answer

Bottom-up is the iterative version: figure out the order in which states depend on each other, make a table, and fill it in that order. Both compute the same thing in the same Big-O time. Top-down is easier to write (the code looks like the math) but pays function-call overhead and can overflow the recursion stack. Bottom-up is usually faster in practice and easier to optimize for space — you can often keep only the last row or two of the table instead of the whole thing, dropping space from $O(n^2)$ to $O(n)$.

Python's functools.lru_cache turns any pure recursive function into a top-down DP for free. For quick prototyping it's unbeatable.

Source — Edit Distance (DP)
edit_distance(x, y):
  n = len(x); m = len(y)
  E = (n+1) x (m+1) table

  // Base cases
  for i in 0..n: E[i][0] = i   // delete all of x
  for j in 0..m: E[0][j] = j   // insert all of y

  // Fill table bottom-up
  for i in 1..n:
    for j in 1..m:
      if x[i] == y[j]:
        E[i][j] = E[i-1][j-1]       // free match
      else:
        E[i][j] = 1 + min(
          E[i-1][j],                 // delete x[i]
          E[i][j-1],                 // insert y[j]
          E[i-1][j-1]                // substitute
        )

  return E[n][m]
def edit_distance(x: str, y: str) -> int:
    """Levenshtein distance: min single-char edits to turn x into y."""
    n, m = len(x), len(y)
    # E[i][j] = edit distance between x[:i] and y[:j]
    E = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1): E[i][0] = i   # delete all of x prefix
    for j in range(m + 1): E[0][j] = j   # insert all of y prefix

    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                E[i][j] = E[i-1][j-1]    # match, free
            else:
                E[i][j] = 1 + min(
                    E[i-1][j],            # delete x[i-1]
                    E[i][j-1],            # insert y[j-1]
                    E[i-1][j-1],          # substitute
                )
    return E[n][m]

# Space-optimized: keep only two rows
def edit_distance_opt(x: str, y: str) -> int:
    m = len(y)
    prev = list(range(m + 1))
    for c in x:
        cur = [prev[0] + 1] + [0] * m
        for j, d in enumerate(y):
            cur[j+1] = prev[j] if c == d else 1 + min(prev[j+1], cur[j], prev[j])
        prev = cur
    return prev[m]
function editDistance(x, y) {
  const n = x.length, m = y.length;
  // Build (n+1) x (m+1) table
  const E = Array.from({length: n+1}, (_, i) =>
    Array.from({length: m+1}, (_, j) => i === 0 ? j : (j === 0 ? i : 0))
  );
  for (let i = 1; i <= n; i++) {
    for (let j = 1; j <= m; j++) {
      if (x[i-1] === y[j-1]) {
        E[i][j] = E[i-1][j-1];
      } else {
        E[i][j] = 1 + Math.min(E[i-1][j], E[i][j-1], E[i-1][j-1]);
      }
    }
  }
  return E[n][m];
}

// console.log(editDistance("kitten", "sitting")); // 3
#include <stdio.h>
#include <string.h>
#define MAXN 512

int E[MAXN][MAXN];

static int min3(int a, int b, int c)
    { return a<b ? (a<c?a:c) : (b<c?b:c); }

int edit_distance(const char *x, const char *y) {
    int n = strlen(x), m = strlen(y);
    for (int i=0;i<=n;i++) E[i][0]=i;
    for (int j=0;j<=m;j++) E[0][j]=j;
    for (int i=1;i<=n;i++)
        for (int j=1;j<=m;j++) {
            if (x[i-1]==y[j-1]) E[i][j]=E[i-1][j-1];
            else E[i][j]=1+min3(E[i-1][j], E[i][j-1], E[i-1][j-1]);
        }
    return E[n][m];
}
#include <string>
#include <vector>
#include <algorithm>
#include <numeric>   // std::iota

int editDistance(const std::string& x, const std::string& y) {
    int n = x.size(), m = y.size();
    std::vector<std::vector<int>> E(n+1, std::vector<int>(m+1));
    for (int i=0;i<=n;i++) E[i][0]=i;
    for (int j=0;j<=m;j++) E[0][j]=j;
    for (int i=1;i<=n;i++)
        for (int j=1;j<=m;j++) {
            if (x[i-1]==y[j-1]) E[i][j]=E[i-1][j-1];
            else E[i][j]=1+std::min({E[i-1][j], E[i][j-1], E[i-1][j-1]});
        }
    return E[n][m];
}

// Space-optimized to O(m):
int editDistOpt(const std::string& x, const std::string& y) {
    int m = y.size();
    std::vector<int> prev(m+1), cur(m+1);
    std::iota(prev.begin(), prev.end(), 0);
    for (int i=1; i<=(int)x.size(); i++) {
        cur[0]=i;
        for (int j=1;j<=m;j++)
            cur[j] = x[i-1]==y[j-1] ? prev[j-1] :
                     1+std::min({prev[j], cur[j-1], prev[j-1]});
        std::swap(prev, cur);
    }
    return prev[m];
}
public class EditDistance {
    public static int compute(String x, String y) {
        int n = x.length(), m = y.length();
        int[][] E = new int[n+1][m+1];
        for (int i=0;i<=n;i++) E[i][0]=i;
        for (int j=0;j<=m;j++) E[0][j]=j;
        for (int i=1;i<=n;i++) {
            for (int j=1;j<=m;j++) {
                if (x.charAt(i-1)==y.charAt(j-1)) E[i][j]=E[i-1][j-1];
                else E[i][j]=1+Math.min(E[i-1][j-1], Math.min(E[i-1][j],E[i][j-1]));
            }
        }
        return E[n][m];
    }
    // compute("kitten","sitting") == 3
}
package main

func EditDistance(x, y string) int {
    n, m := len(x), len(y)
    // Allocate (n+1) x (m+1) table
    E := make([][]int, n+1)
    for i := range E { E[i] = make([]int, m+1) }
    for i := 0; i <= n; i++ { E[i][0] = i }
    for j := 0; j <= m; j++ { E[0][j] = j }

    for i := 1; i <= n; i++ {
        for j := 1; j <= m; j++ {
            if x[i-1] == y[j-1] {
                E[i][j] = E[i-1][j-1]
            } else {
                E[i][j] = 1 + min3(E[i-1][j], E[i][j-1], E[i-1][j-1])
            }
        }
    }
    return E[n][m]
}

func min3(a, b, c int) int {
    if a < b { if a < c { return a }; return c }
    if b < c { return b }; return c
}

17.6 The DP recipe

To solve a new DP problem:

  1. Define the state. What variables describe a subproblem? This is the hardest step. Getting it wrong makes the recurrence impossible.
  2. Write the recurrence. Express $\text{answer}(\text{state})$ in terms of smaller states. Include base cases.
  3. Order the computation. Either top-down with memoization (natural but uses recursion stack) or bottom-up with a table (iterative, often faster).
  4. Reconstruct the solution. If you need the actual choices, not just the objective value, store back-pointers as you fill the table.

18. Greedy algorithms

A greedy algorithm is one that, at each step, makes the locally best choice and never looks back. Sometimes this works perfectly. Sometimes it's catastrophically wrong. The hard part is knowing which.

When it works, the proof usually goes through a matroid or exchange argument: "if there's a better solution than the greedy one, then I can swap the greedy choice in without making things worse, contradiction." You'll see the formal version in theoretical CS.

18.1 Activity selection — a worked example

You have $n$ activities, each with a start and end time $(s_i, e_i)$. You can only do one activity at a time (no overlap). Maximize the number of activities you do.

Greedy rule: sort by end time, then greedily pick each activity whose start time is $\ge$ the end time of the last activity you picked.

Why it works. Suppose $G$ is the greedy solution and $O$ is an optimal solution with $|O| > |G|$, both listed in order of end time. Greedy picked the activity with the earliest end time of all, so $G$'s first activity ends no later than $O$'s first; swapping it into $O$ keeps $O$ feasible and the same size. Repeating the argument, the first $|G|$ activities of $O$ can be replaced by $G$'s, each ending no later than its counterpart. But then $O$'s $(|G|+1)$-th activity starts after $G$'s last one ends, so the greedy scan would have selected it too, contradicting the assumption that $G$ stopped at $|G|$ activities.

Counter-intuition: sorting by start time or duration seems reasonable, but both give wrong answers. A short activity in the middle can block two longer back-to-back ones. Try the set $\{(0, 4), (3, 5), (4, 8)\}$: duration-first picks $(3, 5)$ and then can't fit anything else (total 1), while end-time-first picks $(0, 4)$ and $(4, 8)$ for a total of 2. Sorting by start time fails on $\{(0, 10), (1, 2), (3, 4)\}$: it commits to $(0, 10)$ and gets 1, while end-time-first picks $(1, 2)$ and $(3, 4)$ for 2.

18.2 Huffman coding

Given character frequencies, build a binary code (prefix code) that minimizes total encoded length. Huffman's algorithm: repeatedly take the two least-frequent nodes from a min-heap, combine them into a parent whose frequency is their sum, push the parent back. After $n-1$ merges, you have the optimal prefix tree. $O(n \log n)$ via the heap. This is literally inside zip and gzip.
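A sketch of the merge loop with Python's `heapq`. The integer tiebreaker is an implementation detail added here so the heap never has to compare two partial trees directly when frequencies tie:

```python
import heapq

def huffman_codes(freq: dict) -> dict:
    """Build optimal prefix codes from {symbol: frequency}.
    O(n log n) via a min-heap of partial trees.
    A tree is either a symbol (leaf) or a (left, right) tuple."""
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:
        # Edge case: one symbol still needs a one-bit code
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        # Take the two least-frequent trees, merge under a new parent
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")   # left edge = bit 0
            walk(tree[1], prefix + "1")   # right edge = bit 1
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes
```

Because every symbol sits at a leaf, no code is a prefix of another, which is what makes the encoded bit stream decodable without separators.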

18.3 When greedy fails

The classic failure is coin change. With denominations $\{1, 3, 4\}$ and target 6, the greedy rule "always take the largest coin that fits" gives $4 + 1 + 1$ (three coins), but the optimum is $3 + 3$ (two). 0/1 knapsack fails the same way: grabbing the best value-per-weight item first can lock out a better combination. Both need DP.

Rule of thumb. If you can't articulate why the greedy choice is safe (i.e., can't be undone later without loss), don't trust it. When in doubt, reach for DP.
Source — Activity Selection
activity_selection(activities):
  // Each activity: (start, end)
  sort activities by end time (ascending)

  selected = []
  last_end = -infinity

  for each activity (s, e) in sorted order:
    if s >= last_end:          // no overlap with last selected
      selected.append((s, e))
      last_end = e

  return selected
def activity_selection(activities: list[tuple]) -> list[tuple]:
    """
    activities: list of (start, end) pairs.
    Returns max subset with no two activities overlapping.
    O(n log n) for the sort, O(n) scan.
    """
    sorted_acts = sorted(activities, key=lambda a: a[1])  # sort by end time
    selected = []
    last_end = float('-inf')

    for start, end in sorted_acts:
        if start >= last_end:       # no overlap
            selected.append((start, end))
            last_end = end

    return selected

# Example:
# acts = [(1,4),(3,5),(0,6),(5,7),(3,9),(5,9),(6,10),(8,11),(8,12),(2,14),(12,16)]
# activity_selection(acts)  # max non-overlapping set
function activitySelection(activities) {
  // activities: [{start, end}, ...]
  const sorted = [...activities].sort((a, b) => a.end - b.end);
  const selected = [];
  let lastEnd = -Infinity;

  for (const act of sorted) {
    if (act.start >= lastEnd) {
      selected.push(act);
      lastEnd = act.end;
    }
  }
  return selected;
}

// const acts = [{start:1,end:4},{start:3,end:5},{start:0,end:6},{start:5,end:7}];
// activitySelection(acts);
#include <stdlib.h>
#include <stdio.h>

typedef struct { int start, end; } Activity;

int cmp_end(const void *a, const void *b) {
    return ((Activity*)a)->end - ((Activity*)b)->end;
}

int activity_selection(Activity *acts, int n, Activity *out) {
    qsort(acts, n, sizeof(Activity), cmp_end);  // sort by end time
    int count   = 0;
    int last_end = -1;
    for (int i = 0; i < n; i++) {
        if (acts[i].start >= last_end) {
            out[count++] = acts[i];
            last_end = acts[i].end;
        }
    }
    return count;   // number of selected activities
}
#include <vector>
#include <algorithm>
#include <climits>   // INT_MIN

struct Activity { int start, end; };

std::vector<Activity> activitySelection(std::vector<Activity> acts) {
    // Sort by end time — greedy key
    std::sort(acts.begin(), acts.end(),
              [](const Activity& a, const Activity& b){ return a.end < b.end; });

    std::vector<Activity> selected;
    int lastEnd = INT_MIN;

    for (const auto& a : acts) {
        if (a.start >= lastEnd) {
            selected.push_back(a);
            lastEnd = a.end;
        }
    }
    return selected;
}
import java.util.*;

public class ActivitySelection {
    record Activity(int start, int end) {}

    public static List<Activity> select(List<Activity> acts) {
        // Sort by end time
        List<Activity> sorted = new ArrayList<>(acts);
        sorted.sort(Comparator.comparingInt(Activity::end));

        List<Activity> selected = new ArrayList<>();
        int lastEnd = Integer.MIN_VALUE;

        for (Activity a : sorted) {
            if (a.start() >= lastEnd) {
                selected.add(a);
                lastEnd = a.end();
            }
        }
        return selected;
    }
}
package main

import "sort"

type Activity struct{ Start, End int }

func ActivitySelection(acts []Activity) []Activity {
    // Sort by end time — the greedy key
    sorted := make([]Activity, len(acts))
    copy(sorted, acts)
    sort.Slice(sorted, func(i, j int) bool {
        return sorted[i].End < sorted[j].End
    })

    var selected []Activity
    lastEnd := -1 << 30

    for _, a := range sorted {
        if a.Start >= lastEnd {
            selected = append(selected, a)
            lastEnd = a.End
        }
    }
    return selected
}
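Before trusting the greedy, it's worth doing what the rule of thumb above suggests: check it. A brute-force cross-check on small random inputs is cheap reassurance — a sketch in which `brute_force_max` is our own helper for testing only, and `activity_selection` restates the Python version above:

```python
import random
from itertools import combinations

def activity_selection(activities):
    # Greedy by earliest end time (same as the Python version above).
    selected, last_end = [], float('-inf')
    for start, end in sorted(activities, key=lambda a: a[1]):
        if start >= last_end:
            selected.append((start, end))
            last_end = end
    return selected

def brute_force_max(activities):
    # Try subsets from largest to smallest; return the first feasible size.
    for r in range(len(activities), 0, -1):
        for combo in combinations(activities, r):
            ordered = sorted(combo, key=lambda a: a[1])
            if all(ordered[k][1] <= ordered[k + 1][0] for k in range(r - 1)):
                return r
    return 0

random.seed(1)
for _ in range(200):
    acts = []
    for _ in range(random.randrange(7)):
        s = random.randrange(10)
        acts.append((s, s + random.randrange(1, 5)))
    assert len(activity_selection(acts)) == brute_force_max(acts)
print("greedy matches brute force on 200 random inputs")
```

If the greedy key were wrong (say, sorting by start time instead), the assertion would fire on some small input; with end-time sorting it never does.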

19. Divide and conquer

Divide and conquer is a design pattern: split the problem into smaller subproblems, solve each recursively, combine the results. Mergesort is the textbook example. You've also seen it in binary search (divide in half, recurse into one half — a degenerate case with a single subproblem).

Other divide-and-conquer classics: quicksort (partition, then recurse into both sides), Karatsuba integer multiplication, the closest pair of points, Strassen's matrix multiplication, and counting inversions (the worked example below).

The recurrence for balanced divide-and-conquer is the one we saw for mergesort: $T(n) = aT(n/b) + f(n)$, solved by the master theorem. You plug in $a$ (number of subproblems), $b$ (shrinkage factor), and $f(n)$ (combine cost), and read off the answer. Again, see complexity.html for the full statement.
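To make the recipe concrete, here is mergesort plugged in: $a = 2$ subproblems, shrinkage $b = 2$, combine cost $f(n) = \Theta(n)$, so the recurrence is $T(n) = 2T(n/2) + \Theta(n)$. The comparison term is $n^{\log_b a} = n^{\log_2 2} = n$, which matches $f(n)$, so the balanced case applies and $T(n) = \Theta(n \log n)$. Binary search, with $a = 1$, $b = 2$, $f(n) = \Theta(1)$, lands in the same case and gives $T(n) = \Theta(\log n)$.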

Source — Counting Inversions
-- Count inversions: pairs (i,j) where i < j but a[i] > a[j] --
-- Runs in O(n log n) via a modified merge sort --

count_inversions(a):
  if len(a) <= 1: return a, 0
  mid = len(a) / 2
  left,  inv_l = count_inversions(a[:mid])
  right, inv_r = count_inversions(a[mid:])
  merged, inv_m = merge_count(left, right)
  return merged, inv_l + inv_r + inv_m

merge_count(left, right):
  result = []; inversions = 0
  i = j = 0
  while i < len(left) and j < len(right):
    if left[i] <= right[j]:
      result.append(left[i]); i++
    else:
      // All remaining left[i..] are > right[j]: cross inversions
      inversions += len(left) - i
      result.append(right[j]); j++
  result += left[i:] + right[j:]
  return result, inversions
def count_inversions(a: list) -> tuple[list, int]:
    """
    Returns (sorted_a, inversion_count).
    An inversion is a pair (i,j) with i < j and a[i] > a[j].
    O(n log n) via divide-and-conquer merge sort.
    """
    if len(a) <= 1:
        return a[:], 0
    mid = len(a) // 2
    left,  inv_l = count_inversions(a[:mid])
    right, inv_r = count_inversions(a[mid:])
    merged, inv_m = _merge_count(left, right)
    return merged, inv_l + inv_r + inv_m

def _merge_count(left, right):
    result = []
    inversions = 0
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            # left[i..] are all > right[j] — each is an inversion with right[j]
            inversions += len(left) - i
            result.append(right[j]); j += 1
    result.extend(left[i:]); result.extend(right[j:])
    return result, inversions

# count_inversions([2, 4, 1, 3, 5])  # ([1,2,3,4,5], 3)
function countInversions(a) {
  if (a.length <= 1) return { sorted: [...a], inv: 0 };
  const mid = a.length >> 1;
  const L = countInversions(a.slice(0, mid));
  const R = countInversions(a.slice(mid));
  const { sorted, inv } = mergeCount(L.sorted, R.sorted);
  return { sorted, inv: L.inv + R.inv + inv };
}

function mergeCount(left, right) {
  const sorted = []; let inv = 0, i = 0, j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) {
      sorted.push(left[i++]);
    } else {
      inv += left.length - i;  // all left[i..] > right[j]
      sorted.push(right[j++]);
    }
  }
  return { sorted: sorted.concat(left.slice(i), right.slice(j)), inv };
}

// countInversions([2,4,1,3,5]).inv  // 3
#include <stdlib.h>
#include <string.h>

long long merge_count(int *a, int *tmp, int lo, int mid, int hi) {
    long long inv = 0;
    int i=lo, j=mid, k=lo;
    while (i < mid && j <= hi) {
        if (a[i] <= a[j]) tmp[k++]=a[i++];
        else { inv += mid - i; tmp[k++]=a[j++]; }
    }
    while (i < mid)  tmp[k++]=a[i++];
    while (j <= hi)  tmp[k++]=a[j++];
    memcpy(a+lo, tmp+lo, (hi-lo+1)*sizeof(int));
    return inv;
}

long long sort_count(int *a, int *tmp, int lo, int hi) {
    if (lo >= hi) return 0;
    int mid = lo + (hi-lo)/2;
    long long inv = sort_count(a,tmp,lo,mid)
                  + sort_count(a,tmp,mid+1,hi)
                  + merge_count(a,tmp,lo,mid+1,hi);
    return inv;
}
#include <vector>
#include <utility>   // std::pair

std::pair<std::vector<int>, long long>
countInversions(std::vector<int> a) {
    if (a.size() <= 1) return {a, 0};
    int mid = a.size() / 2;
    auto [L, il] = countInversions({a.begin(), a.begin()+mid});
    auto [R, ir] = countInversions({a.begin()+mid, a.end()});

    std::vector<int> merged; long long inv = il + ir;
    int i=0, j=0;
    while (i < (int)L.size() && j < (int)R.size()) {
        if (L[i] <= R[j]) merged.push_back(L[i++]);
        else { inv += L.size()-i; merged.push_back(R[j++]); }
    }
    merged.insert(merged.end(), L.begin()+i, L.end());
    merged.insert(merged.end(), R.begin()+j, R.end());
    return {merged, inv};
}
public class CountInversions {
    public static long count(int[] a) { return sort(a, 0, a.length-1); }

    private static long sort(int[] a, int lo, int hi) {
        if (lo >= hi) return 0;
        int mid = lo + (hi-lo)/2;
        long inv = sort(a,lo,mid) + sort(a,mid+1,hi);
        return inv + merge(a, lo, mid, hi);
    }

    private static long merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = new int[hi-lo+1];
        int i=lo, j=mid+1, k=0; long inv=0;
        while (i<=mid && j<=hi) {
            if (a[i]<=a[j]) tmp[k++]=a[i++];
            else { inv += mid-i+1; tmp[k++]=a[j++]; }
        }
        while (i<=mid) tmp[k++]=a[i++];
        while (j<=hi)  tmp[k++]=a[j++];
        System.arraycopy(tmp,0,a,lo,tmp.length);
        return inv;
    }
}
package main

func CountInversions(a []int) ([]int, int64) {
    if len(a) <= 1 { return append([]int{}, a...), 0 }
    mid := len(a) / 2
    L, il := CountInversions(a[:mid])
    R, ir := CountInversions(a[mid:])
    merged, im := mergeCount(L, R)
    return merged, il + ir + im
}

func mergeCount(L, R []int) ([]int, int64) {
    out := make([]int, 0, len(L)+len(R))
    var inv int64
    i, j := 0, 0
    for i < len(L) && j < len(R) {
        if L[i] <= R[j] {
            out = append(out, L[i]); i++
        } else {
            inv += int64(len(L) - i)   // all L[i..] > R[j]
            out = append(out, R[j]); j++
        }
    }
    out = append(out, L[i:]...)
    out = append(out, R[j:]...)
    return out, inv
}
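The same testing trick applies here: check the $O(n \log n)$ counter against the $O(n^2)$ definition on random inputs. A small harness of our own, in which `count_inversions` restates the Python version above and `brute` is a throwaway name:

```python
import random
from itertools import combinations

def count_inversions(a):
    # O(n log n) divide-and-conquer version (same as the Python code above).
    if len(a) <= 1:
        return a[:], 0
    mid = len(a) // 2
    left, inv_l = count_inversions(a[:mid])
    right, inv_r = count_inversions(a[mid:])
    merged, inv = [], 0
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            inv += len(left) - i   # all left[i..] > right[j]
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, inv_l + inv_r + inv

def brute(a):
    # Direct O(n^2) count straight from the definition.
    return sum(a[i] > a[j] for i, j in combinations(range(len(a)), 2))

random.seed(0)
for _ in range(100):
    a = [random.randrange(20) for _ in range(random.randrange(12))]
    assert count_inversions(a)[1] == brute(a)
print("all counts agree")
```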

20. A practical guide — how to pick the right tool

The single best question to ask when facing a new problem is: "What operations do I need, and how often?" The answer narrows the data structure instantly.

| If you need... | Use... | Because |
|---|---|---|
| Index by position, mostly read | Array / dynamic array | $O(1)$ index, cache-friendly |
| Membership check, any key type | Hash set | $O(1)$ expected |
| Key → value lookup | Hash table | $O(1)$ expected |
| Key → value with sorted iteration | Balanced BST | $O(\log n)$ ops + in-order traversal |
| Key → value on disk | B-tree | High fanout → few disk reads |
| "Give me the smallest/largest" | Heap / priority queue | $O(\log n)$ insert / extract-min |
| Top-$k$ in a stream | Size-$k$ min-heap | $O(n \log k)$ total |
| Prefix matching, autocomplete | Trie | $O(L)$ in word length, shares prefixes |
| LIFO (undo, recursion, DFS) | Stack | $O(1)$ push / pop |
| FIFO (work queue, BFS) | Queue (ring buffer) | $O(1)$ enqueue / dequeue |
| Shortest path, unweighted | BFS | $O(V+E)$, queue-based |
| Shortest path, weighted, non-negative | Dijkstra with heap | $O((V+E) \log V)$ |
| Shortest path with negative edges | Bellman-Ford | $O(VE)$, detects neg cycles |
| Optimal subproblem + overlap | Dynamic programming | Memoize each subproblem once |
| Local greedy choice safe | Greedy | One-pass, no backtracking |
| Problem splits cleanly in two | Divide and conquer | Recursive halving, master theorem |
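One table row made concrete: top-$k$ of a stream via a size-$k$ min-heap, sketched with Python's standard heapq (the name `top_k` is ours):

```python
import heapq

def top_k(stream, k):
    """Largest k items of an iterable in O(n log k) time, O(k) space."""
    heap = []                      # min-heap holding the k largest seen so far
    for x in stream:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:          # beats the smallest of the current top-k
            heapq.heapreplace(heap, x)
    return sorted(heap, reverse=True)

print(top_k(iter([5, 1, 9, 3, 7, 2, 8]), 3))   # [9, 8, 7]
```

Keeping the heap at size $k$ means each of the $n$ pushes costs $O(\log k)$ rather than $O(\log n)$, which is the whole point of the table row.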

A few meta-heuristics:

20.1 Cross-links to other tracks

21. Further reading

The canon

  • Cormen, Leiserson, Rivest, Stein — Introduction to Algorithms (CLRS), 4th ed., 2022. The reference. Every major data structure and algorithm with proofs. Long but unambiguous.
  • Sedgewick & Wayne — Algorithms, 4th ed., 2011. Friendlier than CLRS, with Java code. Excellent visualizations of sorting and graph algorithms. The companion algs4.cs.princeton.edu site is gold.
  • Kleinberg & Tardos — Algorithm Design, 2005. The best book on designing algorithms (as opposed to reciting them). The dynamic programming and network flow chapters are especially strong.
  • Skiena — The Algorithm Design Manual, 3rd ed., 2020. The second half is a catalog of named problems with known algorithms — invaluable as a lookup.
  • Knuth — The Art of Computer Programming, volumes 1–4A. The definitive deep dive. Not a first book. Read when you want to know exactly why something is as fast as it is.
  • Demaine — MIT 6.006 / 6.046 lecture videos (MIT OCW). The best free course for this material; Erik Demaine is a remarkable teacher.

Practice and reference

  • Bentley — Programming Pearls, 2nd ed., 2000. Short, funny essays on the craft of picking the right algorithm. The binary-search essay is a classic.
  • LeetCode, Codeforces, Kattis — if you learn best by solving problems. LeetCode's top-150 is a reasonable working set for most of this page's content.
NEXT UP
→ Complexity

We skipped the formal Big-O theorems and the master theorem to keep this page moving. Next up is the rigorous treatment: asymptotic notation, amortized analysis, P vs. NP, and why some problems genuinely cannot be solved fast.