Intermediate Chapter 03 of 13 · Algorithm Design and Analysis

Advanced Data Structures I

Two data structures that achieve what balanced BSTs cannot. Van Emde Boas trees beat $O(\log n)$ per operation with $O(\log\log u)$ — exponentially faster for universe-bounded integers. Skip lists match BST performance using randomisation alone, with no rotations and no balancing invariants.

Part I — van Emde Boas Trees

Goal: maintain a dynamic set of integers from $\{0, 1, \ldots, u-1\}$ and support Insert, Delete, and Successor in $O(\log\log u)$ time. Why is this remarkable? A balanced BST gives $O(\log n)$, which for $n = u^{O(1)}$ is $O(\log u)$. Van Emde Boas gives $O(\log\log u)$ — exponentially faster.

Application — Network routing. IP addresses are integers in $\{0, \ldots, 2^{32}-1\}$, so $u = 2^{32}$. $\log u = 32$, but $\log\log u = \log 32 = 5$. A router doing billions of lookups per second gains enormously from roughly 5 steps instead of 32 per operation. The vEB idea underlies many packet-processing systems.

Where Does $O(\log\log u)$ Come From?

Two recurrences give this bound:

$$T(\log u) = T\!\left(\frac{\log u}{2}\right) + O(1) \implies T(\log u) = O(\log\log u)$$
$$T(u) = T(\sqrt{u}) + O(1) \implies T(u) = O(\log\log u)$$

The second is what vEB achieves: each operation recurses on a structure of size $\sqrt{u}$ exactly once.
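The bound can be checked directly: starting from $u$ and repeatedly taking square roots halves the exponent each time, so the base case is reached after about $\log_2\log_2 u$ steps. A minimal sketch:

```python
import math

def sqrt_steps(u):
    """Count how many times sqrt can be applied before reaching the base case u <= 2."""
    steps = 0
    while u > 2:
        u = math.isqrt(u)
        steps += 1
    return steps

# For u = 2^32 (IPv4 address space): the exponent halves 32 -> 16 -> 8 -> 4 -> 2 -> 1,
# so log2(log2(u)) = log2(32) = 5 recursion levels.
print(sqrt_steps(2**32))   # -> 5
```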

Building vEB: Four Steps

Step 1 — Bit vector. Store $V[x] = 1$ iff $x$ is in the set. Insert/Delete: $O(1)$. Successor: $O(u)$ (scan for next 1-bit). Too slow.

Step 2 — Split into clusters. Divide $\{0,\ldots,u-1\}$ into $\sqrt{u}$ clusters of size $\sqrt{u}$. For element $x$: $\text{high}(x) = \lfloor x/\sqrt{u}\rfloor$ (cluster index), $\text{low}(x) = x \bmod \sqrt{u}$ (position within cluster). Successor drops to $O(\sqrt{u})$. Still too slow.
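The decomposition is plain integer division and remainder. For $u = 16$ (so $\sqrt{u} = 4$), element 9 sits at position 1 of cluster 2, and $\text{index}$ inverts the split exactly:

```python
import math

def high(x, u):  return x // math.isqrt(u)        # cluster index
def low(x, u):   return x % math.isqrt(u)         # position within cluster
def index(i, j, u): return i * math.isqrt(u) + j  # inverse mapping

u = 16
assert (high(9, u), low(9, u)) == (2, 1)          # 9 = 2*4 + 1
# high/low and index are inverses over the whole universe
for x in range(u):
    assert index(high(x, u), low(x, u), u) == x
```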

Step 3 — Recurse on clusters + summary. Make each cluster a recursive vEB structure of size $\sqrt{u}$. Add a summary vEB of size $\sqrt{u}$ that tracks which clusters are non-empty. Recurrence: $T(u) = 2T(\sqrt{u}) + O(1) \Rightarrow T(u) = O(\log u)$. Better, but still not $O(\log\log u)$.

Step 4 — Store min/max; don’t recurse min. The key insight: if each operation recurses into at most one sub-structure, the recurrence becomes $T(u) = T(\sqrt{u}) + O(1) \Rightarrow O(\log\log u)$. Store min and max explicitly at each level. The min is not stored recursively in the cluster — this breaks the two-recursion chain.

vEB structure $V$ for universe $\{0,\ldots,u-1\}$, $u = 2^{2^k}$:
— $V.\min$, $V.\max$: extremes stored directly (not in any cluster)
— $V.\text{cluster}[i]$: vEB of size $\sqrt{u}$ for each $0 \leq i < \sqrt{u}$
— $V.\text{summary}$: vEB of size $\sqrt{u}$; $V.\text{summary}[i]=1$ iff $V.\text{cluster}[i]$ is non-empty
— $\text{high}(x) = \lfloor x/\sqrt{u}\rfloor$, $\text{low}(x) = x \bmod \sqrt{u}$, $\text{index}(i,j) = i\sqrt{u}+j$
van Emde Boas Tree · O(log log u) Insert / Successor / Delete
import math

class VEB:
    """
    Van Emde Boas tree for universe {0, ..., u-1}.
    u must be of the form 2^(2^k), e.g. 4, 16, 256 (so sqrt(u) stays exact at every level).
    All operations: O(log log u).
    """
    def __init__(self, u):
        self.u    = u
        self.min  = None          # not stored in cluster (key trick!)
        self.max  = None
        if u > 2:
            sq = int(math.isqrt(u))
            self.cluster = [VEB(sq) for _ in range(sq)]
            self.summary = VEB(sq)

    def _high(self, x): return x // int(math.isqrt(self.u))
    def _low(self, x):  return x %  int(math.isqrt(self.u))
    def _idx(self, i, j): return i * int(math.isqrt(self.u)) + j

    def insert(self, x):
        if self.min is None:          # empty structure
            self.min = self.max = x
            return
        if x < self.min:
            x, self.min = self.min, x  # swap: new x goes into cluster
        if self.u > 2:
            hi, lo = self._high(x), self._low(x)
            if self.cluster[hi].min is None:
                self.summary.insert(hi)      # first call: O(1) base case
            self.cluster[hi].insert(lo)      # second call (only one recurses!)
        if x > self.max:
            self.max = x

    def successor(self, x):
        if self.u == 2:
            return 1 if x == 0 and self.max == 1 else None
        if self.min is not None and x < self.min:
            return self.min
        hi, lo = self._high(x), self._low(x)
        # Look in same cluster first (one recursion)
        if self.cluster[hi].max is not None and lo < self.cluster[hi].max:
            j = self.cluster[hi].successor(lo)
            return self._idx(hi, j)
        # Else find next non-empty cluster (one recursion on summary)
        succ_hi = self.summary.successor(hi)
        if succ_hi is None:
            return None
        j = self.cluster[succ_hi].min
        return self._idx(succ_hi, j)

    def delete(self, x):
        if self.min == self.max:       # only one element
            self.min = self.max = None
            return
        if self.u == 2:
            self.min = self.max = (1 if x == 0 else 0)
            return
        if x == self.min:              # find new min (from smallest cluster)
            first_cluster = self.summary.min
            x = self._idx(first_cluster, self.cluster[first_cluster].min)
            self.min = x
        hi, lo = self._high(x), self._low(x)
        self.cluster[hi].delete(lo)    # first call
        if self.cluster[hi].min is None:
            self.summary.delete(hi)    # second call (only one recurses!)
            if x == self.max:
                sm = self.summary.max
                self.max = self.min if sm is None else self._idx(sm, self.cluster[sm].max)
        elif x == self.max:
            self.max = self._idx(hi, self.cluster[hi].max)
#include <vector>
#include <cmath>
#include <memory>
using namespace std;

struct VEB {
    int u;
    int vmin = -1, vmax = -1;       // -1 = None
    vector<unique_ptr<VEB>> cluster;
    unique_ptr<VEB> summary;

    VEB(int u) : u(u) {
        if (u > 2) {
            int sq = (int)round(sqrt(u));
            cluster.resize(sq);
            for (auto& c : cluster) c = make_unique<VEB>(sq);
            summary = make_unique<VEB>(sq);
        }
    }
    int sq()  { return (int)round(sqrt(u)); }
    int hi(int x) { return x / sq(); }
    int lo(int x) { return x % sq(); }
    int idx(int i, int j) { return i * sq() + j; }

    void insert(int x) {
        if (vmin == -1) { vmin = vmax = x; return; }
        if (x < vmin) swap(x, vmin);
        if (u > 2) {
            int h = hi(x), l = lo(x);
            if (cluster[h]->vmin == -1) summary->insert(h);
            cluster[h]->insert(l);
        }
        if (x > vmax) vmax = x;
    }

    int successor(int x) {
        if (u == 2) return (x == 0 && vmax == 1) ? 1 : -1;
        if (vmin != -1 && x < vmin) return vmin;
        int h = hi(x), l = lo(x);
        if (cluster[h]->vmax != -1 && l < cluster[h]->vmax)
            return idx(h, cluster[h]->successor(l));
        int sh = summary->successor(h);
        return sh == -1 ? -1 : idx(sh, cluster[sh]->vmin);
    }

    void del(int x) {
        if (vmin == vmax) { vmin = vmax = -1; return; }
        if (u == 2) { vmin = vmax = (x==0 ? 1 : 0); return; }
        if (x == vmin) {
            int fc = summary->vmin;
            x = idx(fc, cluster[fc]->vmin);
            vmin = x;
        }
        int h = hi(x), l = lo(x);
        cluster[h]->del(l);
        if (cluster[h]->vmin == -1) {
            summary->del(h);
            if (x == vmax) {
                int sm = summary->vmax;
                vmax = sm == -1 ? vmin : idx(sm, cluster[sm]->vmax);
            }
        } else if (x == vmax)
            vmax = idx(h, cluster[h]->vmax);
    }
};

Why Insert recurses only once: when the cluster is empty before Insert, the recursive call to cluster[hi].insert(lo) hits the base case ($O(1)$ because min was None), so the call to summary.insert(hi) is the only non-trivial recursion. When the cluster is non-empty, only cluster[hi].insert(lo) recurses. Never both. Hence $T(u) = T(\sqrt{u}) + O(1)$.
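To sanity-check the Successor flow, here is a condensed re-implementation of the Insert/Successor logic above (min/max trick included, Delete omitted), run on the set $\{1, 9, 10, 15\}$ with $u=16$:

```python
import math

class MiniVEB:
    """Condensed vEB (Insert/Successor only), mirroring the structure above."""
    def __init__(self, u):
        self.u, self.min, self.max = u, None, None
        if u > 2:
            sq = math.isqrt(u)
            self.cluster = [MiniVEB(sq) for _ in range(sq)]
            self.summary = MiniVEB(sq)

    def insert(self, x):
        if self.min is None:                 # empty: O(1) base case
            self.min = self.max = x
            return
        if x < self.min:
            x, self.min = self.min, x        # new min displaces old into a cluster
        if self.u > 2:
            sq = math.isqrt(self.u)
            hi, lo = x // sq, x % sq
            if self.cluster[hi].min is None:
                self.summary.insert(hi)
            self.cluster[hi].insert(lo)
        if x > self.max:
            self.max = x

    def successor(self, x):
        if self.u == 2:
            return 1 if x == 0 and self.max == 1 else None
        if self.min is not None and x < self.min:
            return self.min
        sq = math.isqrt(self.u)
        hi, lo = x // sq, x % sq
        if self.cluster[hi].max is not None and lo < self.cluster[hi].max:
            return hi * sq + self.cluster[hi].successor(lo)   # same cluster
        sh = self.summary.successor(hi)                        # next non-empty cluster
        return None if sh is None else sh * sq + self.cluster[sh].min

V = MiniVEB(16)
for x in (1, 9, 10, 15):
    V.insert(x)
assert V.successor(3) == 9      # found via the summary (cluster 0 is past its max)
assert V.successor(9) == 10     # found within the same cluster
assert V.successor(15) is None  # nothing past the max
print("ok")
```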

Space: naively $\Theta(u)$. With hash tables for non-empty clusters only: $O(n\log\log u)$. With indirection for small structures: $O(n)$ (deterministic). In practice, the $O(n\log\log u)$ randomised version is used.

Part II — Skip Lists

A skip list is a randomised data structure that maintains a sorted set with $O(\log n)$ expected time per operation — without any rebalancing, rotations, or complex invariants. Invented by William Pugh (1989).

The Idea: $\log n$ Linked Lists

Start from a sorted linked list: search takes $\Theta(n)$. Improve by adding a second “express” list that skips over some elements. With 2 lists, optimal node placement gives $O(\sqrt{n})$ search. With $k$ lists: $O(k \cdot n^{1/k})$ search. With $\log n$ lists: $O(\log n)$ search — matching balanced BSTs.
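The tradeoff can be tabulated: with $k$ levels over $n$ elements, search cost is roughly $k \cdot n^{1/k}$, which bottoms out near $k = \log_2 n$. A quick computation (a sketch; constants ignored):

```python
def search_cost(n, k):
    """Approximate comparisons with k sorted lists over n elements: k * n^(1/k)."""
    return k * n ** (1 / k)

n = 1024
for k in (1, 2, 3, 10):
    print(f"k={k:2d}: ~{search_cost(n, k):.0f} comparisons")
# k=1 gives n = 1024; k=2 gives 2*sqrt(n) = 64; k=log2(n)=10 gives 2*log2(n) = 20
```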

Think of the NYC subway: the express line skips most stops, the local line hits every stop. You take the express as far as possible, then switch to local. A skip list is this multi-level express/local structure, built randomly so you never need to manually decide which stations are “express”.

Randomised Insertion: Coin Flips for Promotion

Every element is always in the bottom list (level 0). When inserting $x$, flip a fair coin repeatedly: each HEADS promotes $x$ to the next higher level, TAILS stops. This gives:

$$\Pr[\text{promoted to level } k] = \left(\frac{1}{2}\right)^k \qquad \mathbb{E}[\text{levels for one element}] = \frac{1}{1-1/2} = 2$$
Skip List · O(log n) expected search / insert / delete
import random

class SkipNode:
    def __init__(self, key, level):
        self.key  = key
        self.next = [None] * (level + 1)   # forward pointers per level

class SkipList:
    """
    Randomised skip list. O(log n) expected time for all operations.
    With high probability O(log n) — probability 1 - O(1/n^c) for any c.
    """
    MAX_LEVEL = 16             # cap levels at log2(max_n)
    P         = 0.5            # promotion probability

    def __init__(self):
        self.head  = SkipNode(float('-inf'), self.MAX_LEVEL)
        self.level = 0         # current max level in use

    def _random_level(self):
        """Coin-flip promotion: geometric distribution, capped at MAX_LEVEL."""
        lvl = 0
        while random.random() < self.P and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def search(self, key):
        """Returns True if key found. O(log n) expected."""
        cur = self.head
        for i in range(self.level, -1, -1):   # top level down
            while cur.next[i] and cur.next[i].key < key:
                cur = cur.next[i]              # go right
        cur = cur.next[0]                      # drop to level 0
        return cur is not None and cur.key == key

    def insert(self, key):
        """Insert key. O(log n) expected."""
        update = [None] * (self.MAX_LEVEL + 1)  # nodes that need updating
        cur = self.head
        # Find insertion position at each level
        for i in range(self.level, -1, -1):
            while cur.next[i] and cur.next[i].key < key:
                cur = cur.next[i]
            update[i] = cur

        new_level = self._random_level()
        if new_level > self.level:             # new levels needed
            for i in range(self.level + 1, new_level + 1):
                update[i] = self.head
            self.level = new_level

        new_node = SkipNode(key, new_level)
        for i in range(new_level + 1):
            new_node.next[i] = update[i].next[i]
            update[i].next[i] = new_node

    def delete(self, key):
        """Delete key. O(log n) expected."""
        update = [None] * (self.MAX_LEVEL + 1)
        cur = self.head
        for i in range(self.level, -1, -1):
            while cur.next[i] and cur.next[i].key < key:
                cur = cur.next[i]
            update[i] = cur

        target = cur.next[0]
        if target and target.key == key:
            for i in range(self.level + 1):
                if update[i].next[i] != target:
                    break
                update[i].next[i] = target.next[i]
            # Shrink level if top levels now empty
            while self.level > 0 and self.head.next[self.level] is None:
                self.level -= 1
#include <vector>
#include <random>
#include <climits>   // INT_MIN
using namespace std;

struct SkipNode {
    int key;
    vector<SkipNode*> next;
    SkipNode(int k, int lvl) : key(k), next(lvl+1, nullptr) {}
};

class SkipList {
    static const int MAX_LEVEL = 16;
    SkipNode* head;
    int level = 0;
    mt19937 rng{random_device{}()};
    bernoulli_distribution coin{0.5};

    int randomLevel() {
        int lvl = 0;
        while (coin(rng) && lvl < MAX_LEVEL) ++lvl;
        return lvl;
    }

public:
    SkipList() { head = new SkipNode(INT_MIN, MAX_LEVEL); }

    bool search(int key) {
        auto* cur = head;
        for (int i = level; i >= 0; --i)
            while (cur->next[i] && cur->next[i]->key < key)
                cur = cur->next[i];
        cur = cur->next[0];
        return cur && cur->key == key;
    }

    void insert(int key) {
        vector<SkipNode*> update(MAX_LEVEL+1);
        auto* cur = head;
        for (int i = level; i >= 0; --i) {
            while (cur->next[i] && cur->next[i]->key < key)
                cur = cur->next[i];
            update[i] = cur;
        }
        int nl = randomLevel();
        if (nl > level) {
            for (int i = level+1; i <= nl; ++i) update[i] = head;
            level = nl;
        }
        auto* node = new SkipNode(key, nl);
        for (int i = 0; i <= nl; ++i) {
            node->next[i] = update[i]->next[i];
            update[i]->next[i] = node;
        }
    }
};
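A quick functional check of the skip list logic, using a condensed search/insert with the same structure as the Python class above (fixed seed so the run is reproducible):

```python
import random

class Node:
    def __init__(self, key, level):
        self.key, self.next = key, [None] * (level + 1)

class MiniSkipList:
    """Condensed skip list (search/insert only), mirroring the class above."""
    MAX_LEVEL = 16

    def __init__(self):
        self.head, self.level = Node(float('-inf'), self.MAX_LEVEL), 0

    def _random_level(self):
        lvl = 0
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1                       # HEADS: promote one level
        return lvl

    def insert(self, key):
        update, cur = [None] * (self.MAX_LEVEL + 1), self.head
        for i in range(self.level, -1, -1):
            while cur.next[i] and cur.next[i].key < key:
                cur = cur.next[i]
            update[i] = cur                # last node before key at level i
        lvl = self._random_level()
        if lvl > self.level:
            for i in range(self.level + 1, lvl + 1):
                update[i] = self.head
            self.level = lvl
        node = Node(key, lvl)
        for i in range(lvl + 1):           # splice in at every level
            node.next[i] = update[i].next[i]
            update[i].next[i] = node

    def search(self, key):
        cur = self.head
        for i in range(self.level, -1, -1):
            while cur.next[i] and cur.next[i].key < key:
                cur = cur.next[i]
        cur = cur.next[0]
        return cur is not None and cur.key == key

random.seed(42)
sl = MiniSkipList()
for k in [3, 6, 7, 9, 12, 19, 17, 26, 21]:
    sl.insert(k)
assert all(sl.search(k) for k in [3, 7, 21, 26])
assert not sl.search(8) and not sl.search(100)
print("ok")
```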

With-High-Probability Analysis

Expected $O(\log n)$ is good but not enough in isolation — a string of bad luck could make one search take $O(n)$. Skip lists give something stronger: with high probability (w.h.p.).

With high probability (w.h.p.): event $E$ occurs w.h.p. if for any $\alpha \geq 1$, there is a choice of constants such that $\Pr[E] \geq 1 - O(1/n^\alpha)$. The error probability is polynomially small — negligible for any polynomial-time algorithm.

Key w.h.p. facts for skip lists:
— The skip list has $O(\log n)$ levels w.h.p. (Proof: $\Pr[\text{element promoted } \geq c\log n \text{ times}] = 2^{-c\log n} = n^{-c}$; union bound over $n$ elements gives $n \cdot n^{-c} = n^{1-c}$.)
— Every search costs $O(\log n)$ w.h.p. (backwards analysis + Chernoff bounds on coin flips)

Backwards Analysis of Search

Rather than analysing the search path forward, analyse it backwards from the found node upward. At each backwards step we either go up (the current node was promoted to the next level — HEADS) or left (it wasn't promoted — TAILS). The walk ends when it reaches the top level or the $-\infty$ sentinel. Since there are $O(\log n)$ levels w.h.p., the walk needs only $O(\log n)$ up-moves, and each step is an independent fair coin flip. By a Chernoff bound, the total number of flips needed to collect $c\log n$ HEADS is at most $d\log n$ w.h.p.
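The claim can be checked empirically: count the fair-coin flips needed to collect $h = c\log_2 n$ HEADS, which is exactly the length of the backwards walk. A small simulation (a sketch, with $c = 2$):

```python
import math, random

def flips_until_heads(h):
    """Total fair-coin flips needed to observe h HEADS (the backwards-walk length)."""
    flips = heads = 0
    while heads < h:
        flips += 1
        if random.random() < 0.5:
            heads += 1
    return flips

random.seed(1)
n = 1024
h = 2 * int(math.log2(n))          # need ~c log n up-moves, c = 2
trials = [flips_until_heads(h) for _ in range(10_000)]
print(f"E[total steps] ~ {sum(trials)/len(trials):.1f} (expected 2h = {2*h})")
long_walks = sum(t > 4 * h for t in trials) / len(trials)
print(f"P(steps > 4h) = {long_walks:.4f}")   # Chernoff: exponentially small
```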

Skip list vs BST in practice. Balanced BSTs (AVL, Red-Black) guarantee worst-case $O(\log n)$, while skip lists guarantee w.h.p. $O(\log n)$. But skip lists have a major practical advantage: insertion is trivially parallelisable (just update forward pointers at the relevant levels), while BST rotations require careful synchronisation in concurrent settings. This is why Redis, LevelDB, and Cassandra use skip lists for their sorted sets.
Skip list level distribution + w.h.p. demonstration
import random, math, collections

def simulate_skip_levels(n, trials=10_000):
    """
    Simulate level distribution for n-element skip list.
    Verify w.h.p. bound: max level <= c*log2(n) with high probability.
    """
    max_levels = []
    for _ in range(trials):
        # Generate level for each of n elements
        levels = []
        for _ in range(n):
            lvl = 0
            while random.random() < 0.5:
                lvl += 1
            levels.append(lvl)
        max_levels.append(max(levels))

    c_log_n = math.ceil(2 * math.log2(n))   # c=2 threshold
    exceeded = sum(1 for ml in max_levels if ml > c_log_n)
    print(f"n={n}: E[max level]={sum(max_levels)/trials:.2f}, "
          f"log2(n)={math.log2(n):.1f}, "
          f"P(max > 2*log2(n)) = {exceeded/trials:.4f}")
    # Should be very small (polynomial in 1/n)

# Check for n=100, 1000, 10000
for n in [100, 1000, 10000]:
    simulate_skip_levels(n)

def skip_list_successor(sl, key):
    """
    Successor query: smallest element > key. O(log n) expected.
    Uses same search path as regular search.
    """
    cur = sl.head
    for i in range(sl.level, -1, -1):
        while cur.next[i] and cur.next[i].key <= key:
            cur = cur.next[i]
    # cur.next[0] is now the first element > key
    return cur.next[0].key if cur.next[0] else None
// Demonstrate w.h.p. level bound for skip lists
#include <random>
#include <vector>
#include <algorithm>
#include <cmath>
#include <iostream>
using namespace std;

int simulate_max_level(int n, mt19937& rng) {
    bernoulli_distribution coin(0.5);
    int max_lvl = 0;
    for (int i = 0; i < n; ++i) {
        int lvl = 0;
        while (coin(rng)) ++lvl;
        max_lvl = max(max_lvl, lvl);
    }
    return max_lvl;
}

void verify_whp(int n, int trials = 10000) {
    mt19937 rng(42);
    int c_log_n = (int)ceil(2.0 * log2(n));
    int exceeded = 0;
    double total = 0;
    for (int t = 0; t < trials; ++t) {
        int ml = simulate_max_level(n, rng);
        total += ml;
        if (ml > c_log_n) ++exceeded;
    }
    cout << "n=" << n
         << " E[max_level]=" << total/trials
         << " P(max > 2*log2(n))=" << (double)exceeded/trials
         << endl;
    // For n=1000: expect P(max > 20) ≈ 0.001 or less
}
Worked Example · Strike Range Query on NSE Options Chain 🇮🇳 TTT Systems

Given the NIFTY options chain with strikes in $\{15000, 15050, \ldots, 25000\}$ (strikes are multiples of 50, so map each strike to $\text{strike}/50$; covering strikes up to $100{,}000$ gives universe size $100{,}000/50 + 1 = 2001$, rounded up to a power of 2: $u = 2048$), we need: (1) Insert a strike when it becomes liquid, (2) Delete when it expires, (3) Find the successor strike for spread construction.
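The strike-to-universe-index mapping is a one-line transform (a sketch; the 50-point grid step comes from the strike chain above):

```python
STEP = 50            # NIFTY strike grid spacing
U = 2048             # 100_000 // 50 + 1 = 2001 values, rounded up to a power of 2

def strike_to_index(strike):
    assert strike % STEP == 0, "strikes lie on the 50-point grid"
    return strike // STEP

def index_to_strike(i):
    return i * STEP

assert strike_to_index(15000) == 300
assert strike_to_index(25000) == 500 < U   # whole chain fits in the u = 2048 universe
assert index_to_strike(strike_to_index(23550)) == 23550
```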

vEB vs skip list comparison
— Search/Successor: vEB ($u = 2048$): $\log_2\log_2 2048 = \log_2 11 \approx 3.5$, i.e. about 4 steps. Skip list ($n \approx 200$): $\log_2 200 \approx 7.6$, i.e. about 8 steps, w.h.p.
— Key type: vEB handles integer keys only; a skip list accepts any comparable key.
— Space: vEB $O(n\log\log u)$ with hashing; skip list $O(n)$ expected.
— Concurrent access: vEB is complex to make concurrent; skip lists are far easier to parallelise.
Recommendation: for strike chains (integer keys, fixed universe), vEB wins on latency. For instrument masters with string keys (ISIN codes, ticker symbols), a skip list is the right choice. Redis's sorted set (which powers many trading-system leaderboards and order-by-price structures) is a skip list.
Python’s sortedcontainers.SortedList (used heavily in quantitative trading systems) is actually implemented as a list of sorted sublists rather than a skip list, though it fills the same sorted-set role. Redis ZADD/ZRANGEBYSCORE, which many TTT-style systems use for sorted instrument lists and priority queues, is backed by a skip list internally. For vEB-style integer operations at extreme speed, the Linux kernel’s rb_tree (red-black tree) is more common in practice — but vEB thinking informs the cache-oblivious van Emde Boas memory layout used in databases (Ch13).

Practice Problems
4 questions · Chapter 03
prog / daa / ch03 / q01 ★ vEB Recurrence

Without storing min/max separately, the vEB Successor makes 3 recursive calls giving $T(u) = 3T(\sqrt{u}) + O(1)$. What does this solve to?

A
$O(\log\log u)$ — same as the optimised version
B
$O(\log u)$ — linear in the log
C
$O((\log u)^{\log_2 3}) \approx O((\log u)^{1.585})$ — super-logarithmic
D
$O(\sqrt{u})$ — same as the clustered bit vector
Answer: C — $O((\log u)^{\log_2 3}) \approx O((\log u)^{1.585})$.

Substitute $m = \log u$: the recurrence $T(u) = 3T(\sqrt{u}) + O(1)$ becomes $T'(m) = 3T'(m/2) + O(1)$ with $a=3$, $b=2$, $f(m)=O(1)$. Since $m^{\log_2 3} \gg O(1)$, Case 1 of the Master Theorem gives $T'(m) = \Theta(m^{\log_2 3}) = \Theta((\log u)^{1.585})$. This is polynomially worse than $O(\log u)$, though still far better than the $O(\sqrt{u})$ of the clustered bit vector. The key optimisation (store the min, don’t recurse on it) is what lets vEB achieve $O(\log\log u)$.
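The solved recurrence can be verified numerically: for $m = 2^k$, $T'(m) = 3T'(m/2) + 1$ has the closed form $(3^{k+1}-1)/2$, and the ratio $T'(m)/m^{\log_2 3}$ settles at the constant $3/2$:

```python
import math

def T(m):
    """T'(m) = 3 T'(m/2) + 1 with T'(1) = 1, for m a power of 2."""
    return 1 if m <= 1 else 3 * T(m // 2) + 1

for k in range(1, 11):
    m = 2 ** k
    assert T(m) == (3 ** (k + 1) - 1) // 2     # closed form
# ratio to m^(log2 3) = 3^k converges to the constant 3/2
assert abs(T(2**10) / (2**10) ** math.log2(3) - 1.5) < 0.01
print("ok")
```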
prog / daa / ch03 / q02 ★★ vEB Insert

In vEB Insert, when a cluster is empty before the insertion, why is the recursive call cluster[hi].insert(lo) $O(1)$?

A
Because $\sqrt{u} = O(1)$ for small clusters
B
Because inserting into an empty vEB structure just sets min=max=x and returns immediately — no further recursion
C
Because the cluster uses a hash table with $O(1)$ insert
D
Because lo = 0 always when a cluster is first populated
Answer: B.

The first line of Insert checks: if V.min is None: V.min = V.max = x; return. When a cluster is empty, its min is None, so this base case fires immediately in $O(1)$ time — no recursive calls into sub-clusters or summary. This is precisely what makes Insert recurse only once: if the cluster was empty, cluster[hi].insert(lo) is $O(1)$, so only summary.insert(hi) recurses non-trivially. If the cluster was non-empty, summary.insert(hi) is $O(1)$ (the hi-th cluster is already recorded in summary), so only cluster[hi].insert(lo) recurses. Never both.
prog / daa / ch03 / q03 ★★ Skip List Levels

A skip list has $n = 1024$ elements. With high probability, the maximum number of levels is at most $c\log_2 n$ for what value of $c$ (approximately)?

A
$c = 1/2$ — levels are $O(\frac{1}{2}\log n)$
B
$c = 1$ — levels are exactly $\log_2 n = 10$
C
$c$ can be any constant $> 1$; larger $c$ gives smaller error probability $O(1/n^{c-1})$
D
$c = \sqrt{\log n}$ — from the Chernoff bound directly
Answer: C.

The w.h.p. bound is parametric. The probability that the max level exceeds $c\log_2 n$ is at most $n \cdot (1/2)^{c\log_2 n} = n \cdot n^{-c} = n^{1-c}$. For $c=2$: error $\leq 1/n$. For $c=3$: error $\leq 1/n^2$. For $c=\alpha+1$: error $\leq 1/n^\alpha$. So we can make the error probability $O(1/n^\alpha)$ for any $\alpha$ by choosing $c = \alpha+1$. This is the essence of the with-high-probability definition: we can trade a slightly larger constant in the $O(\log n)$ bound for polynomially smaller error probability.
prog / daa / ch03 / q04 ★★★ Union Bound Application

A sorting algorithm runs $n$ skip list searches, each costing $O(\log n)$ w.h.p. (error $\leq 1/n^2$). What is the probability that all $n$ searches are fast, and why does the algorithm still run in $O(n\log n)$ w.h.p.?

A
Probability $\geq 1 - 1/n^2$ — the individual w.h.p. bound applies to all searches jointly
B
Probability $\geq 1 - n/n^2 = 1 - 1/n$ — still polynomially small error, so the algorithm is still w.h.p. $O(n\log n)$ for any fixed $\alpha$
C
By the union bound, $\Pr[\text{any search slow}] \leq n \cdot 1/n^2 = 1/n$. So all $n$ searches are fast with probability $\geq 1-1/n$, which is w.h.p. for $\alpha=1$. Setting error $\leq 1/n^{\alpha+1}$ per search gives total error $\leq 1/n^\alpha$
D
Cannot bound — $n$ searches could all fail simultaneously
Answer: C — union bound gives $1/n$ total error.

Boole’s inequality (union bound): $\Pr[E_1 \cup E_2 \cup \cdots \cup E_n] \leq \sum_{i=1}^n \Pr[E_i]$. Each search fails with probability $\leq 1/n^2$, so $\Pr[\text{any of the } n \text{ searches fail}] \leq n \cdot 1/n^2 = 1/n$. All searches succeed with probability $\geq 1 - 1/n$. For stronger guarantees: set per-search error $\leq 1/n^{\alpha+1}$, giving total error $\leq n \cdot 1/n^{\alpha+1} = 1/n^\alpha$. This is why w.h.p. events are closed under polynomial unions — $\text{poly}(n)$ w.h.p. events combined are still w.h.p.

Terminal Questions — Chapter 03 · 10 problems · No answers given

Work through independently. Q1–3 direct application. Q4–7 synthesis. Q8–10 careful argument.

1
Trace through vEB Successor($V$, 3) on the structure containing $\{1, 9, 10, 15\}$ with $u=16$. Show $\text{high}(x)$, $\text{low}(x)$ at each step, which clusters are checked, and how the summary is consulted. Easy
2
Build a skip list by inserting keys $[3, 6, 7, 9, 12, 19, 17, 26, 21]$ in that order. For each insertion, simulate coin flips (use the sequence H,T,H,H,T,T,H,T,H,H,T,T for the decisions). Draw the resulting multi-level structure. Easy
3
Prove that $T(u) = T(\sqrt{u}) + O(1)$ solves to $T(u) = O(\log\log u)$ by the substitution $m = \log u$: show that $T'(m) = T'(m/2) + O(1)$ and solve this by the Master Theorem or telescoping. Easy
4
Implement the vEB Delete operation from scratch (without looking at the code above). Test it by: insert $\{2, 3, 7, 11, 13\}$ into a $u=16$ vEB, then delete $3$, $7$, and $13$ one by one, verifying the correct min/max after each deletion. Medium
5
Implement a skip list with a range_query(lo, hi) method that returns all elements in $[lo, hi]$ in $O(\log n + k)$ time where $k$ is the output size. Apply it to find all NIFTY strikes between 23500 and 24000. Medium
6
The backwards analysis of skip list search uses coin flips. Formalise: let $C(k, h)$ = expected number of coin flips to get $h$ HEADS given we start with at most $k$ total flips allowed. Show that $C(k, h) = O(h + k/2)$ for appropriate parameters, giving $O(\log n)$ total steps. Medium
7
Compare skip list and vEB tree on the following workload: $10^5$ inserts of random integers in $[0, 10^6)$, then $10^5$ successor queries. Implement both in Python and measure wall-clock time. Which is faster in practice? Does the answer change if queries are clustered (consecutive keys)? Medium
8
Prove vEB Delete runs in $O(\log\log u)$ by showing that when the second recursive call fires (to delete from summary), the first call (to cluster) took $O(1)$ time, and vice versa. Draw the case analysis tree. Hard
9
Apply the Chernoff bound $\Pr[Y \geq \mathbb{E}[Y] + r] \leq e^{-2r^2/m}$ to prove: the number of steps in a skip list search is at most $d\log n$ with probability $\geq 1 - 1/n^c$ for appropriate $c, d$. What values of $c$ and $d$ does the proof give? Hard
10
The Patrascu-Thorup (2007) lower bound states: any data structure for Predecessor on a universe of size $u$ using $O(n \cdot \text{poly}(\log n))$ space requires $\Omega(\log\log u)$ time per query (for $u = n^{O(\log n)}$). Explain intuitively why this matches the vEB upper bound, making vEB optimal for this problem. Hard