Advanced Chapter 12 of 13 · Algorithm Design and Analysis

Fixed-Parameter & Distributed Algorithms

Two radically different responses to hard problems. Fixed-parameter algorithms accept exponential time but confine the blowup to a small parameter $k$ — giving exact solutions for the typical small-$k$ instances that arise in practice. Distributed algorithms run across networked processors, trading single-machine speed for parallelism, but confronting new challenges: symmetry breaking, message passing, and asynchrony.

Part I — Fixed-Parameter Tractability (FPT)

For an NP-hard problem there are three desirable algorithm properties: (1) handle hard problems, (2) run in polynomial time, (3) return exact solutions. Unless P = NP, no algorithm achieves all three. Approximation algorithms keep (1) and (2). Fixed-parameter algorithms keep (1) and (3): they are exact but not universally polynomial — their exponential blowup is confined to a small parameter $k$.

Fixed-parameter tractable (FPT). A parameterized problem $(x, k)$ is FPT if there is an algorithm running in time $f(k) \cdot n^{O(1)}$ where $f: \mathbb{N} \to \mathbb{N}$ and the degree of the polynomial in $n$ is independent of $k$. The key: for small $k$, $f(k)$ is a manageable constant even if exponential in $k$.

Equivalently: $(x, k)$ is FPT iff there is an algorithm running in time $f(k) + n^{O(1)}$ for some $f$. (The sum form and product form are equivalent because $f(k) \cdot n^c \leq f(k)^{c+1} + n^{c+1}$: either $n \leq f(k)$, giving $f(k) \cdot n^c \leq f(k)^{c+1}$, or $n > f(k)$, giving $f(k) \cdot n^c < n^{c+1}$.)
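The case split behind that inequality can be checked numerically (a quick sketch; the sample values of $f(k)$, $n$, $c$ are arbitrary):

```python
# If n <= f(k): f(k) * n^c <= f(k)^(c+1).
# If n >  f(k): f(k) * n^c <  n^(c+1).
# Either way:   f(k) * n^c <= f(k)^(c+1) + n^(c+1).
def product_bound_holds(fk, n, c):
    return fk * n**c <= fk**(c + 1) + n**(c + 1)

assert all(product_bound_holds(2**k, n, c)
           for k in range(1, 12)
           for n in (1, 10, 1000)
           for c in (1, 2, 3))
```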

Example: $k$-Vertex Cover

Given graph $G=(V,E)$ and integer $k$: is there a vertex cover of size $\leq k$? This is NP-complete. But with $k$ as parameter, we can do much better than brute force.

Brute force: try all $\binom{V}{k}$ subsets of size $k$, check each in $O(E)$. Total: $O(V^k \cdot E)$ — polynomial for each fixed $k$, but the degree depends on $k$. This is NOT FPT.

Bounded search-tree algorithm. For any remaining edge $(u,v)$, at least one of $u$ or $v$ must be in the cover. Branch into two sub-problems: include $u$ (delete $u$ and all incident edges, recurse with $k-1$); or include $v$ (same). Return YES if either branch returns YES.

Analysis: The recursion tree has depth $\leq k$ (each branch decreases $k$ by 1). At each node: $O(V)$ work to delete the vertex. Total nodes: $\leq 2^k$. Total time: $O(2^k \cdot V)$ — this IS FPT ($f(k) = 2^k$, polynomial factor $n = V$).

Practical: for $k = 32$, $2^{32} \approx 4\times10^9$ — not great, but for $k \leq 20$ it’s perfectly manageable.

Kernelization

A kernel is a polynomial-time algorithm that reduces $(x, k)$ to an equivalent $(x', k')$ with $|x'| \leq f(k)$ (small input) and the same YES/NO answer. Once we have a small kernel, we can solve it by any algorithm (even brute force) in time $g(f(k))$ — which depends only on $k$.

Theorem: a problem is FPT $\Leftrightarrow$ it has a kernelization (not necessarily polynomial-size).

$O(k^2)$ kernel for $k$-Vertex Cover:
1. Remove self-loops and multi-edges (they don’t affect covers).
2. Any vertex of degree $> k$ must be in the cover (else we’d need $> k$ vertices just for its edges). Add it to the cover, delete it and its edges, decrease $k$.
3. Now max degree $\leq k$. If $|E'| > k^2$: answer NO (a size-$k$ cover can cover at most $k \cdot k = k^2$ edges; if there are more, it’s impossible).
4. Remove isolated vertices. Now $|V'| \leq 2k^2$ (each of the $\leq k^2$ edges has 2 endpoints).
5. The kernel has size $O(k^2)$.

Combined algorithm: kernel in $O(VE)$, then bounded search-tree on the $O(k^2)$-sized kernel in $O(2^k \cdot k^2)$. Total: $O(VE + 2^k k^2)$. Best known: $O(kV + 1.274^k)$ [Chen, Kanj, Xia 2010].

FPT and Approximation: Connection

Theorem: if an optimisation problem has an EPTAS (Efficient PTAS: $f(1/\varepsilon) \cdot n^{O(1)}$ time for $(1+\varepsilon)$-approximation), then the associated decision problem (OPT $\leq k$?) is FPT.

Proof sketch: run the EPTAS with $\varepsilon = 1/(2k)$. If OPT $\leq k$, the EPTAS returns a solution of value $\leq (1 + 1/(2k)) \cdot k = k + 1/2$. Since solution values are integral, OPT $\leq k$ iff the returned value is $\leq k$. Time: $f(2k) \cdot n^{O(1)}$ — FPT in $k$. $\square$
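The rounding argument can be sketched as a wrapper around a hypothetical EPTAS (`mock_eptas` below is a stand-in that takes the true optimum of a toy instance directly; Vertex Cover itself is not known to admit an EPTAS):

```python
def decide_via_eptas(eptas, instance, k):
    """Decide OPT <= k using a (1+eps)-approximation with eps = 1/(2k)."""
    eps = 1.0 / (2 * k)
    value = eptas(instance, eps)   # assumed to run in f(1/eps) * poly(n)
    # If OPT <= k: value <= (1 + 1/(2k)) * k = k + 1/2, so value <= k (integral).
    # If OPT >= k + 1: value >= OPT > k.
    return value <= k

def mock_eptas(opt_value, eps):
    # Worst-case (1+eps)-approximation of a toy instance given by its optimum.
    return int((1 + eps) * opt_value)

assert decide_via_eptas(mock_eptas, 5, 5)        # OPT = 5 <= k = 5
assert not decide_via_eptas(mock_eptas, 6, 5)    # OPT = 6 >  k = 5
```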

Contrapositive: if the decision problem is not FPT, then no EPTAS exists. This gives hardness of approximation results from parameterized complexity.
FPT Vertex Cover: bounded search-tree + O(k²) kernelization
from collections import defaultdict

# ── BOUNDED SEARCH-TREE: O(2^k * V) ──────────────────────────────────
def vertex_cover_fpt(adj, k):
    """
    FPT algorithm for k-Vertex Cover.
    adj: adjacency dict {u: set of neighbours}.
    k:   cover size budget.
    Returns a vertex cover of size <= k, or None if impossible.
    Time: O(2^k * V) — FPT in k.

    Key insight: for any edge (u,v), at least one endpoint must be in cover.
    Branch on which one: try u, or try v. Depth <= k, branching factor 2.
    """
    # Find any uncovered edge
    for u in adj:
        for v in adj[u]:
            if v > u:   # avoid duplicate edges
                if k == 0:
                    return None   # an edge remains but the budget is exhausted
                # Must include u or v; try both
                for chosen in (u, v):
                    new_adj = {x: set(adj[x]) for x in adj}  # deep copy
                    # Remove 'chosen' and all its incident edges
                    for neighbour in new_adj[chosen]:
                        new_adj[neighbour].discard(chosen)
                    del new_adj[chosen]

                    result = vertex_cover_fpt(new_adj, k - 1)
                    if result is not None:
                        return result | {chosen}
                return None   # neither branch worked

    return set()   # no edges left — empty cover suffices


# ── KERNEL: O(k^2) reduction ─────────────────────────────────────────
def kernel_vertex_cover(edges, n_vertices, k):
    """
    Reduce k-Vertex Cover to a kernel of size O(k^2).
    Returns (kernel_edges, kernel_k, forced_cover) or None if infeasible.
    """
    adj    = defaultdict(set)
    forced = set()     # vertices forced into cover (degree > k)

    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    remaining_k = k

    # Step 1: Force high-degree vertices
    changed = True
    while changed:
        changed = False
        for v in list(adj.keys()):
            if len(adj[v]) > remaining_k:
                forced.add(v)
                for u in list(adj[v]):
                    adj[u].discard(v)
                    if not adj[u]: del adj[u]
                del adj[v]
                remaining_k -= 1
                changed = True
                if remaining_k < 0:
                    return None   # infeasible

    # Step 2: Count remaining edges
    kernel_edges = [(u,v) for u in adj for v in adj[u] if v > u]
    if len(kernel_edges) > remaining_k * remaining_k:
        return None   # more edges than k^2 → impossible

    # Step 3: Remove isolated vertices (already done by adj cleanup)
    # kernel has <= 2k^2 vertices
    return kernel_edges, remaining_k, forced


def solve_vertex_cover(edges, n_vertices, k):
    """
    Full FPT pipeline: kernelize, then bounded search-tree on kernel.
    Overall: O(VE + 2^k * k^2).
    """
    result = kernel_vertex_cover(edges, n_vertices, k)
    if result is None:
        return None   # infeasible
    kernel_edges, k_remaining, forced = result

    # Build adj for kernel
    adj = defaultdict(set)
    for u, v in kernel_edges:
        adj[u].add(v); adj[v].add(u)

    kernel_solution = vertex_cover_fpt(adj, k_remaining)
    if kernel_solution is None:
        return None

    return forced | kernel_solution
// FPT Vertex Cover: bounded search-tree O(2^k * V)
#include <vector>
#include <set>
#include <optional>
using namespace std;

using Adj = vector<set<int>>;

optional<set<int>> vc_fpt(Adj adj, int n, int k) {
    // Find any uncovered edge
    int eu = -1, ev = -1;
    for (int u = 0; u < n; ++u)
        if (!adj[u].empty()) { eu=u; ev=*adj[u].begin(); break; }
    if (eu == -1) return set<int>{};  // no edges: cover is empty
    if (k == 0)   return nullopt;       // budget exhausted

    // Try adding eu
    auto try_vertex = [&](int v) -> optional<set<int>> {
        Adj a2 = adj;
        for (int u : a2[v]) a2[u].erase(v);
        a2[v].clear();
        auto sub = vc_fpt(a2, n, k-1);
        if (sub) { sub->insert(v); return sub; }
        return nullopt;
    };
    auto r = try_vertex(eu);
    if (r) return r;
    return try_vertex(ev);
}

// Kernel: force degree-k+ vertices, check |E| <= k^2
// Returns (kernel_adj, new_k, forced) or nullopt
struct KernelResult {
    Adj adj; int k; set<int> forced;
};
optional<KernelResult> kernelize(Adj adj, int n, int k) {
    set<int> forced;
    bool changed = true;
    while (changed) {
        changed = false;
        for (int v=0;v<n;v++) {
            if ((int)adj[v].size() > k) {
                forced.insert(v);
                for (int u:adj[v]) adj[u].erase(v);
                adj[v].clear(); k--; changed=true;
                if (k<0) return nullopt;
            }
        }
    }
    // Count edges
    int edge_count=0;
    for (int u=0;u<n;u++) edge_count+=(int)adj[u].size();
    edge_count/=2;
    if (edge_count > k*k) return nullopt;
    return KernelResult{adj, k, forced};
}

Part II — Distributed Algorithms

A distributed algorithm runs on a network of $n$ processors (nodes of an undirected graph $G=(V,E)$), each running a local process that communicates via message-passing. Two key models:

Synchronous model: rounds proceed in lockstep. Each round: all nodes send messages on all ports → messages delivered → all nodes compute new state. Costs measured in rounds (time) and messages (communication).

Asynchronous model: no global clock. Messages take arbitrary finite time. Any interleaving of sends/receives is possible. Harder to analyse; correctness must hold for all interleavings, time analysis requires assumptions on message delivery bounds.

Symmetry Breaking: Leader Election

Given a connected graph, elect exactly one leader process. This is the canonical symmetry-breaking problem in distributed computing.

Theorem 1 (impossibility): in a clique of $n$ identical deterministic processes with no unique identifiers, no algorithm can guarantee leader election.

Proof: by symmetry induction. All processes start in the same state. In each round: they all send the same messages (same state), receive the same messages (symmetric graph), make the same state transition. They remain in identical states forever. If one elects itself leader, all do — contradiction. $\square$

Fix 1 — Unique Identifiers (UIDs). Each process starts with a unique ID from a totally ordered set. Algorithm for a clique: every process broadcasts its UID; the process whose own UID is the maximum of all UIDs seen becomes leader. 1 round, $n(n-1)$ messages.

Fix 2 โ€” Randomness. Processes pick random IDs from $\{1, \ldots, r\}$ with $r = \lceil n^2/\varepsilon \rceil$. By union bound over $\binom{n}{2}$ pairs, all IDs are distinct with probability $\geq 1 - \varepsilon$. Repeat until unique maximum found. Expected $O(1)$ rounds.
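Fix 2’s union-bound guarantee is easy to check empirically (a simulation sketch; `n`, `eps`, and the trial count are arbitrary choices):

```python
import math
import random

def all_ids_distinct(n, eps, trials=2000, seed=0):
    """Empirical probability that n random IDs drawn from {1..ceil(n^2/eps)}
    are all distinct; the union bound promises at least 1 - eps."""
    rng = random.Random(seed)
    r = math.ceil(n * n / eps)
    hits = sum(len({rng.randint(1, r) for _ in range(n)}) == n
               for _ in range(trials))
    return hits / trials

p = all_ids_distinct(n=30, eps=0.1)
assert p >= 0.9   # union bound: all distinct with probability >= 1 - eps
```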

Ring Leader Election: Hirschberg-Sinclair $O(n\log n)$

Simple ring leader election: each node sends its UID clockwise; when it receives its own UID back, it has seen all UIDs and can elect the maximum. Cost: $n$ rounds, $n^2$ messages (each UID travels the full ring). Hirschberg-Sinclair improves to $O(n\log n)$ messages by doubling search radii: nodes search distance $1, 2, 4, \ldots$ in alternating directions, only forwarding a UID if it’s larger than all previously seen. Nodes still active at phase $r$ searched $2^r$ hops — at most $n/2^r$ survive each phase, giving $O(n\log n)$ total.

Distributed Maximal Independent Set (Luby’s Algorithm)

Compute an MIS of the graph distributedly. Deterministically impossible in some graphs (symmetry). Luby’s randomised algorithm:

Luby’s Algorithm (per round):
1. Each active node picks a random ID from $\{1, \ldots, n^5\}$.
2. Each node broadcasts its ID to neighbours.
3. If a node’s ID is larger than all its neighbours’ IDs: it joins the MIS, broadcasts “in” to neighbours.
4. Neighbours of MIS nodes are removed (they output “out”).
5. Repeat with remaining active nodes.

Complexity: $O(\log n)$ rounds expected (each round removes at least a constant fraction of edges w.h.p.), $O(E \log n)$ messages total. Proved by showing each round kills at least half the edges in expectation.

Synchronous BFS Tree

Root $v_0$ broadcasts a search message in round 1. Every node that receives a search message for the first time records the sender as its parent and forwards the search in the next round. Terminates in $D$ rounds ($D$ = eccentricity of $v_0$, at most the diameter). Message complexity: $O(E)$ (each edge carries at most one search message per direction). Termination detected by convergecast: each leaf sends “done” to its parent; a parent sends “done” once it has received from all children and finished its own work.

Asynchronous BFS + Shortest Paths Trees

Running the synchronous BFS algorithm asynchronously fails: a node may receive a shorter path later and need to update its parent. The fix is relaxation: nodes maintain their current best distance and update when a shorter path arrives, re-propagating to neighbours. Analogous to Bellman-Ford relaxation. Correctness holds for all interleavings; time analysis assumes messages delivered within $t$ time units.
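The failure mode can be reproduced by replaying the same first-arrival rule under different delivery orders (a simulation sketch; the 4-cycle graph and the LIFO “adversarial” schedule are made up for illustration):

```python
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # 4-cycle, root 0

def first_arrival_tree(graph, root, scheduler):
    """Deliver 'search' messages one at a time in the order the scheduler
    picks; each node adopts the first sender it hears from as its parent."""
    parent = {root: None}
    pending = [(root, v) for v in graph[root]]
    while pending:
        u, v = scheduler(pending)          # remove and deliver one message
        if v not in parent:
            parent[v] = u
            pending.extend((v, w) for w in graph[v])
    return parent

def tree_depth(parent, v):
    d = 0
    while parent[v] is not None:
        v = parent[v]
        d += 1
    return d

# FIFO delivery mimics the synchronous schedule and yields a true BFS tree;
# LIFO delivery models slow channels and gives node 1 a depth-3 parent chain
# even though its true BFS distance from the root is 1.
fifo = first_arrival_tree(graph, 0, lambda p: p.pop(0))
lifo = first_arrival_tree(graph, 0, lambda p: p.pop())
assert tree_depth(fifo, 1) == 1 and tree_depth(lifo, 1) == 3
```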

Synchronous distributed BFS tree + Leader Election simulation
from collections import defaultdict, deque

# ── SYNCHRONOUS BFS TREE (simulation) ────────────────────────────────
def sync_bfs_tree(graph, root):
    """
    Synchronous distributed BFS spanning tree.
    Simulates round-by-round execution.
    graph: dict {u: list of neighbours}.
    Returns (parent, rounds, message_count).

    Round r: all nodes with dist==r-1 send "search" to all neighbours.
    A node records first sender as parent.

    Messages: O(|E|) — each directed edge used at most once.
    Rounds: O(diameter).
    Termination: convergecast from leaves to root.
    """
    parent  = {root: None}   # root is its own parent (sentinel)
    dist    = {root: 0}
    round_n = 0
    messages_sent = 0

    # Nodes at the current BFS frontier; their messages go out next round
    frontier = [root]

    while frontier:
        round_n += 1
        new_frontier = []
        for u in frontier:
            for v in graph[u]:
                messages_sent += 1               # message sent
                if v not in parent:              # first search arrival
                    parent[v] = u
                    dist[v]   = dist[u] + 1
                    new_frontier.append(v)
        frontier = new_frontier

    return parent, round_n, messages_sent


# ── LEADER ELECTION ON CLIQUE (with UIDs) ────────────────────────────
def leader_election_clique(uids):
    """
    Clique leader election with UIDs.
    1 round: all broadcast UIDs; max UID wins.
    n*(n-1) messages (each of n nodes sends n-1 point-to-point messages).
    Returns (leader_uid, rounds, messages).
    """
    n = len(uids)
    # Round 1: all nodes broadcast their UID
    received = {uid: set(uids) - {uid} for uid in uids}   # each receives all others
    messages = n * (n - 1)   # n-1 messages per node

    # Elect: max UID
    leader = max(uids)
    return leader, 1, messages


# ── LUBY'S DISTRIBUTED MIS (simulation) ──────────────────────────────
import random

def luby_mis(graph):
    """
    Luby's randomised distributed MIS algorithm.
    graph: dict {u: set of neighbours}.
    Returns (MIS set, number of rounds used).
    O(log n) rounds expected. Each round: random ID, compare with neighbours.
    """
    active = set(graph.keys())
    mis    = set()
    rounds = 0
    n      = len(active)

    while active:
        rounds += 1
        ids    = {v: random.randint(1, n**5) for v in active}
        local_max = set()

        # Each node checks if it has the maximum ID among active neighbours
        for v in active:
            active_nbrs = {u for u in graph[v] if u in active}
            if all(ids[v] > ids[u] for u in active_nbrs):
                local_max.add(v)

        # Add local maxima to MIS; remove them and their neighbours
        to_remove = set()
        for v in local_max:
            mis.add(v)
            to_remove.add(v)
            to_remove |= {u for u in graph[v] if u in active}

        active -= to_remove

    return mis, rounds


# ── CONVERGECAST: aggregate from leaves to root ──────────────────────
def convergecast_count(parent, root):
    """
    Convergecast: each non-root sends its subtree count to parent.
    Used for termination detection and computing aggregate functions.
    Returns total node count (verifies = n).
    O(V) messages, O(diameter) rounds.
    """
    children = defaultdict(list)
    for v, p in parent.items():
        if p is not None:
            children[p].append(v)

    count = {}
    # Process in reverse BFS order (leaves first)
    queue = deque()
    in_degree = {v: len(children[v]) for v in parent}
    for v in parent:
        if not children[v]:   # leaf
            queue.append(v)
            count[v] = 1

    while queue:
        v = queue.popleft()
        p = parent[v]
        if p is not None:
            count[p] = count.get(p, 1) + count[v]
            in_degree[p] -= 1
            if in_degree[p] == 0:
                queue.append(p)

    return count.get(root, 1)
// Synchronous BFS tree + Leader Election simulation
#include <vector>
#include <queue>
#include <unordered_map>
#include <algorithm>
#include <climits>
using namespace std;

// Synchronous BFS tree — O(E) messages, O(diameter) rounds
struct BFSResult {
    vector<int> parent, dist;
    int rounds, messages;
};

BFSResult sync_bfs(int n, const vector<vector<int>>& adj, int root) {
    vector<int> parent(n,-1), dist(n,-1);
    dist[root]=0; parent[root]=root;
    queue<int> frontier;
    frontier.push(root);
    int rounds=0, messages=0;

    while (!frontier.empty()) {
        ++rounds;
        int sz = frontier.size();
        while (sz--) {
            int u = frontier.front(); frontier.pop();
            for (int v : adj[u]) {
                ++messages;
                if (dist[v]==-1) {
                    dist[v]=dist[u]+1;
                    parent[v]=u;
                    frontier.push(v);
                }
            }
        }
    }
    return {parent, dist, rounds, messages};
}

// Leader election on clique with UIDs — 1 round, n*(n-1) messages
int leader_election(const vector<int>& uids) {
    return *max_element(uids.begin(), uids.end());
}

// Asynchronous SP tree: distributed Bellman-Ford-style relaxation.
// Each node keeps its current best dist and relaxes when a shorter path arrives.
// This simulation delivers messages in best-distance-first order (one possible
// interleaving; it coincides with Dijkstra). The distributed algorithm must be
// correct under every interleaving; worst case O(V*E) messages.
void async_sp_relax(int n, const vector<vector<pair<int,int>>>& adj,
                    int root, vector<int>& dist) {
    dist.assign(n, INT_MAX); dist[root]=0;
    // Process messages (simulated as priority queue)
    priority_queue<pair<int,int>,vector<pair<int,int>>,greater<>> pq;
    pq.push({0,root});
    while (!pq.empty()) {
        auto [d,u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;
        for (auto [v,w]: adj[u])
            if (dist[u]+w < dist[v])
                { dist[v]=dist[u]+w; pq.push({dist[v],v}); }
    }
}
Ring leader election: naive O(n²) vs Hirschberg-Sinclair O(n log n)
def ring_leader_naive(uids):
    """
    Naive ring leader election: each UID travels the full ring clockwise.
    Every node receives all n UIDs and elects the maximum.
    O(n^2) messages: each of n UIDs travels n hops.
    O(n) rounds.
    Returns (leader, rounds, messages).
    """
    n = len(uids)
    # Simulate: each UID travels around the ring
    # Message i is UID of node i, travels n hops
    messages = n * n    # n UIDs, each n hops
    rounds   = n        # after n rounds, each UID completes circuit
    leader   = max(uids)
    return leader, rounds, messages


def ring_leader_hirschberg_sinclair(uids):
    """
    Hirschberg-Sinclair: O(n log n) messages.
    Phase r: each surviving node probes distance 2^r in both directions.
    A node survives phase r if its UID is the max in a 2^(r+1)+1 neighbourhood.
    At most n/2^r nodes survive each phase => total messages O(n log n).
    Returns (leader, phases, messages_estimate).
    """
    n       = len(uids)
    active  = list(range(n))    # indices of active nodes
    messages = 0
    phase    = 0

    while len(active) > 1:
        probe_dist = 2 ** phase
        new_active = []
        for i, idx in enumerate(active):
            # Probe 2^phase hops clockwise and counterclockwise
            # Survives if uid[idx] is max in 2*probe_dist+1 window
            # (simplified simulation)
            window = [uids[active[(i + j) % len(active)]]
                      for j in range(-probe_dist, probe_dist + 1)]
            messages += 4 * probe_dist   # 2 probes * 2 directions
            if uids[idx] == max(window):
                new_active.append(idx)
        active  = new_active
        phase  += 1

    leader = uids[active[0]] if active else max(uids)
    return leader, phase, messages

# Comparison
import random
uids = [random.randint(1, 10000) for _ in range(1000)]
_, _, m_naive = ring_leader_naive(uids)
_, _, m_hs    = ring_leader_hirschberg_sinclair(uids)
print(f"n=1000: naive messages={m_naive:,}, HS messages≈{m_hs:,}")
# naive: 1,000,000  HS: ~40,000 (≈ 4n log n)
// Ring leader election comparison
#include <vector>
#include <algorithm>
#include <numeric>
#include <cmath>
#include <iostream>
using namespace std;

// Naive: O(n^2) messages
pair<int,long long> ring_leader_naive(const vector<int>& uids) {
    int n = uids.size();
    return {*max_element(uids.begin(),uids.end()), (long long)n*n};
}

// Hirschberg-Sinclair: O(n log n) messages
pair<int,long long> ring_leader_hs(const vector<int>& uids) {
    int n = uids.size();
    vector<int> active(n);
    iota(active.begin(), active.end(), 0);
    long long messages = 0;
    int phase = 0;

    while ((int)active.size() > 1) {
        int probe = 1 << phase;
        vector<int> survivors;
        for (int i=0;i<(int)active.size();i++) {
            int uid = uids[active[i]];
            bool survives = true;
            int sz = active.size();
            for (int j=-probe;j<=probe&&survives;j++)
                if (uids[active[(i+j+sz)%sz]] > uid) survives=false;
            if (survives) survivors.push_back(active[i]);
            messages += 4*probe;  // approximate
        }
        active = survivors; ++phase;
    }
    return {uids[active[0]], messages};
}
Worked Example · FPT Vertex Cover for Strategy Conflict Graph 🇮🇳 TTT Systems

TTT runs 10 option strategies. Pairs of strategies conflict (share overlapping positions that violate risk limits). We need to find the minimum set of strategies to suspend to eliminate all conflicts — exactly a Vertex Cover problem. In practice, the number of conflicts is small ($k \leq 5$), making the FPT approach ideal.

Kernelize first, then bounded search-tree
strategies = ['S01','S06','S08','S13','S15','S27','S27B','VOLT','CAL','COND']
# Conflict pairs (strategy indices that cannot run simultaneously)
conflicts = [(0,1),(0,2),(1,3),(2,4),(3,5),(4,6),(5,7),(6,8),(7,9)]

# k = 3: can we suspend at most 3 strategies to resolve all conflicts?
k = 3

result = solve_vertex_cover(conflicts, len(strategies), k)
if result is not None:
    print(f"Suspend: {[strategies[i] for i in result]}")
    print(f"Suspended {len(result)} strategies (budget k={k})")
else:
    print(f"Cannot resolve all conflicts with k={k} suspensions")

# FPT time: O(2^3 * 10 + 10*9) = O(80 + 90) = trivially fast
# vs brute force: O(10^3 * 9) = O(9000) — also fast here but
# for k=20, 10^20 * |E| vs 2^20 * V = 1M * V — massive difference
Result: the bounded search-tree finds the exact minimum suspension set in $O(2^k \cdot n)$ time. For $k=3$, $n=10$: $80$ operations. For $k=20$, $n=10^6$: $2^{20} \cdot 10^6 \approx 10^{12}$ vs brute force $10^{120}$ — FPT wins completely when $k$ is small. The $O(k^2)$ kernel further shrinks the graph before the search tree.
FPT algorithms appear in practice wherever NP-hard problems have small natural parameters. In quant finance: portfolio cardinality constraints (choose $k$ assets from $n$) are FPT in $k$. Graph coloring of strategy dependency graphs (schedule strategies into time-slots to avoid conflicts) is FPT in the number of colors. Distributed algorithms are the foundation of modern algo trading infrastructure: XTS WebSocket events are processed by distributed workers (Pashupati’s execution system), requiring message ordering guarantees (similar to async BFS), leader election for primary/backup failover, and consistent state across nodes. Luby’s MIS algorithm inspired randomised protocols in wireless network MAC layers (802.11 CSMA/CA).

Practice Problems
4 questions · Chapter 12
prog / daa / ch12 / q01 ★ FPT Definition

An algorithm for $k$-Vertex Cover runs in time $O(V^k \cdot E)$ where $k$ is the cover size. Is this FPT?

A
Yes — for any fixed $k$, the running time is polynomial in $V$ and $E$
B
No — the degree of the polynomial ($k$) depends on the parameter itself. FPT requires $f(k) \cdot n^{O(1)}$ where the exponent in $n$ is a constant independent of $k$
C
Yes — $V^k$ is bounded by $f(k) = V^k$ which depends only on the parameter
D
It depends on whether $k \leq \log V$ or $k > \log V$
Answer: B — NOT FPT.

FPT requires $f(k) \cdot n^{O(1)}$ where the exponent $O(1)$ is a constant independent of $k$. The algorithm $O(V^k \cdot E)$ has polynomial degree $k$ in $V$ — for different values of $k$, it’s a different polynomial. For $k=1$: $O(VE)$; for $k=10$: $O(V^{10}E)$; for $k=100$: $O(V^{100}E)$. This is called XP (slice-wise polynomial) but is strictly weaker than FPT. It is polynomial for each fixed $k$ but NOT with the same polynomial across all $k$ values. The bounded search-tree algorithm $O(2^k \cdot V)$ IS FPT: $f(k) = 2^k$ and the polynomial factor is $n = V$ (degree 1, independent of $k$).
prog / daa / ch12 / q02 ★★ Kernelization

In the $k$-Vertex Cover kernel, why does removing vertices of degree $> k$ not affect the answer?

A
High-degree vertices are never useful for covering edges
B
A vertex of degree $> k$ must be in any cover of size $\leq k$: if we exclude it, we need one vertex for each of its $> k$ incident edges, exhausting the entire budget just for those edges
C
High-degree vertices are in the cover only when $k$ is large enough
D
Removing them reduces the graph to a tree, which is easier to solve
Answer: B.

Vertex $v$ has degree $> k$. If $v \notin S$ (not in the cover), then all $> k$ incident edges must be covered by other vertices — one per edge, since no two incident edges share a non-$v$ endpoint. This requires $> k$ vertices just for $v$’s edges, but the total cover budget is $k$. Contradiction: any cover of size $\leq k$ not containing $v$ is infeasible. Therefore $v$ must be in every valid cover. We can include $v$ in the cover unconditionally (it’s in every optimal solution), remove it and its incident edges, and decrease $k$ by 1. This is correct (the answer is the same) and polynomial to compute.
prog / daa / ch12 / q03 ★★ Symmetry Breaking

Why is leader election impossible for deterministic, indistinguishable processes on a clique, but possible with UIDs?

A
The clique has too many edges for messages to propagate efficiently
B
Deterministic algorithms always select the last node alphabetically, causing conflicts
C
Without UIDs, all processes are identical: they start in the same state, send the same messages, receive the same messages (symmetric graph), and make the same transitions โ€” remaining forever identical. UIDs break symmetry by giving each process a unique initial state
D
Leader election is impossible on all graphs without UIDs, not just cliques
Answer: C.

The proof is by symmetry induction: base case — round 0, all processes in the same start state. Inductive step — each process computes the same messages (same state), sends the same messages on each port, receives the same messages (symmetric clique), makes the same state transition. After $r$ rounds, all processes are in identical states. If the algorithm terminates and some process elects itself leader, all processes simultaneously elect themselves — contradicting “exactly one leader.”

UIDs break this: processes start in different states (different UIDs), so the induction fails at round 0. After 1 round of UID exchange, each process knows a different set of information. The process with the maximum UID has unique knowledge (it saw its UID is the max) and can elect itself.
prog / daa / ch12 / q04 ★★★ Synchronous vs Asynchronous BFS

The synchronous BFS algorithm correctly builds a BFS spanning tree. Why does running the same algorithm asynchronously fail to produce a BFS tree (though it does produce a spanning tree)?

A
Messages are lost in asynchronous networks, causing nodes to miss their parent
B
Message delivery order is non-deterministic: a node may receive a message from a farther node before a message from a closer node, setting a non-shortest parent. The algorithm still produces a spanning tree but not necessarily a BFS tree (shortest-path tree)
C
The algorithm produces cycles in asynchronous networks
D
Asynchronous networks don’t support parent pointers
Answer: B.

In the synchronous BFS, round $r$ delivers all messages from distance-$r$ nodes simultaneously. A node sets its parent to the first message it receives, which is guaranteed to be from distance $r-1$ (the closest layer).

Asynchronously, there is no guarantee on delivery order. A message from node $u$ at distance 3 might arrive at node $v$ before a message from node $w$ at distance 1 (because $w$’s message experienced a slow channel). Node $v$ sets parent to $u$ (distance 3), resulting in a non-BFS spanning tree. The algorithm still builds a spanning tree (every node gets a parent, no cycles), but it may not be the BFS (shortest-path) tree.

Fix: use relaxation (Bellman-Ford style) — nodes update their parent whenever they receive a message indicating a shorter path. This produces the correct SP tree but requires more messages and complex termination detection.

Terminal Questions — Chapter 12 10 problems · No answers given

Work through independently. Q1–3 direct application. Q4–7 synthesis. Q8–10 careful argument.

1
Run the bounded search-tree for $k$-Vertex Cover on the graph with edges $\{(1,2),(2,3),(3,4),(4,5),(5,1)\}$ (a 5-cycle) with $k=2$. Draw the full recursion tree showing both branches at each level, and find the minimum vertex cover. Easy
2
Apply the $O(k^2)$ kernelization to the graph with $k=3$: $V=\{a,b,c,d,e,f,g,h\}$, edges $\{ab,ac,ad,ae,bc,cd,fg,gh\}$. Identify which vertices are forced into the cover (degree $>k$), compute the remaining kernel, and verify $|E'| \leq k^2$. Easy
3
Simulate the clique leader election algorithm with UIDs on a clique of 5 processes with UIDs $\{17, 42, 8, 31, 55\}$. Show the single round of UID exchange, which process is elected, and calculate the exact message count. Easy
4
Implement Luby’s MIS algorithm and verify it finds a maximal independent set on a random $G(n=20, p=0.3)$ Erdős–Rényi random graph. Count the number of rounds needed and verify both independence (no two adjacent MIS nodes) and maximality (every non-MIS node has a MIS neighbour). Medium
5
Prove the kernelization theorem for vertex cover: after kernelization, the kernel has at most $2k^2$ vertices. Show the argument: (a) max degree $\leq k$ after forcing, (b) $|E'| \leq k^2$ else NO, (c) each edge has 2 endpoints so $|V'| \leq 2|E'| \leq 2k^2$. Medium
6
Implement and simulate the synchronous BFS spanning tree algorithm on a 10-node graph. Track which round each node joins the tree, what its parent is, and the total message count. Verify it equals $O(|E|)$. Then simulate the same algorithm asynchronously with random delays on each edge โ€” show that the resulting tree may not be a BFS tree. Medium
7
The Hirschberg-Sinclair algorithm uses doubling probe distances to reduce ring leader election from $O(n^2)$ to $O(n\log n)$ messages. Prove: at the start of phase $r$, at most $n/2^r$ nodes are still active (have not lost a comparison). Use this to bound the total message count across all phases as $O(n\log n)$. Medium
8
The FPT-EPTAS connection: prove that if Vertex Cover has an EPTAS (runs in $f(1/\varepsilon) \cdot n^{O(1)}$ time giving $(1+\varepsilon)$-approximation), then the $k$-Vertex Cover decision problem is FPT. Specifically, set $\varepsilon = 1/(2k)$ and show this gives an exact answer for the decision problem in $f(2k) \cdot n^{O(1)}$ time. Hard
9
Prove Luby’s MIS terminates in $O(\log n)$ rounds in expectation. Key lemma: in each round, each edge is “killed” (at least one endpoint joins MIS or a neighbour does) with probability $\geq 1/2$. Show this implies the number of surviving edges halves in expectation each round, giving $O(\log |E|) = O(\log n)$ rounds. Hard
10
The W-hierarchy in parameterized complexity classifies problems by their hardness: FPT $\subseteq$ W[1] $\subseteq$ W[2] $\subseteq \ldots$. Clique parameterized by clique size is W[1]-complete (believed not FPT). Explain why this matters: (a) what would it mean if Clique were FPT? (b) Is Independent Set FPT? (c) What does W[1]-hardness of Clique imply about approximation? Hard