Intermediate Chapter 07 of 13 · Algorithm Design and Analysis

Advanced Dynamic Programming

Beyond the standard DP patterns: interval DP on substrings, game theory DP with adversarial choices, DP on partial orders, and counting DP. Each problem follows the same discipline — identify subproblem structure, write the recurrence, count subproblems, analyse the cost per subproblem — but the subproblem spaces are richer and the recurrences more subtle.

The DP Design Framework

Every DP problem in this chapter follows four steps:

1. Subproblem definition. What information must the subproblem encode? Usually a prefix/suffix/interval of the input.
2. Recurrence. Express the solution to a subproblem in terms of strictly smaller subproblems. The “guess” is the choice that splits the problem.
3. Base cases. Smallest valid subproblems with known answers.
4. Complexity. (subproblems) × (time per subproblem) = total time. Verify by counting the subproblem DAG.
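The four steps can be annotated directly on a tiny memoised recurrence. The example below uses a toy problem that is not from this chapter (counting monotone lattice paths), chosen purely to keep the framework visible; treat it as an illustrative sketch:

```python
from functools import lru_cache

def paths(m, n):
    """Count monotone paths from (0,0) to (m,n), annotated with the four steps."""
    # 1. Subproblem: P(i, j) = number of monotone paths from (0,0) to (i,j).
    @lru_cache(maxsize=None)
    def P(i, j):
        # 3. Base cases: along a single row or column there is exactly one path.
        if i == 0 or j == 0:
            return 1
        # 2. Recurrence (the "guess"): the last step came from the left or from below.
        return P(i - 1, j) + P(i, j - 1)
    # 4. Complexity: (m+1)(n+1) subproblems, O(1) per subproblem -> O(mn).
    return P(m, n)

print(paths(2, 2))  # 6
```

The same discipline applies to every problem below; only the subproblem space (intervals, game states, posets) gets richer.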

Problem 1 — Longest Palindromic Subsequence

Given string $X[1\ldots n]$, find the length of the longest subsequence that is a palindrome. Example: “character” $\to$ “carac” (length 5).

Subproblem: $L(i,j)$ = length of longest palindromic subsequence of $X[i\ldots j]$.

Recurrence: $$L(i,j) = \begin{cases} 1 & i = j \\ 2 & i+1=j \text{ and } X[i]=X[j] \\ 2 + L(i+1,\,j-1) & X[i]=X[j] \text{ and } i+1 < j \\ \max(L(i+1,\,j),\; L(i,\,j-1)) & X[i] \neq X[j] \end{cases}$$
Intuition: if the endpoints match, include both and recurse inward. If not, drop one endpoint and take the better option. Each subproblem is an interval $(i,j)$ with $i \leq j$.

Complexity: $\Theta(n^2)$ subproblems, $\Theta(1)$ per subproblem → $\Theta(n^2)$. Fill table in order of increasing $j-i$.

Why naive recursion is $\Theta(2^n)$: if all characters are distinct, every call branches into two subproblems of size $n-1$: $T(n) = 2T(n-1)$. Memoisation collapses this to $\Theta(n^2)$ distinct $(i,j)$ pairs.
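To see the collapse concretely, here is a top-down memoised version of the same recurrence (equivalent in result to the bottom-up table later in this chapter):

```python
from functools import lru_cache

def lps_topdown(X):
    """Memoised L(i, j): each (i, j) pair is computed once -> Theta(n^2) states."""
    @lru_cache(maxsize=None)
    def L(i, j):
        if i > j:
            return 0                      # empty interval
        if i == j:
            return 1                      # single character
        if X[i] == X[j]:
            return 2 + L(i + 1, j - 1)    # match endpoints, recurse inward
        return max(L(i + 1, j), L(i, j - 1))  # drop one endpoint
    return L(0, len(X) - 1)

print(lps_topdown("character"))  # 5 ("carac")
```

Note the `i > j` base case: it makes the adjacent-equal-characters case ($i+1=j$) fall out automatically, since $2 + L(j, j-1) = 2 + 0 = 2$.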

Problem 2 — Optimal Binary Search Tree

Given keys $K_1 < K_2 < \cdots < K_n$ with search probabilities (weights) $w_1, w_2, \ldots, w_n$, build a BST minimising expected search cost: $\sum_{i=1}^n w_i \cdot (\text{depth}(K_i) + 1)$. Example: if $w = [1, 10, 8, 9]$, putting $K_2$ (weight 10) at the root (depth 0) costs far less than $K_1$ (weight 1) at root.

Why greedy fails: placing the maximum-weight key at the root can force high-weight keys into deep subtrees. With $w=[1,10,8,9]$, greedy (max-weight root at every level) yields cost 54, while the optimum is 49, achieved by placing $K_3$ at the root.

Subproblem: $e(i,j)$ = cost of optimal BST on keys $K_i, \ldots, K_j$. Let $W(i,j) = \sum_{k=i}^j w_k$.

Recurrence: guess the root $K_r$ for $i \leq r \leq j$. When $K_r$ is root, all keys in the subtrees increase depth by 1, adding $W(i,j)$ to the cost: $$e(i,j) = \min_{i \leq r \leq j}\!\bigl(e(i,\,r-1) + e(r+1,\,j) + W(i,j)\bigr)$$ Base case: $e(i,i) = w_i$, $e(i,j) = 0$ if $i > j$.

Complexity: $\Theta(n^2)$ subproblems, $\Theta(n)$ time per subproblem (iterate over all roots) → $\Theta(n^3)$. (Knuth’s speedup reduces this to $\Theta(n^2)$ using the monotone root property.)
Longest Palindromic Subsequence + Optimal BST · O(n²) / O(n³)
from functools import lru_cache

# ── LONGEST PALINDROMIC SUBSEQUENCE ── O(n^2) ───────────────────────
def lps(X):
    """
    Length of longest palindromic subsequence of string X.
    Uses interval DP: L[i][j] = LPS of X[i..j].
    Bottom-up: fill by increasing interval length.
    """
    n = len(X)
    # L[i][j] = LPS length of X[i..j]
    L = [[0]*n for _ in range(n)]

    # Base case: single characters
    for i in range(n):
        L[i][i] = 1

    # Fill for increasing lengths 2, 3, ..., n
    for length in range(2, n+1):
        for i in range(n - length + 1):
            j = i + length - 1
            if X[i] == X[j]:
                L[i][j] = 2 + (L[i+1][j-1] if length > 2 else 0)
            else:
                L[i][j] = max(L[i+1][j], L[i][j-1])
    return L[0][n-1]

def lps_reconstruct(X):
    """Reconstruct the actual LPS string."""
    n = len(X)
    L = [[0]*n for _ in range(n)]
    for i in range(n): L[i][i] = 1
    for length in range(2, n+1):
        for i in range(n - length + 1):
            j = i + length - 1
            if X[i] == X[j]:
                L[i][j] = 2 + (L[i+1][j-1] if length > 2 else 0)
            else:
                L[i][j] = max(L[i+1][j], L[i][j-1])

    # Traceback
    result, i, j = [], 0, n-1
    while i <= j:
        if i == j:
            result.append(X[i]); break
        if X[i] == X[j]:
            result.append(X[i])
            i += 1; j -= 1
        elif L[i+1][j] > L[i][j-1]:
            i += 1
        else:
            j -= 1
    left = result
    right = result[:-1][::-1] if L[0][n-1] % 2 == 1 else result[::-1]
    return ''.join(left + right)


# ── OPTIMAL BINARY SEARCH TREE ── O(n^3) ────────────────────────────
def optimal_bst(weights):
    """
    Minimum expected search cost BST for keys 1..n with given weights.
    Returns (cost, root_table) where root_table[i][j] = optimal root for keys i..j.
    Cost e[i][j] = min over roots r of: e[i][r-1] + e[r+1][j] + W(i,j).
    O(n^3) time, O(n^2) space.
    """
    n = len(weights)
    W = weights[:]          # W[i] = weight of key i (0-indexed)

    # Prefix sums for W(i,j) = sum(W[i..j])
    prefix = [0] * (n+1)
    for i in range(n): prefix[i+1] = prefix[i] + W[i]
    def range_sum(i, j): return prefix[j+1] - prefix[i]

    # e[i][j] = optimal cost for keys i..j (0-indexed)
    e    = [[0.0]*n for _ in range(n)]
    root = [[0]*n   for _ in range(n)]

    # Base cases: single keys
    for i in range(n):
        e[i][i]    = W[i]
        root[i][i] = i

    # Fill for increasing lengths
    for length in range(2, n+1):
        for i in range(n - length + 1):
            j = i + length - 1
            Wij = range_sum(i, j)
            e[i][j] = float('inf')
            for r in range(i, j+1):
                cost = (e[i][r-1] if r > i else 0) + \
                       (e[r+1][j] if r < j else 0) + Wij
                if cost < e[i][j]:
                    e[i][j] = cost
                    root[i][j] = r
    return e[0][n-1], root

# Example from lecture: w = [1, 10, 8, 9]
cost, roots = optimal_bst([1, 10, 8, 9])
print(f"Optimal BST cost: {cost}")          # 49
print(f"Root of full tree: key {roots[0][3]+1}")  # key 3 (weight=8)
// Longest Palindromic Subsequence + Optimal BST
#include <vector>
#include <string>
#include <algorithm>
using namespace std;

// LPS — O(n^2) interval DP
int lps(const string& X) {
    int n = X.size();
    vector<vector<int>> L(n, vector<int>(n, 0));
    for (int i = 0; i < n; i++) L[i][i] = 1;
    for (int len = 2; len <= n; len++)
        for (int i = 0; i <= n-len; i++) {
            int j = i + len - 1;
            if (X[i] == X[j])
                L[i][j] = 2 + (len > 2 ? L[i+1][j-1] : 0);
            else
                L[i][j] = max(L[i+1][j], L[i][j-1]);
        }
    return L[0][n-1];
}

// Optimal BST — O(n^3)
pair<double,vector<vector<int>>> optimal_bst(vector<double> w) {
    int n = w.size();
    // prefix sums
    vector<double> pre(n+1, 0);
    for (int i = 0; i < n; i++) pre[i+1] = pre[i] + w[i];
    auto rsum = [&](int i, int j){ return pre[j+1] - pre[i]; };

    vector<vector<double>> e(n, vector<double>(n, 0));
    vector<vector<int>>    root(n, vector<int>(n, 0));
    for (int i = 0; i < n; i++) { e[i][i] = w[i]; root[i][i] = i; }

    for (int len = 2; len <= n; len++)
        for (int i = 0; i <= n-len; i++) {
            int j = i + len - 1;
            double Wij = rsum(i, j);
            e[i][j] = 1e18;
            for (int r = i; r <= j; r++) {
                double cost = (r>i ? e[i][r-1] : 0)
                            + (r<j ? e[r+1][j] : 0) + Wij;
                if (cost < e[i][j]) { e[i][j]=cost; root[i][j]=r; }
            }
        }
    return {e[0][n-1], root};
}

Problem 3 — Alternating Coin Game

$n$ coins ($n$ even) with values $V_1, V_2, \ldots, V_n$ in a row. Two players alternate turns; on each turn a player takes either the leftmost or the rightmost coin. You move first. Maximise your total value.

Key insight: the first player can always guarantee at least $\max(\sum_{\text{odd}} V_i,\; \sum_{\text{even}} V_i)$. Here’s why: in any even-length segment the two end coins have opposite index parities, and the first player always moves on even-length segments. So the first player commits to one parity class; on every turn one of the two ends belongs to that class, and after the opponent’s reply the segment has even length again. By induction the first player can collect the entire chosen class, so they take whichever class has the larger sum.
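The guaranteed bound itself is one line of arithmetic; a minimal sketch (the example values are from practice question q03 in this chapter):

```python
def parity_bound(V):
    """Lower bound the first player can force: the better of the two parity classes."""
    odd_sum  = sum(V[0::2])   # V_1, V_3, ... (odd 1-indexed positions)
    even_sum = sum(V[1::2])   # V_2, V_4, ... (even 1-indexed positions)
    return max(odd_sum, even_sum)

print(parity_bound([4, 42, 39, 17, 25, 6]))  # 68 = 4 + 39 + 25
```

The DP below can only improve on this bound, never fall short of it.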

But to maximise, we need DP. Let $V(i,j)$ = maximum value the current player can guarantee from coins $V_i, \ldots, V_j$. $$V(i,j) = \max\!\Bigl(V_i + \min\bigl(V(i+1,j-1),\; V(i+2,j)\bigr),\;\; V_j + \min\bigl(V(i,j-2),\; V(i+1,j-1)\bigr)\Bigr)$$ The $\min$ captures the opponent’s optimal counter-move: after you take $V_i$, opponent chooses between $V_{i+1}$ (leaving $V(i+2,j)$ for you) and $V_j$ (leaving $V(i+1,j-1)$ for you); they will pick whichever leaves you less.

Complexity: $\Theta(n^2)$ subproblems (intervals of even length), $\Theta(1)$ per subproblem → $\Theta(n^2)$.

Problem 4 — Rectangular Blocks (DP on Partial Orders)

Given $n$ 3D blocks, each with dimensions $(l_i, w_i, h_i)$. Stack them as high as possible: block $i$ can go on block $j$ only if $l_i \leq l_j$ and $w_i \leq w_j$. This defines a partial order on blocks.

Subproblem: Sort blocks by $(l, w)$ in non-increasing order. Let $H[i]$ = height of tallest tower with block $i$ on top. $$H[i] = h_i + \max_{j < i,\; l_j \geq l_i,\; w_j \geq w_i} H[j]$$ Base case: $H[i] = h_i$ if no valid block can go below. Answer: $\max_i H[i]$.

Complexity: $O(n)$ subproblems, $O(n)$ per subproblem → $O(n^2)$ DP. Sorting: $O(n\log n)$. Total: $O(n^2)$.

This is essentially the Longest Increasing Subsequence (LIS) problem on a 2D partial order.
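The LIS connection can be made concrete in one dimension, where the longest chain is found in $O(n \log n)$ by patience sorting with binary search. A sketch of that standard technique (not code from the lecture; Terminal Question 9 asks you to extend it):

```python
import bisect

def lis_length(a):
    """Length of the longest strictly increasing subsequence in O(n log n).

    tails[k] = smallest possible tail value of an increasing subsequence
    of length k+1 seen so far; bisect keeps tails sorted.
    """
    tails = []
    for x in a:
        k = bisect.bisect_left(tails, x)   # first tail >= x
        if k == len(tails):
            tails.append(x)                # x extends the longest chain
        else:
            tails[k] = x                   # x is a better (smaller) tail
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4 (e.g. 1, 4, 5, 9)
```

For the 2D blocks partial order, sorting by one coordinate and running LIS-style machinery on the other is the idea behind the $O(n \log n)$ speedup.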

Problem 5 — Boolean Parenthesisation (Counting DP)

Given a boolean expression with $n$ operands and $n-1$ operators (AND, OR, XOR), count the number of ways to parenthesise it so it evaluates to True.

Subproblems: $T[i,j]$ = ways $S[i\ldots j]$ evaluates to True; $F[i,j]$ = ways to False. Let $\text{Tot}[i,j] = T[i,j] + F[i,j]$.

Recurrence (split at operator $S[k]$ for $i \leq k < j$): $$T[i,j] = \sum_{k=i}^{j-1} \begin{cases} T[i,k] \cdot T[k+1,j] & S[k]=\text{AND} \\ \text{Tot}[i,k]\cdot\text{Tot}[k+1,j] - F[i,k]\cdot F[k+1,j] & S[k]=\text{OR} \\ T[i,k]\cdot F[k+1,j] + F[i,k]\cdot T[k+1,j] & S[k]=\text{XOR} \end{cases}$$ $F[i,j]$ is symmetric. Base cases: $T[i,i]=1$ if $S[i]=\text{True}$, else $0$; vice versa for $F[i,i]$.

Complexity: $O(n^2)$ subproblems, $O(n)$ per subproblem → $O(n^3)$.
Coin Game + Boolean Parenthesisation · O(n²) / O(n³)
# ── ALTERNATING COIN GAME ── O(n^2) ─────────────────────────────────
def coin_game(V):
    """
    V: list of coin values (length must be even).
    Returns maximum value first player can guarantee.
    V(i,j) = max value current player earns from coins V[i..j].
    Opponent plays optimally (min), we play optimally (max).
    """
    n = len(V)
    dp = [[0]*n for _ in range(n)]

    # Base cases
    for i in range(n):
        dp[i][i] = V[i]
    for i in range(n-1):
        dp[i][i+1] = max(V[i], V[i+1])

    # Fill for increasing lengths (even only, but we compute all)
    for length in range(3, n+1):
        for i in range(n - length + 1):
            j = i + length - 1
            # Take V[i]: opponent sees [i+1..j], picks better for them
            take_left  = V[i] + min(
                dp[i+2][j] if i+2 <= j else 0,     # opp takes V[i+1]
                dp[i+1][j-1] if i+1 <= j-1 else 0  # opp takes V[j]
            )
            # Take V[j]: opponent sees [i..j-1], picks better for them
            take_right = V[j] + min(
                dp[i+1][j-1] if i+1 <= j-1 else 0, # opp takes V[i]
                dp[i][j-2] if i <= j-2 else 0       # opp takes V[j-1]
            )
            dp[i][j] = max(take_left, take_right)
    return dp[0][n-1]


# ── BOOLEAN PARENTHESISATION ── O(n^3) ──────────────────────────────
def bool_paren(symbols, operators):
    """
    Count ways to parenthesise symbols[0..n-1] with operators[0..n-2]
    such that the expression evaluates to True.
    symbols: list of booleans (True/False)
    operators: list of 'AND', 'OR', 'XOR'
    Returns count (may be exponential).
    """
    n = len(symbols)
    T = [[0]*n for _ in range(n)]  # T[i][j]: ways to True
    F = [[0]*n for _ in range(n)]  # F[i][j]: ways to False

    # Base cases
    for i in range(n):
        T[i][i] = 1 if symbols[i] else 0
        F[i][i] = 0 if symbols[i] else 1

    # Fill for increasing lengths
    for length in range(2, n+1):
        for i in range(n - length + 1):
            j = i + length - 1
            for k in range(i, j):     # split at operator k (between k and k+1)
                op  = operators[k]
                tl, fl = T[i][k], F[i][k]
                tr, fr = T[k+1][j], F[k+1][j]
                tot_l, tot_r = tl+fl, tr+fr
                if op == 'AND':
                    T[i][j] += tl * tr
                    F[i][j] += tot_l*tot_r - tl*tr
                elif op == 'OR':
                    T[i][j] += tot_l*tot_r - fl*fr
                    F[i][j] += fl * fr
                elif op == 'XOR':
                    T[i][j] += tl*fr + fl*tr
                    F[i][j] += tl*tr + fl*fr
    return T[0][n-1]

# Example: True AND False XOR True
syms = [True, False, True]
ops  = ['AND', 'XOR']
print(bool_paren(syms, ops))   # 2
// Coin Game + Boolean Parenthesisation
#include <vector>
#include <string>
#include <algorithm>
using namespace std;

// Coin Game — O(n^2)
int coin_game(const vector<int>& V) {
    int n = V.size();
    vector<vector<int>> dp(n, vector<int>(n, 0));
    for (int i = 0; i < n; i++) dp[i][i] = V[i];
    for (int i = 0; i < n-1; i++) dp[i][i+1] = max(V[i], V[i+1]);
    for (int len = 3; len <= n; len++)
        for (int i = 0; i <= n-len; i++) {
            int j = i + len - 1;
            int take_left  = V[i] + min(
                i+2 <= j   ? dp[i+2][j]   : 0,
                i+1 <= j-1 ? dp[i+1][j-1] : 0);
            int take_right = V[j] + min(
                i+1 <= j-1 ? dp[i+1][j-1] : 0,
                i   <= j-2 ? dp[i][j-2]   : 0);
            dp[i][j] = max(take_left, take_right);
        }
    return dp[0][n-1];
}

// Boolean parenthesisation — O(n^3)
long long bool_paren(const vector<bool>& sym,
                     const vector<string>& ops) {
    int n = sym.size();
    vector<vector<long long>> T(n,vector<long long>(n,0)),
                               F(n,vector<long long>(n,0));
    for (int i=0;i<n;i++){T[i][i]=sym[i];F[i][i]=!sym[i];}
    for (int len=2;len<=n;len++)
        for (int i=0;i<=n-len;i++) {
            int j=i+len-1;
            for (int k=i;k<j;k++) {
                auto tl=T[i][k],fl=F[i][k],tr=T[k+1][j],fr=F[k+1][j];
                auto tot=( tl+fl)*(tr+fr);
                if(ops[k]=="AND"){T[i][j]+=tl*tr;F[i][j]+=tot-tl*tr;}
                else if(ops[k]=="OR"){T[i][j]+=tot-fl*fr;F[i][j]+=fl*fr;}
                else{T[i][j]+=tl*fr+fl*tr;F[i][j]+=tl*tr+fl*fr;}
            }
        }
    return T[0][n-1];
}

Problem 6 — Make Change (Pseudo-Polynomial DP)

Given coin denominations $S_1, \ldots, S_m$ and a target amount $N$, make change for $N$ cents using the fewest coins. Subproblem: $C(p)$ = minimum coins to make amount $p$. Recurrence: $C(p) = 1 + \min_{S_i \leq p} C(p - S_i)$, with base case $C(0) = 0$. Complexity: $O(N)$ subproblems, $O(m)$ per subproblem → $O(mN)$. This is pseudo-polynomial: the input size is the number of bits of $N$, not $N$ itself.

Rectangular blocks tower + make-change DP · O(n²) / O(mN)
# ── RECTANGULAR BLOCKS TOWER ── O(n^2) ──────────────────────────────
def max_tower(blocks):
    """
    blocks: list of (length, width, height) tuples.
    Returns maximum achievable tower height.
    Block i can sit on block j only if l_i <= l_j AND w_i <= w_j.
    DP: H[i] = max height of tower with block i on top.
    H[i] = h_i + max(H[j] for j < i if l_j >= l_i and w_j >= w_i)
    """
    # Sort descending by (length, width)
    blocks = sorted(blocks, key=lambda b: (-b[0], -b[1]))
    n = len(blocks)
    H = [b[2] for b in blocks]    # H[i] starts as just its own height

    for i in range(1, n):
        li, wi, hi = blocks[i]
        for j in range(i):
            lj, wj, _ = blocks[j]
            if lj >= li and wj >= wi:  # block j can support block i
                H[i] = max(H[i], hi + H[j])

    return max(H)

# Example
blocks = [(4,3,2), (1,1,5), (3,2,3), (2,2,4)]
print(max_tower(blocks))   # 14: stack (4,3,2) -> (3,2,3) -> (2,2,4) -> (1,1,5)


# ── MAKE CHANGE ── O(mN) pseudo-polynomial ───────────────────────────
def min_coins(coins, N):
    """
    Minimum coins to make N cents using coin denominations in `coins`.
    C[p] = min C[p - S_i] + 1 for all S_i <= p.
    O(m * N) time, O(N) space. Pseudo-polynomial (not polynomial in bits of N).
    """
    INF = float('inf')
    C = [INF] * (N + 1)
    C[0] = 0
    for p in range(1, N + 1):
        for s in coins:
            if s <= p and C[p - s] + 1 < C[p]:
                C[p] = C[p - s] + 1
    return C[N] if C[N] != INF else -1

# Reconstruct which coins were used
def min_coins_with_path(coins, N):
    INF = float('inf')
    C    = [INF] * (N + 1)
    last = [-1]  * (N + 1)
    C[0] = 0
    for p in range(1, N + 1):
        for s in coins:
            if s <= p and C[p-s]+1 < C[p]:
                C[p] = C[p-s]+1; last[p] = s
    # Traceback
    path, p = [], N
    while p > 0:
        path.append(last[p]); p -= last[p]
    return C[N], path

cost, used = min_coins_with_path([1, 5, 10, 25], 41)
print(f"Coins needed: {cost}, using: {used}")  # 4 coins: [1, 5, 10, 25]
// Rectangular blocks + make-change
#include <vector>
#include <algorithm>
#include <tuple>
#include <numeric>
using namespace std;

// Tower of blocks — O(n^2)
int max_tower(vector<tuple<int,int,int>> blocks) {
    sort(blocks.begin(), blocks.end(),
         [](auto& a, auto& b){
             return get<0>(a) != get<0>(b) ?
                    get<0>(a) > get<0>(b) :
                    get<1>(a) > get<1>(b); });
    int n = blocks.size();
    vector<int> H(n);
    for (int i=0;i<n;i++) H[i]=get<2>(blocks[i]);
    for (int i=1;i<n;i++)
        for (int j=0;j<i;j++)
            if (get<0>(blocks[j])>=get<0>(blocks[i]) &&
                get<1>(blocks[j])>=get<1>(blocks[i]))
                H[i]=max(H[i], get<2>(blocks[i])+H[j]);
    return *max_element(H.begin(),H.end());
}

// Make change — O(m*N)
int min_coins(vector<int> coins, int N) {
    const int INF = 1e9;
    vector<int> C(N+1, INF);
    C[0] = 0;
    for (int p=1;p<=N;p++)
        for (int s : coins)
            if (s<=p && C[p-s]+1 < C[p])
                C[p] = C[p-s]+1;
    return C[N]==INF ? -1 : C[N];
}
Worked Example · Optimal BST for Options Strike Lookup 🇮🇳 TTT / Pashupati

An options trading system queries strikes by name millions of times per session. Strike frequency is non-uniform: ATM strikes (near 23,400) are queried far more than deep OTM strikes. An optimal BST minimises expected lookup depth.

Build optimal BST from strike query frequencies
# Strike keys (sorted) and their normalised query frequencies
strikes    = [22500, 23000, 23500, 24000, 24500]
# ATM near 23500: highest frequency
frequencies = [0.05, 0.15, 0.45, 0.25, 0.10]

cost, root_table = optimal_bst(frequencies)
print(f"Expected search cost: {cost:.4f}")
# Frequency-oblivious baseline: charge every key a flat depth of 2,
# i.e. cost f * (2+1) per key (a crude estimate, not an actual tree):
flat_cost = sum(f * (2+1) for f in frequencies)
print(f"Flat-depth baseline:  {flat_cost:.4f}")
# Optimal BST places 23500 (freq=0.45) at the root -> 1 comparison for hot keys
Result: optimal BST cost $= 1.70$ vs the flat-depth baseline of $3.0$, roughly a $43\%$ reduction in expected comparisons. The optimal tree places the most-queried strike (23500, freq $= 0.45$) at the root, so the hottest lookups finish in a single comparison. (For this frequency profile the hottest strike is also the median key, so the optimal tree coincides with the balanced BST; skew the load toward an off-median strike and the optimal tree beats the balanced one outright.) At millions of queries per session, the saved comparisons add up.
The coin change DP ($O(mN)$) appears directly in options strategy construction: given target payoff $N$ and available option strikes as “denominations,” find the minimum number of legs. The boolean parenthesisation DP structure also appears in order-routing logic — given a sequence of market conditions (True/False) joined by AND/OR triggers, count valid execution paths. The rectangular blocks problem is a disguised form of the Longest Increasing Subsequence on a 2D partial order — used in scheduling algorithms where instruments or strategies must be ordered by multiple criteria (expiry and strike must both increase in a calendar spread ladder).

Practice Problems
4 questions · Chapter 07
prog / daa / ch07 / q01 ★ LPS Recurrence

For the string “ABCBA”, what is $L(0, 4)$ (LPS of the full string), and what is the base case that prevents infinite recursion when $X[i] = X[j]$ and $i+1 = j$?

A
$L(0,4) = 4$; base case returns $2 + L(1,3) = 2 + 1 = 3$
B
$L(0,4) = 5$ (entire string is a palindrome); when $i+1=j$ and $X[i]=X[j]$, return $2$ directly without recursing into $L(i+1, j-1) = L(j,\,j-1)$, whose left index exceeds its right
C
$L(0,4) = 3$; the base case for adjacent equal characters returns $1$
D
$L(0,4) = 5$; the base case for $i+1=j$ is unnecessary since $L(i+1,j-1) = L(j,j-1) = 0$ automatically
Answer: B.

“ABCBA” is itself a palindrome, so $L(0,4) = 5$. The key base case: when $X[i]=X[j]$ and $i+1=j$ (two adjacent equal characters), return $2$ directly. Without this guard, we would recurse into $L(i+1, j-1) = L(j, j-1)$, where the left index exceeds the right. Option D suggests $L(j, j-1) = 0$ would work automatically, but that requires explicitly treating out-of-bounds index pairs as 0 in the code. The implementation in this chapter handles it with the `length > 2` guard, which is equivalent to checking $i+1 = j$. For “ABCBA”: $L(0,4)$: $X[0]=A=X[4] \Rightarrow 2+L(1,3)$. $L(1,3)$: $X[1]=B=X[3] \Rightarrow 2+L(2,2)=2+1=3$. So $L(0,4)=2+3=5$.
prog / daa / ch07 / q02 ★★ Optimal BST

In the Optimal BST recurrence $e(i,j) = \min_{r}(e(i,r-1) + e(r+1,j) + W(i,j))$, why does $W(i,j)$ appear in every term regardless of the choice of root $r$?

A
$W(i,j)$ accounts for the weight of the root $K_r$ only
B
When $K_r$ becomes root, every other key's depth increases by 1, adding $w_k$ for each $k \neq r$ to the cost; combined with $w_r$ itself at depth 0, the total addition is exactly $W(i,j) = \sum_{k=i}^j w_k$
C
$W(i,j)$ is a normalisation constant ensuring the recurrence sums to a valid probability
D
$W(i,j)$ accounts for the cost of searching the root $K_r$ at depth 1
Answer: B.

The cost function is $\sum_k w_k \cdot (\text{depth}(K_k) + 1)$. When we build a BST rooted at $K_r$ over keys $K_i, \ldots, K_j$: the subproblems $e(i, r-1)$ and $e(r+1, j)$ compute costs as if their subtrees were independent trees rooted at depth 0. But in the actual tree those subtrees hang at depth 1 (below $K_r$), so every key in them gains one level of depth, adding $w_k$ per key. This extra cost is independent of $r$: the $j-i$ non-root keys each contribute an extra $w_k$, and $K_r$ itself contributes $w_r \cdot (0+1) = w_r$; together these sum to exactly $W(i,j) = \sum_{k=i}^j w_k$.
prog / daa / ch07 / q03 ★★ Coin Game Strategy

For coins $[4, 42, 39, 17, 25, 6]$, the first player can guarantee at least which amount using the odd/even strategy, and does the DP confirm this is optimal?

A
Odd sum $= 4+39+25 = 68$; even sum $= 42+17+6 = 65$; first player guarantees $68$ and this is optimal
B
Odd sum $= 4+39+25=68$; even sum $= 42+17+6=65$; first player guarantees at least $68$ using the parity strategy. The DP may give a higher value by exploiting specific game sequences
C
The first player cannot guarantee more than $65$; the second player always wins with the even-indexed coins
D
The first player always wins the maximum coin ($42$) by playing greedily, guaranteeing $42 + 25 + 4 = 71$
Answer: B.

Odd-indexed coins (1-indexed): $V_1=4$, $V_3=39$, $V_5=25$; sum $= 68$. Even-indexed: $V_2=42$, $V_4=17$, $V_6=6$; sum $= 65$. The parity strategy guarantees the first player at least $68$, the higher-sum class. The DP recurrence computes the true minimax optimum, which can exceed this bound when specific game sequences allow it: the parity strategy is only a lower bound, while the DP gives the exact value. For this input the DP in fact returns exactly $68$, so the parity bound happens to be tight here.
prog / daa / ch07 / q04 ★★★ Subproblem Count

The boolean parenthesisation problem has $O(n^2)$ subproblems and $O(n)$ time per subproblem, giving $O(n^3)$. The coin game has $O(n^2)$ subproblems and $O(1)$ per subproblem, giving $O(n^2)$. What is the fundamental reason one is cheaper than the other?

A
The coin game uses a simpler data structure (array vs dictionary)
B
Boolean parenthesisation has more base cases, making it inherently slower
C
The coin game recurrence depends on only $O(1)$ previously computed subproblems per cell ($V(i,j)$ needs only 4 sub-values); boolean parenthesisation must iterate over all $O(n)$ split points $k$ between $i$ and $j$
D
The coin game has fewer overlapping subproblems so memoisation is less effective
Answer: C — the number of dependencies per subproblem.

Both problems have $\Theta(n^2)$ subproblems (intervals $(i,j)$). The difference is in how many previously computed values each subproblem depends on. Coin game $V(i,j)$: depends on exactly 4 values ($V(i+2,j)$, $V(i+1,j-1)$ for taking left; $V(i+1,j-1)$, $V(i,j-2)$ for taking right) — $O(1)$ work per cell. Boolean parenthesisation $T(i,j)$: must try every split point $k$ from $i$ to $j-1$ — $O(j-i)$ work per cell, averaging $O(n)$. The same gap appears in Optimal BST vs LPS: BST is $O(n^3)$ (iterate over all roots), LPS is $O(n^2)$ (only check endpoints). The “guess” space determines the per-subproblem cost.

Terminal Questions — Chapter 07 · 10 problems · No answers given

Work through independently. Q1–3 direct application. Q4–7 synthesis. Q8–10 careful argument.

1
Fill the complete $5 \times 5$ LPS table for “BBABB”. Start from base cases $L[i][i]=1$, then fill lengths 2, 3, 4, 5. What is the LPS and what subsequence achieves it? Easy
2
Compute the Optimal BST for keys $K_1, K_2, K_3$ with weights $w=[0.25, 0.50, 0.25]$. Fill $e(i,j)$ and $W(i,j)$ for all valid $(i,j)$. Which key is the root? What is the expected search cost? Compare to a balanced BST. Easy
3
Run the coin game DP on $V = [3, 9, 1, 2]$. Compute $V(i,j)$ for all pairs. What is the first player’s guaranteed value? Trace the optimal sequence of moves for both players. Easy
4
Prove the parity strategy for the coin game: the first player can always guarantee collecting all coins of the higher-sum parity class. Specifically, show that after the first player picks a coin, they can always maintain the invariant of having access to all remaining coins of their chosen parity. Medium
5
Implement Knuth’s speedup for the Optimal BST: instead of iterating over all $O(n)$ roots, show that the optimal root $r^*(i,j) \in [r^*(i,j-1), r^*(i+1,j)]$ (monotone root property). This reduces time from $O(n^3)$ to $O(n^2)$. Verify correctness against the naive $O(n^3)$ implementation. Medium
6
The text justification problem: given $n$ words of lengths $l_1, \ldots, l_n$ and line width $M$, minimise $\sum_{\text{lines}} (\text{slack})^3$ (sum of cubes of trailing spaces, ignoring last line). Define the subproblem, write the recurrence, analyse complexity, and implement it. Medium
7
The make-change problem has pseudo-polynomial complexity $O(mN)$. What does this mean exactly — is $N$ counted as a number or as its bit representation? Give an example showing the algorithm is exponential in the input size (number of bits). What would a truly polynomial algorithm require? Medium
8
Prove that the naive recursive LPS implementation runs in $\Omega(2^n)$ time when all characters are distinct by constructing the recursion tree and showing it is a complete binary tree of depth $n$. Then show memoisation with the 2D table reduces this to $\Theta(n^2)$ by bounding the number of distinct $(i,j)$ pairs and the lookup cost. Hard
9
Generalise the rectangular blocks problem to $d$ dimensions: block $i$ can go on block $j$ iff $l_i^{(k)} \leq l_j^{(k)}$ for all dimensions $k = 1, \ldots, d$. What is the DP recurrence and time complexity? For $d=2$, show it reduces to the longest chain in a poset and give an $O(n \log n)$ algorithm using patience sorting. Hard
10
The boolean parenthesisation problem counts parenthesisations. The Catalan number $C_{n-1}$ counts the total number of parenthesisations of $n$ operands. Verify for $n=3$: $C_2 = 2$ (two binary trees). Show that $T[1,n] + F[1,n] = C_{n-1}$ always. What does this imply about the maximum value of $T[1,n]$? Hard