Dynamic Programming · Chapter 11 of 13 · Algorithms and Data Structures

Dynamic Programming I

Dynamic programming is recursion where subproblem solutions overlap and are reused. The insight is simple: if you’re solving the same subproblem twice, store the answer the first time. The discipline is in identifying the right subproblems.

What Is Dynamic Programming?

Every recursive algorithm defines a subproblem DAG: vertices are subproblems, and there is an edge from $A$ to $B$ if solving $A$ requires solving $B$. For divide-and-conquer (merge sort, binary search), this DAG is a tree — each subproblem is used exactly once. Dynamic programming arises when the DAG has edges that merge: the same subproblem is needed by multiple parent problems.

Dynamic programming = recursion where subproblem dependencies overlap (subproblem DAG has in-degree > 1 at some vertices).

Two equivalent implementations:
Top-down (memoisation): recurse as normal, but cache each result. On re-entry, return the cached value.
Bottom-up (tabulation): process subproblems in topological order (smallest first), storing results in a table.

Both achieve the same time complexity. Bottom-up avoids recursion overhead and is usually preferred in production. Top-down is often easier to write correctly.
The name “dynamic programming” was coined by Richard Bellman (yes, the same Bellman) in the 1950s specifically to sound impressive to government funders — “dynamic” meant time-varying, “programming” meant scheduling. Don’t let the name mislead you: there is nothing especially dynamic about it. It is just careful recursion with reuse.

The SRT BOT Framework — Six Steps to Every DP

SRT BOT: the 6-step template for any DP.

1. Subproblem definition — what is $x(i)$? Describe it precisely in words. Common patterns: prefix $A[:i]$, suffix $A[i:]$, substring $A[i:j]$, or state with auxiliary variable.
2. Relate — express $x(i)$ in terms of strictly smaller subproblems. Identify a "local question" and brute-force all possible answers.
3. Topological order — argue the dependency DAG is acyclic. Give the order to process subproblems.
4. Base cases — subproblems where the relation breaks down. State solutions directly.
5. Original problem — express the answer in terms of computed subproblems.
6. Time analysis — (number of subproblems) × (work per subproblem, treating recursive calls as $O(1)$).

Canonical Example — Fibonacci Numbers

The simplest illustration of DP. Naively: $F(n) = F(n-1) + F(n-2)$, $F(0)=0$, $F(1)=1$. The recursion tree recomputes $F(k)$ exactly $F(n-k+1)$ times (for $k \geq 1$) — exponential total work. The subproblem DAG has in-degree 2 at almost every vertex: both $F(n)$ and $F(n-1)$ depend on $F(n-2)$. Memoisation reduces this to $O(n)$.

Fibonacci · top-down (memo) and bottom-up
# ── TOP-DOWN (memoisation) ──────────────────────────────────────────────
def fib_memo(n):
    memo = {}
    def F(i):
        if i < 2: return i          # base cases: F(0)=0, F(1)=1
        if i not in memo:
            memo[i] = F(i-1) + F(i-2)  # relation, computed once
        return memo[i]
    return F(n)

# ── BOTTOM-UP (tabulation) ──────────────────────────────────────────────
def fib_dp(n):
    if n < 2: return n
    F = [0] * (n + 1)
    F[0], F[1] = 0, 1               # base cases
    for i in range(2, n + 1):       # topological order: increasing i
        F[i] = F[i-1] + F[i-2]     # relation
    return F[n]                     # original problem

# ── SPACE-OPTIMISED (O(1) extra space) ──────────────────────────────────
def fib_opt(n):
    a, b = 0, 1
    for _ in range(n): a, b = b, a + b
    return a
# SRT BOT: subproblems=F(0)..F(n), relation=F(i)=F(i-1)+F(i-2),
# topo=increasing i, bases=F(0),F(1), original=F(n), time=O(n).
#include <vector>
#include <unordered_map>
using namespace std;

// Bottom-up Fibonacci — O(n) time, O(n) space
long long fib_dp(int n) {
    if (n < 2) return n;
    vector<long long> F(n + 1);
    F[0] = 0; F[1] = 1;             // base cases
    for (int i = 2; i <= n; ++i)
        F[i] = F[i-1] + F[i-2];    // relation in topological order
    return F[n];
}

// Space-optimised — O(1) extra space
long long fib_opt(int n) {
    long long a = 0, b = 1;
    for (int i = 0; i < n; ++i) { long long t = a+b; a=b; b=t; }
    return a;
}

Problem 1 — Bowling (Suffix DP)

$n$ pins with values $v_0, \ldots, v_{n-1}$. A ball can hit one pin (score $v_i$) or two adjacent pins (score $v_i \cdot v_{i+1}$). Maximise the total score. Key insight: express it as a suffix DP.

SRT BOT for Bowling:

1. Subproblems: $B(i)$ = max score using only pins $i, i+1, \ldots, n-1$, for $0 \leq i \leq n+1$ ($i \geq n$ means no pins remain).
2. Relate: locally brute-force what happens to pin $i$: $$B(i) = \max\bigl(B(i+1),\; v_i + B(i+1),\; v_i \cdot v_{i+1} + B(i+2)\bigr)$$
3. Topo order: decreasing $i$ (from $n$ down to 0).
4. Base cases: $B(n) = B(n+1) = 0$.
5. Original: $B(0)$.
6. Time: $\Theta(n)$ subproblems $\times$ $\Theta(1)$ work each $= \Theta(n)$.
Bowling · suffix DP · Θ(n)
def bowl(v):
    """Max score from bowling pins with values v. Θ(n) time."""
    n = len(v)
    # Bottom-up: process suffixes in decreasing order of i
    B = [0] * (n + 2)                  # B[n] = B[n+1] = 0 (base cases)
    for i in reversed(range(n)):
        B[i] = max(B[i + 1],           # skip pin i
                   v[i] + B[i + 1])    # hit pin i alone
        if i + 1 < n:                  # hit pins i and i+1 together
            B[i] = max(B[i], v[i] * v[i + 1] + B[i + 2])
    return B[0]

# Example: v = [-1, 1, 1, 1, 9, 9, 3, -3, -5, 2, 2]
# B(10)=2, B(9)=max(2, 2+2, 2*2)=4, B(8)=max(4, -5+4, -10+2)=4,
# B(7)=(-3)*(-5)+B(9)=19, B(6)=3+19=22, B(5)=9*3+B(7)=46,
# B(4)=9*9+B(6)=103, B(3)=1+103=104, B(2)=105, B(1)=106, B(0)=106.
#include <vector>
#include <algorithm>
using namespace std;

int bowl(const vector<int>& v) {
    int n = v.size();
    vector<int> B(n + 2, 0);       // base cases: B[n]=B[n+1]=0
    for (int i = n - 1; i >= 0; --i) {
        B[i] = max({
            B[i + 1],               // skip pin i
            v[i] + B[i + 1],        // hit pin i alone
            (i + 1 < n ? v[i] * v[i+1] + B[i+2] : v[i] + B[i+1])
        });                         // hit pins i and i+1 (if exists)
    }
    return B[0];
}
Bowling’s subproblem DAG looks like: $B_0 \to B_1 \to B_2 \to \cdots \to B_n$, with additional skip-one edges $B_i \to B_{i+2}$ (when bowling two pins). It is a DAG shortest/longest path problem in disguise — find the max-weight path from node 0 to node $n$. DP on DAGs and DAG relaxation are the same algorithm, just stated differently.
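The equivalence can be made concrete by building the bowling DAG explicitly and running a generic longest-path relaxation in reverse topological order. A sketch (the names `dag_longest_path` and `bowl_via_dag` are illustrative, not from the chapter):

```python
def dag_longest_path(adj, order, s, t):
    """Generic DAG longest path: relax vertices in reverse topological
    order, so every successor's distance is final before we use it.
    adj[u] = list of (weight, v) edges; order = a topological order."""
    NEG = float('-inf')
    dist = {u: NEG for u in order}
    dist[t] = 0                               # path of length 0 at the sink
    for u in reversed(order):                 # sinks first
        for w, v2 in adj.get(u, []):
            if dist[v2] != NEG:
                dist[u] = max(dist[u], w + dist[v2])
    return dist[s]

def bowl_via_dag(v):
    """Bowling as max-weight path: node i = 'pins i..n-1 remain'."""
    n = len(v)
    adj = {}
    for i in range(n):
        adj[i] = [(0, i + 1),                 # skip pin i
                  (v[i], i + 1)]              # hit pin i alone
        if i + 1 < n:
            adj[i].append((v[i] * v[i + 1], i + 2))  # hit the pair
    return dag_longest_path(adj, list(range(n + 1)), 0, n)
```

The relaxation step is exactly the bowling recurrence; only the bookkeeping differs.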

Problem 2 — Longest Common Subsequence (LCS)

Given strings $A$ and $B$, find the length of their longest common subsequence (characters that appear in both in the same relative order, not necessarily contiguous).

This needs a 2D subproblem because the state requires a position in both strings simultaneously.

SRT BOT for LCS:

1. Subproblems: $x(i,j)$ = length of LCS of suffixes $A[i:]$ and $B[j:]$.   $0 \leq i \leq |A|$, $0 \leq j \leq |B|$.
2. Relate: compare first characters $A[i]$ and $B[j]$: $$x(i,j) = \begin{cases} x(i+1,j+1) + 1 & \text{if } A[i] = B[j] \\ \max(x(i+1,j),\; x(i,j+1)) & \text{otherwise} \end{cases}$$
3. Topo order: decreasing $i+j$ (or: decreasing $i$, then decreasing $j$).
4. Base cases: $x(|A|, j) = x(i, |B|) = 0$ (empty suffix has LCS 0 with anything).
5. Original: $x(0, 0)$.
6. Time: $O(|A| \cdot |B|)$ subproblems $\times$ $O(1)$ work $= O(|A| \cdot |B|)$.
Longest Common Subsequence · O(|A|·|B|)
def lcs(A, B):
    """Length of longest common subsequence of A and B. O(|A|*|B|)."""
    a, b = len(A), len(B)
    # x[i][j] = LCS of A[i:] and B[j:]
    x = [[0] * (b + 1) for _ in range(a + 1)]   # base cases: last row/col = 0
    for i in reversed(range(a)):                 # decreasing i
        for j in reversed(range(b)):             # decreasing j
            if A[i] == B[j]:
                x[i][j] = x[i+1][j+1] + 1       # match: use both
            else:
                x[i][j] = max(x[i+1][j], x[i][j+1])  # skip A[i] or B[j]
    return x[0][0]

# Example: A = "hieroglyphology", B = "michaelangelo"
# lcs(A, B) = 5  (one solution: "hello")

def lcs_string(A, B):
    """Recover the actual LCS string using parent pointers."""
    a, b = len(A), len(B)
    x = [[0]*(b+1) for _ in range(a+1)]
    for i in reversed(range(a)):
        for j in reversed(range(b)):
            if A[i] == B[j]: x[i][j] = x[i+1][j+1] + 1
            else: x[i][j] = max(x[i+1][j], x[i][j+1])
    # Traceback
    result, i, j = [], 0, 0
    while i < a and j < b:
        if A[i] == B[j]: result.append(A[i]); i += 1; j += 1
        elif x[i+1][j] >= x[i][j+1]: i += 1
        else: j += 1
    return ''.join(result)
#include <vector>
#include <string>
#include <algorithm>
using namespace std;

int lcs(const string& A, const string& B) {
    int a = A.size(), b = B.size();
    // x[i][j] = LCS length of A[i:] and B[j:]
    vector<vector<int>> x(a+1, vector<int>(b+1, 0));
    for (int i = a-1; i >= 0; --i)
        for (int j = b-1; j >= 0; --j)
            x[i][j] = (A[i]==B[j]) ? x[i+1][j+1]+1
                                    : max(x[i+1][j], x[i][j+1]);
    return x[0][0];
}

Problem 3 — Longest Increasing Subsequence (LIS)

Given sequence $A$, find the length of the longest strictly increasing subsequence. The key insight: a plain suffix DP on $A[i:]$ doesn’t work because “starts after $i$” doesn’t constrain the subsequence to be increasing relative to what came before. We need to constrain the subproblem to fix this.

SRT BOT for LIS:

1. Subproblems: $x(i)$ = length of LIS of suffix $A[i:]$ that must include $A[i]$ as its first element. $0 \leq i < |A|$.
2. Relate: $A[i]$ is the first element. Brute-force the second element: $$x(i) = 1 + \max\bigl(\{x(j) \mid j > i,\; A[j] > A[i]\} \cup \{0\}\bigr)$$
3. Topo order: decreasing $i$.
4. Base cases: none needed (the $\{0\}$ in the relation handles the case where $A[i]$ is the last element).
5. Original: $\max_i x(i)$ (try all possible first elements).
6. Time: $O(n)$ subproblems $\times$ $O(n)$ work each $= O(n^2)$.
Longest Increasing Subsequence · O(n²) DP, O(n log n) possible
def lis(A):
    """Length of longest strictly increasing subsequence. O(nยฒ)."""
    n = len(A)
    x = [1] * n                    # x[i] = LIS starting at A[i], min length 1
    for i in reversed(range(n)):
        for j in range(i + 1, n):
            if A[j] > A[i]:        # A[j] can follow A[i]
                x[i] = max(x[i], 1 + x[j])
    return max(x) if x else 0      # original: try all starting points

# Example: A = "carbohydrate" (treat as list of chars)
# or numerically: A = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
# LIS = [1, 4, 5, 9] or [1, 2, 6] etc., length depends on input

# O(n log n) version uses patience sorting (bisect module):
import bisect
def lis_fast(A):
    tails = []                     # tails[k] = smallest tail of IS of length k+1
    for a in A:
        pos = bisect.bisect_left(tails, a)
        if pos == len(tails): tails.append(a)
        else: tails[pos] = a
    return len(tails)
#include <vector>
#include <algorithm>
using namespace std;

// O(n²) DP version
int lis_dp(const vector<int>& A) {
    int n = A.size();
    vector<int> x(n, 1);          // x[i] = LIS starting at A[i]
    for (int i = n-1; i >= 0; --i)
        for (int j = i+1; j < n; ++j)
            if (A[j] > A[i]) x[i] = max(x[i], 1 + x[j]);
    return *max_element(x.begin(), x.end());
}

// O(n log n) patience sorting version
int lis_fast(const vector<int>& A) {
    vector<int> tails;
    for (int a : A) {
        auto it = lower_bound(tails.begin(), tails.end(), a);
        if (it == tails.end()) tails.push_back(a);
        else *it = a;
    }
    return tails.size();
}
Worked Example · Optimal Option Strike Selection 🇮🇳 TTT Strategy

You have $n$ NIFTY option strikes sorted by expiry. Each strike $i$ has a premium $p_i$. You can enter at most one position per expiry, and if you enter strike $i$ followed by strike $j$ (same or later expiry, $j > i$), you must have $p_j > p_i$ (only take trades with increasing premium, to ensure you’re buying improving setups). Maximize the number of positions you can enter.

Map to LIS

This is exactly the LIS problem: find the longest strictly increasing subsequence of premiums $[p_0, p_1, \ldots, p_{n-1}]$. The constraint “$p_j > p_i$” for $j > i$ is precisely the increasing subsequence condition.

Apply the O(n log n) version
premiums = [45, 38, 52, 31, 67, 55, 81, 72, 91, 88]
print(lis_fast(premiums))  # → 5  (e.g., 38, 52, 67, 81, 91)
Result: 5 positions. The $O(n \log n)$ patience sorting algorithm gives the length in milliseconds even for thousands of strikes. The $O(n^2)$ DP can also reconstruct the actual sequence using parent pointers — useful when you need to know which specific strikes to enter, not just how many.
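The parent-pointer reconstruction mentioned above can be sketched like this (`lis_with_sequence` is an illustrative name; it extends the O(n²) formulation with a `nxt` pointer array):

```python
def lis_with_sequence(A):
    """O(n²) LIS that also returns one optimal subsequence, by recording
    alongside x[i] which index follows A[i] in the best solution."""
    n = len(A)
    if n == 0:
        return 0, []
    x = [1] * n                    # x[i] = LIS length starting at A[i]
    nxt = [None] * n               # parent pointer: index after i, or None
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            if A[j] > A[i] and 1 + x[j] > x[i]:
                x[i], nxt[i] = 1 + x[j], j
    start = max(range(n), key=lambda i: x[i])   # best starting index
    seq, i = [], start
    while i is not None:                        # follow the pointers
        seq.append(A[i]); i = nxt[i]
    return x[start], seq
```

On the premiums above this returns length 5 together with one concrete strike sequence to enter.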

Subproblem Expansion — Alternating Coin Game

Two players alternately pick from either end of a coin sequence $v_0, \ldots, v_{n-1}$. You go first and want to maximise your total. The state is: which coins remain (a contiguous subarray $[i..j]$) AND whose turn it is. Adding the player to the state is an example of subproblem expansion.

SRT BOT for Alternating Coin Game:

1. Subproblems: $x(i,j,p)$ = max value I can collect when coins $[i..j]$ remain and player $p \in \{\text{me}, \text{you}\}$ moves next.
2. Relate: $$x(i,j,\text{me}) = \max(v_i + x(i+1,j,\text{you}),\; v_j + x(i,j-1,\text{you}))$$ $$x(i,j,\text{you}) = \min(x(i+1,j,\text{me}),\; x(i,j-1,\text{me}))$$
3. Topo order: increasing $j - i$.
4. Base cases: $x(i,i,\text{me}) = v_i$, $x(i,i,\text{you}) = 0$.
5. Original: $x(0, n-1, \text{me})$.
6. Time: $\Theta(n^2)$ subproblems $\times$ $\Theta(1)$ work $= \Theta(n^2)$.
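The recurrence translates almost line-for-line into code. A memoised top-down sketch, using `functools.lru_cache` (the boolean `me` plays the role of $p$, and the `i > j` base covers both stated base cases):

```python
from functools import lru_cache

def coin_game(v):
    """Max total the first player can collect from coins v,
    both players playing optimally. Θ(n²) subproblems, Θ(1) work each."""
    n = len(v)

    @lru_cache(maxsize=None)
    def x(i, j, me):
        if i > j:                          # no coins left
            return 0
        if me:                             # my move: maximise my total
            return max(v[i] + x(i + 1, j, False),
                       v[j] + x(i, j - 1, False))
        # opponent's move: they pick to minimise my total
        return min(x(i + 1, j, True), x(i, j - 1, True))

    return x(0, n - 1, True)
```

For example, `coin_game([3, 9, 1, 2])` is 11: take the 2, and whichever end the opponent takes, the 9 is yours next.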
The SRT BOT framework applies directly to options problems: “what is the maximum P&L achievable from strikes $i$ to $j$ given the market can move in any direction?” is a minimax coin-game variant. DP on substrings $x(i,j)$ underlies options pricing on binomial lattices, interval scheduling in execution algorithms, and the matrix chain multiplication problem (optimal order of multiplying large matrices, relevant in ML). Once you recognise the subproblem structure, the code is almost automatic.

Practice Problems
4 questions · Chapter 11
prog / dsa / ch11 / q01 ★ Memoisation

Roughly how many times slower is the naive Fibonacci recursion $F(n) = F(n-1) + F(n-2)$ at computing $F(30)$ than the memoised version?

A
About $30\times$ — each level does a constant extra call
B
About $900\times$ — $O(n^2)$ vs $O(n)$
C
About $2^{15} \approx 32{,}768\times$ — naive is $\Omega(2^{n/2})$, memo is $O(n)$
D
They are the same — Python caches function calls automatically
Answer: C — exponential slowdown.

Naive $F(n)$ satisfies $T(n) \geq 2T(n-2) \Rightarrow T(n) = \Omega(2^{n/2})$. For $n=30$: $2^{15} \approx 32{,}768$ times the $O(30)$ work of the memoised version. This is the classic DP motivation: identical subproblems recomputed exponentially many times. Python does NOT cache by default — you need functools.lru_cache or explicit memoisation. Bottom-up is faster still because it avoids recursion overhead entirely.
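The claim that Python does not memoise automatically is easy to check; `functools.lru_cache` adds memoisation in one line (a minimal sketch, with a `calls` counter to show each subproblem is computed once):

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def fib(n):
    """Memoised Fibonacci: the body runs at most once per distinct n."""
    global calls
    calls += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(30)
# calls is 31 (one body execution each for F(0)..F(30)), versus
# roughly 2.7 million calls for the unmemoised recursion at n = 30.
```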
prog / dsa / ch11 / q02 ★★ LCS

$A = \text{"ABCBDAB"}$, $B = \text{"BDCAB"}$. What is the length of the LCS?

A
3 — longest match is “BDB” or “BCB”
B
4 — one LCS is “BDAB” or “BCAB”
C
5 — all of “BDCAB” is a subsequence of $A$
D
2 — only “AB” is common
Answer: B — LCS length 4.

“BDAB” is a subsequence of $A = \text{ABCBDAB}$ (positions 1,3,5,6) and also of $B = \text{BDCAB}$ (positions 0,1,3,4). Length 4. “BCAB” also works. Option C is wrong: check whether “BDCAB” is a subsequence of $A$. We need B,D,C,A,B in order in ABCBDAB: B(1), D(4), C? — C appears at position 2 which comes before D(4), so we cannot find C after D. So “BDCAB” is NOT a subsequence of $A$.
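The answer can also be checked mechanically with the suffix DP from earlier in the chapter (repeated here so the snippet is self-contained):

```python
def lcs_len(A, B):
    """LCS length via the suffix DP: x[i][j] = LCS of A[i:] and B[j:]."""
    a, b = len(A), len(B)
    x = [[0] * (b + 1) for _ in range(a + 1)]   # last row/col stay 0
    for i in range(a - 1, -1, -1):
        for j in range(b - 1, -1, -1):
            x[i][j] = (x[i + 1][j + 1] + 1 if A[i] == B[j]
                       else max(x[i + 1][j], x[i][j + 1]))
    return x[0][0]
```

`lcs_len("ABCBDAB", "BDCAB")` returns 4, confirming option B.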
prog / dsa / ch11 / q03 ★★ SRT BOT

In the LIS dynamic program, the subproblem is defined as “LIS of suffix $A[i:]$ that must include $A[i]$.” Why is this constraint necessary?

A
Without it, the number of subproblems would be exponential
B
Without it, we have no way to enforce the “increasing” constraint when combining results from $x(i+1)$ back to $x(i)$
C
Without it, the base cases are undefined
D
Without it, the relation would require O(n) work per subproblem instead of O(1)
Answer: B.

If we define $x(i)$ = “LIS of suffix $A[i:]$” without the constraint, then $x(i) = \max(x(i+1), \ldots)$. But $x(i+1)$ doesn’t tell us what value the LIS starts with — so we can’t check whether $A[i]$ can be prepended (we need $A[i] < \text{first element of the LIS starting at } i+1$).

By forcing $A[i]$ to be included, we know the first element of the LIS for subproblem $x(i)$ is always $A[i]$. Then when computing $x(i)$, we can check for any $j > i$: “can $A[j]$ follow $A[i]$?” (i.e., $A[j] > A[i]$). This is the general DP principle: when the relation lacks information to check constraints, add state to the subproblem.
prog / dsa / ch11 / q04 ★★★ Subproblem Design

In the Bowling DP, the recurrence is $B(i) = \max(B(i+1),\ v_i + B(i+1),\ v_i \cdot v_{i+1} + B(i+2))$. What does the topological order “decreasing $i$” ensure?

A
That we process pins from left to right, which is the natural bowling order
B
That the time complexity is $O(n)$ rather than $O(n^2)$
C
That when computing $B(i)$, both $B(i+1)$ and $B(i+2)$ are already computed and stored
D
That $B(i)$ does not depend on any $B(j)$ with $j < i$
Answer: C.

The relation $B(i) = \max(\ldots, B(i+1), \ldots, B(i+2))$ depends on $B(i+1)$ and $B(i+2)$ — both have larger indices than $i$. Processing in decreasing order of $i$ (from $n$ down to 0) means that when we compute $B(i)$, we have already computed and stored $B(i+1)$ and $B(i+2)$. This is exactly the topological order of the subproblem DAG: edges go from smaller $i$ to larger $i$, so larger $i$ must be computed first.

Option A is wrong: we process right to left (decreasing $i$), not left to right. Option D states a true property of the relation, but it is the reason the order works, not something the order ensures. Option B: the $O(n)$ complexity comes from $O(n)$ subproblems × $O(1)$ work each, not from the ordering itself.

Terminal Questions — Chapter 11 · 10 problems · No answers given

Work through independently. Q1–3 direct application. Q4–7 synthesis. Q8–10 careful argument.

1
Apply SRT BOT to the Fibonacci problem and write both the top-down and bottom-up Python implementations. Verify that both give the same answer for $n = 10, 20, 30$. Time both implementations for $n = 35$ and report the speedup. Easy
2
Run the bowling DP on $v = [-1, 1, 1, 1, 9, 9, 3, -3, -5, 2, 2]$. Compute $B(i)$ for all $i$ from $i=11$ down to $i=0$. Show all steps. Which pins does the optimal strategy hit? Easy
3
Compute the LCS of $A = \text{"ABCBDAB"}$ and $B = \text{"BDCAB"}$ using the bottom-up DP. Fill in the full $8 \times 6$ table $x[i][j]$ and trace back the LCS string. Easy
4
Implement the alternating coin game using both formulations (2D and 3D subproblems). Verify both give the same answer on $v = [3, 9, 1, 2]$. Which player wins? What is each player’s optimal strategy? Medium
5
Design a DP for the Edit Distance problem: given strings $A$ and $B$, find the minimum number of insertions, deletions, and substitutions to transform $A$ into $B$. Write the full SRT BOT specification and Python code. Medium
6
Apply LIS to the following options problem: given NIFTY weekly strikes $[23000, 23500, 24000, 23750, 24250, 24500, 24200, 24750]$ and their corresponding premiums $[45, 52, 68, 61, 79, 88, 83, 95]$, find the longest sequence of strikes (in given order) with strictly increasing premiums. Report the actual strikes in the sequence. Medium
7
Speed up the LIS DP to $O(n \log n)$ using the patience sorting algorithm (already given in the code above). Prove that the tails array in patience sorting is always sorted, and that its length at termination equals the LIS length. Medium
8
Prove that the bowling DP relation $B(i) = \max(B(i+1), v_i + B(i+1), v_i v_{i+1} + B(i+2))$ is correct by induction. Base case: $B(n) = 0$. Inductive step: assuming $B(k)$ is correct for all $k > i$, prove $B(i)$ is the maximum score achievable from pins $i, \ldots, n-1$. Hard
9
Generalise the bowling DP to allow balls of size 3 (hitting three adjacent pins $i, i+1, i+2$ for score $v_i \cdot v_{i+1} \cdot v_{i+2}$). Write the updated recurrence. Does the time complexity change? What is the new time complexity? Hard
10
The Weighted Interval Scheduling problem: given $n$ intervals $[s_i, f_i]$ with weights $w_i$, find a subset of non-overlapping intervals maximising total weight. Design a DP using the SRT BOT framework. Sort intervals by finish time and define $x(i)$ as the max weight using intervals from $\{1, \ldots, i\}$ where interval $i$ is included. What is the time complexity? Hard