Foundational Chapter 01 of 13 · Algorithms and Data Structures

Computation Models and Asymptotic Analysis

Before we can claim an algorithm is fast, we need to agree on what "fast" means. Before we can prove an algorithm correct, we need to agree on what "correct" means. This chapter builds both foundations — and they will hold up every structure we build on top of them.

What Is a Problem? What Is an Algorithm?

The word "algorithm" is used loosely everywhere. In this module, we use it precisely. Start with the distinction that matters most: a problem is not the same as an algorithm.

A problem is a binary relation — a specification mapping inputs to correct outputs. The problem does not say how to compute the answer, only what a correct answer looks like. Most interesting problems have infinitely many possible inputs, so we cannot just list every input-output pair. Instead, we give a verifiable predicate: a property that any correct output must satisfy.

An algorithm is a deterministic procedure: given an input, it produces exactly one output. An algorithm solves a problem if, for every possible input, it produces an output that satisfies the problem's correctness predicate.

NSE processes roughly 10 crore order events on a busy trading day. The matching engine must, for each incoming order, find the best available counterparty. The problem is: given a set of resting orders and an incoming order, find the best match (highest bid meets lowest ask, price-time priority). The algorithm is how the matching engine actually finds that match — and the difference between an O(n) algorithm and an O(log n) algorithm at 10 crore events per day is the difference between a system that keeps up and one that falls behind.

The Birthday Matching Problem — A Running Example

Consider a concrete problem that will let us demonstrate correctness, efficiency, and data structure choice all at once.

Birthday Matching Problem. Given a sequence of $n$ students, each with a name and birthday, return any pair of students who share the same birthday. If no such pair exists, return None.

The input is a tuple of (name, birthday) pairs. The output predicate: if a pair is returned, both students must be in the input, and their birthdays must match.

Here is a straightforward algorithm: for each student, check all previously seen students for a birthday match. If a match is found, return the pair. If we finish without finding one, return None.

birthday_match · O(n²) naive
def birthday_match(students):
    """
    Find a pair of students sharing a birthday.
    Input:  tuple of (name, bday) tuples
    Output: tuple of two names, or None
    """
    n = len(students)                    # O(1)
    record = []                          # O(1)

    for k in range(n):                   # n iterations
        name1, bday1 = students[k]       # O(1)
        for i in range(k):               # k iterations
            name2, bday2 = record[i]     # O(1)
            if bday1 == bday2:           # O(1)
                return (name1, name2)    # O(1)
        record.append((name1, bday1))    # O(1) amortised

    return None                          # O(1)
birthday_match · O(n²) naive (C++)
#include <vector>
#include <string>
#include <optional>
using namespace std;

using Student = pair<string, string>;   // (name, birthday)
using Match   = pair<string, string>;

optional<Match> birthday_match(const vector<Student>& students) {
    int n = students.size();             // O(1)
    vector<Student> record;             // O(1)
    record.reserve(n);

    for (int k = 0; k < n; ++k) {      // n iterations
        const auto& [name1, bday1] = students[k];   // O(1)
        for (int i = 0; i < k; ++i) {               // k iterations
            const auto& [name2, bday2] = record[i]; // O(1)
            if (bday1 == bday2)                      // O(1)
                return Match{name1, name2};
        }
        record.push_back(students[k]);  // O(1) amortised
    }
    return nullopt;                     // O(1)
}

The outer loop runs $n$ times. For each iteration $k$, the inner loop runs $k$ times. Total operations:

$$\sum_{k=0}^{n-1} k = \frac{n(n-1)}{2} = \Theta(n^2)$$

Quadratic. For NSE's 10 crore events this is unusable — but let us prove it correct first, then improve it.
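Before improving it, we can sanity-check the $n(n-1)/2$ count by instrumenting the naive algorithm with a comparison counter (a minimal sketch; the student data is fabricated):

```python
def birthday_match_counted(students):
    """Naive birthday matching, instrumented to count birthday comparisons."""
    record = []
    comparisons = 0
    for name1, bday1 in students:
        for name2, bday2 in record:
            comparisons += 1                 # one O(1) comparison
            if bday1 == bday2:
                return (name1, name2), comparisons
        record.append((name1, bday1))
    return None, comparisons

# Worst case: all birthdays distinct, so every pair gets compared.
n = 50
students = tuple((f"s{i}", f"day-{i}") for i in range(n))
result, ops = birthday_match_counted(students)
print(result, ops)   # → None 1225, which is exactly n(n-1)/2 = 50·49/2
```

The counter confirms the closed form: for $n = 50$ distinct birthdays, exactly $50 \cdot 49 / 2 = 1225$ comparisons.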

Proving Correctness — Induction Is the Only Tool

A program has fixed size but must work on arbitrarily large inputs. The only mathematical tool that connects a finite description to an infinite family of cases is induction. This is why recursion and loops are central to computer science — they are the syntactic form of induction.

Loop Invariant. A loop invariant is a predicate that holds before and after each iteration of a loop. To prove an algorithm correct using a loop invariant, show three things: (1) Initialisation — the invariant holds before the first iteration. (2) Maintenance — if it holds before iteration $k$, it holds after. (3) Termination — when the loop ends, the invariant implies correctness.

For birthday_match, the invariant on the outer loop is: after $k$ iterations, record contains exactly the first $k$ students, and if any pair among them shares a birthday, we have already returned it.

Initialisation: Before any iteration ($k = 0$), record is empty — vacuously correct, no pairs to check.

Maintenance: At iteration $k$, the inner loop compares students[k] against every student in record (which holds students $0, \ldots, k{-}1$ by the inductive hypothesis). If a match exists involving student $k$, the inner loop finds it and returns. If not, we append student $k$ to record, preserving the invariant for $k+1$.

Termination: The loop ends at $k = n$. The invariant now says: record holds all $n$ students, and if any matching pair existed, it was already returned. So returning None is correct.

Every loop in every algorithm you write for the rest of your career has an implicit invariant. Making it explicit is not bureaucracy — it is the difference between code you can reason about and code you are guessing at. In a trading system processing crores of events, guessing is not an option.
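Making the invariant explicit can even be mechanical: the sketch below asserts it at the top of each outer iteration (useful while developing; the asserts themselves add $\Theta(k)$ work per iteration, so strip them in production):

```python
def birthday_match_checked(students):
    """Naive birthday matching with the loop invariant asserted each iteration."""
    n = len(students)
    record = []
    for k in range(n):
        # Invariant: record holds exactly the first k students, and no pair
        # among them shares a birthday (else we would already have returned).
        assert record == [students[i] for i in range(k)]
        assert len({bday for _, bday in record}) == len(record)
        name1, bday1 = students[k]
        for name2, bday2 in record:
            if bday1 == bday2:
                return (name1, name2)
        record.append((name1, bday1))
    return None

print(birthday_match_checked((("Asha", "14-Mar"), ("Ravi", "02-Jul"),
                              ("Meena", "14-Mar"))))   # → ('Meena', 'Asha')
```

If the loop body were ever changed in a way that breaks the invariant, the asserts would fail on the first counterexample input rather than silently returning a wrong answer.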

Efficiency — What Does "Fast" Mean?

We cannot measure time in seconds — that depends on hardware, compiler, system load. We want a machine-independent measure. The answer: count the number of fixed-time operations the algorithm performs, as a function of the input size $n$.

"Fixed-time" needs a precise model. We use the Word-RAM model.

Word-RAM Model. The machine operates on words of $w$ bits, where $w \geq \log_2 n$ (so memory addresses fit in a word). The following operations each take $\Theta(1)$ time:

Integer arithmetic: $+, -, \times, //, \%$
Comparison and logic: $==, <, >, \leq, \geq, \&\&, ||, !$
Bitwise: $\&, |, \ll, \gg$
Memory: read or write one word at a given address

On a 64-bit machine, $w = 64$, so integers up to $2^{64} - 1$ and memory up to $2^{64}$ bytes (~16 exabytes) are addressable in $\Theta(1)$.
Python is a higher-level model built on top of Word-RAM. Python integers are arbitrary precision — adding two 10,000-digit numbers is not $\Theta(1)$ in Python. For algorithm analysis, we assume inputs are reasonably sized integers (fit in $O(1)$ words). For financial calculations involving large integers or high-precision decimals, this distinction is real and bites hard — Python's Decimal type is not O(1) per operation.
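A quick experiment makes this concrete: adding two $d$-digit Python integers takes time roughly proportional to $d$, not $\Theta(1)$. A minimal sketch (absolute timings vary by machine; only the ratio matters):

```python
import time

def avg_add_time(digits, reps=200):
    """Average wall-clock time to add two integers with `digits` digits."""
    a = 10 ** digits - 1
    b = 10 ** digits - 3
    start = time.perf_counter()
    for _ in range(reps):
        _ = a + b                      # big-int addition: Θ(digits) work
    return (time.perf_counter() - start) / reps

small = avg_add_time(100)              # fits in a few machine words
large = avg_add_time(1_000_000)        # ~52,000 words of 64 bits each
print(f"100 digits: {small:.2e} s/add, 10^6 digits: {large:.2e} s/add")
# The million-digit addition is orders of magnitude slower: word-RAM
# Θ(1) arithmetic applies only to integers that fit in O(1) words.
```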

Asymptotic Notation — Ignoring the Noise

We do not need the exact number of operations — we need the growth rate as $n$ gets large. Constant factors depend on the machine. Low-order terms become irrelevant for large $n$. Asymptotic notation captures exactly the part that matters.

Let $f, g : \mathbb{N} \to \mathbb{R}^+$.

$f(n) = O(g(n))$ — $f$ grows at most as fast as $g$: $\exists\, c > 0, n_0$ such that $f(n) \leq c \cdot g(n)$ for all $n \geq n_0$.

$f(n) = \Omega(g(n))$ — $f$ grows at least as fast as $g$: $\exists\, c > 0, n_0$ such that $f(n) \geq c \cdot g(n)$ for all $n \geq n_0$.

$f(n) = \Theta(g(n))$ — $f$ and $g$ grow at the same rate: $f = O(g)$ and $f = \Omega(g)$ simultaneously.
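These definitions are concrete enough to check numerically. For instance, $3n^2 + 100n + 500 = O(n^2)$ holds with witnesses $c = 4$ and $n_0 = 105$ (a sketch; many other constant pairs also work):

```python
def f(n):
    return 3 * n * n + 100 * n + 500

c, n0 = 4, 105                         # witnesses for f(n) = O(n^2)
assert all(f(n) <= c * n * n for n in range(n0, 10_000))
assert f(n0 - 1) > c * (n0 - 1) ** 2   # n0 is tight for c = 4: n = 104 fails
print("f(n) <= 4*n^2 for all n >= 105")
```

A finite scan is not a proof, but it is a fast way to catch a wrong choice of $c$ and $n_0$ before you try to prove the inequality algebraically.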

The practical hierarchy, from fastest to slowest growth:

| Class      | Name        | n = 1,000     | n = 1,000,000  | Verdict    |
|------------|-------------|---------------|----------------|------------|
| Θ(1)       | Constant    | 1 op          | 1 op           | Perfect    |
| Θ(log n)   | Logarithmic | ~10 ops       | ~20 ops        | Excellent  |
| Θ(n)       | Linear      | 1,000 ops     | 1,000,000 ops  | Good       |
| Θ(n log n) | Log-linear  | ~10,000 ops   | ~20,000,000 ops | Acceptable |
| Θ(n²)      | Quadratic   | 1,000,000 ops | 10¹² ops       | Avoid      |
| 2^Θ(n)     | Exponential | 2¹⁰⁰⁰ ops     | 2^(10⁶) ops    | Unusable   |
At 1 GHz (10⁹ operations/second): a $\Theta(n^2)$ algorithm on $n = 10^6$ inputs performs $10^{12}$ operations, taking about $10^3$ seconds (nearly 17 minutes). The same problem with a $\Theta(n \log n)$ algorithm takes about 0.02 seconds. This is not a micro-optimisation. It is the difference between a system that works and one that does not. On NSE's order book, $n$ is not 10⁶ — it is 10⁸ on a heavy day. The asymptotic class of your algorithm is the most important design decision you will make.

Improving Birthday Matching — Data Structures Change Everything

The naive algorithm is $\Theta(n^2)$ because checking "is bday1 in record?" takes $\Theta(k)$ time using a list. If we use a hash table — a data structure supporting $\Theta(1)$ expected lookup — the entire algorithm becomes $\Theta(n)$ expected. Here we use a hash map from birthday to name, so a successful lookup also tells us who the earlier student was.

birthday_match_fast · O(n) with hash map
def birthday_match_fast(students):
    """
    Same problem, O(n) expected time using a hash map.
    """
    seen = {}                            # bday -> name, O(1) lookup

    for name, bday in students:         # n iterations
        if bday in seen:                 # O(1) expected
            return (seen[bday], name)    # match found
        seen[bday] = name               # O(1) expected

    return None
birthday_match_fast · O(n) with hash map (C++)
#include <unordered_map>
#include <vector>
#include <string>
#include <optional>
using namespace std;

using Student = pair<string, string>;
using Match   = pair<string, string>;

optional<Match> birthday_match_fast(const vector<Student>& students) {
    unordered_map<string, string> seen;  // bday -> name
    seen.reserve(students.size());        // avoid rehashing

    for (const auto& [name, bday] : students) {  // n iterations
        auto it = seen.find(bday);                // O(1) expected
        if (it != seen.end())
            return Match{it->second, name};
        seen[bday] = name;                        // O(1) expected
    }
    return nullopt;
}

Same problem. Same correctness proof structure. One data structure change: list → hash map. Runtime drops from $\Theta(n^2)$ to $\Theta(n)$ expected. We will prove why hash maps give $\Theta(1)$ expected lookup in Chapter 4. For now, accept it as the key lesson: the choice of data structure determines the complexity of your algorithm.

Worked Example 1 · Running Time Analysis 🇮🇳 NSE Order Matching

An NSE order matching engine processes a sequence of orders. For each incoming order, it checks a list of $n$ resting orders to find the best counterparty using a naive linear scan. If the engine processes $n$ incoming orders in a session, what is the total work done? If $n = 10^6$ and the engine runs at $10^9$ operations per second, how long does this take?

Count total operations

For each of $n$ incoming orders, we scan up to $n$ resting orders. Total comparisons: at most $n \times n = n^2$. So the algorithm is $\Theta(n^2)$.

Plug in numbers
$$\text{Operations} = (10^6)^2 = 10^{12}$$

$$\text{Time} = \frac{10^{12}}{10^9 \text{ ops/sec}} = 10^3 \text{ seconds} \approx 16.7 \text{ minutes}$$
Compare with O(log n) alternative

Using a sorted order book (a balanced BST, Chapter 7), each lookup is $\Theta(\log n)$. Total work: $n \log n = 10^6 \times 20 = 2 \times 10^7$ operations.

$$\text{Time} = \frac{2 \times 10^7}{10^9} = 0.02 \text{ seconds}$$
Conclusion: The naive $\Theta(n^2)$ matcher takes ~17 minutes for a million orders. The $\Theta(n \log n)$ matcher takes 0.02 seconds — 50,000× faster. NSE's actual matching engine runs in microseconds using heap-based order books ($\Theta(\log n)$ per operation). The asymptotic class of the algorithm is not an academic concern. It is an engineering constraint with real rupee consequences.
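The $\Theta(\log n)$ per-operation cost comes from a heap: the best resting order sits at the root, and insertion or removal costs $\Theta(\log n)$. Below is a minimal sketch of the ask side of a book using Python's heapq (the class and the order data are illustrative, not NSE's actual engine):

```python
import heapq

class AskBook:
    """One side of an order book: lowest ask first, Θ(log n) per operation."""

    def __init__(self):
        self._heap = []          # entries: (price, arrival_seq, order_id)
        self._seq = 0            # arrival counter enforces price-time priority

    def add(self, price, order_id):
        heapq.heappush(self._heap, (price, self._seq, order_id))   # Θ(log n)
        self._seq += 1

    def best(self):
        return self._heap[0] if self._heap else None               # Θ(1) peek

    def pop_best(self):
        return heapq.heappop(self._heap)                           # Θ(log n)

book = AskBook()
book.add(101.5, "A1")
book.add(101.0, "A2")
book.add(101.0, "A3")
price, _, oid = book.pop_best()
print(price, oid)    # → 101.0 A2  (lowest price; earlier arrival wins the tie)
```

The `(price, arrival_seq)` tuple ordering is what encodes price-time priority: ties on price fall through to the arrival counter, so the earlier order always matches first. We study heaps properly in a later chapter.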

How to Solve an Algorithms Problem

Every algorithms problem you encounter — in this module, in interviews, in production — has the same two-step solution strategy.

Step 1: Reduce to a known problem. Can you map this problem onto something you already know how to solve efficiently? The whole point of building a library of data structures and algorithms is to have reductions available. "This is a shortest-path problem." "This is a priority queue problem." Recognising the structure is half the solution.

Step 2: Design your own recursive algorithm. If no known reduction applies, design from scratch. The standard techniques are: brute force (enumerate all possibilities), divide and conquer (split into subproblems, combine), dynamic programming (save subproblem solutions to avoid recomputation), and greedy (make locally optimal choices at each step). We will cover all of these in depth.

The Fyers and XTS APIs return option chains with thousands of instruments. When your Python script needs to look up the instrument token for "NIFTY 24000 CE 25-Jan-2024", it is solving a lookup problem. If you store the chain in a Python list and scan linearly, that is $\Theta(n)$ per lookup, $\Theta(n^2)$ total for $n$ lookups. If you index by a dictionary (hash map) on the symbol string, each lookup is $\Theta(1)$. This is not theoretical — it is the difference between a strategy that processes the option chain in 200ms and one that processes it in 8 seconds. The data structure choice matters before the market opens.
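A minimal sketch of that fix (the instrument records and field names here are made up for illustration; real Fyers/XTS payloads differ):

```python
# Hypothetical flattened option chain -- field names are illustrative only.
chain = [
    {"symbol": "NIFTY 23800 CE 25-Jan-2024", "token": 51201},
    {"symbol": "NIFTY 24000 CE 25-Jan-2024", "token": 51234},
    {"symbol": "NIFTY 24000 PE 25-Jan-2024", "token": 51235},
]

# Θ(n) linear scan per lookup -- the quadratic trap when repeated n times:
def token_scan(symbol):
    for inst in chain:
        if inst["symbol"] == symbol:
            return inst["token"]
    return None

# Build the index once (Θ(n)); every later lookup is Θ(1) expected:
token_map = {inst["symbol"]: inst["token"] for inst in chain}

print(token_scan("NIFTY 24000 CE 25-Jan-2024"))   # → 51234
print(token_map["NIFTY 24000 CE 25-Jan-2024"])    # → 51234
```

Both lookups return the same token; the difference only shows up in the total cost when you do thousands of them per refresh.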

Practice Problems
4 questions · Chapter 01
prog / dsa / ch01 / q01 ★ Conceptual

An algorithm runs in $\Theta(n^2)$ time. A second algorithm solves the same problem in $\Theta(n \log n)$ time. For which value of $n$ does the difference in runtime first become significant (say, the $\Theta(n^2)$ algorithm takes more than 100× longer)?

A. Around $n = 10$
B. Around $n = 100$
C. Around $n = 1{,}000$
D. The difference is always 100× regardless of $n$
Answer: C.

The ratio of runtimes is $\dfrac{n^2}{n \log n} = \dfrac{n}{\log n}$. At $n = 10$: ratio $\approx 3$. At $n = 100$: ratio $\approx 15$. At $n = 1{,}000$: ratio $\approx 100$. At $n = 10^6$: ratio $\approx 50{,}000$.

The gap grows without bound — this is what "asymptotically faster" means. For small $n$, the constant factors inside $\Theta(\cdot)$ often dominate, and the "slower" algorithm may actually run faster in practice. Always profile small-$n$ cases; always use asymptotic analysis for large-$n$ planning.
prog / dsa / ch01 / q02 ★ Conceptual

The Word-RAM model specifies that reading or writing a single word of memory takes $\Theta(1)$ time. Which of the following operations is NOT $\Theta(1)$ under this model, for a typical algorithmic input?

A. Adding two 64-bit integers
B. Comparing two stock symbols stored as short strings
C. Copying an array of $n$ integers to a new location
D. Reading the price of one instrument from memory
Answer: C.

Copying $n$ integers requires $n$ word-sized memory writes — that is $\Theta(n)$ work, not $\Theta(1)$. Options A, B, and D all involve a fixed, constant number of words regardless of $n$.

This distinction matters in practice: Python's list slicing (arr[:]) silently copies the entire array — $\Theta(n)$ work hiding behind one line of syntax. When you copy option chain data structures inside a hot loop, you may be doing far more work than you think. Prefer views and references over copies when possible.
prog / dsa / ch01 / q03 ★★ Computational

A script fetches the NIFTY option chain (5,000 instruments) and for each of the 5,000 instruments, searches a Python list to find its current LTP. What is the total complexity, and approximately how many operations does it perform?

A. $\Theta(n)$ — about 5,000 operations total
B. $\Theta(n \log n)$ — about 60,000 operations total
C. $\Theta(n^2)$ — about 25,000,000 operations total
D. $\Theta(1)$ — list lookup is always constant time
Answer: C.

A Python list lookup by value (linear search) is $\Theta(n)$ per query. With $n = 5{,}000$ instruments and $n$ queries, total work is $\Theta(n^2) = \Theta(25{,}000{,}000)$.

The fix is a dictionary: ltp_map = {inst['symbol']: inst['ltp'] for inst in chain}. Building the dict is $\Theta(n)$, each lookup is $\Theta(1)$ expected, total for all lookups is $\Theta(n)$. One line of Python drops 25 million operations to 5,000. This is exactly the kind of quadratic trap that slows down real option chain scripts — we have seen it in our own XTS pipeline work.
prog / dsa / ch01 / q04 ★★ Proof

Which of the following correctly uses the loop invariant to argue correctness of birthday_match_fast (the hash map version)?

A. After $k$ iterations, seen contains all $n$ students — so if we finish the loop, all pairs have been checked.
B. After $k$ iterations, seen maps the birthday of each of the first $k$ students to their name, and any matching pair among them has been returned.
C. The hash map guarantees $\Theta(1)$ lookup, so correctness follows automatically from efficiency.
D. Since Python dictionaries never lose data, the algorithm is trivially correct.
Answer: B.

This is the correct loop invariant. It says exactly what seen contains after $k$ steps and what has been guaranteed about matching pairs. With this invariant:

Initialisation: $k=0$, seen is empty — vacuously correct.
Maintenance: At step $k$, we check if student $k$'s birthday is in seen (which holds students $0, \ldots, k{-}1$). If yes, we return — correctly. If no, we add student $k$ to seen, restoring the invariant.
Termination: At $k = n$, all students are in seen and all pairs have been checked. Returning None is correct.

Option A is wrong — seen holds only the first $k$ students after $k$ iterations, not all $n$. Options C and D confuse efficiency with correctness — they are completely separate properties.

Terminal Questions — Chapter 01
10 problems · No answers given

Work through these independently. Q1–3 are direct applications of the chapter. Q4–7 require combining ideas. Q8–10 will make you think carefully. No solutions are provided — that is intentional.

1. Classify each of the following as $O(1)$, $O(\log n)$, $O(n)$, $O(n \log n)$, or $O(n^2)$: (a) accessing arr[42] in a Python list of length $n$; (b) finding the minimum of an unsorted Python list; (c) Python's built-in sorted() on a list of length $n$; (d) checking whether a key exists in a Python dict of size $n$. (Easy)

2. Write the loop invariant for the following algorithm: given a list of NIFTY closing prices, find the maximum. State the invariant precisely, then verify initialisation, maintenance, and termination. (Easy)

3. Prove or disprove: $3n^2 + 100n + 500 = \Theta(n^2)$. Find explicit constants $c_1$, $c_2$, and $n_0$ such that $c_1 n^2 \leq 3n^2 + 100n + 500 \leq c_2 n^2$ for all $n \geq n_0$. (Easy)

4. A strategy script checks, for each of $m$ option strikes, whether the strike is in a watchlist of $w$ instruments. Currently it uses a Python list for the watchlist. (a) What is the total complexity in terms of $m$ and $w$? (b) What data structure change makes this $\Theta(m)$ regardless of $w$? (c) What is the cost of the one-time setup for that structure? (Medium)

5. Is $2^{n+1} = O(2^n)$? Is $2^{2n} = O(2^n)$? Prove your answers using the formal definition of $O(\cdot)$. What does this say about the significance of constant factors in the exponent? (Medium)

6. Consider a modified birthday matching problem where instead of returning the first pair found, you must return all pairs with matching birthdays. How does the worst-case complexity change? What is the output size in the worst case, and why does this affect the lower bound on any correct algorithm? (Medium)

7. In the Word-RAM model, what is the minimum word size $w$ required to address a memory holding $10^9$ integers, each 64 bits wide? Express your answer exactly, then give the smallest power of 2 that satisfies the requirement. (Medium)

8. A junior developer argues: "Our option chain has only 5,000 instruments, so $O(n^2)$ is fine — $25 \times 10^6$ operations runs in milliseconds." Construct a concrete scenario within the same system where this argument breaks down, without changing $n$. What property of the system, not visible in the asymptotic analysis alone, causes the failure? (Hard)

9. The problem specification says "return any pair of students with matching birthdays." Does birthday_match_fast satisfy this specification if the hash map has collisions (two different keys hash to the same slot)? Carefully distinguish between a hash collision and a birthday match. Which one does the algorithm return, and why? (Hard)

10. Design an algorithm to solve the following problem in $\Theta(n \log n)$ time: given an array of $n$ NIFTY daily closing prices, find the pair of days $(i, j)$ with $i < j$ that maximises the profit $\text{price}[j] - \text{price}[i]$ (buy on day $i$, sell on day $j$). State your algorithm precisely, prove its correctness using a loop invariant, and analyse its complexity. Then show that $\Theta(n)$ is achievable with a single-pass approach. (Hard)