Most ML courses teach you to call model.fit(). This one teaches you what's actually happening inside it — with full mathematical rigour, genuine intuition, and examples that matter to Indian traders and practitioners.
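To make the point concrete: the loop hidden inside `model.fit()` is, at its core, a few lines of gradient descent. A minimal sketch for least-squares linear regression (this toy `fit` function and its data are illustrative, not the course's code):

```python
import numpy as np

# A hypothetical stand-in for what model.fit() does under the hood:
# plain batch gradient descent on mean squared error.
def fit(X, y, lr=0.1, epochs=500):
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        residual = X @ w + b - y           # prediction error on all points
        grad_w = (2 / n) * X.T @ residual  # dMSE/dw
        grad_b = (2 / n) * residual.sum()  # dMSE/db
        w -= lr * grad_w                   # step downhill
        b -= lr * grad_b
    return w, b

# Recover y = 3x + 1 from noiseless data
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 3.0 * X[:, 0] + 1.0
w, b = fit(X, y)
```

Everything the course covers — loss functions, ERM, MLE — is about why this loop is the right thing to run, and when it fails.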
13 Chapters (8 Live, 5 More Coming)
32+ MCQ Problems
80+ Terminal Questions
₹0 Forever Free
Prerequisites
✓ +2 Mathematics
✓ Basic Probability
✓ Calculus (differentiation)
Linear Algebra (helpful, not required)
No ML background needed
Course Chapters
Each chapter builds on the last. Start from Chapter 1.
The attention mechanism, self-attention vs recurrence, encoder-decoder architecture, and how transformers replaced RNNs as the dominant sequence model.
Attention · Self-Attention · Transformers
4 MCQ · 10 Terminal 🔒
12. Ensembles and Decision Trees (Coming Soon)
Decision trees, bagging, random forests, boosting — AdaBoost and XGBoost — and why ensemble methods are the dominant approach in tabular data ML.
Random Forests · Boosting · XGBoost
4 MCQ · 10 Terminal 🔒
13. Unsupervised Learning and Generative Models (Coming Soon)
k-Means, Gaussian mixtures, PCA, dimensionality reduction, and a preview of generative models — GANs, VAEs, and diffusion models.
k-Means · PCA · GANs · VAEs · Diffusion
4 MCQ · 10 Terminal 🔒
A note on pace. This module assumes no prior ML knowledge — only solid +2 mathematics and basic probability. Each chapter is self-contained but deliberately sequential: Chapter 3 (ERM) builds directly on Chapter 2 (loss functions), and Chapter 4 (MLE) is the payoff for both. If you've seen some of this material before, use the terminal questions at the end of each chapter to find your gaps. New chapters are added regularly — the curriculum is fixed; the site is still catching up to it.
What you will actually understand by the end
✓ Why every ML problem is distribution estimation — and what $P_{XY}$ has to do with it
✓ What i.i.d. actually means, why it's almost never exactly true in finance, and when to worry
✓ Why MLE, ridge regression, and logistic regression are all the same idea in different clothes
✓ The bias-variance tradeoff — mathematically, not just as a cartoon
✓ What the EM algorithm is doing geometrically, and why it always increases the likelihood
✓ Why SVMs find the maximum margin, and what the kernel trick actually does to the input space
✓ Backpropagation as repeated chain rule — not magic, just calculus applied carefully
✓ Why model misspecification, distribution shift, and overfitting are the three ways every ML model fails in practice
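A taste of the backpropagation point above: for a two-weight toy network, the gradient really is nothing but the chain rule applied step by step, and you can verify each step numerically. A minimal sketch (the network and numbers are illustrative, not taken from the course):

```python
import numpy as np

# Toy network: x -> h = tanh(w1 * x) -> y_hat = w2 * h, squared-error loss.
x, target = 1.5, 0.5
w1, w2 = 0.8, -0.4

# Forward pass, keeping intermediates for the backward pass
h = np.tanh(w1 * x)
y_hat = w2 * h
loss = 0.5 * (y_hat - target) ** 2

# Backward pass: each line is one application of the chain rule
dloss_dyhat = y_hat - target             # dL/dy_hat
dloss_dw2 = dloss_dyhat * h              # dL/dw2 = dL/dy_hat * dy_hat/dw2
dloss_dh = dloss_dyhat * w2              # propagate back through y_hat = w2 * h
dloss_dw1 = dloss_dh * (1 - h**2) * x    # tanh'(z) = 1 - tanh(z)^2, then dz/dw1 = x

# Sanity check against a central finite-difference approximation
eps = 1e-6
def loss_at(a, b):
    return 0.5 * (b * np.tanh(a * x) - target) ** 2
num_dw1 = (loss_at(w1 + eps, w2) - loss_at(w1 - eps, w2)) / (2 * eps)
num_dw2 = (loss_at(w1, w2 + eps) - loss_at(w1, w2 - eps)) / (2 * eps)
```

The finite-difference check is exactly the "no magic" claim: the analytic gradients agree with brute-force numerical differentiation to many decimal places.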