AI Curriculum
AI, taught step by step
A clear, no-hype path to understanding modern AI. Designed for curious minds who want to build intuition, not just memorize jargon. We start with the basics of rules and chance, then climb all the way to the neural networks and language models changing our world today.
No PhD required. Just bring your curiosity. We'll build up the math and concepts step-by-step, so you always know why things work, not just that they work.
Foundations
This module builds the mental model underneath everything else in the curriculum. We start with explicit rules, then add uncertainty, then explore search, so students can see AI as a chain of concrete decisions rather than a pile of mysterious buzzwords.
Question we are chasing: How can a machine move from rigid step-by-step instructions to making sensible choices in a messy, uncertain world?
#1
What is Computation?
How simple rules can create complex behavior
Meet the simplest model of computation and see how a machine with almost no memory, no intuition, and only a few rules can still perform meaningful work.
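To make "a few rules doing meaningful work" concrete, here is a toy finite-state machine of our own (not from the lesson itself): two states, four transition rules, and it still answers a real question, whether a binary string contains an even number of 1s.

```python
# Transition table: (current state, input symbol) -> next state
transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",  ("odd", "1"): "even",
}

def run(machine, start, text):
    state = start
    for symbol in text:
        state = machine[(state, symbol)]  # the only "memory" is the current state
    return state

print(run(transitions, "even", "1101"))  # three 1s, so it ends in state "odd"
```

The machine never counts anything; the single state it carries is enough to track everything it needs about the past.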
#2
Probability & Distributions
How AI talks about uncertainty
Build intuition for uncertainty, common distributions, and belief updates so predictions feel measurable rather than hand-wavy.
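A belief update is just arithmetic. Here is a sketch with illustrative numbers of our own choosing (a test for a rare condition), showing how a prior becomes a posterior via Bayes' rule:

```python
prior = 0.01            # P(condition)
sensitivity = 0.95      # P(positive | condition)
false_positive = 0.05   # P(positive | no condition)

# P(positive), by the law of total probability
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior: P(condition | positive)
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # about 0.161
```

Even with a 95%-sensitive test, a positive result only raises the belief to about 16%, because the condition is rare to begin with. That is the kind of measurable, non-hand-wavy reasoning this lesson builds.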
#3
Algorithms & Graph Search
How computers find good routes
See how a search algorithm compares routes, updates costs, and reliably finds a strong path through a network of choices.
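The compare-routes, update-costs loop can be sketched in a few lines. This is a minimal Dijkstra-style shortest-path search over a small made-up road network (node names and costs are ours, not the lesson's):

```python
import heapq

def dijkstra(graph, start):
    # graph: node -> list of (neighbor, cost) pairs
    dist = {start: 0}
    heap = [(0, start)]            # frontier, cheapest node first
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue               # stale entry, a cheaper route was already found
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd     # found a better route: update its cost
                heapq.heappush(heap, (nd, nbr))
    return dist

roads = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
}
print(dijkstra(roads, "A"))  # cheapest cost from A to every reachable node
```

Note how the direct A-to-C edge (cost 4) loses to the route through B (cost 3): the algorithm keeps revising its estimates until no cheaper route remains.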
Machine Learning
This module is where the course shifts from explicit rules to learned patterns. Instead of telling the machine exactly what to do in every case, we give it examples, define success, and let it infer a decision rule from the data.
Question we are chasing: How can a machine study examples, extract useful patterns, and make predictions on cases it has never seen before?
#4
What is Machine Learning?
From examples to predictions
Get clear on what it means to train on data, what a model actually learns, and why different problems require different learning setups.
#5
Regression & Classification
Predicting numbers and choosing categories
Understand the two most common prediction jobs in machine learning: estimating a value and assigning a label.
#6
Decision Trees & Random Forests
Learning by asking better questions
See how a model learns a sequence of split decisions, and why combining many trees often generalizes better than trusting a single one.
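"Asking better questions" has a precise meaning: pick the split that leaves the purest groups. A toy sketch of one such split decision, using Gini impurity and made-up study-hours data of our own:

```python
def gini(labels):
    # Gini impurity: how mixed a group of 0/1 labels is (0 = perfectly pure)
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 1 - p * p - (1 - p) * (1 - p)

def best_split(xs, ys):
    # try every threshold, keep the one with the lowest weighted impurity
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# feature: hours studied; label: passed (1) or not (0)
hours = [1, 2, 3, 6, 7, 8]
passed = [0, 0, 0, 1, 1, 1]
print(best_split(hours, passed))  # the question "hours <= 3?" splits perfectly
```

A full tree repeats this search inside each group; a random forest trains many such trees on shuffled views of the data and lets them vote.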
#7
Support Vector Machines
Finding the safest separating line
Learn how SVMs separate classes by choosing a boundary that maximizes the margin rather than merely drawing any line that works.
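One common way to find that max-margin boundary is sub-gradient descent on the hinge loss. A toy linear SVM of our own (2-D points, hand-picked hyperparameters), offered as a sketch rather than a production recipe:

```python
def train_svm(points, labels, lr=0.01, lam=0.01, epochs=200):
    # hinge loss per point: max(0, 1 - y * (w.x + b)), plus L2 shrinkage on w
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:  # inside the margin: push the boundary away
                w[0] += lr * (y * x1 - lam * w[0])
                w[1] += lr * (y * x2 - lam * w[1])
                b += lr * y
            else:           # safely classified: only shrink w, widening the margin
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

pts = [(2, 2), (3, 3), (-2, -2), (-3, -1)]
ys = [1, 1, -1, -1]
w, b = train_svm(pts, ys)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for x1, x2 in pts]
print(preds)
```

The `margin < 1` branch is the key: points that are merely correct are not enough, the boundary keeps moving until every point is correct with room to spare.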
#8
Clustering & K-Means
Finding groups without labels
Explore how K-means groups unlabeled data by repeatedly assigning points to centers and then moving those centers to better represent the data.
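That assign-then-move loop fits in a few lines. A minimal sketch with toy 2-D points and a naive "first k points" initialization (both our own simplifications; real implementations initialize more carefully):

```python
def kmeans(points, k, iters=10):
    centers = list(points[:k])  # naive init: the first k points
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2 + (p[1] - centers[j][1]) ** 2)
            clusters[j].append(p)
        # update step: move each center to the mean of its cluster
        for j, c in enumerate(clusters):
            if c:
                centers[j] = (sum(p[0] for p in c) / len(c),
                              sum(p[1] for p in c) / len(c))
    return centers

data = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
print(sorted(kmeans(data, 2)))  # two centers, one per visible blob
```

Each pass can only improve (or keep) the assignments, which is why the centers settle onto the two blobs after a handful of iterations.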
#9
Dimensionality Reduction
Keeping the important information
Learn why too many features can blur patterns, and how dimensionality reduction creates simpler views that preserve much of the useful structure.
The ML Workshop
Theory is only half the story. In this module we roll up our sleeves and learn the craft behind every successful ML project: preparing data, engineering features, and strengthening our statistical intuition. These are the skills that separate a notebook experiment from a model you can actually trust.
Question we are chasing: What does raw, messy, real-world data need before a model can learn anything useful from it?
#10
Data Preprocessing
Cleaning, scaling, encoding, and splitting
Before any model can learn, the data needs a thorough clean-up. Here we learn how to wash, organize, and portion our data like a chef prepping ingredients.
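Two of those prep steps, scaling and splitting, look like this in miniature (toy numbers and a simple min-max scaler of our own choosing; real pipelines fit the scaler on training data only):

```python
import random

def minmax_scale(values):
    # squeeze every value into the range [0, 1]
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def train_test_split(rows, test_frac=0.25, seed=0):
    rows = rows[:]
    random.Random(seed).shuffle(rows)  # shuffle first, so the split isn't biased by order
    cut = int(len(rows) * (1 - test_frac))
    return rows[:cut], rows[cut:]

ages = [18, 22, 30, 40, 55, 65, 70, 90]
scaled = minmax_scale(ages)
train, test = train_test_split(list(zip(ages, scaled)))
print(len(train), len(test), min(scaled), max(scaled))
```

The held-out test rows are the portion the chef never tastes while cooking: they exist only to judge the finished dish.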
#11
Feature Engineering
Selecting, creating, and guarding your inputs
The art of choosing and crafting the right inputs so a model sees the signal, not the noise.
#12
Probability & Statistics
Distributions, Bayes, and hypothesis testing
Go deeper into the statistical toolkit every ML practitioner reaches for: distributions, Bayes' rule, and the logic of hypothesis testing.
Training & Evaluation
Building a model is one thing; training it well and knowing whether it actually works is another. This module covers the engine room of ML: how optimization drives learning, how bias and variance shape model behavior, how to pick the right scoreboard, and how to run experiments you can trust.
Question we are chasing: How do we train a model effectively, measure its true performance, and make sure our results are not just a fluke?
#13
Optimization Basics
Gradient descent, learning rate, and loss functions
Meet the engine that powers every learning algorithm: gradient descent. We follow a model as it slides downhill toward better answers, one careful step at a time.
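The downhill slide can be watched directly on a one-parameter toy loss (our own example, chosen so the minimum is obvious):

```python
# Minimize loss(w) = (w - 3)^2 with plain gradient descent.
# The gradient is d/dw (w - 3)^2 = 2 * (w - 3).
w = 0.0       # start far from the answer
lr = 0.1      # learning rate: the size of each careful step
for step in range(100):
    grad = 2 * (w - 3)
    w -= lr * grad   # step downhill, against the gradient
print(round(w, 4))   # w has slid to the minimum at 3.0
```

Try a learning rate of 1.1 instead: each "step downhill" overshoots and the loss explodes, which is exactly why the learning rate gets its own lesson.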
#14
Bias & Variance
Overfitting, underfitting, and regularization
Every model walks a tightrope between memorizing the training data and failing to learn enough. Here we learn to spot the fall and catch it with regularization.
#15
Model Evaluation
Accuracy, precision, recall, F1, and ROC-AUC
Accuracy alone can lie. Here we learn the full scorecard: precision, recall, F1, confusion matrices, and ROC curves, so we can measure what truly matters.
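Three of those scores can be computed straight from prediction counts. A small sketch with made-up labels of our own, where the positive class is rare:

```python
def scorecard(y_true, y_pred):
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))           # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))     # false alarms
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))     # misses
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]
y_pred = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
print(scorecard(y_true, y_pred))
```

This model gets 8 of 10 cases right (80% accuracy), yet precision and recall are both only 0.5: it misses half the rare positives and raises a false alarm. The scorecard tells the story accuracy hides.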
#16
Experimentation
Baselines, ablations, and reproducibility
Science demands proof, not just results. Learn to run experiments that are fair, repeatable, and actually convincing.
Responsible AI
A model that works is not enough; it also has to be understandable, fair, and ready for the real world. This module covers the human side of AI: explaining predictions, confronting bias, and deploying models responsibly.
Question we are chasing: How do we make sure an AI system is not just accurate but also transparent, fair, and safe to deploy?
#17
Model Interpretability
Feature importance, SHAP, and LIME
A prediction is only useful if you can explain why. Learn to open the black box and show stakeholders what drove each decision.
#18
Ethics & Fairness
Bias, privacy, and responsible use
AI reflects the data and choices we feed it. Here we confront bias, respect privacy, and discuss what it means to deploy AI responsibly.
#19
Deployment Basics
Serving, monitoring, and drift
A model in a notebook helps no one. Learn how models reach real users, and what can go wrong once they do.
Neural Networks
This module introduces the core architecture behind much of modern AI. Students follow information as it moves through layers, is transformed by weights and activations, and eventually becomes a prediction that can be improved through feedback.
Question we are chasing: How do large collections of simple numerical operations combine into a model that can recognize patterns humans struggle to hand-code?
#20
Feedforward Neural Networks
From neurons to layered predictions
See how layers, weights, biases, and activations combine to transform raw inputs into a usable prediction.
#21
Training & Backpropagation
How a network learns from mistakes
Follow how a network measures its mistakes, sends error information backward, and updates its weights to improve over time.
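The whole loop, forward pass, error measurement, backward pass, weight update, can be watched on a single sigmoid neuron. A toy sketch of our own (one input, one training example), with the chain rule written out by hand:

```python
import math

w, b = 0.0, 0.0
x, target = 1.0, 1.0
lr = 1.0
for _ in range(200):
    # forward pass
    z = w * x + b
    y = 1 / (1 + math.exp(-z))
    # loss = (y - target)^2; backward pass via the chain rule
    dloss_dy = 2 * (y - target)
    dy_dz = y * (1 - y)
    dz_dw, dz_db = x, 1.0
    # update: each weight moves against its share of the blame
    w -= lr * dloss_dy * dy_dz * dz_dw
    b -= lr * dloss_dy * dy_dz * dz_db
print(round(y, 3))  # the output has climbed toward the target of 1.0
```

Backpropagation in a real network is this same chain-rule bookkeeping, just repeated layer by layer so every weight learns its share of the mistake.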
Sequence Models
Some data points only make sense when you know what came before them. This module studies models built for ordered information such as language, audio, weather, and time series, where sequence and memory matter as much as the current input.
Question we are chasing: How can a model represent the past well enough to make a strong decision about what is happening now or what should happen next?
#22
Hidden Markov Models
When the real state is hidden from view
Learn how observable clues can be used to infer hidden states and recover the most likely explanation underneath a sequence.
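Recovering "the most likely explanation" is the job of the Viterbi algorithm. A sketch using a classic toy weather HMM (states, probabilities, and observations are illustrative, not from the lesson):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # best[s]: probability of the most likely path that ends in state s
    best = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    path = {s: [s] for s in states}
    for o in obs[1:]:
        new_best, new_path = {}, {}
        for s in states:
            prev = max(states, key=lambda p: best[p] * trans_p[p][s])
            new_best[s] = best[prev] * trans_p[prev][s] * emit_p[s][o]
            new_path[s] = path[prev] + [s]
        best, path = new_best, new_path
    last = max(states, key=lambda s: best[s])
    return path[last]

states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(("walk", "shop", "clean"), states, start_p, trans_p, emit_p))
```

From three observable activities alone, the algorithm infers the hidden weather sequence Sunny, Rainy, Rainy, the single best explanation among all eight possibilities.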
#23
RNNs & LSTMs
Neural networks with memory
See how recurrent models carry context forward through time, and how LSTMs improve that memory when long sequences start to strain basic RNNs.
Language & Transformers
This module explains the modern language-model stack from the inside out. Students see how words become vectors, how attention lets models choose context dynamically, and how large-scale next-token training turns those ingredients into systems that can write, summarize, and answer questions.
Question we are chasing: How can a machine represent meaning, decide which context matters, and then generate fluent language one token at a time?
#24
Embeddings & Word2Vec
How words become meaningful vectors
Learn how language models map words into vectors so similarity, context, and analogy can be represented numerically.
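Once words are vectors, similarity and analogy become arithmetic. A sketch with tiny hand-made vectors of our own (not real Word2Vec output) whose dimensions loosely encode royalty and gender:

```python
import math

def cosine(u, v):
    # cosine similarity: 1 means same direction, 0 means unrelated
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

vec = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

# the famous analogy: king - man + woman should land near queen
target = [k - m + w for k, m, w in zip(vec["king"], vec["man"], vec["woman"])]
best = max((word for word in vec if word != "king"),
           key=lambda word: cosine(vec[word], target))
print(best)
```

Real embeddings have hundreds of dimensions learned from billions of words, but the payoff is the same: geometry that mirrors meaning.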
#25
Attention & Transformers
How models decide what to focus on
See how attention lets each token pull in the context it needs, making long-range relationships easier to capture than in older sequence models.
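The "pull in the context it needs" step is scaled dot-product attention, and it fits in a short function. A single-head, no-batching sketch with toy 2-D vectors of our own:

```python
import math

def attention(queries, keys, values):
    d = len(queries[0])
    out = []
    for q in queries:
        # score each key by how well it matches the query, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        # softmax turns scores into attention weights that sum to 1
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # the output is a weighted blend of the value vectors
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

q = [[1.0, 0.0]]                                   # one query token
k = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]           # three keys
v = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]           # their values
print(attention(q, k, v))
```

Nothing here depends on position: token 1 and token 1000 compete for attention on equal terms, which is why long-range relationships come so much more easily than in recurrent models.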
#26
Large Language Models
Predicting the next token at scale
Understand how transformer models trained on massive corpora learn to generate text one token at a time and why that process can produce both impressive and unreliable behavior.
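The generate-one-token-at-a-time loop can be seen in miniature with a bigram model, a vastly simpler stand-in of our own for a transformer (same sampling loop, none of the learned context):

```python
import random

corpus = "the cat sat on the mat and the cat ran".split()

# count how often each word follows each other word
counts = {}
for a, b in zip(corpus, corpus[1:]):
    counts.setdefault(a, {}).setdefault(b, 0)
    counts[a][b] += 1

def next_token(token, rng):
    # sample a continuation in proportion to how often it followed the token
    options = counts[token]
    r = rng.random() * sum(options.values())
    for word, c in options.items():
        r -= c
        if r <= 0:
            return word
    return word

rng = random.Random(0)
out = ["the"]
for _ in range(5):
    if out[-1] not in counts:
        break  # reached a word that never had a continuation
    out.append(next_token(out[-1], rng))
print(" ".join(out))
```

Every continuation it emits really did follow the previous word somewhere in its tiny corpus, which is also a hint at why such models sound fluent while having no guarantee of being right.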
Advanced Topics
This module looks beyond the standard supervised-learning workflow. Students explore systems that learn from delayed rewards and systems that train collaboratively while keeping raw data distributed, which introduces the real-world constraints of strategy, privacy, and deployment.
Question we are chasing: How can AI systems keep improving in realistic environments where feedback is delayed, data is sensitive, and decisions have long-term consequences?
#27
Reinforcement Learning
Learning by trying, failing, and improving
Watch an agent learn through trial, reward, and delayed consequences rather than from labeled examples with fixed answers.
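Tabular Q-learning shows that trial-and-reward loop at its smallest. A toy corridor of our own design: five states, reward only at the far end, and deliberately high exploration to keep the example simple:

```python
import random

random.seed(0)
n_states, n_actions = 5, 2           # states 0..4; actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.5

for _ in range(500):                 # 500 episodes of trial and error
    s = 0
    while s != 4:
        if random.random() < epsilon:
            a = random.randrange(n_actions)                        # explore
        else:
            a = max(range(n_actions), key=lambda act: Q[s][act])   # exploit
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == 4 else 0.0  # reward arrives only at the goal
        # Q-update: move the estimate toward reward + discounted best future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max(range(n_actions), key=lambda act: Q[s][act]) for s in range(4)]
print(policy)  # the learned policy: go right from every state
```

The reward only ever appears at state 4, yet after enough episodes every state prefers "right": the discounted update has carried the delayed consequence all the way back to the start.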
#28
Federated Learning
Training together without sharing raw data
Explore how many devices or institutions can improve a shared model together while keeping raw local data where it was collected.
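The core idea, average the models instead of pooling the data, can be sketched in a dozen lines. A federated-averaging toy of our own (two clients, a one-weight linear model, made-up local data):

```python
def local_step(w, data, lr=0.1):
    # one gradient step on loss = mean((w*x - y)^2) for the model y = w*x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(clients, rounds=50):
    w = 0.0  # shared global weight
    for _ in range(rounds):
        local = [local_step(w, data) for data in clients]  # clients train locally
        w = sum(local) / len(local)                        # server averages the weights
    return w

# two clients whose private data both follow y = 2x (illustrative numbers)
clients = [[(1, 2), (2, 4)], [(3, 6), (4, 8)]]
print(round(fed_avg(clients), 3))  # the shared model converges to w = 2
```

Only the weight `w` ever travels between client and server; each client's (x, y) pairs never leave home, which is the entire point.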