See the Spring 2026 Academic Calendar for semester dates. Each week below lists the readings, lecture topics, and deliverables you should complete.
Week 1

1. Review prerequisites — Python, linear algebra, probability theory, and basic control concepts. See Prerequisites.
2. Review lecture: Introduction to Robotics — AI and robotics from a systems perspective, with autonomous vehicles as the running example.
3. Watch videos:
   - The Robotic AI Agent — A practical map for navigating robotic AI systems.
   - Development Environment Setup — Setting up the ROS2 Docker-based development environment.
   - Mathematical Prerequisites — A review of the math foundations needed for the course.
4. Set up your development environment — Follow the Dev Environment guide to install Docker and configure your container.
5. Import the course repository — Import eng-ai-agents to your GitHub account and clone it locally.
Week 2

1. Read TIF Chapters 9 & 10 — Introduction to learning and gradient-based learning algorithms, from Foundations of Computer Vision.
2. Read BISHOP Chapter 4 — Single-variable and multivariate models, regularization, and Bayesian linear regression, from Deep Learning: Foundations and Concepts.
3. Review lecture: Supervised Learning — The perception subsystem, reflexive agents, and the learning problem. See The Learning Problem.
4. Review lecture: Linear Regression — Regression fundamentals and empirical risk minimization. See Linear Regression.
5. Review lecture: SGD Optimization — Stochastic gradient descent for minimizing the empirical risk. See SGD.
6. Read GERON Chapter 4 (SGD sections) — The Gradient Descent, Batch Gradient Descent, Stochastic Gradient Descent, and Mini-Batch Gradient Descent sections from Chapter 4: Training Linear Models.
7. Run the GERON Chapter 4 notebook — Work through the Training Linear Models notebook.
8. Run the SGD notebook — Execute the SGD Sinusoidal Dataset notebook in your container.
9. Review lecture: Entropy — Information theory principles and cross-entropy. See Entropy.
10. Review lecture: Marginal Maximum Likelihood — Marginal likelihood and parameter estimation. See Marginal Maximum Likelihood.
11. Review lecture: Conditional Maximum Likelihood — Conditional likelihood for supervised learning. See Conditional Maximum Likelihood.
12. Review lecture: Classification Introduction — Classification fundamentals and decision boundaries. See Classification Introduction.
13. Review lecture: Logistic Regression — Binary classification with logistic regression. See Logistic Regression.
14. Watch videos:
   - The Learning Problem — The Vapnik block diagram.
   - Linear Regression — Extracting non-linear patterns with linear models.
   - Gradient Descent — Optimizing complicated functions with iterative methods.
   - Entropy — Information theory principles.
   - Maximum Likelihood Estimation — The workhorse of statistical modeling.
   - Binary Classification — Binary classification and logistic regression.
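The linear regression and SGD items above can be tied together in a short sketch: mini-batch stochastic gradient descent minimizing the empirical squared-error risk of a linear model. This is an illustrative example only, not the course's SGD notebook; the dataset, learning rate, and batch size are made-up choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: y = 2x + 1 plus a little Gaussian noise.
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.1 * rng.normal(size=200)

# Parameters of the linear model y_hat = w*x + b.
w, b = 0.0, 0.0
lr = 0.1

# Mini-batch SGD on the empirical risk (mean squared error).
for epoch in range(50):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), 20):
        batch = idx[start:start + 20]
        xb, yb = X[batch, 0], y[batch]
        err = w * xb + b - yb          # prediction error on the batch
        w -= lr * np.mean(err * xb)    # gradient of 0.5*err^2 w.r.t. w
        b -= lr * np.mean(err)         # gradient of 0.5*err^2 w.r.t. b

print(w, b)  # should land near the true (2.0, 1.0)
```

Each update uses only a 20-sample batch, which is what makes the method "stochastic": the batch gradient is a noisy but unbiased estimate of the full empirical-risk gradient.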
Week 3

1. Read DL Chapters 9 & 10 — Convolutional neural network architecture and applications.
2. Review lecture: CNN Introduction — Convolution operations, pooling, and spatial feature hierarchies. See CNN Introduction.
3. Review lecture: CNN Layers — Layer types and architectural patterns. See CNN Layers.
4. Review lecture: CNN Architectures and ResNets — ResNet, VGG, and other architectures. See CNN Example Architectures and Feature Extraction with ResNet.
5. Read GERON Chapter 12 (CNN sections) — The Convolutional Layers, Pooling Layers, and CNN Architectures sections from Chapter 12: Deep Computer Vision with CNNs.
6. Run the GERON Chapter 12 notebook — Work through the Deep Computer Vision with CNNs notebook.
7. Submit Assignment 1 — Complete and submit Assignment 1.
8. Watch videos:
   - Feature Extraction — Using a simple network to understand how features are extracted.
   - Backpropagation — How to calculate gradients in a neural network.
   - Convolution and Correlation — A linear operation for extracting spatial features.
   - CNN Architectures — Looking inside a CNN layer and understanding architectural patterns.
   - ResNets — Residual networks and skip connections.
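The convolution-and-correlation material above comes down to one operation: sliding a small kernel over an image and taking dot products. Here is a minimal NumPy sketch of "valid" cross-correlation (what CNN "convolution" layers actually compute); the image and edge-detector kernel are made-up examples.

```python
import numpy as np

def correlate2d_valid(image, kernel):
    """Plain 'valid'-mode cross-correlation of a 2-D image with a kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Dot product of the kernel with the image patch under it.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A half-dark, half-bright image and a simple vertical-edge detector.
img = np.zeros((4, 4))
img[:, 2:] = 1.0
edge_kernel = np.array([[-1.0, 1.0]])
resp = correlate2d_valid(img, edge_kernel)
print(resp)  # responds only at the dark-to-bright boundary
```

True convolution flips the kernel before sliding; deep-learning libraries skip the flip because the kernel weights are learned anyway.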
Week 4

1. Review lecture: Detection Metrics — Evaluation metrics for object detection. See Detection Metrics.
2. Review lecture: Object Detection — Detection pipelines and architectures. See Object Detection Introduction.
3. Review lecture: R-CNN — Region-based convolutional neural networks. See R-CNN.
4. Review lecture: Fast R-CNN — Efficient region-based detection. See Fast R-CNN.
5. Review lecture: Faster R-CNN — Region proposal networks and two-stage detection. See Faster R-CNN.
6. Watch videos:
   - Introduction to Object Detection — Object detection in a physical security application.
   - Computer Vision Datasets — What types of annotations are used in computer vision?
   - Region-based Object Detectors — R-CNN, Fast R-CNN, Faster R-CNN.
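Most detection metrics in the lectures above (precision/recall matching, mAP) are built on intersection over union between predicted and ground-truth boxes. A minimal sketch, with made-up boxes in (x1, y1, x2, y2) corner format:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))
print(overlap)  # 1 / 7, about 0.143
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.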
Week 5

1. Review lecture: Semantic Segmentation — Pixel-level labeling for scene understanding in navigation.
2. Run the Detectron2 notebook — Execute the Detectron2 Tutorial notebook.
3. Review lecture: Mask R-CNN — Instance segmentation for robotic perception. See Mask R-CNN.
4. Watch videos — Coming soon: semantic segmentation video lectures are in development.
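Semantic segmentation is usually scored with the same IoU idea as detection, applied per class over whole label maps rather than boxes. A minimal NumPy sketch with a made-up 2x2 prediction and ground truth:

```python
import numpy as np

def class_iou(pred, target, cls):
    """IoU of one class between predicted and ground-truth label maps."""
    p = pred == cls
    t = target == cls
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    # Undefined when the class appears in neither map.
    return inter / union if union else float("nan")

pred   = np.array([[0, 1],
                   [1, 1]])
target = np.array([[0, 1],
                   [0, 1]])
print(class_iou(pred, target, 1))  # 2 correct pixels over a union of 3
```

Averaging this over all classes gives the mean IoU (mIoU) commonly reported for segmentation benchmarks.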
Week 6

1. Read THRUN Chapters 2-3 — Recursive estimation and dynamic Bayesian networks.
2. Review lecture: Recursive State Estimation — The Bayes filter and its properties. See Recursive State Estimation.
3. Review lecture: Kalman Filters — Linear Gaussian models and the Kalman update equations. See Kalman Filters.
4. Watch videos:
   - Introduction to State Estimation with HMM — Introducing hidden Markov models.
   - The Bayes Filter — Implementing the Bayes filter algorithm.
   - Discrete Bayes Filter Example — Discrete Bayes localization notebook.
   - Continuous State Space and Kalman Filter — Localizing a drone under Gaussian assumptions.
   - A Kalman Filter Example — Kalman localization notebook.
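For the Kalman material above, the one-dimensional case shows the whole predict/update structure in a few lines. This is an illustrative sketch, not the course's Kalman notebook; the measurement and process noise variances are assumed values.

```python
def kalman_1d(mu, var, z, motion, r=0.5, q=0.2):
    """One predict/update cycle of a 1-D Kalman filter.

    mu, var: Gaussian belief over position; z: measurement; motion: control.
    r: measurement noise variance; q: process noise variance (both assumed).
    """
    # Predict: shift the belief by the commanded motion; uncertainty grows.
    mu, var = mu + motion, var + q
    # Update: fuse prediction with measurement, weighted by the Kalman gain.
    k = var / (var + r)
    mu = mu + k * (z - mu)
    var = (1 - k) * var
    return mu, var

mu, var = 0.0, 1.0                  # initial belief
for z in [1.1, 2.0, 2.9]:           # noisy readings while moving +1 per step
    mu, var = kalman_1d(mu, var, z, motion=1.0)
print(mu, var)  # mean near 3, variance well below the prior
```

Note the two opposing effects: prediction inflates the variance by q, while each measurement shrinks it, so the filter settles at a steady-state uncertainty.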
Week 7

1. Read THRUN Chapters 5, 7; CORKE Chapter 4 — Egomotion, velocity and odometry models, and pose estimation.
2. Review lecture: Sensor Models — The beam model and the likelihood field. See Beam Model.
3. Review lecture: HMM Localization — Discrete Bayesian filtering for localization. See HMM Localization.
4. Watch videos — Coming soon: localization video lectures are in development.
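The HMM-localization idea above can be sketched as a histogram filter: a discrete belief over cells, updated by alternating sensor and motion steps. The corridor layout, door positions, and sensor accuracies below are made-up assumptions for illustration.

```python
import numpy as np

# A circular 1-D corridor of 5 cells; cells 0 and 3 contain a door.
doors = np.array([1, 0, 0, 1, 0])
belief = np.full(5, 1 / 5)           # uniform prior over position

def sense(belief, z, p_hit=0.8, p_miss=0.2):
    """Measurement update: z=1 means the door sensor fired."""
    likelihood = np.where(doors == z, p_hit, p_miss)
    belief = belief * likelihood
    return belief / belief.sum()     # renormalize

def move(belief):
    """Motion update: shift the belief one cell to the right (circular)."""
    return np.roll(belief, 1)

belief = sense(belief, 1)            # robot saw a door
belief = move(belief)                # then moved one cell right
belief = sense(belief, 0)            # and now sees no door
print(belief)
```

After this sequence the belief concentrates on cells just past a door, which is exactly the HMM forward recursion with a deterministic motion model.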
Week 8

1. Read THRUN Chapters 9-10; CORKE Chapter 6 — Simultaneous localization and mapping, and visual SLAM with monocular cameras.
2. Review lecture: Occupancy Mapping — Grid-based environment representation. See Occupancy Mapping.
3. Review lecture: SLAM — The SLAM problem and solution approaches. See SLAM.
4. Midterm preparation — Review all material from Weeks 1-7, focusing on statistical learning, perception, state estimation, sensor models, and localization.
5. Watch videos — Coming soon: SLAM video lectures are in development.
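The occupancy-mapping lecture above rests on one trick: storing each cell's belief as log-odds, so repeated sensor evidence becomes simple addition. A minimal 1-D sketch; the inverse-sensor-model increments are assumed values, not the course's.

```python
import numpy as np

# Log-odds occupancy grid: 0 means p(occupied) = 0.5 (unknown).
grid = np.zeros(10)
L_OCC, L_FREE = 0.85, -0.4       # assumed inverse sensor model increments

def integrate_beam(grid, hit_cell):
    """A range beam ending at hit_cell: cells before it are free, it is hit."""
    grid[:hit_cell] += L_FREE
    grid[hit_cell] += L_OCC
    return grid

for _ in range(3):               # three consistent scans hitting cell 6
    grid = integrate_beam(grid, 6)

prob = 1 - 1 / (1 + np.exp(grid))   # convert log-odds back to probability
print(prob.round(2))
```

Cell 6 ends up confidently occupied, the traversed cells confidently free, and the cells beyond the hit stay at 0.5 because the beam carries no information about them.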
Week 9

1. Review lecture: Search Algorithms — Optimal planning under uncertainty; the A*, D*, RRT*, and PRM algorithms. See Search.
2. Review lecture: A* Algorithm — Heuristic search for path planning. See A*.
3. Watch videos:
   - Introduction to Planning — Typical planning problems and PDDL.
   - Planning Domain Definition Language — The constructs of PDDL.
   - Forward Search Algorithms — Finding global planning solutions with forward search.
   - The A* Algorithm — Using heuristics to guide forward search.
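The A* material above can be sketched compactly on a 4-connected grid: expand the node with the smallest g + h, where g is cost so far and h is an admissible heuristic (here Manhattan distance). The grid below is a made-up example.

```python
import heapq

def astar(grid, start, goal):
    """A* path cost on a 4-connected grid; grid[r][c] == 1 is an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible
    frontier = [(h(start), 0, start)]        # priority = g + h
    best = {start: 0}                        # cheapest known g per node
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                         # cost of the shortest path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and not grid[r][c]:
                if g + 1 < best.get((r, c), float("inf")):
                    best[(r, c)] = g + 1
                    heapq.heappush(frontier, (g + 1 + h((r, c)), g + 1, (r, c)))
    return None                              # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6: the wall forces a detour
```

With h = 0 this reduces to Dijkstra's algorithm; the heuristic only prunes which nodes get expanded, never the optimality of the result (as long as h never overestimates).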
Week 10

1. Review lecture: MDP Introduction — Sequential decisions, reward signals, and Bellman equations. See MDP Introduction.
2. Review lecture: Bellman Equations — Expectation and optimality backups. See Bellman Expectation.
3. Review lecture: Policy Improvement — From value functions to optimal policies. See Policy Improvement.
4. Watch videos:
   - Introduction to MDPs - Part 1 — Defining Markov decision processes.
   - Introduction to MDPs - Part 2 — Defining Markov decision processes.
   - Bellman Expectation Equations - Part 1 — Deriving the Bellman expectation equations.
   - Bellman Expectation Equations - Part 2 — Deriving the Bellman expectation equations.
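The Bellman optimality backup described above can be run to a fixed point (value iteration) on a tiny MDP. The two-state MDP below, its action names, and its rewards are made up purely for illustration.

```python
# A deterministic 2-state MDP: "go" from state 0 reaches the rewarding
# state 1, where "stay" collects reward 1 forever.
# P[s][a] = (next_state, reward).
P = {0: {"stay": (0, 0.0), "go": (1, 0.0)},
     1: {"stay": (1, 1.0), "go": (0, 0.0)}}
gamma = 0.9

# Value iteration: apply the Bellman optimality backup until convergence.
V = {0: 0.0, 1: 0.0}
for _ in range(100):
    V = {s: max(r + gamma * V[s2] for (s2, r) in P[s].values()) for s in P}

# Greedy policy extraction from the converged value function.
policy = {s: max(P[s], key=lambda a: P[s][a][1] + gamma * V[P[s][a][0]])
          for s in P}
print(V, policy)  # V[1] -> 1/(1-gamma) = 10, V[0] -> gamma*10 = 9
```

The printed values match the closed form for this chain: staying in state 1 earns the geometric series 1 + gamma + gamma^2 + ... = 1/(1 - gamma), and state 0 is worth one discount factor less.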
Week 11

1. Review lecture: Reinforcement Learning — Model-free methods for robotic control policies. See Reinforcement Learning.
2. Review lecture: Temporal Difference Learning — TD methods and SARSA. See Temporal Difference.
3. Run the SARSA Gridworld notebook — Execute the SARSA Gridworld notebook.
4. Run the GERON Chapter 19 notebook — Work through the Reinforcement Learning notebook.
5. Watch videos:
   - Bellman Optimal Value Functions — Deriving the Bellman optimality equations.
   - Policy Iteration and Value Iteration — Using the Bellman optimality equations for optimal control.
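The SARSA update from the TD lecture can be sketched on a toy task. This is not the course's SARSA Gridworld notebook; the 1-D corridor, reward scheme, and hyperparameters below are assumptions chosen to keep the example tiny.

```python
import random

random.seed(0)
# Toy 1-D "gridworld": states 0..3, reaching state 3 gives reward 1.
N_STATES, GOAL = 4, 3
ACTIONS = (-1, 1)                    # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

def choose(s):
    """Epsilon-greedy action selection from the current Q-values."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for _ in range(300):                 # episodes
    s, a = 0, choose(0)
    while s != GOAL:
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        a2 = choose(s2)
        # SARSA: bootstrap on the action actually taken next (on-policy).
        Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
        s, a = s2, a2

print(max(ACTIONS, key=lambda a: Q[(0, a)]))  # greedy action at the start
```

Unlike Q-learning, the target uses Q(s2, a2) for the sampled next action rather than the max over actions, which is what makes SARSA on-policy.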
Week 12

1. Review lecture: Vision-Language Models — VLMs for human-robot collaboration. See VLM Introduction.
2. Review lecture: Imitation Learning — Learning from demonstrations for robotic manipulation. See Imitation Learning.
3. Watch videos:
   - Introduction to Transformers — The transformer architecture and the simple attention mechanism.
   - The Learnable Attention Mechanism — Implementing the scaled dot-product self-attention mechanism.
   - Multi-Head Self Attention — Using multiple attention heads to capture different aspects of input sequences.
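The scaled dot-product attention covered in the videos above fits in a few lines of NumPy. A minimal sketch: it omits the learned query/key/value projections and multiple heads, and the token embeddings are random placeholders.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)            # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))          # 3 tokens, embedding dimension 4
out, w = attention(X, X, X)          # self-attention: Q = K = V = X
print(out.shape, w.sum(axis=-1))     # each output row mixes all 3 tokens
```

Each output row is a convex combination of the value rows, with mixing weights given by the softmaxed similarities; the 1/sqrt(d) scaling keeps the logits from growing with the embedding dimension.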
Week 13

1. Review lecture: VLA Agents — End-to-end Vision-Language-Action models for instruction parsing, perception, and action. See VLA Agents.
2. Review lecture: Sim-to-Real Transfer — Training in simulation and deploying to real robots. See Sim2Real.
3. Review lecture: Simulation — Simulation environments for robotics research. See Simulation.
4. Watch videos — Coming soon: VLA models video lectures are in development.