6 modules · 28 lessons · 8+ hours of content
Subscribe to our YouTube channel and explore the complete curriculum below.
1. Course Introduction
The Robotic AI Agent
A practical map for navigating robotic AI systems.
Mathematical Prerequisites
What you need to know before diving into the course material.
2. Statistical Learning Theory
The Learning Problem
Formulating the learning problem with Vapnik's block diagram.
Linear Regression
Extracting non-linear patterns with linear models.
Gradient Descent
Optimizing complicated functions with iterative methods.
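The iterative update at the heart of this lesson fits in a few lines. A minimal sketch (the target function and step size are illustrative, not from the course):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a 1-D function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move opposite the slope
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

With this learning rate the iterates converge to the minimizer x = 3.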
Entropy
Measuring information and uncertainty with entropy.
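Shannon entropy, the central quantity of this lesson, can be computed directly from a discrete distribution. A minimal sketch (the example distributions are illustrative):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H(p) = -sum_i p_i * log2(p_i)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

h_fair = entropy([0.5, 0.5])        # a fair coin carries 1 bit
h_die4 = entropy([0.25] * 4)        # a uniform 4-way choice carries 2 bits
```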
Maximum Likelihood Estimation
The workhorse of statistical modeling.
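For a simple model the maximum-likelihood estimate has a closed form, which makes the idea concrete. A minimal sketch for i.i.d. coin flips (the data values are illustrative):

```python
import math

def bernoulli_log_likelihood(p, data):
    """Log-likelihood of i.i.d. 0/1 observations under success probability p."""
    return sum(math.log(p) if x else math.log(1 - p) for x in data)

data = [1, 1, 0, 1, 0, 1, 1, 0]
# The Bernoulli MLE is the sample mean: it maximizes the log-likelihood.
p_mle = sum(data) / len(data)
```

Any other value of p gives a strictly lower log-likelihood on this data.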
Binary Classification
Solving binary classification problems with logistic regression.
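The logistic model covered here maps a linear score to a class probability through the sigmoid. A minimal sketch (the weights and input are illustrative):

```python
import math

def sigmoid(z):
    """Squash a real-valued score into (0, 1)."""
    return 1 / (1 + math.exp(-z))

def predict(w, b, x):
    """Probability of the positive class under a logistic regression model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)

p = predict(w=[2.0, -1.0], b=0.5, x=[1.0, 2.0])  # score z = 0.5
```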
3. Neural Networks
Feature Extraction
Using a simple network to understand how features are extracted.
Multiclass Classifier
A simple multiclass classifier example.
Backpropagation
How to calculate gradients in a neural network.
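Backpropagation is the chain rule applied layer by layer. A minimal sketch for a single sigmoid neuron with squared-error loss, checked against a finite difference (the parameter values are illustrative):

```python
import math

def forward(w, x, y):
    """Loss of one sigmoid neuron: L = 0.5 * (sigmoid(w*x) - y)^2."""
    a = 1 / (1 + math.exp(-w * x))
    return 0.5 * (a - y) ** 2

def backward(w, x, y):
    """Chain rule: dL/dw = (a - y) * a * (1 - a) * x."""
    a = 1 / (1 + math.exp(-w * x))
    return (a - y) * a * (1 - a) * x

# Verify the analytic gradient with a central finite difference.
w, x, y, eps = 0.7, 1.5, 1.0, 1e-6
numeric = (forward(w + eps, x, y) - forward(w - eps, x, y)) / (2 * eps)
```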
Regularization
How to control the complexity of a neural network.
4. Large Language Models
Introduction to Transformers
The transformer architecture and the simple attention mechanism.
The Learnable Attention Mechanism
Implementing the scaled dot-product self-attention mechanism.
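The operation this lesson builds, softmax(QKᵀ/√d_k)·V, can be sketched with NumPy (the toy queries, keys, and values are illustrative):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Two tokens with 2-dimensional queries, keys, and values.
out = attention(np.eye(2), np.eye(2), np.array([[1.0, 0.0], [0.0, 1.0]]))
```

Each output row is a convex combination of the value rows, so with one-hot values every output row sums to 1.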
Multi-Head Self Attention
Using multiple attention heads to capture different aspects of input sequences.
5. Task Planning
Introduction to Planning
Typical planning problems and PDDL.
Planning Domain Definition Language
The constructs of PDDL.
Forward Search Algorithms
Finding global planning solutions with forward search.
The A* Algorithm
Using heuristics to guide forward search.
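A* orders its frontier by f(n) = g(n) + h(n), the cost so far plus a heuristic estimate to the goal. A minimal sketch on a toy graph (the graph and the zero heuristic, which reduces A* to Dijkstra, are illustrative):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Best-first search ordered by f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, cost in neighbors(node):
            if nxt not in visited:
                heapq.heappush(
                    frontier, (g + cost + h(nxt), g + cost, nxt, path + [nxt])
                )
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
path = a_star("A", "C", lambda n: graph[n], lambda n: 0)
```

The search prefers the two-step path A→B→C (cost 2) over the direct edge A→C (cost 4).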
6. Reinforcement Learning
Introduction to MDPs - Part 1
Defining Markov Decision Processes.
Introduction to MDPs - Part 2
Defining Markov Decision Processes.
Bellman Expectation Equations - Part 1
Deriving the Bellman Expectation Equations.
Bellman Expectation Equations - Part 2
Deriving the Bellman Expectation Equations.
Policy Evaluation - Part 1
Using the Bellman Expectation Equations for Policy Evaluation.
Policy Evaluation - Part 2
Using the Bellman Expectation Equations for Policy Evaluation.
Bellman Optimal Value Functions
Deriving the Bellman Optimality Equations.
Policy Iteration and Value Iteration
Using the Bellman Optimality Equations for optimal control.
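Value iteration applies the Bellman optimality backup until the value function stops changing. A minimal sketch on a two-state MDP (the states, actions, and rewards are illustrative):

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    """Bellman optimality backup: V(s) <- max_a [R(s,a) + gamma * E[V(s')]]."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            v = max(
                R[(s, a)] + gamma * sum(p * V[s2] for p, s2 in P[(s, a)])
                for a in actions(s)
            )
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V

# Toy MDP: from s0 you can stay (reward 0) or go to s1 (reward 1);
# s1 is absorbing with reward 0.
states = ["s0", "s1"]
P = {("s0", "stay"): [(1.0, "s0")], ("s0", "go"): [(1.0, "s1")],
     ("s1", "stay"): [(1.0, "s1")]}
R = {("s0", "stay"): 0.0, ("s0", "go"): 1.0, ("s1", "stay"): 0.0}
actions = lambda s: ["stay", "go"] if s == "s0" else ["stay"]
V = value_iteration(states, actions, P, R)
```

The optimal policy takes "go" from s0, so V(s0) converges to 1 and V(s1) to 0.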
Course Information
- CS-GY-6613: Introduction to AI, NYU Tandon (Spring 2026)
- CS670/370: Introduction to AI, NJIT (Spring 2026)

