6 modules · 28 lessons · 8+ hours of content

Subscribe to our YouTube channel and explore the complete curriculum below.

The Robotic AI Agent

A practical map for navigating robotic AI systems.

Mathematical Prerequisites

What you need to know before diving into the course material.

The Learning Problem

The Vapnik block diagram.

Linear Regression

Extracting non-linear patterns with linear models.

Gradient Descent

Optimizing complicated functions with iterative methods.
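As a preview of the iterative idea this lesson covers, here is a minimal sketch of gradient descent on a simple quadratic objective (the function, names, and step size are illustrative choices, not taken from the course material):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to minimize a function."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

The same update rule, applied to the parameters of a model instead of a scalar, is what trains the networks in the later lessons.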

Entropy

Information theory principles.

Maximum Likelihood Estimation

The workhorse of statistical modeling.

Binary Classification

Solving binary classification problems with logistic regression.
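To make the connection between the two topics concrete, here is a minimal sketch of logistic regression trained by gradient descent on the cross-entropy loss; the toy data and hyperparameters are illustrative, not from the course:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, steps=2000):
    """Fit w, b by gradient descent on the negative log-likelihood."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # gradient of cross-entropy w.r.t. the logit
            gw += err * x / n
            gb += err / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy 1-D data: negative inputs labeled 0, positive inputs labeled 1.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
```

After training, `sigmoid(w * x + b)` gives the predicted probability of class 1 for an input `x`.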

Feature Extraction

Using a simple network to understand how features are extracted.

Multiclass Classifier

A simple multiclass classifier example.

Backpropagation

How to calculate gradients in a neural network.

Regularization

How to control the complexity of a neural network.

Introduction to Transformers

The transformer architecture and the simple attention mechanism.

The Learnable Attention Mechanism

Implementing the scaled dot-product self-attention mechanism.
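The core computation of this lesson, softmax(QKᵀ/√d)V, can be sketched in plain Python for small matrices (the example queries, keys, and values are illustrative, not from the lesson):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)  # one attention weight per key, summing to 1
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)
```

Each output row is a convex combination of the value vectors, weighted by how well the query matches each key.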

Multi-Head Self Attention

Using multiple attention heads to capture different aspects of input sequences.

Introduction to Planning

Typical planning problems and PDDL.

Planning Domain Definition Language

The constructs of PDDL.

Forward Search Algorithms

Finding global planning solutions with forward search.

The A* Algorithm

Using heuristics to guide forward search.
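A minimal sketch of the algorithm this lesson covers: A* expands nodes in order of f = g + h, where g is the cost so far and h an admissible heuristic. The grid world, obstacle, and Manhattan heuristic below are illustrative choices, not from the lesson:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: pop the node with the smallest f = g + h."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, step_cost in neighbors(node):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None  # goal unreachable

# 3x3 4-connected grid with an obstacle at (1, 1); Manhattan heuristic.
blocked = {(1, 1)}
def neighbors(p):
    x, y = p
    for nx, ny in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
        if 0 <= nx < 3 and 0 <= ny < 3 and (nx, ny) not in blocked:
            yield (nx, ny), 1

path = a_star((0, 0), (2, 2), neighbors, lambda p: abs(p[0] - 2) + abs(p[1] - 2))
```

Because Manhattan distance never overestimates the remaining cost on this grid, the first path returned is a shortest one.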

Introduction to MDPs - Part 1

Defining Markov Decision Processes.

Introduction to MDPs - Part 2

Defining Markov Decision Processes.

Bellman Expectation Equations - Part 1

Deriving the Bellman Expectation Equations.

Bellman Expectation Equations - Part 2

Deriving the Bellman Expectation Equations.

Policy Evaluation - Part 1

Using the Bellman Expectation Equations for Policy Evaluation.

Policy Evaluation - Part 2

Using the Bellman Expectation Equations for Policy Evaluation.

Bellman Optimal Value Functions

Deriving the Bellman Optimality Equations.

Policy Iteration and Value Iteration

Using the Bellman Optimality Equations for optimal control.
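As a preview, value iteration repeatedly applies the Bellman optimality backup until the value function stops changing. The sketch below assumes expected rewards R[s][a] independent of the successor state, and the two-state MDP is an illustrative example, not from the course:

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """Apply the Bellman optimality backup V(s) <- max_a E[R + gamma V(s')]."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (R[s][a] + gamma * V[s2]) for s2, p in P[s][a].items())
                for a in actions[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Two states: from A, "go" reaches absorbing state B with reward 1; "stay" loops.
states = ["A", "B"]
actions = {"A": ["go", "stay"], "B": ["stay"]}
P = {"A": {"go": {"B": 1.0}, "stay": {"A": 1.0}}, "B": {"stay": {"B": 1.0}}}
R = {"A": {"go": 1.0, "stay": 0.0}, "B": {"stay": 0.0}}
V = value_iteration(states, actions, P, R)
```

The greedy policy with respect to the converged V is an optimal policy, which is the link to optimal control made in the lesson.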


Course Information

  • CS-GY-6613: Introduction to AI, NYU Tandon (Spring 2026)
  • CS670/370: Introduction to AI, NJIT (Spring 2026)

View Full Syllabus

See the complete course syllabus including assignments and schedule.