A container-based environment is the best way to work on AI/ML projects. The eng-ai-agents repository provides pre-configured Docker containers with PyTorch, common ML libraries, and a uv-managed virtual environment.
Installing Docker
We recommend VS Code as your IDE due to its support for Dev Containers. Install Docker for your operating system.
If you plan to buy a dedicated GPU, choose NVIDIA — CUDA remains the standard for ML workload acceleration. If you already have an AMD or Intel GPU, it may still work for some tasks.
Apple Silicon (M1/M2/M3/M4)
Macs with Apple Silicon can run the course containers through Docker Desktop, which transparently emulates x86 (amd64) images on ARM hosts. For most ML notebooks the CPU container (torch.dev.cpu) works well. For GPU-accelerated training, PyTorch supports Apple’s Metal Performance Shaders (MPS) backend, which uses the unified memory architecture of the M-series chips.
PyTorch with MPS acceleration
MPS acceleration is available when running PyTorch natively on macOS (outside Docker). To use it:
```python
import torch

# Check MPS availability and pick the device accordingly
if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

model = model.to(device)
tensor = tensor.to(device)
```
If a specific operation is not yet implemented in MPS, set this environment variable to fall back to CPU automatically:
```bash
export PYTORCH_ENABLE_MPS_FALLBACK=1
```
MPS support is maturing but not all PyTorch operations are implemented yet. For operations that fail on MPS, the fallback variable ensures training continues on CPU without errors. Check the MPS backend documentation for the latest compatibility.
ROS on Mac
ROS2 does not run natively on macOS. Mac users must use the ros.dev.gpu Docker container for all robotics coursework. The container provides a full ROS2 Jazzy environment with GUI support via X11 forwarding or VNC.
To enable GUI applications (RViz, Gazebo) on macOS, install XQuartz:
```bash
brew install --cask xquartz
```
After installing, open XQuartz, go to Preferences > Security, and enable "Allow connections from network clients", then restart XQuartz so the setting takes effect.
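The exact commands depend on how the container connects to the X server; a commonly used sequence (an assumption, not taken from the repository) is:

```bash
# Run on the macOS host after restarting XQuartz.
# 'xhost' ships with XQuartz; this permits X11 connections from localhost.
xhost +localhost

# Inside the container, point GUI apps at the host's X server
# (host.docker.internal resolves to the host under Docker Desktop).
export DISPLAY=host.docker.internal:0
```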
Course environment setup
- Import the repository to your own GitHub account
- Clone it locally and copy .env.example to .env
- Build and run the Docker container (make docker-build-gpu and make docker-run-gpu, or their CPU equivalents)
- Submit your work to Canvas/Brightspace via GitHub
Configuration
All Docker services read their configuration from a .env file in the repository root. Copy the example to get started.
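For example, run from the repository root:

```bash
# Create your local .env from the committed template
cp .env.example .env
```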
Key variables:
```bash
# .env
UV_EXTRA=gpu                              # PyTorch variant: gpu or cpu
WORKSPACE_DIR=/workspaces/eng-ai-agents
WORKSPACE_USER=vscode
WANDB_API_KEY=your_api_key_here           # For experiment tracking (optional)
```
The docker-compose.yml loads .env via env_file, making all variables available inside every container.
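Inside a running container these become ordinary environment variables. A minimal sketch of reading them from Python (variable names taken from the .env example above):

```python
import os

# Variables loaded from .env by docker-compose are plain environment
# variables inside the container; use a default for optional ones.
uv_extra = os.environ.get("UV_EXTRA", "cpu")   # "gpu" or "cpu"
wandb_key = os.environ.get("WANDB_API_KEY")    # None when tracking is not configured

print(f"PyTorch variant: {uv_extra}")
```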
Never commit your .env file to git. It contains your API keys. The repository’s .gitignore already excludes it.
Docker Compose services
Three services are defined in docker-compose.yml:
| Service | Container | GPU | Use case |
|---|---|---|---|
| torch.dev.gpu | eng-ai-agents-dev | NVIDIA (all) | PyTorch notebooks requiring GPU acceleration |
| torch.dev.cpu | eng-ai-agents-cpu | None | Lightweight notebooks, CI, non-GPU workloads |
| ros.dev.gpu | eng-ai-agents-ros | NVIDIA (all) | ROS2 robotics notebooks with GPU support |
All services mount the repository as a workspace volume, so edits on the host are immediately reflected inside the container.
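As an illustration, such a bind mount in docker-compose.yml typically looks like this (a sketch using the WORKSPACE_DIR path from .env, not the repository's exact file):

```yaml
services:
  torch.dev.cpu:
    # Bind-mount the repository root into the container so edits on the
    # host appear immediately inside it (paths are illustrative)
    volumes:
      - .:/workspaces/eng-ai-agents
```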
Build targets
Build containers using make from the repository root:
| Command | Description |
|---|---|
| make docker-build-gpu | Build the GPU container (Dockerfile.nvidia.dgpu) |
| make docker-build-cpu | Build the CPU container (Dockerfile.cpu.amd64) |
| make docker-build | Build both GPU and CPU containers |
Run targets
| Command | Description |
|---|---|
| make docker-run-gpu | Start an interactive GPU container with the workspace mounted |
| make docker-run-cpu | Start an interactive CPU container with the workspace mounted |
Development setup
Once inside a container (or on the host with Python 3.11+), use these targets to set up the development environment:
| Command | Description |
|---|---|
| make start | Recreate venv, sync dependencies, install package, and register Jupyter kernel |
| make install | Install the package in the virtual environment |
| make install-dev | Install with development extras (linting, testing) |
| make install-notebooks | Install notebook extras and register the Jupyter kernel |
| make setup-dev | Install dev dependencies and set up pre-commit hooks |
Port mappings
Each service exposes ports for development tools:
| Service | Quarto | Jupyter | Dev |
|---|---|---|---|
| torch.dev.gpu | 4100 | 8888 | 8000 |
| torch.dev.cpu | 4101 | 8889 | 8001 |
| ros.dev.gpu | 4180 | 8880 | 8078 |
Port mappings can be customized through environment variables in .env (e.g., DEV_JUPYTER_PORT=8888).
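Compose picks these up with standard variable substitution; a sketch (illustrative service entry, using the DEV_JUPYTER_PORT variable from the example above):

```yaml
services:
  torch.dev.gpu:
    ports:
      # Host port comes from .env; ":-8888" supplies a default when unset
      - "${DEV_JUPYTER_PORT:-8888}:8888"
```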