A container-based environment is the best way to work on AI/ML projects. The eng-ai-agents repository provides pre-configured Docker containers with PyTorch, common ML libraries, and a uv-managed virtual environment.

Installing Docker

We recommend VS Code as your IDE because of its support for Dev Containers. Install Docker for your operating system. If you plan to buy a dedicated GPU, choose NVIDIA: CUDA remains the standard for accelerating ML workloads. An existing AMD or Intel GPU may still work for some tasks.

Apple Silicon (M1/M2/M3/M4)

Macs with Apple Silicon can run the course containers through Docker Desktop (amd64-only images run under emulation). For most ML notebooks the CPU container (torch.dev.cpu) works well. For GPU-accelerated training, PyTorch supports Apple’s Metal Performance Shaders (MPS) backend, which uses the unified memory architecture of the M-series chips.

PyTorch with MPS acceleration

MPS acceleration is available when running PyTorch natively on macOS (outside Docker). To use it:
import torch

# Check MPS availability
if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

model = model.to(device)
tensor = tensor.to(device)
If a specific operation is not yet implemented in MPS, set this environment variable to fall back to CPU automatically:
export PYTORCH_ENABLE_MPS_FALLBACK=1
MPS support is maturing, but not all PyTorch operations are implemented yet. With the fallback variable set, operations that fail on MPS run on the CPU instead of raising errors, so training can continue. Check the MPS backend documentation for current operator coverage.
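If you prefer to set the fallback from inside a notebook rather than the shell, one option is to set it in the first cell:

```python
import os

# Same effect as `export PYTORCH_ENABLE_MPS_FALLBACK=1`.
# This must run before `import torch`, so put it in the first cell
# of the notebook (or at the very top of the script).
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
```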

ROS on Mac

ROS2 does not run natively on macOS. Mac users must use the ros.dev.gpu Docker container for all robotics coursework. The container provides a full ROS2 Jazzy environment with GUI support via X11 forwarding or VNC. To enable GUI applications (RViz, Gazebo) on macOS, install XQuartz:
brew install --cask xquartz
After installing, open XQuartz, go to Preferences > Security, and check “Allow connections from network clients”. Then, in a terminal, permit connections from localhost:
xhost +localhost
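As a sketch (the exact command may differ in your setup), a GUI tool can then be launched from the ROS container with DISPLAY pointed at XQuartz. Inside Docker Desktop, host.docker.internal resolves to the macOS host:

```shell
# Hypothetical invocation: run RViz from the ROS service with the display
# forwarded to XQuartz on the macOS host.
docker compose run --rm -e DISPLAY=host.docker.internal:0 ros.dev.gpu rviz2
```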

Course environment setup

  1. Import the repository to your own GitHub account
  2. Clone it locally and copy .env.example to .env
  3. Build and run the Docker container (make docker-build-gpu or make docker-build-cpu)
  4. Submit your work to Canvas/Brightspace via GitHub
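Steps 2 and 3 above can be sketched as shell commands (the GitHub path is a placeholder for your own import of the repository):

```shell
# Assumes you have already imported the repo to your own GitHub account.
git clone https://github.com/<your-account>/eng-ai-agents.git
cd eng-ai-agents
cp .env.example .env       # then edit .env (e.g. UV_EXTRA, WANDB_API_KEY)
make docker-build-gpu      # or make docker-build-cpu on machines without a GPU
```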

Configuration

All Docker services read their configuration from a .env file in the repository root. Copy the example to get started:
cp .env.example .env
Key variables:
# .env
UV_EXTRA=gpu                    # PyTorch variant: gpu or cpu
WORKSPACE_DIR=/workspaces/eng-ai-agents
WORKSPACE_USER=vscode
WANDB_API_KEY=your_api_key_here # For experiment tracking (optional)
The docker-compose.yml loads .env via env_file, making all variables available inside every container.
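To illustrate the KEY=VALUE format, here is a minimal, hypothetical parser for files like .env (in practice, docker compose or python-dotenv does this for you):

```python
def load_env(path: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, ignoring blanks and `#` comments.

    Naive sketch: assumes values themselves contain no `#` characters.
    """
    env = {}
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # drop inline comments
            if "=" in line:
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip()
    return env
```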
Never commit your .env file to git. It contains your API keys. The repository’s .gitignore already excludes it.

Docker Compose services

Three services are defined in docker-compose.yml:
| Service | Container | GPU | Use case |
| --- | --- | --- | --- |
| torch.dev.gpu | eng-ai-agents-dev | NVIDIA (all) | PyTorch notebooks requiring GPU acceleration |
| torch.dev.cpu | eng-ai-agents-cpu | None | Lightweight notebooks, CI, non-GPU workloads |
| ros.dev.gpu | eng-ai-agents-ros | NVIDIA (all) | ROS2 robotics notebooks with GPU support |
All services mount the repository as a workspace volume, so edits on the host are immediately reflected inside the container.
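For example, to work in one service directly with docker compose (a sketch; the make targets below are the supported path):

```shell
# Start only the CPU service in the background, then attach a shell to it.
docker compose up -d torch.dev.cpu
docker compose exec torch.dev.cpu bash

# Stop the service when finished.
docker compose down
```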

Build targets

Build containers using make from the repository root:
| Command | Description |
| --- | --- |
| make docker-build-gpu | Build the GPU container (Dockerfile.nvidia.dgpu) |
| make docker-build-cpu | Build the CPU container (Dockerfile.cpu.amd64) |
| make docker-build | Build both GPU and CPU containers |

Run targets

| Command | Description |
| --- | --- |
| make docker-run-gpu | Start an interactive GPU container with the workspace mounted |
| make docker-run-cpu | Start an interactive CPU container with the workspace mounted |

Development setup

Once inside a container (or on the host with Python 3.11+), use these targets to set up the development environment:
| Command | Description |
| --- | --- |
| make start | Recreate venv, sync dependencies, install package, and register Jupyter kernel |
| make install | Install the package in the virtual environment |
| make install-dev | Install with development extras (linting, testing) |
| make install-notebooks | Install notebook extras and register the Jupyter kernel |
| make setup-dev | Install dev dependencies and set up pre-commit hooks |

Port mappings

Each service exposes ports for development tools:
| Service | Quarto | Jupyter | Dev |
| --- | --- | --- | --- |
| torch.dev.gpu | 4100 | 8888 | 8000 |
| torch.dev.cpu | 4101 | 8889 | 8001 |
| ros.dev.gpu | 4180 | 8880 | 8078 |
Port mappings can be customized through environment variables in .env (e.g., DEV_JUPYTER_PORT=8888).
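The override behaves like shell parameter substitution: an entry in .env wins, otherwise the defaults in the table apply. A small sketch of the same logic in Python (DEV_JUPYTER_PORT comes from the example above; the helper itself is hypothetical):

```python
import os

# Default host ports for torch.dev.gpu, per the table above.
DEFAULT_PORTS = {"quarto": 4100, "jupyter": 8888, "dev": 8000}

def jupyter_port() -> int:
    # Mirrors compose-style `${DEV_JUPYTER_PORT:-8888}` substitution:
    # the environment variable wins over the default.
    return int(os.environ.get("DEV_JUPYTER_PORT", DEFAULT_PORTS["jupyter"]))
```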