Week 08 — MLOps and deployment

Module 8: containerize, deploy, track, monitor. FastAPI + Docker + free-tier cloud + MLflow + drift detection.

What it takes for the model to keep working after the notebook is closed.

What you ship this week

A deployed model: trained model wrapped in a FastAPI service, containerized, deployed to a free-tier cloud, with MLflow tracking and a basic drift-monitoring dashboard.
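
A minimal sketch of the serving piece, assuming a scikit-learn model pickled to models/model.pkl and a flat numeric feature vector; the paths, names, and feature shape are illustrative, not a required layout:

    # app/main.py (illustrative layout): load a pickled model and serve predictions
    import pickle

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI(title="week-08 model service")

    # Assumes an estimator pickled in an earlier week; adjust the path to your repo.
    with open("models/model.pkl", "rb") as f:
        model = pickle.load(f)

    class PredictRequest(BaseModel):
        features: list[float]  # one flat feature vector per request

    class PredictResponse(BaseModel):
        prediction: float

    @app.get("/health")
    def health():
        return {"status": "ok"}

    @app.post("/predict", response_model=PredictResponse)
    def predict(req: PredictRequest):
        # scikit-learn expects a 2D array: one row per sample
        y = model.predict([req.features])
        return PredictResponse(prediction=float(y[0]))

Run it locally with uvicorn app.main:app --reload; the same command, minus --reload, becomes the container's start command.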

Due: Friday 18:00 (Africa/Lagos, UTC+1)
Submission: Drop the repo URL into the week's cohort channel. Peer-review pairing is announced Monday of next week.
Rubric: Pass / revise. A pass requires green CI, tests covering the public API, and a README a stranger can follow to install and run the code.

Live sessions and labs

The default weekly cadence is below. Cohort-specific dates and Zoom links are filled in at intake.

Day  Time         Block                                   Recording
Mon  09:00-12:00  Live instruction + code-along           post-session
Mon  14:00-16:00  Independent lab work + TA office hours  post-session
Tue  09:00-12:00  Live instruction + code-along           post-session
Tue  14:00-16:00  Independent lab work + TA office hours  post-session
Wed  09:00-12:00  Live instruction + code-along           post-session
Wed  14:00-16:00  Independent lab work + TA office hours  post-session
Thu  09:00-12:00  Live instruction + code-along           post-session
Thu  14:00-16:00  Independent lab work + TA office hours  post-session
Fri  10:00-11:00  Industry speaker                        post-session
Fri  11:30-12:30  Lab review                              post-session
Fri  14:00-15:00  Cohort retrospective                    post-session

Learning outcomes

By the end of the week, every participant will:

  1. Version code, data, and models in a way that supports reproducibility.
  2. Containerize a model and deploy it as a REST API.
  3. Set up experiment tracking, model registry, and monitoring.
  4. Build a basic CI/CD pipeline for an ML system.

Topics covered

Reproducibility (Git, DVC, MLflow) · experiment tracking and model registry (MLflow, Weights & Biases) · containerization (Docker, docker-compose) · serving (FastAPI, BentoML, model registries) · monitoring (data drift, prediction drift, performance drift, latency, cost) · CI/CD for ML pipelines · the difference between "the model works on my laptop" and "the model works in production for six months".

Labs

Lab 1 — FastAPI + Docker + free-tier deploy

Wrap any trained model from a previous week in a FastAPI service. Containerize. Deploy to Render or Railway. Call the deployed endpoint from a fresh notebook.

Dataset: Bring your own model from week 3, 6, or 7.
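
For calling the deployed endpoint from a fresh notebook, and for the rubric's tests covering the public API, something along these lines works; the base URL and payload are placeholders for your own service:

    # From a fresh notebook: call the deployed endpoint.
    import requests

    BASE_URL = "https://your-service.example.com"  # placeholder: the URL Render/Railway assigns

    resp = requests.post(
        f"{BASE_URL}/predict",
        json={"features": [5.1, 3.5, 1.4, 0.2]},  # placeholder payload; match your model's inputs
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())

    # tests/test_api.py: same endpoint exercised in-process, so CI needs no deployment.
    from fastapi.testclient import TestClient

    from app.main import app

    client = TestClient(app)

    def test_predict_returns_a_number():
        r = client.post("/predict", json={"features": [5.1, 3.5, 1.4, 0.2]})
        assert r.status_code == 200
        assert isinstance(r.json()["prediction"], float)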

Lab 2 — MLflow tracking for a retraining loop

Set up MLflow tracking for a model retraining loop. Log hyperparameters, metrics, artifacts. Promote a model from staging to production via the Model Registry.

Dataset: Reuse one of the earlier weeks' datasets.
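
A minimal sketch of the tracking-plus-registry flow, using the Iris data and a registered model name of "week08-model" as stand-ins for your own retraining loop. The registry needs a database-backed store, and newer MLflow versions favor registry aliases over stages, so adjust the promotion call to your version:

    import mlflow
    import mlflow.sklearn
    from mlflow.tracking import MlflowClient
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # The Model Registry needs a database-backed store; local sqlite is the simplest option.
    mlflow.set_tracking_uri("sqlite:///mlflow.db")
    mlflow.set_experiment("week08-retraining")

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    with mlflow.start_run():
        params = {"n_estimators": 100, "max_depth": 3}
        model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

        mlflow.log_params(params)
        mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
        # registered_model_name creates (or adds a version to) the model in the registry.
        mlflow.sklearn.log_model(model, "model", registered_model_name="week08-model")

    # Promote the newest version; stage-based API shown, newer MLflow prefers aliases.
    client = MlflowClient()
    version = client.get_latest_versions("week08-model", stages=["None"])[0].version
    client.transition_model_version_stage("week08-model", version, stage="Production")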

Lab 3 — Drift detection

Simulate gradual data drift on a deployed model. Build a Grafana or Evidently dashboard that detects it. Wire an alert that fires when the population stability index (PSI) crosses a threshold.

Dataset: Synthetic drift overlay on a previous week's data.
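
A from-scratch sketch of the PSI alert, with synthetic drift baked in. The bin count, threshold, and drift magnitude are assumptions; a common rule of thumb treats PSI below 0.1 as stable and above 0.25 as drifted, and Evidently can compute a comparable per-feature metric for the dashboard:

    import numpy as np

    def psi(reference: np.ndarray, current: np.ndarray, n_bins: int = 10) -> float:
        """Population stability index of `current` against `reference`."""
        # Bin edges come from reference quantiles so skewed features still bin sensibly.
        edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf

        ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
        cur_pct = np.histogram(current, bins=edges)[0] / len(current)

        # Clip away zeros so the log term stays finite on empty bins.
        eps = 1e-6
        ref_pct = np.clip(ref_pct, eps, None)
        cur_pct = np.clip(cur_pct, eps, None)
        return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

    # Synthetic drift overlay: the feature's mean shifts after deployment.
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, 5_000)  # distribution the model was trained on
    current = rng.normal(0.8, 1.0, 5_000)    # what the endpoint sees later

    THRESHOLD = 0.25  # assumed alert threshold; tune per feature
    score = psi(reference, current)
    if score > THRESHOLD:
        print(f"ALERT: PSI={score:.3f} exceeds {THRESHOLD}")  # stand-in for a real alert hook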

Readings

Mandatory

Optional deepening

Builds on (course catalogue)