MLOps & AI Platforms
8 terms, defined clearly.
The lifecycle: experiment tracking, model serving, monitoring, drift detection, and evals.
CI/CD for ML
Continuous integration and delivery pipelines adapted for machine learning — automating model training, evaluation, and deployment on every code or data change.
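A minimal sketch of the idea, with a stand-in "model" and hypothetical thresholds: on every change, the pipeline retrains, evaluates on a holdout set, and only allows deployment if the metric clears a fixed bar.

```python
# Hypothetical CI gate for ML: retrain a placeholder model (here, a
# mean predictor), evaluate it, and block the deploy if it regresses.

def train(train_rows):
    """'Train' a trivial model: always predict the mean label."""
    labels = [y for _, y in train_rows]
    mean = sum(labels) / len(labels)
    return lambda x: mean

def evaluate(model, holdout_rows):
    """Mean absolute error on the holdout set."""
    errors = [abs(model(x) - y) for x, y in holdout_rows]
    return sum(errors) / len(errors)

def ci_gate(train_rows, holdout_rows, max_mae):
    """Return True (deploy) only if the retrained model clears the bar."""
    model = train(train_rows)
    return evaluate(model, holdout_rows) <= max_mae

data = [(x, 2.0 * x) for x in range(10)]
ok = ci_gate(data[:8], data[8:], max_mae=10.0)
print("deploy" if ok else "block")   # -> deploy
```

Real pipelines add data validation, lineage checks, and automated rollback on top of this gate.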
Data Drift
A change in the statistical distribution of model inputs over time, often the leading cause of silent ML model degradation in production.
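One common way to detect it (a sketch, assuming a two-sample Kolmogorov-Smirnov statistic computed by hand) is to compare a training-time feature sample against a window of production inputs:

```python
# Drift check: max vertical distance between two empirical CDFs.
# If the distributions match, the statistic is near 0; a shift pushes
# it toward 1. The 0.2 alert threshold here is an arbitrary example.

def ks_statistic(a, b):
    """Two-sample KS statistic over samples a and b."""
    points = sorted(set(a) | set(b))
    def ecdf(sample, x):
        return sum(1 for v in sample if v <= x) / len(sample)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

train_sample = [0.1 * i for i in range(100)]        # roughly uniform on [0, 10)
prod_sample  = [0.1 * i + 5.0 for i in range(100)]  # same shape, shifted by 5

stat = ks_statistic(train_sample, prod_sample)
print(f"KS statistic: {stat:.2f}, drift: {stat > 0.2}")
```

In practice this runs per feature on a schedule, and an alert fires long before accuracy metrics (which need ground-truth labels) can show the damage.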
Experiment Tracking
Logging the inputs, outputs, parameters, and metrics of every ML training run so that experiments are reproducible and comparable.
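A toy tracker (illustrative only, not a real library's API) shows the core mechanic: each run records its parameters and metrics as one JSON line, so runs can later be compared and the best one recovered.

```python
import json
import time
import uuid

class RunLogger:
    """Append-only log of training runs, one JSON record per run."""

    def __init__(self):
        self.records = []

    def log_run(self, params, metrics):
        record = {
            "run_id": uuid.uuid4().hex,   # unique handle for reproducing the run
            "timestamp": time.time(),
            "params": params,
            "metrics": metrics,
        }
        self.records.append(json.dumps(record))
        return record["run_id"]

    def best(self, metric):
        """Return the full record of the run that maximized `metric`."""
        runs = [json.loads(r) for r in self.records]
        return max(runs, key=lambda r: r["metrics"][metric])

tracker = RunLogger()
tracker.log_run({"lr": 0.1}, {"accuracy": 0.81})
tracker.log_run({"lr": 0.01}, {"accuracy": 0.88})
print(tracker.best("accuracy")["params"])   # -> {'lr': 0.01}
```

Production trackers add artifact storage (model weights, plots), git commit hashes, and dataset versions to each record for full reproducibility.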
Feature Store
A centralized system for defining, computing, storing, and serving ML features — the canonical source for feature values used in both training and inference.
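The key property can be sketched in a few lines (all names here are hypothetical): features are defined once, and the same definition backs both the offline training join and the online inference lookup, which is what prevents training/serving skew.

```python
# Single source of truth: feature definitions as named transformations
# over raw data. Training and serving both call get_features().

FEATURE_DEFS = {
    "order_count_7d": lambda raw: len(raw["orders_last_7d"]),
    "avg_order_value": lambda raw: (
        sum(raw["orders_last_7d"]) / max(len(raw["orders_last_7d"]), 1)
    ),
}

# Stand-in for the raw event store.
RAW = {"user_1": {"orders_last_7d": [20.0, 35.0, 5.0]}}

def get_features(entity_id, names):
    """Compute the requested features for one entity from raw data."""
    raw = RAW[entity_id]
    return {name: FEATURE_DEFS[name](raw) for name in names}

# The same call serves a training-set build and a live inference request:
features = get_features("user_1", ["order_count_7d", "avg_order_value"])
print(features)   # -> {'order_count_7d': 3, 'avg_order_value': 20.0}
```

Real feature stores add point-in-time-correct historical retrieval for training and a low-latency online store for serving.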
MLOps
The practice of reliably deploying, operating, and maintaining machine-learning systems in production — DevOps adapted for the particular pain points of ML.
Model Registry
A versioned store for trained models with metadata, lineage, and lifecycle stages (staging, production, archived) — the source of truth for what is deployed.
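A toy registry (illustrative, not any particular product's API) captures the lifecycle mechanics: every registration creates a new version in staging, and promotion archives whatever was in production before.

```python
class ModelRegistry:
    """Versioned model entries with staging/production/archived stages."""

    def __init__(self):
        self.versions = {}   # version number -> {"model", "stage", "metadata"}
        self.counter = 0

    def register(self, model, metadata=None):
        """New versions always enter the registry in 'staging'."""
        self.counter += 1
        self.versions[self.counter] = {
            "model": model, "stage": "staging", "metadata": metadata or {},
        }
        return self.counter

    def promote(self, version):
        """Move a version to production, archiving the previous one."""
        for entry in self.versions.values():
            if entry["stage"] == "production":
                entry["stage"] = "archived"
        self.versions[version]["stage"] = "production"

    def production_model(self):
        """The single source of truth for what is deployed."""
        for entry in self.versions.values():
            if entry["stage"] == "production":
                return entry["model"]

reg = ModelRegistry()
v1 = reg.register("model-a")
v2 = reg.register("model-b")
reg.promote(v1)
reg.promote(v2)                  # v1 is archived, v2 now serves
print(reg.production_model())    # -> model-b
```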
Predictive Maintenance
ML applied to asset telemetry so equipment failures are detected — and scheduled around — before they trigger unplanned downtime.
Shadow Deployment
Running a new model in production alongside the current one, scoring real traffic but not serving its predictions to users — used to validate before a full cutover.
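A minimal sketch of the pattern (the two models here are trivial stand-ins): every request is scored by both models, only the current model's answer reaches the user, and the shadow's predictions are logged for offline comparison.

```python
shadow_log = []   # candidate predictions collected for later analysis

def current_model(x):
    return x * 2          # stand-in for the deployed model

def shadow_model(x):
    return x * 2 + 1      # stand-in for the candidate model

def handle_request(x):
    served = current_model(x)
    # The shadow scores the same real traffic, but its output is only logged.
    shadow_log.append({"input": x, "shadow": shadow_model(x), "served": served})
    return served          # users only ever see the current model's output

responses = [handle_request(x) for x in range(3)]
print(responses, len(shadow_log))   # -> [0, 2, 4] 3
```

Once enough traffic has accumulated, comparing the logged shadow predictions against the served ones (and, later, against ground truth) decides whether the cutover is safe.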