Data drift is the change over time in the statistical distribution of inputs a model sees relative to the distribution it was trained on. Three related phenomena matter. **Covariate shift (input drift)** — input features change distribution (user demographics shift, sensor noise profile changes, vocabulary evolves). **Concept drift** — the relationship between inputs and the target changes (fraud patterns evolve, buying behaviour shifts after a regulation change). **Prior drift (label drift)** — the distribution of the target itself changes (class imbalance shifts seasonally). All three manifest as model quality silently degrading in production while offline eval on the original test set still looks fine.
The 2026 tooling landscape matured around drift monitoring as a named product category. **Evidently** (open source, Python-native) is the go-to baseline for drift reports and dashboards. **Arize** and **WhyLabs** (SaaS) cover production ML monitoring at scale with drift as the flagship feature. **Fiddler**, **Aporia**, **Mona** compete in the same space. **Great Expectations** and **Soda** handle upstream data-quality drift. Standard drift statistics include Population Stability Index (PSI), Kolmogorov-Smirnov (KS), Kullback-Leibler divergence (KL), Jensen-Shannon divergence, and Wasserstein distance. Cloud MLOps platforms (Vertex AI Model Monitoring, SageMaker Model Monitor) ship drift detection bundled.
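PSI is simple enough to compute from scratch, which makes its behaviour easy to sanity-check before trusting a vendor dashboard. A sketch under the common convention of quantile bins on the reference sample (the conventional rule-of-thumb thresholds are PSI < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant; the epsilon and bin count here are illustrative defaults):

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference sample
    (e.g. training data) and a production sample."""
    # Bin edges from the reference sample's quantiles, so each bin
    # holds roughly equal mass under the training distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range values

    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)

    # Clip to a small epsilon to avoid log(0) in empty bins.
    eps = 1e-6
    e_frac = np.clip(e_frac, eps, None)
    a_frac = np.clip(a_frac, eps, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(42)
reference = rng.normal(0, 1, 10_000)
stable = rng.normal(0, 1, 10_000)     # same distribution: PSI near zero
shifted = rng.normal(0.5, 1, 10_000)  # mean drifted: PSI materially larger

print(f"PSI stable:  {psi(reference, stable):.3f}")
print(f"PSI shifted: {psi(reference, shifted):.3f}")
```

PSI is a binned, symmetrised relative of KL divergence; Wasserstein distance is often preferred for heavy-tailed features because it is not dominated by a few extreme bins.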
For APAC mid-market teams, the practical drift programme has three parts. **Automated monitoring** — PSI or KS on the top 10-20 input features, daily, with thresholds tuned per feature. **Triage rubric** — drift alert → small-team review within 48 hours → categorise (expected seasonal, benign, actionable) → decision (monitor / retrain / investigate). **Retrain cadence** — either on schedule (monthly, quarterly) or triggered by drift thresholds, with documented criteria and a rollback plan. Avoid alerting on every input feature (alert fatigue) and avoid treating drift as automatically bad (some drift is expected and healthy).
The non-obvious failure mode is **alerting without action**. A team configures drift detection, wires alerts to a Slack channel, and watches the channel fill with notifications that nobody actions because the decision rubric was never written. Six months later the model is materially degraded and nobody knows which of the 200 drift alerts was the real one. Define the action per alert class before turning monitoring on, route alerts to named owners not channels, and escalate unactioned alerts after a defined window. A drift detector with no attached decision process is expensive dashboard decoration.
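The escalation rule above is mechanical enough to automate. A sketch of the "unactioned after a defined window" check, assuming a simple alert record with `raised_at` and `actioned_at` timestamps (the field names and 48-hour window are illustrative assumptions):

```python
from datetime import datetime, timedelta, timezone

ESCALATION_WINDOW = timedelta(hours=48)  # defined window, per the rubric

def needs_escalation(alert: dict, now: datetime) -> bool:
    """An alert that its named owner has not actioned within the
    window gets escalated instead of rotting in a channel."""
    unactioned = alert.get("actioned_at") is None
    overdue = now - alert["raised_at"] > ESCALATION_WINDOW
    return unactioned and overdue

# Hypothetical alert: raised five days ago, owner assigned, never actioned.
alert = {
    "feature": "txn_amount",
    "owner": "bob",  # a named person, not a Slack channel
    "raised_at": datetime(2026, 1, 5, tzinfo=timezone.utc),
    "actioned_at": None,
}
print(needs_escalation(alert, datetime(2026, 1, 10, tzinfo=timezone.utc)))
```

Running this check on a schedule turns the Slack channel from a write-only log into a queue with a service-level expectation attached.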