AIMenta
Foundational acronym · MLOps & AI Platforms

MLOps

The practice of reliably deploying, operating, and maintaining machine-learning systems in production — DevOps adapted for the particular pain points of ML.

MLOps is the practice of reliably deploying, operating, and maintaining machine-learning systems in production. The discipline emerged as ML teams realised that the software-engineering practices that worked for traditional applications (CI/CD, monitoring, rollback, version control) needed adaptation for the particular pain points of ML: model artefacts alongside code, training datasets and their lineage, experiment tracking, data drift, model drift, feature-store semantics, and the fact that a model that worked in development may silently degrade in production as inputs shift.

A mature MLOps stack has recognisable components:

- **Experiment tracking** (MLflow, Weights & Biases, Neptune): logs every training run with parameters, metrics, and artefacts.
- **Feature stores** (Feast, Tecton, Databricks, Hopsworks): serve the same feature computations consistently at training time and at inference time.
- **Model registry** (MLflow, SageMaker Model Registry, Vertex AI Model Registry): catalogues trained models with lineage, approval status, and deployment history.
- **Serving infrastructure** (KServe, BentoML, Seldon, Triton, or managed endpoints): runs inference reliably at production scale.
- **Monitoring**: tracks prediction quality, latency, throughput, input distribution drift, and label-available feedback.
- **CI/CD for ML**: automates data validation, model retraining, and deployment approvals.
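To make the experiment-tracking component concrete, here is a minimal in-process sketch of what such a tool records per training run. The `Run` class and its methods are hypothetical stand-ins for illustration, not the actual API of MLflow or any other tracker; real tools persist these records to a server or store rather than a JSON string.

```python
import json
import time
import uuid
from dataclasses import asdict, dataclass, field

@dataclass
class Run:
    """One training run: parameters in, metrics and artefacts out.

    Hypothetical sketch of an experiment-tracking record; real trackers
    (MLflow, W&B, Neptune) expose a similar shape via their own APIs.
    """
    run_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    started_at: float = field(default_factory=time.time)
    params: dict = field(default_factory=dict)
    metrics: dict = field(default_factory=dict)
    artefacts: list = field(default_factory=list)

    def log_param(self, key, value):
        self.params[key] = value

    def log_metric(self, key, value):
        self.metrics[key] = value

    def log_artefact(self, path):
        self.artefacts.append(path)

    def to_json(self):
        # Append-only JSON line that a registry or dashboard could index.
        return json.dumps(asdict(self))

run = Run()
run.log_param("learning_rate", 0.01)
run.log_metric("val_auc", 0.91)
run.log_artefact("models/candidate.pkl")
record = run.to_json()
```

The point of the shape, not the code, is what matters: every run gets an immutable identity, its inputs (params) and outputs (metrics, artefacts) travel together, so any production model can be traced back to the exact run that produced it.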

For APAC mid-market enterprises, the practical MLOps question is which subset of this stack their current ML maturity actually requires. A team with a single model in production needs experiment tracking, basic monitoring, and a reliable serving path, not a full feature store and multi-region A/B infrastructure. The common failure is either too much (premature investment in sophisticated MLOps platforms before any model is in production) or too little (shipping a model with no monitoring and discovering three months later that it has been broken since a data pipeline change). Grow the stack as the ML footprint grows.

The non-obvious operational principle: **model monitoring in production is the single highest-ROI MLOps investment**. Most silent ML failures (a schema change that broke feature computation, a data pipeline that started returning stale values, a retrained model whose validation metrics concealed a subgroup regression) are invisible without monitoring and obvious with it. The monitoring stack pays for itself on the first incident it catches. Prioritise it before experiment-tracking cleanups, feature-store ambitions, or serving-framework migrations.
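As one concrete instance of the input-drift monitoring argued for above, here is a minimal sketch of the Population Stability Index (PSI), a widely used drift statistic that compares the binned distribution of a feature at training time against its live distribution. The equal-width binning and the commonly cited alert threshold of 0.25 are illustrative conventions, not part of any specific product.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (training-time)
    sample and a live (production) sample of one numeric feature.
    PSI near 0 means the distributions match; values above ~0.25 are
    conventionally treated as significant shift worth alerting on."""
    lo, hi = min(expected), max(expected)
    # Equal-width bin edges over the baseline's observed range.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(values):
        counts = [0] * bins
        for v in values:
            # Bin index = number of edges the value exceeds
            # (values past the last edge land in the final bin).
            counts[sum(1 for e in edges if v > e)] += 1
        # Floor at a tiny value so the log term is always defined.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A monitoring job would compute this per feature on a schedule, compare against a threshold, and page on breach; this is exactly the kind of check that turns a silent pipeline regression into a same-day alert.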

Where AIMenta applies this

Service lines where this concept becomes a deliverable for clients.

Beyond this term

Where this concept ships in practice.

Encyclopedia entries name the moving parts. The links below show where AIMenta turns these concepts into engagements — across service pillars, industry verticals, and Asian markets.
