Key features
- Virtual Kubernetes clusters — a full Kubernetes API server per APAC tenant without granting access to the shared host cluster
- Namespace-scoped isolation — APAC vClusters run in host namespaces with full RBAC and CRD isolation
- Fast provisioning — APAC vCluster creation in <30 seconds vs 8-15 minutes for managed clusters
- Resource syncing — tenant pods are scheduled on host cluster nodes, reusing existing autoscaling infrastructure
- Ephemeral CI clusters — create/delete APAC vClusters per CI run for real Kubernetes integration tests
- Multi-tenant APAC control — platform teams own the host cluster; tenants get full vCluster admin access
- Loft Platform — commercial SaaS for APAC vCluster fleet management, SSO, and cost attribution
Best for
- APAC platform engineering teams managing shared Kubernetes clusters for multiple development teams who need to provide isolated cluster-scoped resources (CRDs, ClusterRoles) to APAC teams without granting access to the shared host cluster control plane
- APAC CI/CD platform teams needing real Kubernetes environments for integration tests — vCluster enables APAC pipelines to create an isolated K8s cluster per test run in under 30 seconds, run the tests, then delete the vCluster, without EKS/GKE provisioning latency
- APAC organisations building multi-tenant developer platforms where each APAC business unit or product team needs isolated Kubernetes environments with their own namespace hierarchy, RBAC, and CRDs without a dedicated cluster per team
- APAC cost-conscious platform teams who need to host many Kubernetes environments (dev, test, feature branch, per-developer) on shared infrastructure without paying for dedicated managed cluster control planes per environment
Limitations to know
- ! Shared node resources — vCluster pods run on the host cluster's nodes, so tenant workloads share CPU, memory, and network bandwidth with other tenants on the same nodes; APAC platform teams must configure ResourceQuotas on vCluster namespaces to prevent noisy-neighbour issues
- ! Host cluster dependency — vCluster availability depends on the host cluster's health; an APAC host cluster outage affects all vClusters on it, unlike dedicated clusters; APAC platform teams should run critical vClusters on separate host clusters from development vClusters
- ! CRD synchronisation limits — some CRDs with cluster-scoped side effects (creating host-level resources) may not sync cleanly between the virtual and host clusters; APAC platform teams should test their specific CRD requirements before using vCluster for CRD-heavy workloads
- ! Commercial fleet management — Loft Platform (commercial) is required for APAC vCluster fleet management, SSO, and usage reporting at scale; APAC platform teams managing >20 vClusters should evaluate Loft Platform cost against self-managed vCluster automation
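The ResourceQuota mitigation noted above is standard Kubernetes applied to the host namespace that backs a vCluster. A minimal sketch follows; the namespace name `vcluster-team-a` and the limit values are hypothetical examples, not vCluster defaults:

```yaml
# Quota on the host namespace where one tenant vCluster's pods are synced.
# Because all of the tenant's workloads land in this namespace, the quota
# caps the tenant's total host resource consumption.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: vcluster-team-a   # hypothetical host namespace backing the vCluster
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 64Gi
    limits.cpu: "40"
    limits.memory: 128Gi
    pods: "200"
```

Applied with `kubectl apply -f quota.yaml` by the platform team on the host cluster; tenants inside the vCluster cannot modify it.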
About vCluster
vCluster is an open-source tool from Loft Labs that lets APAC platform engineering teams create lightweight virtual Kubernetes clusters inside namespaces of a shared host Kubernetes cluster. Each virtual cluster runs its own Kubernetes API server (backed by k3s or k0s) and its own datastore (etcd, or SQLite for lightweight vClusters), presenting a complete Kubernetes API surface to tenants, while the actual workload pods are synced back to the host cluster for scheduling and execution.
In vCluster's multi-tenancy model, APAC platform engineering teams create a separate vCluster per development team, CI/CD environment, or feature-branch testing environment. Each vCluster provides isolated Kubernetes RBAC, isolated custom resource definitions (CRDs), and isolated namespace scoping that does not affect other vClusters on the same host cluster. This lets platform teams offer development teams the full Kubernetes API experience (including cluster-admin access to the vCluster, CRD installation, and ClusterRole creation) without granting access to the shared host cluster's control plane.
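As an illustration of per-tenant configuration, a vCluster is typically created from a values file that controls what syncs between the virtual and host clusters. The keys below follow the v0.20-style `vcluster.yaml` schema as a sketch only; verify them against the vCluster documentation for your version:

```yaml
# Illustrative vcluster.yaml for one tenant (assumed v0.2x-style schema)
sync:
  toHost:
    ingresses:
      enabled: true    # sync tenant Ingress objects to the host's ingress controller
  fromHost:
    nodes:
      enabled: true    # expose (pseudo) host node information inside the vCluster
```

The platform team would then run something like `vcluster create team-a --namespace vcluster-team-a --values vcluster.yaml` and hand the resulting kubeconfig to the tenant team.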
In vCluster's resource syncing model, pods, services, and persistent volumes defined in the virtual cluster are copied to the host cluster by the vCluster syncer component, which creates corresponding resources in a dedicated host namespace with translated names. Actual pod scheduling and execution therefore use the host cluster's nodes, node pools, and autoscaling, so virtual clusters benefit from the host cluster's existing Kubernetes infrastructure (Karpenter autoscaling, Cilium networking, GPU node pools) without duplicating it.
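The name translation can be seen by comparing the two views of the same pod. The session below is an illustrative sketch (cluster and pod names are hypothetical, and the exact `-x-`-separated translation pattern is a syncer implementation detail that can vary by version):

```sh
# Inside the vCluster: the tenant sees plain resource names
vcluster connect team-a --namespace vcluster-team-a -- kubectl get pods -n default
# e.g.  web-0   1/1   Running

# On the host cluster: the syncer's translated copy lives in the backing namespace
kubectl get pods -n vcluster-team-a
# e.g.  web-0-x-default-x-team-a   1/1   Running
```

The translated name encodes the original name, the virtual namespace, and the vCluster, which is what allows many virtual namespaces to coexist inside one host namespace without collisions.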
In vCluster's ephemeral cluster model, APAC CI/CD pipelines create a vCluster for the duration of an integration test run: run `vcluster create apac-test-cluster`, execute tests against a real Kubernetes API, then run `vcluster delete apac-test-cluster`. This gives integration tests real Kubernetes environments without the latency and cost of creating and destroying EKS/GKE clusters for each CI run, with vCluster creation completing in under 30 seconds versus 8-15 minutes for managed cluster provisioning.
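A CI stage built on that create/test/delete cycle can be sketched as the shell fragment below. It assumes the `vcluster` CLI, kubeconfig access to the host cluster, and a `CI_JOB_ID` variable from the CI system; the test command is a placeholder:

```sh
#!/usr/bin/env sh
set -eu

NS="ci-${CI_JOB_ID}"

# Always delete the vCluster on exit, even when tests fail
trap 'vcluster delete apac-test-cluster --namespace "$NS"' EXIT

# Create an isolated throwaway cluster for this pipeline run
vcluster create apac-test-cluster --namespace "$NS" --connect=false

# Run the suite with kubectl/client context pointed at the vCluster's API server
vcluster connect apac-test-cluster --namespace "$NS" -- \
  go test ./integration/... -count=1
```

Using a distinct host namespace per job keeps concurrent pipeline runs from interfering with each other.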
vCluster's isolation modes range from soft multi-tenancy (vCluster pods share the host node's userspace, appropriate for APAC development environments where teams are trusted) to hard multi-tenancy (vCluster pods run in sandboxed runtimes such as gVisor or Kata Containers, appropriate for environments hosting untrusted tenant workloads). APAC platform engineering teams can tune the isolation level of each vCluster deployment to the tenant trust model.
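Hard multi-tenancy presupposes that a sandboxed runtime is installed on the host nodes. The standard Kubernetes RuntimeClass for gVisor looks like this (`runsc` is gVisor's containerd shim handler; how synced pods get pinned to this class is vCluster-version-specific, so check the isolation documentation):

```yaml
# Host-cluster RuntimeClass advertising the gVisor sandbox
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc   # matches the runtime handler configured in containerd on the nodes
```

Once this exists on the host, pods synced from a hard-isolation vCluster can be executed under gVisor instead of sharing the node's userspace directly.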
Beyond this tool
A tool only matters in context: browse the service pillars that operationalise it, the industries where it ships, and the Asian markets where AIMenta runs adoption programmes.