APAC Kubernetes Platform Engineering Essentials 2026: vCluster, External Secrets, and ExternalDNS for Internal Developer Platforms

A practitioner guide for APAC platform engineering teams building Kubernetes-based internal developer platforms in 2026 — covering vCluster multi-tenant isolation, External Secrets Operator for GitOps-compatible secret management, and ExternalDNS for automated DNS provisioning across Singapore, Japan, South Korea, and Southeast Asia.

By AIMenta Editorial Team

The APAC Internal Developer Platform Foundation Layer

APAC platform engineering teams building internal developer platforms (IDPs) in 2026 spend a disproportionate amount of time on three recurring operational problems: how to give APAC development teams isolated Kubernetes environments without dedicating a cluster per team, how to securely distribute secrets to APAC Kubernetes workloads without embedding credentials in Git or cluster manifests, and how to automate DNS record management for the dozens of APAC services being deployed and updated daily.

vCluster, External Secrets Operator, and ExternalDNS solve these three problems with a consistent principle: declarative Kubernetes-native resources that platform teams deploy once, and APAC development teams use continuously without platform team involvement for routine operations.

This guide covers how APAC mid-market enterprises (Singapore fintech, Japanese enterprise, South Korean technology, Southeast Asian SaaS) are using these three tools as the foundation of their Kubernetes internal developer platforms.


vCluster: APAC Multi-Tenant Kubernetes Isolation

The APAC cluster proliferation problem

The most common APAC Kubernetes multi-tenancy pattern in 2020–2023 was namespace-based isolation: one namespace per APAC team or application, with Kubernetes RBAC restricting team access to their namespaces. This approach has a fundamental limitation: namespace-scoped RBAC does not cover cluster-scoped resources (ClusterRoles, ClusterRoleBindings, CustomResourceDefinitions, StorageClasses, Pod Security Admission configuration). When an APAC development team installs a Kubernetes operator, the operator's CustomResourceDefinitions are cluster-scoped — they change the API surface for every namespace on the APAC host cluster, not just the team's own.
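The scoping gap is visible in the manifests themselves: a CustomResourceDefinition has no metadata.namespace field, because the CRD object is cluster-scoped even when the custom resources it defines are namespaced. A minimal illustrative CRD (hypothetical group and kind):

```yaml
# Cluster-scoped: note there is no metadata.namespace field.
# Applying this changes the API surface for every tenant on the host cluster.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: paymentruns.example.apac.io    # hypothetical CRD for illustration
spec:
  group: example.apac.io
  scope: Namespaced    # the custom objects are namespaced, but the CRD itself is not
  names:
    kind: PaymentRun
    plural: paymentruns
    singular: paymentrun
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```

This is why namespace-scoped RBAC cannot contain operator installs: the CRD lands at cluster scope regardless of which team applies it.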

The alternative — dedicated Kubernetes cluster per APAC team — solves isolation but creates operational overhead: APAC platform teams end up managing 20, 30, or 50 EKS/GKE/AKS clusters with separate control planes, node groups, VPC configurations, and monitoring stacks.

vCluster provides a middle path: virtual Kubernetes clusters that run inside host cluster namespaces but present a complete Kubernetes API surface to APAC tenants — including cluster-scoped resources — at the cost of running a lightweight control plane (k3s or k0s) per virtual cluster.

vCluster topology for APAC platform teams

A typical APAC internal developer platform using vCluster creates one vCluster per APAC development team or major project:

host-cluster (managed by APAC platform team)
├── namespace: team-payments-vcluster
│   └── vCluster (k3s control plane)
│       ├── team-payments can create any namespace
│       ├── team-payments can install CRDs
│       └── team-payments has cluster-admin in vCluster
├── namespace: team-logistics-vcluster
│   └── vCluster (k3s control plane)
│       └── team-logistics has full cluster-admin in vCluster
└── namespace: ci-ephemeral
    └── vCluster (created per CI run, deleted after tests)

APAC development teams access their vCluster with a standard kubeconfig, use kubectl and Helm normally, install operators and CRDs without platform team approval, and have the full Kubernetes experience — while the APAC platform team manages the host cluster and controls what runs on the actual nodes.
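In practice that self-service workflow looks like the sketch below — hypothetical namespace and chart names, assuming the kubeconfig file issued by the APAC platform team:

```shell
# Inside the team's vCluster the team is cluster-admin, so cluster-scoped
# operations that a shared host cluster would forbid just work.
export KUBECONFIG=apac-payments-team-kubeconfig.yaml

# Any namespace name, no platform ticket required
kubectl create namespace payments-staging

# Installing an operator pulls in CRDs -- they land in the vCluster's
# own API server and are invisible to other tenants on the host cluster
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true
```

The same commands on a shared namespace-isolated cluster would either fail RBAC checks or require a platform team approval step.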

Deploying vCluster for APAC teams

# Install the vCluster CLI
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
chmod +x vcluster && mv vcluster /usr/local/bin/

# Create a vCluster for the APAC payments team
vcluster create apac-payments-team \
  --namespace team-payments-vcluster \
  --connect=false \
  --values apac-vcluster-values.yaml

# apac-vcluster-values.yaml -- vCluster values for APAC team isolation
# (pre-0.20 chart schema; vCluster 0.20+ nests host-sync options under
# sync.toHost, so check the values schema for your installed version)
# sync:
#   ingresses:
#     enabled: true     # sync Ingresses to host cluster for APAC routing
#   hoststorageclasses:
#     enabled: true     # APAC teams can use host storage classes
# isolation:
#   enabled: true
#   resourceQuota:
#     enabled: true
#     quota:
#       requests.cpu: "4"
#       requests.memory: "8Gi"
#       pods: "20"

# Give the APAC payments team their kubeconfig
vcluster connect apac-payments-team \
  --namespace team-payments-vcluster \
  --kube-config-context-name apac-payments-team \
  --print > apac-payments-team-kubeconfig.yaml

Ephemeral APAC CI clusters with vCluster

For APAC platform teams using Tekton or Buildkite CI/CD, vCluster enables real Kubernetes integration tests in ephemeral clusters without EKS/GKE provisioning time:

# Buildkite pipeline step: create ephemeral APAC vCluster for integration tests
vcluster create ci-${BUILDKITE_BUILD_ID} \
  --namespace ci-ephemeral \
  --connect=false \
  --wait

# Export APAC test cluster kubeconfig
vcluster connect ci-${BUILDKITE_BUILD_ID} \
  --namespace ci-ephemeral \
  --print > /tmp/ci-kubeconfig.yaml

# Run APAC integration tests against real Kubernetes cluster
KUBECONFIG=/tmp/ci-kubeconfig.yaml helm install apac-app ./charts/apac-app
KUBECONFIG=/tmp/ci-kubeconfig.yaml ./test/integration/run-tests.sh

# Cleanup APAC ephemeral cluster
vcluster delete ci-${BUILDKITE_BUILD_ID} --namespace ci-ephemeral

vCluster creation typically completes in under 30 seconds, giving APAC integration tests a real Kubernetes environment (real API server, real CRD support, real RBAC) without the 8–15 minute provisioning delay of a managed EKS/GKE cluster.
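Wrapped into a Buildkite pipeline, the steps above become a single job definition. A sketch with a hypothetical agent queue name — the trap ensures the ephemeral vCluster is deleted even when tests fail, keeping the ci-ephemeral namespace clean:

```yaml
# .buildkite/pipeline.yml -- ephemeral APAC vCluster per build (sketch)
steps:
  - label: ":kubernetes: APAC integration tests"
    agents:
      queue: apac-k8s-host-cluster    # hypothetical: agents with host-cluster access
    command: |
      # Cleanup runs on any exit, pass or fail
      trap 'vcluster delete "ci-${BUILDKITE_BUILD_ID}" --namespace ci-ephemeral' EXIT

      vcluster create "ci-${BUILDKITE_BUILD_ID}" \
        --namespace ci-ephemeral --connect=false --wait
      vcluster connect "ci-${BUILDKITE_BUILD_ID}" \
        --namespace ci-ephemeral --print > /tmp/ci-kubeconfig.yaml

      KUBECONFIG=/tmp/ci-kubeconfig.yaml helm install apac-app ./charts/apac-app
      KUBECONFIG=/tmp/ci-kubeconfig.yaml ./test/integration/run-tests.sh
```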


External Secrets Operator: APAC Secret Management for GitOps

The APAC GitOps secret problem

GitOps — using Git as the source of truth for APAC Kubernetes configuration — works cleanly for application manifests, Helm chart values, and Kyverno policies. Secrets don't fit the GitOps model: APAC teams cannot commit plaintext credentials to Git, and encrypted secrets (SealedSecrets, git-crypt) create rotation and key management overhead.

External Secrets Operator resolves this by separating the secret reference (in Git) from the secret value (in the external store). APAC teams commit ExternalSecret CRDs to Git specifying which secrets to pull from Vault or AWS Secrets Manager — the actual credential values remain in the approved APAC secret store, not in Git or Kubernetes etcd.

ESO configuration for APAC AWS deployment

For APAC teams running on AWS EKS with IAM Roles for Service Accounts (IRSA):

# ClusterSecretStore: define APAC AWS Secrets Manager backend
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: apac-aws-secrets-manager
spec:
  provider:
    aws:
      service: SecretsManager
      region: ap-southeast-1      # Singapore AWS region
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
            namespace: external-secrets
---
# ExternalSecret: sync APAC database credentials
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: apac-payments-db-credentials
  namespace: apac-payments
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: apac-aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: apac-payments-db-secret    # Kubernetes Secret name
    creationPolicy: Owner
  data:
    - secretKey: DB_HOST
      remoteRef:
        key: apac/payments/database
        property: host
    - secretKey: DB_PASSWORD
      remoteRef:
        key: apac/payments/database
        property: password

APAC application pods reference the created Kubernetes Secret normally — no awareness of the external store:

spec:
  containers:
    - name: apac-payments-api
      envFrom:
        - secretRef:
            name: apac-payments-db-secret

ESO with HashiCorp Vault for APAC on-premise deployments

For APAC platform teams with Vault deployed for secrets management (common in APAC FSI with data sovereignty requirements):

# SecretStore: APAC Vault backend with Kubernetes authentication
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: apac-vault
  namespace: apac-payments
spec:
  provider:
    vault:
      server: https://vault.internal.apac.example.com
      path: secret
      version: v2
      auth:
        kubernetes:
          mountPath: kubernetes
          role: apac-payments-role
          serviceAccountRef:
            name: apac-payments-sa

ESO uses Kubernetes ServiceAccount token exchange with Vault's Kubernetes authentication backend — no long-lived Vault tokens stored in Kubernetes. APAC platform teams configure Vault Kubernetes auth roles that grant specific APAC service namespaces access to specific Vault secret paths, implementing least-privilege APAC secret access.
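The Vault side of that least-privilege mapping is a read-only policy plus a Kubernetes auth role binding it to one ServiceAccount in one namespace. A sketch with hypothetical policy and path names (KV v2 reads go through the secret/data/ prefix):

```shell
# Policy: read-only access to the payments secret subtree
vault policy write apac-payments-read - <<'EOF'
path "secret/data/apac/payments/*" {
  capabilities = ["read"]
}
EOF

# Role: only the apac-payments-sa ServiceAccount in the apac-payments
# namespace can log in, and it receives only the policy above
vault write auth/kubernetes/role/apac-payments-role \
  bound_service_account_names=apac-payments-sa \
  bound_service_account_namespaces=apac-payments \
  policies=apac-payments-read \
  ttl=15m
```

A compromised token from another APAC namespace cannot authenticate against this role, and even the payments ServiceAccount cannot write to or list paths outside its subtree.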

Secret rotation with ESO

When APAC credentials are rotated in AWS Secrets Manager or Vault, ESO's refreshInterval syncs the new value to the Kubernetes Secret within the configured window. APAC workloads that read secrets from environment variables still require a pod restart to pick up the new value — a controller such as Stakater Reloader, which watches Secret changes and triggers rolling restarts, automates this:

# Add Reloader annotation to APAC deployment for auto-restart on secret rotation
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    secret.reloader.stakater.com/reload: "apac-payments-db-secret"

ExternalDNS: APAC Automated DNS Management

Eliminating the APAC DNS ticket queue

Before ExternalDNS, every new APAC service deployment in Kubernetes required a DNS ticket: "create A record api.apac.example.com pointing to the Kubernetes load balancer IP 43.xxx.xxx.xxx." As APAC services scaled from 10 to 100 deployments, these DNS tickets created a platform team bottleneck — and deleted services left stale DNS records pointing at decommissioned APAC load balancer IPs.

ExternalDNS automates this: APAC application teams add a hostname to their Ingress or Service resource, and ExternalDNS creates (and maintains, and deletes) the DNS record automatically.

ExternalDNS with Cloudflare for APAC

Many APAC enterprises use Cloudflare for DNS and CDN (Cloudflare has a strong presence in Singapore, Hong Kong, and Tokyo). ExternalDNS supports Cloudflare natively:

# ExternalDNS deployment for APAC Cloudflare DNS automation
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.14.1
          args:
            - --source=ingress
            - --source=service
            - --domain-filter=apac.example.com
            - --provider=cloudflare
            - --cloudflare-proxied          # enable Cloudflare proxy for APAC
            - --txt-owner-id=apac-cluster-01
          env:
            - name: CF_API_TOKEN
              valueFrom:
                secretKeyRef:
                  name: cloudflare-api-token
                  key: token

APAC application teams deploy Ingresses with standard hostname annotations:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apac-payments-api-ingress
  annotations:
    # ExternalDNS: create DNS record for this hostname
    external-dns.alpha.kubernetes.io/hostname: api-payments.apac.example.com
    external-dns.alpha.kubernetes.io/ttl: "60"
spec:
  rules:
    - host: api-payments.apac.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apac-payments-api-service
                port:
                  number: 80

ExternalDNS creates the DNS A record automatically on its next reconciliation loop (the default --interval is one minute). When the Ingress is deleted, ExternalDNS removes the record as well — the default --policy=sync permits deletions, so no stale DNS orphans accumulate.
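The --txt-owner-id flag from the Deployment above is what makes deletion safe: ExternalDNS pairs every record it manages with a TXT "registry" record marking ownership, and only modifies or deletes records whose TXT entry matches its own owner ID. The resulting zone entries look roughly like this (illustrative IP from the documentation range; exact TXT content varies by version):

```
api-payments.apac.example.com.  60  IN  A    203.0.113.10
api-payments.apac.example.com.  60  IN  TXT  "heritage=external-dns,external-dns/owner=apac-cluster-01"
```

Records created manually or by another cluster's ExternalDNS instance lack a matching TXT entry and are left untouched.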

ExternalDNS for APAC multi-cluster environments

APAC enterprises operating Kubernetes clusters in multiple regions (Singapore, Tokyo, Seoul) use ExternalDNS with weighted Route53 records for global load balancing. Each regional APAC cluster runs ExternalDNS with a unique --txt-owner-id:

# Singapore APAC cluster ExternalDNS
--txt-owner-id=apac-sg-cluster-01
--provider=aws

# Tokyo APAC cluster ExternalDNS
--txt-owner-id=apac-jp-cluster-01
--provider=aws

Route53 weighted routing policies are configured separately (via Terraform or AWS CDK), but ExternalDNS maintains the A record values for each APAC regional cluster's load balancer IPs as they change (during cluster upgrades or load balancer replacements) without manual DNS updates.
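Where the weights themselves should live in Kubernetes rather than Terraform, ExternalDNS's AWS provider also accepts per-resource routing annotations — check that your ExternalDNS version supports them. A sketch with illustrative weights and the set-identifier matching the Singapore cluster:

```yaml
# Singapore cluster Ingress: receives 70% of weighted traffic (metadata sketch)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apac-payments-api-ingress
  annotations:
    external-dns.alpha.kubernetes.io/hostname: api-payments.apac.example.com
    external-dns.alpha.kubernetes.io/set-identifier: apac-sg-cluster-01
    external-dns.alpha.kubernetes.io/aws-weight: "70"
```

The Tokyo cluster publishes the same hostname with its own set-identifier and the complementary weight, and Route53 splits traffic accordingly.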


The Complete APAC Platform Engineering Essential Stack

vCluster, External Secrets Operator, and ExternalDNS form a cohesive foundation layer for APAC internal developer platforms:

APAC developer team workflow (self-service, no platform team tickets):

1. vCluster create → isolated Kubernetes environment with cluster-admin
   - team can install operators, CRDs, create any namespace
   - team uses their own kubeconfig context

2. ExternalSecret CRD → secrets synced from Vault/AWS SM to Kubernetes Secret
   - team commits ExternalSecret to Git (no credentials in Git)
   - ESO syncs values automatically on deployment

3. Ingress hostname annotation → DNS record created automatically
   - team adds hostname to Ingress
   - ExternalDNS creates the Cloudflare/Route53 record on its next sync loop

The common thread: APAC development teams self-service what they need (cluster environments, secrets, DNS records) through standard Kubernetes resource annotations and CRDs — reducing APAC platform team involvement from daily ticket handling to weekly policy management.


APAC Compliance Integration

For APAC regulated industries (FSI, healthcare), all three tools have specific compliance implications:

vCluster: APAC regulated workloads in isolated vClusters maintain Kubernetes audit log separation — each vCluster's API server has its own audit log stream, enabling APAC compliance teams to produce audit evidence for individual APAC tenant workloads without filtering shared cluster audit logs.

External Secrets Operator: ESO's ExternalSecret resource audit trail — Kubernetes events on ExternalSecret creation, sync failures, and rotation — combined with AWS CloudTrail or Vault audit logs creates a complete APAC secret access audit trail from external store to Kubernetes Secret, satisfying APAC FSI secret access audit requirements.

ExternalDNS: ExternalDNS provides DNS change audit via the configured provider's change log (Route53 CloudTrail events, Cloudflare Audit Log). APAC platform teams can demonstrate that DNS changes are exclusively driven by Kubernetes Ingress/Service changes — no manual DNS modifications bypass APAC change management processes.


Related APAC Kubernetes Platform Engineering Resources

For the security policies that govern what runs in vCluster and host clusters, see the APAC Kubernetes DevSecOps guide covering Kyverno, Cosign, and Kubescape.

For the CI/CD tooling that uses vCluster ephemeral environments for APAC integration testing, see the APAC CI/CD platform engineering guide covering Tekton, Buildkite, and Gradle.

For the Infrastructure as Code layer that provisions the host Kubernetes clusters where vCluster, ESO, and ExternalDNS are deployed, see the APAC Infrastructure as Code guide covering OpenTofu, Ansible, and AWS CDK.
