The Kubernetes Networking Decision in APAC Platform Engineering
APAC platform engineering teams setting up Kubernetes clusters face an early decision that significantly impacts cluster architecture, network performance, observability, and compliance: which CNI (Container Network Interface) plugin to use. The CNI choice determines how pod-to-pod traffic is routed, how network policies are enforced, what debugging tools are available, and how the cluster integrates with APAC datacenter networking infrastructure.
The three most production-relevant Kubernetes CNI plugins for APAC enterprise teams occupy distinct niches:
Calico — the most widely deployed Kubernetes CNI, used for its mature BGP routing, comprehensive NetworkPolicy support, and WireGuard inter-node encryption for APAC compliance requirements.
Antrea — OVS-based CNI with tiered ClusterNetworkPolicy for APAC multi-tenant clusters and native VMware NSX integration for APAC enterprise network management.
Cilium — eBPF-native CNI delivering high-performance networking, transparent service mesh (Cilium Service Mesh), and Hubble network observability without a sidecar proxy.
APAC Kubernetes Networking Fundamentals
CNI plugin responsibilities
A Kubernetes CNI plugin handles:
1. APAC Pod IP assignment
→ Each APAC pod gets a routable IP from the pod CIDR
→ Calico/Antrea/Cilium manage the APAC IP address pool
2. APAC Pod-to-pod routing
→ How does pod-A on node-1 reach pod-B on node-2?
→ Options: overlay (VXLAN/IPIP/Geneve) or native routing (BGP)
3. APAC Network policy enforcement
→ Which pods can talk to which pods?
→ CNI implements NetworkPolicy API + optional extensions
4. APAC Service proxy (kube-proxy or eBPF)
→ How do ClusterIP services route to backend pods?
→ Cilium can replace kube-proxy with eBPF for APAC performance
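Whichever plugin a cluster runs, it shows up as a DaemonSet in kube-system and drops a config file on every node. A quick hedged check (pod label names assume each project's default manifests):

```shell
# APAC: Identify which CNI is installed (label/pod names per default manifests)
kubectl get pods -n kube-system -o name | grep -E 'calico-node|antrea-agent|cilium'

# APAC: Inspect the CNI config the kubelet actually loads on a node
# (run on the node itself; lowest-numbered file wins)
ls /etc/cni/net.d/
```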
APAC CNI selection matrix
APAC Requirement → Calico → Antrea → Cilium
--------------------------------------------------------------------
Widest APAC production adoption → ★★★★★ → ★★★ → ★★★★
APAC BGP datacenter integration → ★★★★★ → ★★★ → ★★★★
APAC VMware / NSX integration → ★★★ → ★★★★★ → ★★
APAC eBPF high-performance → ★★★ → ★★ → ★★★★★
APAC service mesh (no sidecar) → ★★ → ★★ → ★★★★★
APAC network observability → ★★★ → ★★★★ → ★★★★★
APAC multi-tenant policy tiers → ★★★ → ★★★★★ → ★★★★
APAC WireGuard encryption → ★★★★★ → ★★ → ★★★★
APAC community / APAC ecosystem → ★★★★★ → ★★★ → ★★★★★
Calico: APAC Production-Grade Network Policy
Calico NetworkPolicy — APAC namespace isolation
# APAC: Allow apac-payment-api to reach apac-database only
# Block all other APAC ingress to apac-database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: apac-database-isolation
  namespace: apac-production
spec:
  podSelector:
    matchLabels:
      app: apac-database
      tier: data
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # APAC: Allow only payment API → database connections
  - from:
    - podSelector:
        matchLabels:
          app: apac-payment-api
    ports:
    - protocol: TCP
      port: 5432
  # APAC: Allow monitoring from APAC Prometheus namespace
  - from:
    - namespaceSelector:
        matchLabels:
          apac-purpose: monitoring
    ports:
    - protocol: TCP
      port: 9187  # APAC postgres_exporter
  # APAC: Database only needs to respond (deny all outbound;
  # replies on established connections are still allowed)
  egress: []
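A policy like this is worth probing from a pod that is not in the allowlist; the connection should hang rather than connect. A hedged sketch (the `apac-database` Service name and filename are assumptions from the example above):

```shell
# APAC: Apply the policy, then probe from a pod the policy does NOT allow
kubectl apply -f apac-database-isolation.yaml   # filename is illustrative
kubectl run apac-netpol-test --rm -it --restart=Never \
  --namespace apac-production --image=busybox:1.36 -- \
  nc -zv -w 3 apac-database 5432
# A timeout here means the isolation works — only apac-payment-api pods get through
```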
Calico GlobalNetworkPolicy — APAC cluster-wide baseline
# APAC: Calico GlobalNetworkPolicy (cluster-scoped, not namespace-scoped)
# Deny all pod-to-pod traffic by default, then allow explicitly
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: apac-default-deny
spec:
  # APAC: Apply to all pods labeled apac-env=production
  selector: "apac-env == 'production'"
  order: 1000  # APAC: higher order number = lower precedence (other policies win)
  types:
  - Ingress
  - Egress
  ingress: []  # APAC: deny all ingress by default
  egress:
  # APAC: Allow DNS resolution for all selected pods
  - action: Allow
    protocol: UDP
    destination:
      ports: [53]
  # APAC: Allow all selected pods to reach kube-apiserver
  - action: Allow
    destination:
      services:
        name: kubernetes
        namespace: default
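GlobalNetworkPolicy is a projectcalico.org/v3 resource, so it is not served by the core Kubernetes API by default. It is applied with calicoctl, or with plain kubectl once the Calico API server is installed (filename below is illustrative):

```shell
# APAC: projectcalico.org/v3 resources need calicoctl
# (or the optional Calico API server, which lets kubectl handle them directly)
calicoctl apply -f apac-default-deny.yaml
calicoctl get globalnetworkpolicy apac-default-deny -o yaml
```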
Calico WireGuard — APAC inter-node encryption
# APAC: Enable WireGuard encryption for all inter-node pod traffic
# (nodes need WireGuard kernel support; Linux 5.6+ ships it in-tree)
kubectl patch felixconfiguration default \
--type=merge \
--patch='{"spec": {"wireguardEnabled": true}}'
# APAC verify WireGuard is active on nodes
kubectl get nodes -o wide \
--output=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.projectcalico\.org/WireguardPublicKey}{"\n"}{end}'
# APAC output (WireGuard public key present = encryption active):
# apac-node-1 Kx8bJ2Mx+... (WireGuard active)
# apac-node-2 7pQ9cN3Yx+... (WireGuard active)
# apac-node-3 Lm4aR8Kz+... (WireGuard active)
# APAC: All inter-node pod traffic now encrypted transparently
# No APAC application code changes required
# APAC compliance: data-in-transit encryption for financial/healthcare workloads
Antrea: APAC Tiered Network Policy for Multi-Tenant Clusters
Antrea ClusterNetworkPolicy — APAC platform governance tier
# APAC: Platform-level ClusterNetworkPolicy (admin-controlled, developers cannot override)
# Tier: apac-platform — non-baseline Antrea tiers are evaluated before
# developer namespace NetworkPolicies
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: apac-platform-security-baseline
spec:
  tier: apac-platform
  priority: 1000  # APAC: lower number = higher precedence within the tier
  appliedTo:
  # APAC: Baseline applies to all production namespaces
  - namespaceSelector:
      matchLabels:
        apac-env: production
  ingress:
  # APAC: Allow only APAC ingress controllers to receive external traffic
  - action: Allow
    from:
    - namespaceSelector:
        matchLabels:
          apac-role: ingress-controller
    ports:
    - protocol: TCP
      port: 8080
  # APAC: Allow APAC internal networks
  - action: Allow
    from:
    - ipBlock:
        cidr: 10.0.0.0/8
    - ipBlock:
        cidr: 172.16.0.0/12
  # APAC: Block all other direct external access to production pods
  # (rules are evaluated in order — the Allow rules above take precedence)
  - action: Drop
    from:
    - ipBlock:
        cidr: 0.0.0.0/0
  egress:
  # APAC: Block production pods from reaching APAC dev/staging namespaces
  - action: Drop
    to:
    - namespaceSelector:
        matchLabels:
          apac-env: dev
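The apac-platform tier referenced above is not an Antrea built-in; an admin creates it first with a Tier resource. A minimal sketch — the tier name and priority value are this article's assumptions:

```yaml
# APAC: Custom Antrea policy tier (must exist before policies reference it)
apiVersion: crd.antrea.io/v1alpha1
kind: Tier
metadata:
  name: apac-platform
spec:
  # APAC: lower tier priority = evaluated before higher-numbered tiers
  priority: 100
  description: "APAC platform governance tier (admin-managed)"
```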
Antrea Traceflow — APAC network policy debugging
# APAC: Interactive packet trace to debug network policy
# Is APAC payment-api allowed to reach APAC database on port 5432?
kubectl apply -f - <<EOF
apiVersion: crd.antrea.io/v1alpha1
kind: Traceflow
metadata:
  name: apac-payment-to-db-trace
spec:
  source:
    namespace: apac-production
    pod: apac-payment-api-7f4b9c-xk2jp
  destination:
    namespace: apac-production
    pod: apac-database-0
  liveTraffic: false
  packet:
    ipHeader:
      protocol: 6  # APAC: TCP
    transportHeader:
      tcp:
        dstPort: 5432
EOF
kubectl get traceflow apac-payment-to-db-trace -o yaml
# APAC Output:
# status:
#   phase: Succeeded
#   results:
#   - node: apac-node-1
#     observations:
#     - component: Forwarding
#       action: Forwarded      ← APAC pod egress: allowed
#   - node: apac-node-2
#     observations:
#     - component: NetworkPolicy
#       action: Forwarded      ← APAC NetworkPolicy: allowed
#     - component: Forwarding
#       action: Delivered      ← APAC packet delivered to database pod
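The same trace can also be run ad hoc with antctl instead of applying a CRD; a hedged one-liner (flag syntax per the Antrea antctl docs, pod names carried over from the example above):

```shell
# APAC: Ad-hoc traceflow via antctl — source and destination as namespace/pod
antctl traceflow -S apac-production/apac-payment-api-7f4b9c-xk2jp \
  -D apac-production/apac-database-0 -f tcp,tcp_dst=5432
```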
Cilium: APAC eBPF-Native High-Performance Networking
Cilium NetworkPolicy — APAC L7 HTTP-aware policy
# APAC: Cilium CiliumNetworkPolicy — Layer 7 HTTP path-based policy
# Standard K8s NetworkPolicy only controls L3/L4 (IP + port)
# Cilium extends to L7 HTTP — control by path and method
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: apac-api-gateway-l7
  namespace: apac-production
spec:
  endpointSelector:
    matchLabels:
      app: apac-payment-service
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: apac-api-gateway
    # APAC: Only allow specific APAC HTTP paths from gateway
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "^/apac/payments/[0-9]+$"  # APAC: payment lookup only
        - method: "POST"
          path: "^/apac/payments$"         # APAC: payment creation
        # Blocked implicitly: PUT, DELETE, admin endpoints, internal paths
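With L7 policy, Cilium's embedded proxy answers disallowed requests with an HTTP 403 rather than silently dropping packets, which makes verification easy. A hedged probe (pod and service names carried over from the example above):

```shell
# APAC: A method outside the allowlist should get 403 from Cilium's proxy
kubectl exec -n apac-production deploy/apac-api-gateway -- \
  curl -s -o /dev/null -w "%{http_code}\n" \
  -X DELETE http://apac-payment-service:8080/apac/payments/42
# A 403 here confirms the L7 allowlist is enforced; GET on the same path succeeds
```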
Hubble — APAC Cilium network observability
# APAC: Hubble real-time network flow observability (no sidecar required)
# Install the Hubble CLI (also available from the Hubble GitHub releases page)
brew install hubble
# APAC: Observe all pod-to-pod flows in apac-production namespace
hubble observe \
--namespace apac-production \
--follow \
--output jsonpb
# APAC sample output (abridged flow record):
# {
#   "time": "2026-04-24T08:00:00Z",
#   "verdict": "FORWARDED",
#   "source": {"namespace": "apac-production", "pod_name": "apac-payment-api-7f4"},
#   "destination": {"namespace": "apac-production", "pod_name": "apac-database-0"},
#   "l4": {"TCP": {"source_port": 49201, "destination_port": 5432}},
#   "type": "L3_L4"
# }
# APAC: Find dropped flows (NetworkPolicy violations)
hubble observe \
--namespace apac-production \
--verdict DROPPED \
--since 1h
# APAC: Identify which APAC pods are trying to reach blocked destinations
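Hubble is bundled with Cilium but disabled by default; the cilium CLI turns on the relay and the flow-map UI:

```shell
# APAC: Enable Hubble (relay + UI) in an existing Cilium install
cilium hubble enable --ui
cilium hubble ui     # port-forwards the Hubble UI to localhost
hubble status        # confirm the Hubble relay is reachable
```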
APAC Kubernetes CNI Tool Selection
APAC Kubernetes Networking Need → Tool → Why
APAC production default / broadest      → Calico     → Widest APAC adoption;
(APAC general enterprise clusters)                     BGP routing; WireGuard;
                                                       extensive APAC community
APAC VMware / NSX-T environments        → Antrea     → Native NSX integration;
(APAC enterprise VMware datacenter)                    tiered ClusterNetworkPolicy;
                                                       OVS APAC enterprise features
APAC eBPF high-performance              → Cilium     → kube-proxy replacement;
(APAC large-scale / high-throughput)                   L7 policy; Hubble APAC
                                                       observability; service mesh
APAC multiple interfaces per pod        → Multus CNI → APAC telco/ML workloads;
(APAC 5G NFV, GPU RDMA networking)                     SR-IOV; DPDK; secondary
                                                       APAC network interfaces
APAC managed cloud clusters             → Cloud CNI  → EKS (VPC CNI); GKE
(AWS EKS / GKE / AKS)                                  (Dataplane V2 = Cilium);
                                                       AKS (Azure CNI)
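For the Multus row above: secondary interfaces are declared as NetworkAttachmentDefinition resources that pods then request by annotation. A minimal sketch — the macvlan master interface and subnet are illustrative assumptions:

```yaml
# APAC: Multus secondary network (sketch; eth1 and the subnet are assumptions)
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: apac-macvlan-net
  namespace: apac-production
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": { "type": "host-local", "subnet": "192.168.10.0/24" }
    }
```

Pods attach the extra interface with the annotation `k8s.v1.cni.cncf.io/networks: apac-macvlan-net`, while the primary CNI (Calico, Antrea, or Cilium) still provides eth0.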
Related APAC Platform Engineering Resources
For Kubernetes policy-as-code tools (Conftest, Gatekeeper, Polaris) that enforce infrastructure compliance for the workloads these CNI plugins route traffic between, see the APAC Kubernetes policy as code guide.
For Kubernetes runtime security tools (OPA, Falco, KEDA) that complement CNI network policy with behavioral anomaly detection and workload scaling, see the APAC Kubernetes runtime security guide.
For the service mesh tools (Istio, Linkerd, Envoy) that operate above the CNI layer to provide mTLS, traffic management, and observability for APAC microservices, see the APAC service mesh guide.