Why eBPF Is Changing APAC Kubernetes Observability
Traditional Kubernetes observability requires code changes: add the OpenTelemetry SDK, configure exporters, redeploy. For APAC teams running dozens of microservices in multiple languages, instrumentation becomes a multi-quarter project. eBPF (extended Berkeley Packet Filter) changes this: probes run in the Linux kernel and capture application behavior from outside the process — HTTP requests, database queries, network flows — without touching application code. The promise is instant, deep observability without instrumentation debt.
Three tools cover the APAC eBPF observability spectrum:
Hubble — Cilium's eBPF network observability layer with real-time service dependency maps and network flow inspection for APAC Kubernetes clusters.
Pixie — CNCF sandbox auto-instrumentation platform that collects application traces, SQL queries, and logs via eBPF without code changes across APAC Kubernetes workloads.
groundcover — eBPF-native APM combining auto-collected traces, metrics, and logs with Kubernetes infrastructure correlation in a single APAC observability platform.
How eBPF Observability Works in APAC Kubernetes
Traditional APAC observability:
Application code → OTel SDK → sidecars/agents → external backend
Requires: SDK integration per language, sidecar injection, application restart
eBPF APAC observability:
Linux kernel eBPF probe → intercepts syscalls/network events
→ captures HTTP headers, SQL text, network flows
→ no application code changes, no restarts
What eBPF can capture from APAC Kubernetes workloads:
✓ HTTP/gRPC request + response headers (L7 protocol parsing)
✓ SQL query text + execution time (PostgreSQL/MySQL protocol parsing)
✓ DNS queries and responses (including resolution time)
✓ TCP connection establishment and teardown (L3/L4 flows)
✓ Process CPU and memory usage (per-container granularity)
✓ File I/O patterns (read/write per process)
What eBPF CANNOT capture from APAC workloads:
✗ Application-level business context (user ID, order ID, tenant ID)
✗ Custom span attributes (OTel SDK still needed for business events)
✗ Encrypted payload content (TLS termination inside process)
✗ Application-specific metrics beyond protocol patterns
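The L7 visibility above comes from protocol parsing: the probe captures raw socket buffers, and a parser reconstructs requests from the bytes. A minimal Python sketch of that parsing step (a toy illustration of the technique, not any tool's actual parser):

```python
def parse_http_request(raw: bytes) -> dict:
    """Toy L7 parser: recover method, path, and headers from captured bytes,
    the way eBPF observability tools reconstruct HTTP requests from socket data."""
    head, _, _body = raw.partition(b"\r\n\r\n")
    lines = head.decode("latin-1").split("\r\n")
    method, path, version = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return {"method": method, "path": path, "version": version, "headers": headers}

# Bytes as they might be captured from a socket buffer (hypothetical request)
captured = (
    b"GET /api/orders?region=sg HTTP/1.1\r\n"
    b"Host: apac-order-service\r\n"
    b"User-Agent: curl/8.0\r\n"
    b"\r\n"
)
print(parse_http_request(captured)["path"])  # → /api/orders?region=sg
```

Note this only works on plaintext buffers: if TLS is terminated inside the process, the socket-level bytes are ciphertext, which is exactly the encrypted-payload limitation listed above.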
Hubble: APAC Cilium Network Flow Visibility
Hubble APAC installation (with Cilium)
# APAC: Install Hubble alongside Cilium CNI
# Assuming Cilium is already installed via Helm
helm upgrade cilium cilium/cilium \
--namespace kube-system \
--reuse-values \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true
# APAC: Hubble Relay aggregates flows from all APAC nodes
# APAC: Hubble UI provides service dependency map dashboard
# APAC: Install Hubble CLI
HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --fail --remote-name-all \
"https://github.com/cilium/hubble/releases/download/${HUBBLE_VERSION}/hubble-linux-amd64.tar.gz"
tar xzvf hubble-linux-amd64.tar.gz
chmod +x hubble && sudo mv hubble /usr/local/bin/hubble
# APAC: Verify Hubble Relay is reachable
hubble status
Hubble APAC network flow inspection
# APAC: Inspect live network flows from Hubble CLI
# APAC: Show all flows to/from the apac-payments service
hubble observe \
--namespace apac-payments \
--follow \
--output json | jq '.flow | {src: .source.pod_name, dst: .destination.pod_name, verdict: .verdict, proto: .l4}'
# APAC: Find all DNS queries from apac-order-service
hubble observe \
--from-pod apac-production/apac-order-service \
--protocol DNS \
--follow
# APAC: Show dropped flows (network policy blocks)
hubble observe \
--verdict DROPPED \
--namespace apac-production \
--last 100
# APAC: Output:
# DROPPED: apac-production/apac-frontend → apac-production/apac-database
# (port 5432) — CiliumNetworkPolicy apac-db-policy blocking frontend
# APAC: Reveals misconfigured APAC network policy without test traffic
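The JSON stream from `hubble observe --output json` is easy to post-process beyond jq. A small sketch that tallies dropped flows per source pod (assuming the simplified flow shape used in the jq example above; field names in real Hubble output may vary by version):

```python
import json
from collections import Counter

def dropped_by_source(json_lines):
    """Count DROPPED flows per source pod from Hubble JSON output lines."""
    drops = Counter()
    for line in json_lines:
        flow = json.loads(line).get("flow", {})
        if flow.get("verdict") == "DROPPED":
            drops[flow.get("source", {}).get("pod_name", "unknown")] += 1
    return drops

# Sample lines standing in for `hubble observe --output json` output
sample = [
    '{"flow": {"verdict": "DROPPED", "source": {"pod_name": "apac-frontend"}}}',
    '{"flow": {"verdict": "FORWARDED", "source": {"pod_name": "apac-frontend"}}}',
    '{"flow": {"verdict": "DROPPED", "source": {"pod_name": "apac-frontend"}}}',
]
print(dropped_by_source(sample))  # → Counter({'apac-frontend': 2})
```

A spike in this counter for one pod is a quick signal that a network policy change regressed that workload.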
Pixie: APAC Zero-Code Auto-Instrumentation
Pixie APAC deployment
# APAC: Authenticate with Pixie Cloud, then deploy to the cluster
px auth login
px deploy
# APAC: Requires Kubernetes 1.21+, Linux kernel 4.14+, and cluster admin access
# APAC: Deploys as DaemonSet — one Pixie collector per APAC node
# APAC: Ready in ~2 minutes after deploy
# APAC: Verify all nodes have Pixie running
px get viziers
# NAME CLUSTER STATUS AGE
# apac-prod-cluster asia-northeast1 Healthy 2m
Pixie APAC PxL script — slowest SQL queries
# APAC: PxL script — find slowest PostgreSQL queries across APAC cluster
import px
# APAC: Query Pixie's in-cluster DB query data (last 5 minutes)
df = px.DataFrame(table='pgsql_events', start_time='-5m')
# APAC: Filter to production namespace
df = df[df.ctx['namespace'] == 'apac-production']
# APAC: Add service label from pod context
df.service = df.ctx['service']
# APAC: Calculate query stats (px.quantiles aggregates latency percentiles)
df = df.groupby(['service', 'req_body']).agg(
    latency_quantiles=('latency', px.quantiles),
    count=('latency', px.count),
)
df.latency_p99 = px.pluck_float64(df.latency_quantiles, 'p99')
# APAC: Filter to slow queries (>100ms p99; latency is recorded in nanoseconds)
df = df[df.latency_p99 > 100 * 1000 * 1000]
df = df.sort('latency_p99', ascending=False)
df = df.head(20)
px.display(df, 'APAC Slowest SQL Queries')
# → Shows top 20 slow APAC SQL queries without any application instrumentation
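The same p99 ranking can be sanity-checked offline. A stdlib-only Python sketch that mirrors the aggregation logic over sample query records (hypothetical data, nearest-rank percentile for simplicity):

```python
from collections import defaultdict

def p99(values):
    """Nearest-rank 99th percentile of a list of numbers."""
    s = sorted(values)
    return s[min(len(s) - 1, int(0.99 * len(s)))]

def slowest_queries(events, threshold_ms=100):
    """Group latencies (ms) by query text; return queries whose p99 exceeds
    threshold, slowest first — the shape of the PxL script's result."""
    by_query = defaultdict(list)
    for e in events:
        by_query[e["query"]].append(e["latency_ms"])
    stats = {q: p99(v) for q, v in by_query.items()}
    return sorted(
        ((q, p) for q, p in stats.items() if p > threshold_ms),
        key=lambda kv: kv[1],
        reverse=True,
    )

events = [
    {"query": "SELECT * FROM orders", "latency_ms": 250},
    {"query": "SELECT * FROM orders", "latency_ms": 40},
    {"query": "SELECT 1", "latency_ms": 2},
]
print(slowest_queries(events))  # → [('SELECT * FROM orders', 250)]
```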
Pixie APAC HTTP request breakdown
# APAC: PxL script — HTTP error rate by APAC service and endpoint
import px
df = px.DataFrame(table='http_events', start_time='-10m')
df = df[df.ctx['namespace'] == 'apac-production']
df.service = df.ctx['service']
# APAC: Flag 5xx responses, then group by service and endpoint
df.is_error = df.resp_status >= 500
df = df.groupby(['service', 'req_path']).agg(
    total=('latency', px.count),
    errors=('is_error', px.sum),
    latency_quantiles=('latency', px.quantiles),
)
df.latency_p99 = px.pluck_float64(df.latency_quantiles, 'p99')
df.error_rate = df.errors / df.total
# APAC: Show endpoints with >5% error rate
df = df[df.error_rate > 0.05]
px.display(df, 'APAC High Error Rate Endpoints')
# → HTTP error analysis from eBPF — no OTel SDK required
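The grouping logic itself is simple enough to validate offline. A stdlib-only Python sketch of the error-rate computation over sample HTTP event records (hypothetical records, illustrating the aggregation only):

```python
from collections import defaultdict

def error_rates(events, threshold=0.05):
    """Group HTTP events by (service, path); return endpoints whose
    5xx rate exceeds the threshold."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for e in events:
        key = (e["service"], e["req_path"])
        totals[key] += 1
        if e["resp_status"] >= 500:
            errors[key] += 1
    return {
        k: errors[k] / totals[k]
        for k in totals
        if errors[k] / totals[k] > threshold
    }

events = [
    {"service": "apac-orders", "req_path": "/checkout", "resp_status": 200},
    {"service": "apac-orders", "req_path": "/checkout", "resp_status": 503},
    {"service": "apac-orders", "req_path": "/health", "resp_status": 200},
]
print(error_rates(events))  # → {('apac-orders', '/checkout'): 0.5}
```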
groundcover: APAC Correlated APM
groundcover APAC deployment
# APAC: Deploy groundcover to Kubernetes cluster via Helm
helm repo add groundcover https://helm.groundcover.com
helm repo update
helm install groundcover groundcover/groundcover \
--namespace groundcover \
--create-namespace \
--set global.groundcoverToken="APAC_GC_TOKEN" \
--set global.clusterId="apac-production"
# APAC: groundcover deploys:
# - DaemonSet: eBPF collector on each APAC node
# - Deployment: in-cluster storage (ClickHouse + object storage)
# - Service: APAC Sensor API for OTel ingest
groundcover APAC OpenTelemetry enrichment
# APAC: Enrich eBPF auto-traces with custom business context via OTel SDK
# eBPF captures HTTP traces; OTel SDK adds APAC business attributes
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
# APAC: Send custom spans to groundcover's OTel endpoint
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="http://groundcover-sensor.groundcover:4317"  # APAC in-cluster
        )
    )
)
trace.set_tracer_provider(provider)

apac_tracer = trace.get_tracer("apac-order-service")

def process_apac_order(order_id: str, customer_id: str):
    with apac_tracer.start_as_current_span("process_apac_order") as span:
        span.set_attribute("apac.order_id", order_id)
        span.set_attribute("apac.customer_id", customer_id)
        span.set_attribute("apac.region", "sg")
        # groundcover correlates this custom span with the eBPF-captured
        # HTTP and SQL traces for the same request — unified trace view
        ...
APAC eBPF Observability Tool Selection
APAC Need → Tool → Why
Cilium network debugging (network policy, DNS, flows) → Hubble → built into Cilium; zero extra agents; L7 flow search
Instant K8s observability (no instrumentation, fast) → Pixie → zero code changes; SQL/HTTP auto-capture; running in minutes
Datadog-like platform (APM + infra, lower cost) → groundcover → APM + infra correlated; OTel compatible; in-cluster storage
Full-stack OTel platform (manual instrumentation acceptable) → Grafana LGTM → mature; scalable; self-hosted option
Managed APM (no ops burden) → Datadog / New Relic → mature; APAC support; higher cost per host
Related APAC Observability Resources
For the tracing tools (Jaeger, OpenTelemetry, SigNoz) that receive APAC traces from both eBPF auto-instrumentation and manual OTel SDK instrumentation, see the APAC distributed tracing guide.
For the Cilium CNI that Hubble extends with network observability, see the APAC Kubernetes networking guide.
For the continuous profiling tools (Pyroscope, Parca) that use eBPF for CPU profiling alongside Pixie's trace collection, see the APAC continuous profiling guide.