Why APAC Teams Are Replacing ELK with Modern Log Stacks
The Elasticsearch-Logstash-Kibana (ELK) stack has dominated APAC log management for a decade, but engineering teams face compounding costs: Elasticsearch requires SSD storage (roughly $0.10-0.15/GB), demands significant cluster-management overhead, and scales poorly at multi-TB/month log volumes. Modern log stacks replace ELK with object-storage-backed platforms (OpenObserve, Parseable) fed by OTel-native agents (Grafana Alloy), cutting log storage costs by a factor that vendors claim can reach 140x, driven by columnar compression on top of cheap object storage.
Three tools cover the modern APAC log management stack:
Grafana Alloy — OTel-native telemetry collector, the successor to Grafana Agent, for Kubernetes log, metric, and trace collection.
OpenObserve — Rust-native, Elasticsearch-compatible platform for logs, metrics, and traces, with storage costs the project claims are up to 140x lower.
Parseable — lightweight, Rust-native, Parquet-based log store for edge and resource-constrained environments.
APAC Log Stack Architecture
Traditional APAC ELK Stack:
APAC Apps → Logstash → Elasticsearch (SSD) → Kibana
Cost: ~$0.10-0.15/GB SSD storage
Ops: cluster management, index lifecycle, APAC shard tuning
Modern APAC Log Stack:
APAC Apps → Grafana Alloy (OTel) → OpenObserve (S3) → Grafana
Cost: ~$0.023/GB S3 storage (roughly 5x cheaper per GB; the larger headline savings come from columnar compression)
Ops: DaemonSet + object storage + single binary dashboard
APAC Use Case Sizing:
<10GB/day APAC logs → Parseable (single binary, edge/regional)
10-500GB/day → OpenObserve (distributed, object storage)
Any APAC volume → Grafana Alloy (collection layer, any backend)
Need APAC ML/full-text → Elasticsearch (still valid for advanced search)
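The sizing thresholds above follow directly from storage economics. A back-of-envelope sketch in Python makes the gap concrete; the prices, replica overhead, and compression ratio here are illustrative assumptions, not benchmarks:

```python
# Illustrative monthly storage cost comparison: ELK on SSD vs. Parquet on S3.
# All constants are assumptions for the sketch; plug in your own figures.
SSD_PER_GB = 0.125        # midpoint of the $0.10-0.15/GB Elasticsearch SSD figure
S3_PER_GB = 0.023         # S3 Standard ballpark, ap-southeast-1
ES_OVERHEAD = 1.5         # assumed replica + index overhead multiplier
PARQUET_COMPRESSION = 10  # assumed columnar compression ratio

def monthly_cost(ingest_gb_per_day: float, retention_days: int = 30):
    """Return (elk_cost, s3_cost) in USD for the given ingest rate."""
    stored_gb = ingest_gb_per_day * retention_days
    elk = stored_gb * ES_OVERHEAD * SSD_PER_GB
    s3 = stored_gb / PARQUET_COMPRESSION * S3_PER_GB
    return round(elk, 2), round(s3, 2)

print(monthly_cost(100))  # 100 GB/day, 30-day retention → (562.5, 6.9)
```

At 100 GB/day this works out to roughly an 80x gap; more aggressive compression ratios are how claims in the 140x range arise.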
Grafana Alloy: APAC OTel Collection
Grafana Alloy APAC Kubernetes DaemonSet
# APAC: Grafana Alloy — Kubernetes DaemonSet with Helm
# helm install alloy grafana/alloy -f apac-alloy-values.yaml
# apac-alloy-values.yaml
alloy:
  configMap:
    content: |
      // Collect Kubernetes pod logs via the Loki source
      loki.source.kubernetes "apac_pods" {
        targets    = discovery.kubernetes.apac_pods.targets
        forward_to = [loki.write.apac_backend.receiver]
      }

      // Discover all pods in the cluster
      discovery.kubernetes "apac_pods" {
        role = "pod"
        namespaces {
          own_namespace = false
        }
      }

      // Write logs to OpenObserve (Loki-compatible push API)
      loki.write "apac_backend" {
        endpoint {
          url = "http://openobserve.monitoring:5080/api/apac_org/loki/api/v1/push"
          basic_auth {
            username = env("APAC_OO_USER")
            password = env("APAC_OO_PASS")
          }
        }
      }

      // Scrape Prometheus metrics from the discovered pods
      prometheus.scrape "apac_pods" {
        targets    = discovery.kubernetes.apac_pods.targets
        forward_to = [prometheus.remote_write.apac_mimir.receiver]
      }

      // Remote-write to Grafana Mimir or Prometheus
      prometheus.remote_write "apac_mimir" {
        endpoint {
          url = env("APAC_PROMETHEUS_REMOTE_WRITE_URL")
        }
      }
Grafana Alloy APAC multi-backend fan-out
// Alloy fan-out: send logs to multiple backends simultaneously
loki.source.kubernetes "apac_pods" {
  targets = discovery.kubernetes.apac_pods.targets

  // Fan out to both OpenObserve and Parseable
  forward_to = [
    loki.write.apac_openobserve.receiver,
    loki.write.apac_parseable.receiver,
  ]
}

// Primary: OpenObserve for long-retention search
loki.write "apac_openobserve" {
  endpoint {
    url = "http://openobserve:5080/api/apac/loki/api/v1/push"
  }
}

// Secondary: Parseable for edge-local, short-term access
loki.write "apac_parseable" {
  endpoint {
    url = "http://parseable-edge:8000/api/v1/logstream/apac-k8s"
    headers = {
      "X-P-Stream" = "apac-k8s",
    }
  }
}

// OTLP traces forwarded from application SDKs
otelcol.receiver.otlp "apac_apps" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }
  http {
    endpoint = "0.0.0.0:4318"
  }
  output {
    traces = [otelcol.exporter.otlp.apac_tempo.input]
  }
}

otelcol.exporter.otlp "apac_tempo" {
  client {
    endpoint = env("APAC_TEMPO_ENDPOINT")
  }
}
OpenObserve: APAC Elasticsearch-Compatible Log Search
OpenObserve APAC deployment on Kubernetes
# APAC: OpenObserve — Kubernetes deployment with S3 storage
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openobserve
  namespace: monitoring
spec:
  replicas: 3
  selector:
    matchLabels:
      app: openobserve
  template:
    metadata:
      labels:
        app: openobserve
    spec:
      containers:
        - name: openobserve
          image: public.ecr.aws/zinclabs/openobserve:latest
          env:
            # S3-compatible storage (MinIO works for on-prem)
            - name: ZO_S3_BUCKET_NAME
              value: "apac-logs-openobserve"
            - name: ZO_S3_REGION_NAME
              value: "ap-southeast-1"  # Singapore
            - name: ZO_S3_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: apac-s3-credentials
                  key: access-key
            - name: ZO_S3_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: apac-s3-credentials
                  key: secret-key
            # Root user auth
            - name: ZO_ROOT_USER_EMAIL
              value: "[email protected]"
            - name: ZO_ROOT_USER_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: openobserve-auth
                  key: password
          ports:
            - containerPort: 5080  # HTTP API
            - containerPort: 5081  # gRPC
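With the Deployment running, ingestion can be smoke-tested against OpenObserve's bulk JSON endpoint (POST /api/{org}/{stream}/_json). A minimal Python sketch, assuming the in-cluster service address, the apac_org organization from the Alloy config, and placeholder credentials; the helper only builds the request so it can be inspected before sending:

```python
import base64
import json
import urllib.request

OO_URL = "http://openobserve.monitoring:5080"  # assumed in-cluster service address
ORG, STREAM = "apac_org", "apac-k8s"           # assumed org and stream names

def build_ingest_request(records, user="[email protected]", password="changeme"):
    """Build (url, headers, body) for OpenObserve's bulk JSON ingest API."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    url = f"{OO_URL}/api/{ORG}/{STREAM}/_json"
    headers = {
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(records).encode()

def send(records):
    """POST the records; returns the HTTP status (200 on success)."""
    url, headers, body = build_ingest_request(records)
    req = urllib.request.Request(url, data=body, headers=headers, method="POST")
    return urllib.request.urlopen(req).status

# Example payload: one structured log record
sample = [{"level": "error", "service_name": "apac-api", "message": "timeout"}]
```

In a real cluster the password comes from the openobserve-auth secret shown above, not a literal.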
OpenObserve APAC SQL-based log queries
-- OpenObserve SQL query interface for log analytics
-- Error rate by service over the last hour
SELECT
  service_name,
  COUNT(*) AS total_logs,
  SUM(CASE WHEN level = 'error' THEN 1 ELSE 0 END) AS error_count,
  ROUND(SUM(CASE WHEN level = 'error' THEN 1 ELSE 0 END) * 100.0 / COUNT(*), 2) AS error_rate_pct
FROM "apac-k8s"
WHERE _timestamp >= NOW() - INTERVAL '1 hour'
GROUP BY service_name
ORDER BY error_rate_pct DESC
LIMIT 20;
-- Slow API endpoint detection: responses over 1s in the last 24 hours
SELECT
  request_path,
  AVG(response_time_ms) AS avg_ms,
  MAX(response_time_ms) AS max_ms,
  COUNT(*) AS request_count
FROM "apac-api-logs"
WHERE _timestamp >= NOW() - INTERVAL '24 hours'
  AND response_time_ms > 1000
GROUP BY request_path
HAVING COUNT(*) > 10
ORDER BY avg_ms DESC;
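The same queries can be issued programmatically: OpenObserve's search endpoint (POST /api/{org}/_search) takes the SQL string plus a time range in Unix microseconds. A small Python helper, as a sketch; the size default and hours_back parameter are conveniences introduced here, not API requirements:

```python
import time

def build_search_payload(sql: str, hours_back: int = 1, size: int = 100):
    """JSON body for OpenObserve's POST /api/{org}/_search endpoint.

    OpenObserve expects start_time and end_time as Unix microseconds.
    """
    now_us = int(time.time() * 1_000_000)
    return {
        "query": {
            "sql": sql,
            "start_time": now_us - hours_back * 3_600 * 1_000_000,
            "end_time": now_us,
            "size": size,
        }
    }

payload = build_search_payload('SELECT service_name FROM "apac-k8s" LIMIT 10')
```

POST the payload with the same basic-auth headers used for ingestion; results come back as JSON rows.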
Parseable: APAC Lightweight Log Storage
Parseable APAC Docker deployment
# APAC: Parseable — single binary deployment for APAC edge
# APAC: Local mode (Parquet files on disk)
docker run -d \
  --name parseable-apac \
  -p 8000:8000 \
  -v /apac/log-data:/data \
  -e P_USERNAME=apac-admin \
  -e P_PASSWORD=apac-secure-pass \
  -e P_STAGING_DIR=/data/staging \
  -e P_HOT_TIER_SIZE_GB=50 \
  containers.parseable.com/parseable/parseable:latest \
  parseable local-store
# APAC: S3 mode (object storage backend)
docker run -d \
  --name parseable-apac-s3 \
  -p 8000:8000 \
  -e P_S3_URL=https://s3.ap-southeast-1.amazonaws.com \
  -e P_S3_ACCESS_KEY=${APAC_S3_ACCESS_KEY} \
  -e P_S3_SECRET_KEY=${APAC_S3_SECRET_KEY} \
  -e P_S3_BUCKET=apac-parseable-logs \
  -e P_S3_REGION=ap-southeast-1 \
  containers.parseable.com/parseable/parseable:latest \
  parseable s3-store
Parseable APAC log ingestion from Alloy
// APAC: Grafana Alloy → Parseable ingestion configuration
// APAC: HTTP output to Parseable ingest API
loki.write "apac_parseable" {
  endpoint {
    url = "http://parseable-edge:8000/api/v1/logstream/apac-apps"
    headers = {
      "Authorization" = "Basic <base64-apac-creds>",
      "X-P-Stream"    = "apac-apps",
    }
  }
}
# APAC: Parseable query API (SQL over Parquet)
curl -s "http://parseable-edge:8000/api/v1/query" \
  -H "Authorization: Basic <creds>" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "SELECT level, message, service FROM \"apac-apps\" WHERE level = '\''error'\'' LIMIT 50",
    "startTime": "2026-05-01T00:00:00+08:00",
    "endTime": "2026-05-01T23:59:59+08:00"
  }'
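Beyond Alloy, any HTTP client can push JSON events into Parseable via its ingest API (POST /api/v1/ingest, with the X-P-Stream header selecting the target stream). A minimal Python sketch reusing the credentials from the Docker example; treat the host and stream name as placeholders for your deployment:

```python
import base64
import json

PARSEABLE_URL = "http://parseable-edge:8000"  # assumed edge host

def build_parseable_ingest(stream, records, user="apac-admin", password="apac-secure-pass"):
    """Build (url, headers, body) for Parseable's JSON ingest endpoint."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "X-P-Stream": stream,  # target log stream
        "Content-Type": "application/json",
    }
    return f"{PARSEABLE_URL}/api/v1/ingest", headers, json.dumps(records).encode()

events = [{"level": "error", "service": "apac-gateway", "message": "upstream timeout"}]
url, headers, body = build_parseable_ingest("apac-apps", events)
```

Send the body with any HTTP library; Parseable creates the stream on first write if it does not already exist.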
APAC Log Stack Selection Guide
APAC scenario → tool choice (reason):
High-volume Kubernetes logs (>10GB/day; Elasticsearch too costly) → Alloy + OpenObserve on S3: OTel-native collection with up to 140x storage cost reduction
Edge office log collection (<10GB/day, limited infra) → Alloy + Parseable with local Parquet: single binary, ~80% storage saving
Full observability (logs, metrics, and traces on one platform) → Alloy + OpenObserve + Grafana (or Grafana Cloud): unified telemetry backend
Advanced full-text search (scoring, ML, complex aggregations) → retain Elasticsearch: ML ranking and full-text scoring remain its strength
Existing ELK migration (keep existing Kibana dashboards) → Alloy → OpenObserve: Elasticsearch-compatible ingest API
Related APAC Observability Resources
For the Kubernetes observability tools (Hubble, Pixie, groundcover) that provide eBPF-based network-level observability complementing log-based monitoring, see the APAC eBPF Kubernetes observability guide.
For the distributed tracing and metrics tools (Jaeger, OpenTelemetry, SigNoz) that Grafana Alloy forwards trace data to as part of the full APAC observability pipeline, see the APAC observability cluster guide.
For the AIOps tools (Dynatrace, PagerDuty, Datadog) that consume APAC log and metric streams from these collection platforms for intelligent alerting, see the APAC AIOps observability guide.