The APAC Distributed Data Infrastructure Problem
APAC engineering teams operating at scale face three converging data infrastructure problems that single-node databases cannot solve:
MySQL and PostgreSQL scale limits: APAC e-commerce and fintech applications generating 50,000+ transactions per second exhaust single-node MySQL and PostgreSQL capacity — and horizontal MySQL sharding introduces application-level complexity that requires rewriting query patterns and managing shard routing in APAC application code.
Multi-region data sovereignty: APAC enterprises operating simultaneously in Singapore, Japan, and Korea face regulatory requirements that customer data remain within each jurisdiction, while business operations require a logically unified view across regions that single-region databases cannot provide.
Redis license restrictions on APAC SaaS: APAC SaaS companies that bundle in-memory caching in their product encountered a 2024 inflection: Redis Ltd changed Redis from BSD to SSPL/RSALv2, restricting competing managed service providers and certain SaaS distribution models that APAC SaaS teams rely on.
TiDB solves the MySQL scale problem. CockroachDB solves the multi-region PostgreSQL consistency problem. Valkey solves the Redis license problem. APAC platform teams should understand all three before selecting distributed data infrastructure.
TiDB: MySQL-Compatible HTAP Scale-Out for APAC Fintech
The APAC MySQL sharding escape hatch
The standard APAC MySQL scaling journey ends at one of two painful junctures: either the single writer node becomes a throughput bottleneck for APAC write-heavy workloads (order processing, payment ledger, inventory updates), or the APAC team implements MySQL sharding — splitting the database into shards with shard keys embedded in the APAC application — and inherits permanent operational complexity.
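The shard-routing burden described above can be sketched in a few lines. The helper and hostnames below are hypothetical illustrations, not from any real deployment:

```python
# Hypothetical illustration: the shard-routing helper that manual MySQL
# sharding forces into application code -- exactly the complexity TiDB removes.
def shard_for_customer(customer_id: int, num_shards: int = 16) -> str:
    """Map a customer_id (the shard key) to the MySQL shard that owns it."""
    shard = customer_id % num_shards  # modulo routing on the shard key
    return f"mysql-shard-{shard:02d}.apac.internal"

# Every read and write path must call this helper, and doubling to 32 shards
# means rehashing rows and redeploying every service that embeds the key.
print(shard_for_customer(12345678))  # mysql-shard-14.apac.internal
```

Because the shard key is baked into every service, resharding is an application change, not just a database operation; TiDB moves this routing into the storage layer.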
TiDB is the escape hatch: a MySQL-compatible distributed SQL database where APAC applications continue using MySQL wire protocol, MySQL client libraries, and MySQL SQL syntax while TiDB distributes data automatically across multiple TiKV storage nodes without APAC application-layer shard routing.
TiDB's HTAP architecture for APAC analytics
```sql
-- APAC OLTP query: TiDB routes to TiKV row storage (low latency)
SELECT order_id, customer_id, amount, status
FROM apac_orders
WHERE customer_id = 12345678
  AND created_at > NOW() - INTERVAL 30 DAY
ORDER BY created_at DESC
LIMIT 20;

-- APAC OLAP query: TiDB routes to TiFlash columnar storage (high throughput)
-- Same connection, same SQL — TiDB optimizer picks the right storage engine
SELECT
    DATE(created_at) AS apac_order_date,
    country_code AS apac_market,
    SUM(amount) AS daily_gmv,
    COUNT(DISTINCT customer_id) AS unique_buyers,
    AVG(amount) AS avg_order_value
FROM apac_orders
WHERE created_at BETWEEN '2026-01-01' AND '2026-03-31'
GROUP BY DATE(created_at), country_code
ORDER BY apac_order_date, daily_gmv DESC;
```
Running both queries against the same TiDB cluster eliminates the ETL delay and the infrastructure cost of maintaining separate MySQL OLTP and ClickHouse/BigQuery OLAP systems.
TiDB deployment for APAC Kubernetes
```yaml
# TiDB Operator cluster for APAC on-premise Kubernetes
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: apac-prod-tidb
  namespace: tidb-system
spec:
  version: "7.5.0"
  timezone: "Asia/Singapore"
  # PD: Placement Driver (metadata + region assignment)
  pd:
    replicas: 3
    requests:
      storage: 50Gi
      cpu: "2"
      memory: 4Gi
  # TiKV: Distributed row storage (APAC OLTP)
  tikv:
    replicas: 3
    requests:
      storage: 500Gi
      cpu: "4"
      memory: 16Gi
  # TiDB: SQL layer (stateless, horizontally scalable)
  tidb:
    replicas: 2
    requests:
      cpu: "4"
      memory: 8Gi
    service:
      type: LoadBalancer
  # TiFlash: Columnar storage (APAC OLAP analytics)
  tiflash:
    replicas: 2
    storageClaims:
      - resources:
          requests:
            storage: 200Gi
```
```sql
-- Enable a TiFlash replica for the APAC analytics table
ALTER TABLE apac_orders SET TIFLASH REPLICA 1;

-- Verify TiFlash sync status (AVAILABLE: 1 = synced, 0 = syncing)
SELECT TABLE_NAME, REPLICA_COUNT, AVAILABLE, PROGRESS
FROM information_schema.tiflash_replica
WHERE TABLE_SCHEMA = 'apac_ecommerce';
```
When APAC teams choose TiDB
TiDB is the right choice for APAC engineering teams when:
- Existing MySQL workloads need horizontal scale-out: MySQL wire protocol compatibility means APAC applications using `mysql2`, `pymysql`, `go-sql-driver/mysql`, or any other MySQL connector work without modification.
- APAC HTAP in one cluster: real-time dashboard queries run against transactional data without separate APAC analytics infrastructure.
- PingCAP APAC support matters: PingCAP has engineering teams in Singapore, Beijing, and San Francisco; APAC enterprises get English and Chinese support with APAC timezone coverage.
TiDB Cloud (PingCAP's managed service) provides APAC regional deployment on AWS ap-southeast-1 (Singapore) and ap-northeast-1 (Tokyo) for APAC teams that don't want to operate TiDB themselves.
CockroachDB: Multi-Region PostgreSQL Consistency for APAC Financial Systems
The APAC multi-region consistency problem
APAC fintech teams deploying in multiple APAC jurisdictions simultaneously face a choice that most distributed databases force on them: either sacrifice consistency (use eventual consistency across APAC regions with the risk of reading stale data) or sacrifice availability (require cross-region quorum acknowledgment for every APAC write, adding latency).
CockroachDB makes a different trade: it uses the Raft consensus protocol to provide serializable ACID transactions across distributed APAC nodes — accepting the latency cost of cross-region quorum in exchange for never serving stale data. For APAC financial ledger and payment systems where data accuracy is non-negotiable, this is the correct trade.
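The quorum arithmetic behind that trade is simple; this is a back-of-envelope sketch, not CockroachDB code:

```python
# Sketch: Raft majority sizes. Each write must be acknowledged by a quorum
# of replicas, which is why cross-region replication adds round-trip latency.
def quorum(replicas: int) -> int:
    """Minimum acknowledgments for a Raft majority."""
    return replicas // 2 + 1

def tolerated_failures(replicas: int) -> int:
    """Replica (or region) losses the group can survive."""
    return replicas - quorum(replicas)

for n in (3, 5):
    print(f"{n} replicas: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

With three regions, every write waits for two of the three to acknowledge, which is the latency cost the article describes and the reason a single-region loss is survivable.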
CockroachDB multi-region data placement for APAC compliance
```sql
-- Create APAC multi-region database with regional zones
CREATE DATABASE apac_financial
  PRIMARY REGION "ap-southeast-1"               -- Singapore primary
  REGIONS "ap-northeast-1", "ap-northeast-2"    -- Tokyo, Seoul secondary
  SURVIVE REGION FAILURE;                       -- Tolerate loss of any single APAC region

-- REGIONAL BY ROW: pin each row to the customer's jurisdiction
CREATE TABLE apac_customer_accounts (
    account_id UUID DEFAULT gen_random_uuid(),
    customer_id UUID NOT NULL,
    country_code CHAR(2) NOT NULL,
    crdb_region crdb_internal_region AS (
        CASE country_code
            WHEN 'SG' THEN 'ap-southeast-1'  -- Singapore MAS jurisdiction
            WHEN 'MY' THEN 'ap-southeast-1'  -- Malaysia PDPA, co-located with SG
            WHEN 'JP' THEN 'ap-northeast-1'  -- Japan APPI jurisdiction
            WHEN 'KR' THEN 'ap-northeast-2'  -- Korea PIPA jurisdiction
            ELSE 'ap-southeast-1'            -- Default: Singapore
        END
    ) STORED,
    account_balance DECIMAL(18,4) NOT NULL DEFAULT 0,
    currency_code CHAR(3) NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW(),
    PRIMARY KEY (crdb_region, account_id)
) LOCALITY REGIONAL BY ROW AS crdb_region;  -- required to activate row-level homing

-- Singapore MAS queries hit ap-southeast-1 locally (low latency)
-- Japanese FSA queries hit ap-northeast-1 locally (low latency)
-- Cross-region aggregations are consistent across APAC jurisdictions
```
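The jurisdiction mapping in the CASE expression above can be mirrored application-side for pre-validating writes; a minimal sketch, with the database-computed column remaining authoritative:

```python
# Sketch: application-side mirror of the crdb_region CASE mapping, useful for
# validating routing expectations before a write; the DDL's computed column
# is what actually pins the row.
APAC_REGION_BY_COUNTRY = {
    "SG": "ap-southeast-1",  # Singapore (MAS)
    "MY": "ap-southeast-1",  # Malaysia (PDPA), co-located with SG
    "JP": "ap-northeast-1",  # Japan (APPI)
    "KR": "ap-northeast-2",  # Korea (PIPA)
}

def expected_region(country_code: str) -> str:
    """Region a row should land in, matching the DDL's CASE mapping."""
    return APAC_REGION_BY_COUNTRY.get(country_code, "ap-southeast-1")

print(expected_region("JP"))  # ap-northeast-1
```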
CockroachDB ACID transaction across APAC services
```python
import psycopg2

# Standard psycopg2 connection — CockroachDB is wire-compatible with PostgreSQL
conn = psycopg2.connect(
    host="apac-crdb-loadbalancer.internal",
    port=26257,
    database="apac_financial",
    user="apac_service_account",
    sslmode="require",
)

def transfer_between_apac_accounts(
    from_account_id: str,
    to_account_id: str,
    amount: float,
    currency: str,
) -> bool:
    """Serializable cross-account transfer with APAC retry logic."""
    max_retries = 5
    for attempt in range(max_retries):
        try:
            with conn.cursor() as cur:
                # CockroachDB SAVEPOINT for optimistic concurrency
                cur.execute("SAVEPOINT cockroach_restart")
                # Debit APAC source account
                cur.execute("""
                    UPDATE apac_customer_accounts
                    SET account_balance = account_balance - %s,
                        updated_at = NOW()
                    WHERE account_id = %s
                      AND currency_code = %s
                      AND account_balance >= %s
                    RETURNING account_balance
                """, (amount, from_account_id, currency, amount))
                if cur.rowcount == 0:
                    conn.rollback()
                    return False  # Insufficient APAC balance
                # Credit APAC destination account
                cur.execute("""
                    UPDATE apac_customer_accounts
                    SET account_balance = account_balance + %s,
                        updated_at = NOW()
                    WHERE account_id = %s
                      AND currency_code = %s
                """, (amount, to_account_id, currency))
                cur.execute("RELEASE SAVEPOINT cockroach_restart")
                conn.commit()
                return True
        except psycopg2.errors.SerializationFailure:
            # CockroachDB detected a serialization conflict — retry the transaction
            conn.rollback()
            if attempt == max_retries - 1:
                raise
            continue
```
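The retry loop above retries immediately; production code typically adds jittered exponential backoff between attempts. A standalone sketch, where the hypothetical `RetryableError` stands in for psycopg2's `SerializationFailure` (SQLSTATE 40001):

```python
import random
import time

class RetryableError(Exception):
    """Stand-in for psycopg2.errors.SerializationFailure in this sketch."""

def with_backoff(txn, max_retries: int = 5, base_delay: float = 0.05):
    """Run `txn`, retrying serialization conflicts with jittered backoff."""
    for attempt in range(max_retries):
        try:
            return txn()
        except RetryableError:
            if attempt == max_retries - 1:
                raise
            # Exponential backoff with jitter reduces repeated collisions
            # between the same contending transactions.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# Usage sketch: a transaction that conflicts twice before succeeding
attempts = {"n": 0}
def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RetryableError()
    return "committed"

print(with_backoff(flaky_txn))  # prints "committed" after 3 attempts
```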
When APAC teams choose CockroachDB
CockroachDB fits APAC use cases where:
- Multi-region PostgreSQL ACID is required: Payment ledger, financial transfer, and identity systems where data loss or stale reads across APAC regions are not acceptable.
- APAC data residency per jurisdiction: `REGIONAL BY ROW` satisfies Singapore MAS, Japan APPI, and Korea PIPA simultaneously with a single database, rather than a separate database per APAC jurisdiction.
- PostgreSQL ecosystem compatibility matters: APAC teams using PostgreSQL tools (psycopg2, SQLAlchemy, pgAdmin, pgBouncer) can connect to CockroachDB with minimal changes.
CockroachDB Serverless provides consumption-based APAC managed deployments; CockroachDB Dedicated provides VPC-isolated APAC deployments on AWS or GCP.
Valkey: BSD-Licensed Redis Replacement for APAC SaaS Teams
Why the Redis license change matters for APAC
When Redis Ltd announced the license change from BSD to SSPL/RSALv2 in March 2024, the immediate practical impact for APAC platform teams was:
- APAC SaaS teams bundling Redis: Products that incorporated Redis in their distributed APAC deployment fell into the SSPL scope — potentially requiring commercial licensing.
- APAC self-hosted teams: Internal Redis users were not directly affected by RSALv2, but the precedent of single-vendor license control created governance risk.
- APAC managed service providers: AWS, Google Cloud, and others could no longer offer new Redis-compatible managed services without licensing.
Valkey, forked from Redis 7.2 and placed under Linux Foundation stewardship in March 2024, restores BSD licensing with multi-vendor open-source governance.
Valkey drop-in migration from Redis
```python
import json

import redis

# Before: Redis client pointing to managed Redis
r = redis.Redis(
    host='apac-redis-cluster.cache.amazonaws.com',
    port=6379,
    ssl=True,
    decode_responses=True,
)

# After: Same redis-py client pointing to Valkey
# No code changes — Valkey is Redis 7.2 wire-compatible
r = redis.Redis(
    host='apac-valkey-cluster.cache.amazonaws.com',  # AWS ElastiCache for Valkey
    port=6379,
    ssl=True,
    decode_responses=True,
)

# All Redis commands work identically on Valkey
# (session_data, notification_payload, payment_event are application objects):
r.set('apac:session:user:12345', json.dumps(session_data), ex=3600)
r.hset('apac:rate-limit:ip:203.0.113.1',
       mapping={'count': 45, 'window': '2026-04-28T10:00'})
r.lpush('apac:job-queue:notifications', json.dumps(notification_payload))
r.zadd('apac:leaderboard:singapore', {'user_12345': 9850.5})
r.publish('apac:events:payment-completed', json.dumps(payment_event))
```
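The rate-limit hash above typically backs a fixed-window counter. A minimal in-process sketch of the same logic; in a real deployment the counter lives in Valkey (`INCR` plus `EXPIRE`), and the dict here is only a stand-in so the example is self-contained:

```python
import time

class FixedWindowLimiter:
    """Sketch of fixed-window rate limiting; a dict stands in for Valkey."""

    def __init__(self, limit: int, window_seconds: int = 60):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # (key, window bucket) -> request count

    def allow(self, key: str, now=None) -> bool:
        """Count a request; True while the key is under its per-window limit."""
        now = time.time() if now is None else now
        bucket = (key, int(now // self.window))  # one counter per time window
        self.counters[bucket] = self.counters.get(bucket, 0) + 1
        return self.counters[bucket] <= self.limit

limiter = FixedWindowLimiter(limit=2, window_seconds=60)
print([limiter.allow("203.0.113.1", now=0) for _ in range(3)])  # [True, True, False]
```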
Valkey on Kubernetes with Bitnami Helm
```yaml
# values.yaml: Bitnami Valkey chart (drop-in for bitnami/redis chart)
# Identical configuration schema — no Helm values changes required
architecture: replication
auth:
  enabled: true
  existingSecret: apac-valkey-secret
  existingSecretPasswordKey: valkey-password
master:
  replicaCount: 1
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 4Gi
  persistence:
    enabled: true
    size: 20Gi
    storageClass: "apac-fast-ssd"
replica:
  replicaCount: 2
  resources:
    requests:
      cpu: "500m"
      memory: 1Gi
sentinel:
  enabled: true  # APAC automatic failover via Redis Sentinel protocol
  quorum: 2
```
```shell
# Migrate from bitnami/redis to bitnami/valkey
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Uninstall the Redis release (after draining APAC connections)
helm uninstall apac-redis -n data-plane

# Install Valkey with the identical values file
helm install apac-valkey bitnami/valkey \
  --namespace data-plane \
  --values values.yaml \
  --wait
```
Valkey on AWS ElastiCache
AWS launched ElastiCache for Valkey following the Linux Foundation fork:
```shell
# AWS CLI: Create a Valkey cluster in ap-southeast-1 (Singapore)
aws elasticache create-replication-group \
  --replication-group-id apac-valkey-prod \
  --replication-group-description "APAC production Valkey cluster" \
  --engine valkey \
  --engine-version 7.2 \
  --cache-node-type cache.r7g.large \
  --num-cache-clusters 3 \
  --automatic-failover-enabled \
  --multi-az-enabled \
  --at-rest-encryption-enabled \
  --transit-encryption-enabled \
  --region ap-southeast-1
```
The Valkey cluster endpoint is compatible with all Redis client libraries — the application sees a Redis 7.2 cluster.
When APAC teams choose Valkey
Valkey is the right choice for APAC teams when:
- APAC SaaS bundling: Products that distribute Redis in their deployment need BSD licensing to avoid SSPL scope.
- Governance risk mitigation: Linux Foundation multi-vendor stewardship reduces the single-vendor license change risk that Redis 2024 demonstrated.
- Kubernetes-native deployment: Bitnami Valkey Helm chart is a drop-in replacement with identical configuration schema.
- AWS APAC managed service: ElastiCache for Valkey is available in ap-southeast-1, ap-northeast-1, and ap-southeast-2.
For APAC teams self-hosting Redis on-premise with no SaaS distribution concern, migration is less urgent, but moving to Linux Foundation governance still reduces long-term licensing risk.
APAC Distributed Database Selection Matrix
| APAC workload | Database | Primary reason |
| --- | --- | --- |
| MySQL write scale-out (e-commerce orders, payment ledger, social media feeds) | TiDB | MySQL wire compatibility; automatic sharding; no app-level shard routing |
| Real-time APAC analytics on transactional data | TiDB | TiFlash columnar engine on the same cluster; no ETL to an APAC warehouse |
| Multi-region ACID PostgreSQL (financial ledger, identity, cross-APAC payment settlement) | CockroachDB | REGIONAL BY ROW data residency; serializable across Singapore/Tokyo/Seoul |
| APAC data residency per jurisdiction (MAS/APPI/PIPA) | CockroachDB | One database, multiple jurisdictions; row-level region pinning |
| APAC caching / pub-sub / sessions (SaaS bundling, platform teams) | Valkey | Redis 7.2 compatible; BSD license; Linux Foundation governance |
| APAC Redis replacement with a managed cloud service | Valkey | ElastiCache for Valkey; Bitnami Helm drop-in replacement |
| APAC MySQL, single region, <50K TPS | Stay on MySQL RDS | TiDB overhead not justified; managed RDS MySQL is cheaper |
| APAC PostgreSQL, single region, no multi-region requirement | Stay on PostgreSQL | Raft latency overhead not justified without multi-region benefit |
Related APAC Data Infrastructure Resources
For the OLAP and stream processing tools that complement TiDB for APAC analytics workloads, see the APAC OLAP and stream processing guide covering ClickHouse, DuckDB, and Apache Flink.
For the RAG infrastructure that uses pgvector and integrates with APAC distributed databases, see the APAC RAG infrastructure guide covering pgvector, Haystack, and Instructor.
For the data governance and catalog tools that manage APAC distributed data assets, see the APAC data governance and catalog guide covering Collibra, Alation, and Atlan.