Product launch

AWS unveils Trainium3 at re:Invent with 4x training throughput

AWS announced Trainium3 with major performance improvements over Trainium2, alongside expanded Bedrock model coverage and updates to its Amazon Q product roadmap.

By AIMenta Editorial Team
AIMenta editorial take

Trainium economics are increasingly compelling for sustained training workloads. Worth a comparative evaluation if you spend more than $5M/year on training.

Amazon Web Services announced Trainium3, its third-generation custom AI training chip, at its annual re:Invent conference. Trainium3 delivers roughly a 4x training-throughput improvement over Trainium2, achieved through a combination of higher memory bandwidth, more efficient inter-chip communication (an RDMA-based equivalent of NVIDIA's NVLink), and a redesigned on-chip memory hierarchy. The chip is paired with updated SageMaker training infrastructure and a new Neuron SDK release that expands model-architecture compatibility.

**What this means for enterprises training their own models.** Trainium3 is relevant for the subset of APAC enterprises that train foundation models or large domain-specific models rather than consuming pre-trained APIs. In the APAC context, this is predominantly: large technology companies (Alibaba, Samsung, Rakuten, NTT) running research programmes; specialised AI companies building vertical models (financial, medical, legal); and government-funded AI labs. The 4x throughput improvement cuts the compute-hours, and therefore the cost, of a given training run, which directly lowers the economic barrier to training a mid-sized (7B–70B parameter) custom model, as the rough sketch below illustrates.
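
To make the economics concrete, here is a minimal back-of-envelope sketch. Every number in it (token budget, per-chip throughput, per-chip-hour price, cluster size) is a hypothetical assumption for illustration, not published AWS pricing or Trainium benchmark data; substitute your own figures.

```python
# Back-of-envelope training-run cost model. All constants below are
# hypothetical assumptions, not AWS pricing or published benchmarks.

def run_cost_and_days(total_tokens: float,
                      tokens_per_chip_hour: float,
                      price_per_chip_hour: float,
                      n_chips: int) -> tuple[float, float]:
    """Return (cost in USD, wall-clock days), assuming linear scaling."""
    chip_hours = total_tokens / tokens_per_chip_hour
    cost_usd = chip_hours * price_per_chip_hour
    days = chip_hours / n_chips / 24
    return cost_usd, days

TOKENS = 2e12   # assumed 2T-token budget for a ~70B-parameter model
PRICE = 4.0     # assumed USD per chip-hour
CHIPS = 1024    # assumed cluster size

# Assume a baseline per-chip throughput, then apply the claimed ~4x uplift.
gen2_cost, gen2_days = run_cost_and_days(TOKENS, 1.0e6, PRICE, CHIPS)
gen3_cost, gen3_days = run_cost_and_days(TOKENS, 4.0e6, PRICE, CHIPS)

print(f"Trainium2-class run: ${gen2_cost:,.0f} over {gen2_days:.0f} days")
print(f"Trainium3-class run: ${gen3_cost:,.0f} over {gen3_days:.0f} days")
```

Under these assumed numbers, the 4x uplift turns a roughly $8M, 80-day run into a roughly $2M, 20-day run; the same arithmetic applies at whatever throughput and pricing AWS actually publishes for your region.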

**Inference use case.** AWS has also released Trainium3-based inference instances, which compete with its existing Inferentia2-based instances for cost-optimised inference at scale. For APAC enterprises running high-volume inference (millions of API calls per day), Trainium3-based EC2 instances offer a cost-per-token advantage over GPU-based alternatives, typically 20–40% cheaper for supported model architectures (see the sketch below). The Neuron SDK compatibility list includes Llama 2/3/4, Mistral, and selected smaller models, but not all proprietary frontier models.
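
As a sanity check on that 20–40% band, the same per-token arithmetic applies to inference. Again, the instance prices and throughputs below are hypothetical placeholders, not published AWS or GPU-vendor figures; plug in your own benchmarks.

```python
# Illustrative cost-per-million-tokens comparison for high-volume
# inference. All figures are hypothetical assumptions.

def usd_per_million_tokens(instance_price_per_hour: float,
                           tokens_per_second: float) -> float:
    tokens_per_hour = tokens_per_second * 3600
    return instance_price_per_hour / tokens_per_hour * 1e6

gpu = usd_per_million_tokens(instance_price_per_hour=12.0, tokens_per_second=2500)
trn = usd_per_million_tokens(instance_price_per_hour=8.0,  tokens_per_second=2200)

print(f"GPU instance:       ${gpu:.2f} / 1M tokens")
print(f"Trainium3 instance: ${trn:.2f} / 1M tokens")
print(f"Savings: {(1 - trn/gpu):.0%}")  # lands inside the cited 20-40% band
```

The point of the exercise is that the advantage comes from the ratio of instance price to sustained tokens-per-second, so it only materialises for model architectures the Neuron SDK actually supports and compiles efficiently.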

**Asia Pacific availability.** Trainium3 is initially available in US-East, with Asia Pacific rollout (Tokyo, Singapore, Sydney) expected within 6–9 months of the US launch based on AWS's historical pattern for custom chip deployment. Enterprises planning large-scale APAC training workloads should factor this timeline into their infrastructure roadmaps.

**AIMenta's editorial read.** For the majority of APAC mid-market enterprises using pre-trained AI APIs, Trainium3 is background infrastructure news — it reduces AWS's compute costs, which may eventually reduce API pricing. For the minority doing custom model training, the 4x throughput improvement makes AWS's SageMaker training pipeline meaningfully more competitive with Azure ML and Google Vertex for large-scale training jobs.


Tagged
#aws #hardware #training-chip #nvidia-alternative
