Sakana AI releases a Japanese-native open-weights LLM trained on curated Japanese corpora that outperforms English-primary models on Japanese enterprise tasks. The release addresses the LLM quality gap that has slowed AI adoption at Japanese enterprises whose operational workflows run in Japanese.
Tokyo-based AI research company Sakana AI has released a Japanese-native open-weights language model trained on curated high-quality Japanese corpora and optimised for enterprise deployment. The model demonstrates benchmark-leading performance on Japanese-language tasks including document summarisation, structured data extraction from Japanese documents, and customer communication generation — outperforming Japanese-language fine-tunes of English-primary models on key enterprise use cases.
The release addresses a persistent gap in APAC AI adoption: most frontier LLMs (GPT-4, Claude, Gemini) are trained primarily on English data, with Japanese as a secondary language. For Japanese enterprises with Japanese-language workflows — financial reports, customer communications, regulatory filings, internal knowledge management — this creates quality gaps that limit real-world deployment. A Japanese-native open-weights model lets these enterprises deploy AI for Japanese-language workflows without the quality compromises inherent in English-primary models. The open-weights release also enables fine-tuning on proprietary Japanese-language corpora, a critical capability for industries like financial services and pharmaceuticals where specialised vocabulary is essential for accuracy.
Related stories

- Open source · Meta Releases Llama 4 with 405B Parameter Model Leading Open-Source Benchmarks for APAC Enterprise Deployment. Meta Llama 4 405B leads open-source benchmarks and adds native multilingual APAC support including Japanese, Korean, and Bahasa. Significant for APAC enterprises building sovereign AI infrastructure requiring frontier capability without proprietary model dependency.
- Open source · Meta Releases Llama 4 with Multimodal Capabilities, Advancing Open-Source LLM Adoption in APAC Enterprise. Meta releases Llama 4 with multimodal capabilities and expanded context. APAC enterprises self-host in-region on AWS/Azure for data sovereignty without proprietary API dependency. Most capable open-weights model at release; significant for APAC cost and customisation.
- Open source · Hugging Face Launches APAC Inference Endpoints in Singapore and Tokyo for Open-Source Model Deployment. Hugging Face launches managed inference endpoints in Singapore and Tokyo for open-source model deployment with in-region data residency. Removes infrastructure barriers to Llama, Mistral, and Qwen adoption for APAC teams without dedicated ML engineering capacity.
- Partnership · Salesforce and NTT DATA Expand Japan and APAC Partnership to Accelerate Agentforce Enterprise Deployment. Salesforce and NTT DATA expand their Japan and APAC partnership for joint Agentforce AI agent deployments. NTT DATA's APAC enterprise relationships and Japanese-language implementation capacity provide the distribution channel Salesforce needs for Agentforce penetration in Japan.
- Open source · Google Releases Gemma 3 Open Weights Models with 27B Parameter Version Topping Open-Source Benchmarks. Google Gemma 3 27B tops open-source benchmarks and runs on a single GPU; significant for APAC enterprises wanting on-premises LLM deployment without Llama compute requirements. Strong APAC language support makes it competitive for multilingual enterprise applications.