Japan's METI updated its AI governance guidelines to align with the G7 Hiroshima AI process, adding supply-chain transparency requirements and clarifying responsible AI obligations for enterprises procuring third-party AI systems.
Japan's Ministry of Economy, Trade and Industry (METI) published version 3.0 of its AI Governance Guidelines on April 17, the most substantive update since the framework launched in 2022.
The headline addition is **supply-chain transparency**: enterprises procuring AI systems from third-party vendors must now obtain and document the following from their suppliers:
- Training data provenance (categories of data used, geographic jurisdiction of collection, any third-party licensed datasets)
- Model capability disclosures aligned with the G7 Hiroshima AI International Code of Conduct
- Incident reporting commitments (response timelines, notification obligations)
- Human oversight mechanisms for high-impact deployment contexts
This affects procurement teams at Japanese enterprises buying AI software from domestic and international vendors. The guidelines do not impose obligations on the vendors themselves — but buyers are expected to contractually require the disclosures.
The update also formalises METI's position on **generative AI in the workplace**: enterprises are expected to publish internal AI usage policies, provide employee AI literacy training commensurate with the AI systems being deployed, and establish channels for employees to raise concerns about AI-mediated decisions.
**AIMenta take:** METI's supply-chain transparency requirement is a quiet but significant procurement shift. Japanese enterprises — particularly the large industrial groups that form our typical client base — will need to update their vendor evaluation playbooks. Any AI vendor that cannot provide clear training data provenance documentation will face increasing friction in enterprise procurement processes, regardless of product quality. This is a market-structure change that favours established vendors with clear data governance documentation over newer models with opaque training datasets.
The guidelines carry advisory rather than mandatory status, but METI typically converts advisory frameworks into mandatory requirements through sector-specific regulations within 18–24 months.
Related stories
- Regulation · **MAS confirms AI model risk management guidelines mandatory for Singapore's largest financial institutions by end-2026.** The Monetary Authority of Singapore published its formal response to the AI in Finance industry consultation, confirming that AI model risk management guidelines will become mandatory for D-SIBs (Domestic Systemically Important Banks) and major insurers by Q4 2026, with an expectation of industry-wide adoption for all MAS-regulated entities by mid-2027.
- Partnership · **Salesforce and NTT DATA expand Japan and APAC partnership to accelerate Agentforce enterprise deployment.** NTT DATA's APAC enterprise relationships and Japanese-language implementation capacity provide the distribution channel Salesforce needs for Agentforce penetration in Japan.
- Open source · **Sakana AI releases Japanese-native open-source LLM optimised for APAC enterprise deployment.** Trained on curated Japanese corpora, the open-weights model outperforms English-primary models on Japanese enterprise tasks, addressing the quality gap that has blocked adoption at Japanese enterprises with Japanese-language operational workflows.
- Regulation · **Korea MSIT releases AI Basic Act implementation guidelines with 2027 compliance timeline.** South Korea's Ministry of Science and ICT published detailed implementation guidelines for the AI Basic Act, specifying risk classification criteria, compliance obligations for high-impact AI systems, and sector-specific safe-harbour conditions. Enterprises have until Q1 2027 to achieve full compliance.
- Regulation · **EU finalises GPAI Code of Practice ahead of August deadline.** With the August 2026 GPAI obligations approaching, the European Commission published the final Code of Practice for general-purpose AI providers, setting expectations on documentation, copyright due diligence, and systemic-risk assessment.