The EU AI Act takes a risk-based approach:
- Unacceptable risk (social scoring, real-time remote biometric identification in public spaces with limited exceptions, manipulative subliminal techniques): banned outright.
- High risk (employment, education, critical infrastructure, law enforcement, biometric identification, essential services): heavy obligations, including risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, cybersecurity, and post-market monitoring.
- Limited risk (chatbots, deepfakes): transparency requirements.
- Minimal risk: largely unregulated.
General-Purpose AI (GPAI) models — including frontier LLMs — face their own obligations: technical documentation, training-data summaries, and copyright compliance. Models trained above a compute threshold of 10^25 FLOPs are presumed to pose systemic risk and additionally face systemic-risk assessment and mitigation duties.
For non-EU companies serving EU customers, the Act applies extraterritorially. Penalties run up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited-practice violations. Effective compliance requires a documented inventory of AI systems by risk category, a deployment review process, and ongoing monitoring artifacts.
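The risk-category inventory described above can be kept as a lightweight, code-maintained register. The sketch below is illustrative only: the class names, use-case labels, and the default-to-high-risk policy are our assumptions, not terminology or requirements from the Act itself, and the use-case categories are simplified stand-ins for the Act's Annex III wording.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # full compliance obligations
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping from simplified use-case labels to risk tiers,
# loosely following the Act's structure (not its exact Annex III text).
TIER_BY_USE_CASE = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "subliminal_manipulation": RiskTier.UNACCEPTABLE,
    "employment_screening": RiskTier.HIGH,
    "education_scoring": RiskTier.HIGH,
    "biometric_identification": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

@dataclass
class AISystemRecord:
    name: str
    use_case: str
    owner: str
    review_notes: list = field(default_factory=list)

    @property
    def tier(self) -> RiskTier:
        # Unmapped use cases default to HIGH to force a manual review.
        # This is a conservative policy choice, not something the Act mandates.
        return TIER_BY_USE_CASE.get(self.use_case, RiskTier.HIGH)

def compliance_queue(inventory):
    """Return systems needing action, most urgent tier first."""
    order = [RiskTier.UNACCEPTABLE, RiskTier.HIGH, RiskTier.LIMITED]
    return [s for t in order for s in inventory if s.tier is t]
```

A register like this gives the deployment review process a single source of truth: new systems enter with a use case, get a provisional tier, and surface in the queue until reviewed.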