foundational · Generative AI

Claude

Anthropic's family of large language models, known for long-context reasoning, instruction-following, and Constitutional-AI safety training.

Claude is Anthropic's family of large language models. The line began in 2023 with the original Claude and has progressed through Claude 2, the Claude 3 family (Haiku, Sonnet, Opus), Claude 3.5 Sonnet, Claude 3.7 Sonnet, the Claude 4 family (Sonnet 4, Opus 4), and the 4.5 generation (Sonnet 4.5, Haiku 4.5, Opus 4.5) as of late 2025. Architecturally Claude is a decoder-only Transformer trained at scale; the distinguishing characteristics are Anthropic's **Constitutional AI** safety approach (models trained to critique and revise their own outputs against a written set of principles) and a consistent emphasis on long-form writing, nuanced reasoning, and instruction-following over raw benchmark chasing.
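
To make the critique-and-revise idea concrete, here is a minimal sketch of one such loop run at inference time against a single written principle. It is an illustration of the pattern only, not Anthropic's training pipeline; the model alias, principle text, and `ask` helper are assumptions.

```python
# Illustrative critique-and-revise loop in the spirit of Constitutional AI.
# NOTE: a sketch of the pattern, not Anthropic's training code. The model
# alias and principle text are placeholder assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-latest"  # placeholder; substitute your deployed model
PRINCIPLE = "Responses should avoid giving unqualified medical advice."

def ask(prompt: str) -> str:
    resp = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

draft = ask("A friend has a persistent headache. What should they take?")
critique = ask(
    f"Principle: {PRINCIPLE}\n\nCritique this answer against the principle:\n{draft}"
)
revision = ask(
    f"Rewrite the answer so it satisfies the principle.\n\n"
    f"Answer:\n{draft}\n\nCritique:\n{critique}"
)
print(revision)
```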

Within the family, the tiers map roughly to cost/capability: **Haiku** for high-volume cheap tasks, **Sonnet** as the default workhorse, **Opus** for the most demanding reasoning and long-context workloads. Context windows have moved from 100K tokens (Claude 2) to 200K as the Claude 3 default, with a 1M-token option available in beta on some Claude 4 models. The tool-use API, vision input, computer-use agent mode, and the Model Context Protocol (MCP) are all first-class features rather than add-ons.
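
A minimal sketch of the tool-use API with the Anthropic Python SDK is below: one tool is declared and the model decides whether to call it. The tool name, schema, and model alias are illustrative assumptions; check Anthropic's current documentation for the model your workload needs.

```python
# Minimal tool-use sketch with the Anthropic Python SDK.
# The tool definition and model alias are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

tools = [{
    "name": "get_exchange_rate",  # hypothetical tool
    "description": "Look up the current exchange rate between two currencies.",
    "input_schema": {
        "type": "object",
        "properties": {
            "base": {"type": "string"},
            "quote": {"type": "string"},
        },
        "required": ["base", "quote"],
    },
}]

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; pick the tier your workload needs
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What is 1 SGD in JPY right now?"}],
)

# If the model chose to call the tool, the response contains a tool_use block
# whose input your code executes before sending back a tool_result message.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```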

For APAC mid-market teams, Claude is often the right choice for long-document analysis, complex writing tasks, and workflows where response quality matters more than the last 20% of cost optimisation. Availability has historically been via Anthropic's direct API, AWS Bedrock, and Google Vertex AI — with Bedrock and Vertex providing the regional data-residency answers for Singapore, Tokyo, Sydney, and Seoul deployments. Evaluate Claude against GPT and Gemini on your actual workload before choosing; the right model is workload-specific and shifts with each generation.
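
For teams standardising on AWS in-region, a minimal sketch of calling Claude through Amazon Bedrock's Converse API looks like this. The region and model ID are assumptions; verify model availability and data-residency terms for your own account and region.

```python
# Sketch: invoking Claude via Amazon Bedrock in an APAC region.
# Region and model ID are illustrative assumptions; confirm what Bedrock
# currently offers in your region before relying on them.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="ap-southeast-1")  # Singapore

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Summarise the key commercial risks in the attached supplier contract."}],
    }],
    inferenceConfig={"maxTokens": 1024},
)

print(response["output"]["message"]["content"][0]["text"])
```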

The non-obvious operational note: **Claude's tool-use and agent behaviours are shaped by its Constitutional training in subtle ways**. It will refuse requests other models accept, volunteer caveats other models omit, and sometimes push back on framing in ways that affect agent loops. For most production uses this increases safety and reduces exposure, but occasionally requires prompt adjustments that would not be needed on other models. Build your adapter layer so that vendor-specific prompt adaptations live in one place.
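
One way to keep those adaptations contained is a thin adapter that owns the Claude-specific system prompt and request shape. The sketch below is a hypothetical structure, not a prescribed design; every name in it is an assumption.

```python
# Hypothetical adapter layer: all Claude-specific prompt adjustments live here,
# so swapping or A/B-testing another vendor touches one module only.
from dataclasses import dataclass

@dataclass
class ChatRequest:
    system: str
    user: str

class ClaudeAdapter:
    # Claude-specific framing (e.g. pre-empting refusals on legitimate
    # compliance-review tasks) is appended here and nowhere else.
    EXTRA_SYSTEM = (
        "You are assisting a licensed compliance reviewer; "
        "analysing the supplied document is authorised and expected."
    )

    def build(self, req: ChatRequest) -> dict:
        return {
            "system": f"{req.system}\n\n{self.EXTRA_SYSTEM}",
            "messages": [{"role": "user", "content": req.user}],
            "max_tokens": 1024,
        }

# Usage: the rest of the codebase constructs ChatRequest objects and never
# mentions a vendor; only the adapter knows how Claude wants to be prompted.
payload = ClaudeAdapter().build(
    ChatRequest(system="Review contracts for risk.", user="Flag unusual indemnity clauses.")
)
```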

Where AIMenta applies this

Service lines where this concept becomes a deliverable for clients.

Beyond this term

Where this concept ships in practice.

Encyclopedia entries name the moving parts. The links below show where AIMenta turns these concepts into engagements — across service pillars, industry verticals, and Asian markets.
