SOC 2 Type II
In progressAICPA · independent auditor engaged
Audit window opened Q1 2026. Type II report targeted for Q3 2026. Type I letter available on request.
Sector-specific AI playbooks across 10 industries we know cold.
View all industries →Security & governance · DPO-ready · 9 markets
Our clients hand us regulated data, internal procedures, and customer interactions. This page documents what we run, where data sits, how we govern models, and which obligations apply in your market. Write to [email protected] with anything missing.
The controls we run by default on every engagement. Specific environments (client tenants, on-prem deployments) extend these where needed.
TLS 1.3 enforced on all client-facing endpoints. Internal service-to-service traffic uses mutual TLS with short-lived certificates rotated daily.
AES-256 across all storage tiers. Customer-managed keys (CMK) are available for engagements that require them; key rotation runs at least annually.
SSO via SAML or OIDC for every staff and client account. Multi-factor authentication is mandatory; hardware keys are required for any role with production access.
Role-based access with quarterly attestation. Just-in-time elevation for production changes; access auto-expires after the change window closes.
All credentials live in a managed vault (HashiCorp Vault or equivalent). Static secrets are rejected at CI; pre-commit and pipeline scanners block accidental disclosure.
MDM-managed laptops with disk encryption and EDR. Production networks segregated from corporate; egress restricted to allow-listed destinations.
Encrypted backups at minimum daily. Restore drills run quarterly; we publish RTO/RPO targets per service tier in the security pack.
Dependency scanning on every commit. Container images rebuilt nightly. Annual independent penetration test; high-severity findings remediated within 30 days.
Immutable audit logs for staff actions, model calls, data access, and admin changes. Retention 13 months by default; longer where regulators require it.
You pick the region. Data stays there for the life of the engagement. Cross-border transfers require explicit written approval and the appropriate legal instrument for your jurisdiction.
| Market | Default residency | Provider regions | Cross-border instrument |
|---|---|---|---|
| Hong Kong | HK | AWS ap-east-1, Azure East Asia | PCPD-aligned consent + DPA |
| Singapore | SG | AWS ap-southeast-1, GCP asia-southeast1 | PDPA Transfer Limitation + DPA |
| Japan | JP | AWS ap-northeast-1, Azure Japan East | APPI cross-border consent |
| Korea | KR (from Q3 2026) | AWS ap-northeast-2 | PIPA cross-border notification |
| Mainland China | CN (engagement-specific) | Per-engagement assessment with on-shore partner | CAC Standard Contract or Security Assessment |
| Taiwan, Malaysia, Vietnam, Indonesia | SG (default), local on request | AWS ap-southeast-1 / ap-southeast-3 | Local-law DPA + transfer clauses |
Default region is set by the market in which the contracting entity is incorporated. Region elections are recorded in the Master Services Agreement and audited at engagement close.
A discipline, not a slide. Every production model passes through a documented selection, evaluation, and monitoring loop owned by our research lead.
We score candidate models against six axes: task accuracy on a client-specific eval, latency at expected concurrency, total cost per 1K calls at projected volume, data-handling posture, supplier durability, and language coverage for the markets in scope. The rubric is recorded in the engagement runbook so a future engineer can replay the choice.
Pre-deployment: full evaluation set passes before any production cutover. In production: a sampled regression eval runs on every model or prompt change. Quarterly: drift review against a held-out gold set. Findings feed a public-to-the-client model-card per workflow.
Workflows that produce factual outputs are wired to authoritative sources via retrieval, with citations on every response. Outputs without sufficient grounding return a refusal rather than a guess. The grounding-rate floor is set per workflow during scoping and tracked as a primary KPI.
Untrusted inputs (documents, web content, user messages) are isolated from instructions through structured templating and content-source tagging. Tool-use is scoped to least-privilege permissions per session. A red-team test set runs against every release; new attack patterns from public research are added on a rolling basis.
Client data is never used to train shared models. We negotiate equivalent terms with every model provider in the loop and confirm them in the engagement DPA. Where fine-tuning is part of an engagement, the resulting weights are owned by the client and isolated to their environment.
Every model invocation logs: model and version, prompt template hash, input length, retrieved sources, output, latency, cost, and the human reviewer when one is in the loop. Logs are retained for 13 months by default and made available to client auditors on request.
Status is shown honestly. In-progress work is labelled as such; we do not display a badge before the auditor signs the report.
AICPA · independent auditor engaged
Audit window opened Q1 2026. Type II report targeted for Q3 2026. Type I letter available on request.
ISO accredited body (selection in progress)
Statement of Applicability drafted. Stage 1 audit booked for Q4 2026. Surveillance cycle from 2027.
Office of the Privacy Commissioner for Personal Data (PCPD)
Operating practices reviewed against the six Data Protection Principles. PCPD breach-notification workflow in place.
Personal Data Protection Commission (PDPC)
Data Protection Officer appointed. Annual PDPA refresh and DNC obligations integrated into engagement onboarding.
Personal Information Protection Commission (PPC)
Japanese-resident data stays in JP region by default. Cross-border transfers governed by per-engagement consent and PPC notification.
Cyberspace Administration of China (CAC)
For Mainland engagements we operate under a per-project compliance plan, including CAC standard contract or security assessment as required.
European Data Protection Board guidance
Where engagements involve EU data subjects we sign Standard Contractual Clauses and apply Article 28 processor obligations.
US National Institute of Standards and Technology
Our model-governance practice maps to the NIST AI RMF Govern, Map, Measure, Manage functions. Self-assessment refreshed annually.
Seven principles aligned with the NIST AI Risk Management Framework and the OECD AI Principles. They translate into engagement controls — they are not a poster on the wall.
01 · Human accountability
Each production AI workflow has a named accountable executive on the client side and a named delivery lead on ours. Decisions and incidents route to them without ambiguity.
02 · Explainability in context
Outputs that affect a customer or employee carry the inputs and sources used. Reviewers see why the model said what it said before they sign off.
03 · Fairness assessment
Workflows that touch hiring, lending, customer pricing, or service routing run a structured fairness assessment against protected attributes relevant to the market.
04 · Safety & refusals
Models are tuned to refuse when grounding is insufficient, when the request is out of scope, or when policy applies. Refusals are logged so the rate can be monitored.
05 · Privacy by default
We collect only the personal data the workflow needs, redact at the boundary where possible, and store in the client's elected region. Retention windows are set per data class.
06 · Security through the lifecycle
Every workflow gets a threat model covering data flows, prompt injection, model-output abuse, and supply-chain risk. The model is reviewed at every major change.
07 · Continuous monitoring
We instrument quality, cost, latency, refusal rate, and grounding rate from day one. A weekly review surfaces drift early so the client can act before users notice. A quarterly governance review pulls everything together for the executive owner.
Notification SLAs differ by regulator. Our internal targets meet or beat each market's statutory window so a client never learns about an incident from the news.
RTO
Recovery Time Objective for tier-1 production services.
RPO
Recovery Point Objective for tier-1 client data.
Initial client notice
From confirmed incident to the client's named security contact.
| Market | Regulator | Statutory notification | Our internal target |
|---|---|---|---|
| Hong Kong | PCPD | Voluntary, as soon as practicable | Within 48 hours of confirmation |
| Singapore | PDPC | 72 hours (notifiable breach) | Within 48 hours of confirmation |
| Japan | PPC | Promptly; report within 30 days | Initial notice within 72 hours |
| Korea | PIPC | 72 hours | Within 48 hours of confirmation |
| Mainland China | CAC | Within 8 hours for major incidents | Within 6 hours of confirmation |
| EU subjects (where applicable) | Lead supervisory authority | 72 hours (GDPR Art. 33) | Within 48 hours of confirmation |
Statutory references summarised for reading; full obligations are set out in the engagement DPA. The security pack download contains our full incident-response runbook and contact tree.
We will list awards here only after they issue. Placeholders below show where we expect early recognition based on submitted entries.
Industry analyst recognition program (target 2026)
Submission window opens Q3 2026. Listed here as a placeholder until results publish; we will not display a badge until issued.
Regional technology media program (target 2026)
Eligibility met from Q1 2026. Inclusion list publishes annually in Q4. Held as a placeholder pending result.
Singapore Business Federation program (target 2026)
Application drafted for the 2026 cycle. Held as a placeholder until results are published by SBF.
HK industry awards program (target 2026)
Targeted for the 2026 round. We will replace this entry with the issuer link once results are confirmed.
Domestic Japanese trade association (target 2027)
Eligibility window opens Q1 2027 once Tokyo office passes the 12-month operating threshold.
Related reading and contacts
Security and compliance posture connects to the services we deliver and the markets we deliver in. Use these links to dig deeper.
In their words
“AIMenta's team understood our regulatory constraints from day one. We were live with the first AI workflow in eleven weeks — faster than our internal IT had estimated for just the scoping phase.”
“We had tried two vendors before AIMenta. The difference was that they built what we specified, not what they wanted to sell us. The implementation team actually read our ops manuals.”
“The model accuracy was not the impressive part — it was how quickly the team identified the root cause when it drifted. Two hours to diagnosis, not two weeks. That is what operational AI looks like.”
“The model card deliverable for each workflow made our audit committee comfortable in a way that no consulting deck ever had. AIMenta gave us governance artefacts, not slide decks.”
“Sixty staff completed the AI for Financial Services program in six weeks. Three months later, two of them had shipped internal tools to production without IT involvement. That is the outcome we paid for.”
“We operate across seven business units with very different infrastructure maturity. AIMenta built a reference architecture that the weakest unit could adopt, not just the most advanced one. That is what enterprise-wide AI actually requires.”
Procurement reviewing AIMenta? We will pre-fill your security questionnaire, walk through the architecture, and align on residency and DPA terms before any engagement starts.