Skip to main content
Singapore
AIMenta

Security & governance · DPO-ready · 9 markets

Trust is the engagement.
Not a checkbox at the end of it.

Our clients hand us regulated data, internal procedures, and customer interactions. This page documents what we run, where data sits, how we govern models, and which obligations apply in your market. Write to [email protected] with anything missing.

Security posture

The controls we run by default on every engagement. Specific environments (client tenants, on-prem deployments) extend these where needed.

Encryption in transit

TLS 1.3 enforced on all client-facing endpoints. Internal service-to-service traffic uses mutual TLS with short-lived certificates rotated daily.

Encryption at rest

AES-256 across all storage tiers. Customer-managed keys (CMK) are available for engagements that require them; key rotation runs at least annually.

Identity & MFA

SSO via SAML or OIDC for every staff and client account. Multi-factor authentication is mandatory; hardware keys are required for any role with production access.

Least privilege access

Role-based access with quarterly attestation. Just-in-time elevation for production changes; access auto-expires after the change window closes.

Secrets management

All credentials live in a managed vault (HashiCorp Vault or equivalent). Static secrets are rejected at CI; pre-commit and pipeline scanners block accidental disclosure.

Endpoint & network

MDM-managed laptops with disk encryption and EDR. Production networks segregated from corporate; egress restricted to allow-listed destinations.

Backups & recovery

Encrypted backups at minimum daily. Restore drills run quarterly; we publish RTO/RPO targets per service tier in the security pack.

Vulnerability management

Dependency scanning on every commit. Container images rebuilt nightly. Annual independent penetration test; high-severity findings remediated within 30 days.

Audit logging

Immutable audit logs for staff actions, model calls, data access, and admin changes. Retention 13 months by default; longer where regulators require it.

Data residency

You pick the region. Data stays there for the life of the engagement. Cross-border transfers require explicit written approval and the appropriate legal instrument for your jurisdiction.

Data residency defaults, provider regions, and cross-border instruments by market.
Market Default residency Provider regions Cross-border instrument
Hong Kong HK AWS ap-east-1, Azure East Asia PCPD-aligned consent + DPA
Singapore SG AWS ap-southeast-1, GCP asia-southeast1 PDPA Transfer Limitation + DPA
Japan JP AWS ap-northeast-1, Azure Japan East APPI cross-border consent
Korea KR (from Q3 2026) AWS ap-northeast-2 PIPA cross-border notification
Mainland China CN (engagement-specific) Per-engagement assessment with on-shore partner CAC Standard Contract or Security Assessment
Taiwan, Malaysia, Vietnam, Indonesia SG (default), local on request AWS ap-southeast-1 / ap-southeast-3 Local-law DPA + transfer clauses

Default region is set by the market in which the contracting entity is incorporated. Region elections are recorded in the Master Services Agreement and audited at engagement close.

Model governance

A discipline, not a slide. Every production model passes through a documented selection, evaluation, and monitoring loop owned by our research lead.

Model selection rubric

We score candidate models against six axes: task accuracy on a client-specific eval, latency at expected concurrency, total cost per 1K calls at projected volume, data-handling posture, supplier durability, and language coverage for the markets in scope. The rubric is recorded in the engagement runbook so a future engineer can replay the choice.

Evaluation cadence

Pre-deployment: full evaluation set passes before any production cutover. In production: a sampled regression eval runs on every model or prompt change. Quarterly: drift review against a held-out gold set. Findings feed a public-to-the-client model-card per workflow.

Hallucination & grounding

Workflows that produce factual outputs are wired to authoritative sources via retrieval, with citations on every response. Outputs without sufficient grounding return a refusal rather than a guess. The grounding-rate floor is set per workflow during scoping and tracked as a primary KPI.

Prompt-injection defenses

Untrusted inputs (documents, web content, user messages) are isolated from instructions through structured templating and content-source tagging. Tool-use is scoped to least-privilege permissions per session. A red-team test set runs against every release; new attack patterns from public research are added on a rolling basis.

No training on client data

Client data is never used to train shared models. We negotiate equivalent terms with every model provider in the loop and confirm them in the engagement DPA. Where fine-tuning is part of an engagement, the resulting weights are owned by the client and isolated to their environment.

Audit trail

Every model invocation logs: model and version, prompt template hash, input length, retrieved sources, output, latency, cost, and the human reviewer when one is in the loop. Logs are retained for 13 months by default and made available to client auditors on request.

Compliance & certifications

Status is shown honestly. In-progress work is labelled as such; we do not display a badge before the auditor signs the report.

SOC 2 Type II

In progress

AICPA · independent auditor engaged

Audit window opened Q1 2026. Type II report targeted for Q3 2026. Type I letter available on request.

ISO/IEC 27001:2022

Scoping

ISO accredited body (selection in progress)

Statement of Applicability drafted. Stage 1 audit booked for Q4 2026. Surveillance cycle from 2027.

Hong Kong PDPO compliant

Achieved

Office of the Privacy Commissioner for Personal Data (PCPD)

Operating practices reviewed against the six Data Protection Principles. PCPD breach-notification workflow in place.

Singapore PDPA compliant

Achieved

Personal Data Protection Commission (PDPC)

Data Protection Officer appointed. Annual PDPA refresh and DNC obligations integrated into engagement onboarding.

Japan APPI compliant

Achieved

Personal Information Protection Commission (PPC)

Japanese-resident data stays in JP region by default. Cross-border transfers governed by per-engagement consent and PPC notification.

China PIPL — engagement-specific

Aligned

Cyberspace Administration of China (CAC)

For Mainland engagements we operate under a per-project compliance plan, including CAC standard contract or security assessment as required.

GDPR-aligned for EU subjects

Aligned

European Data Protection Board guidance

Where engagements involve EU data subjects we sign Standard Contractual Clauses and apply Article 28 processor obligations.

NIST AI Risk Management Framework

Aligned

US National Institute of Standards and Technology

Our model-governance practice maps to the NIST AI RMF Govern, Map, Measure, Manage functions. Self-assessment refreshed annually.

Responsible AI commitments

Seven principles aligned with the NIST AI Risk Management Framework and the OECD AI Principles. They translate into engagement controls — they are not a poster on the wall.

01 · Human accountability

A named owner for every system.

Each production AI workflow has a named accountable executive on the client side and a named delivery lead on ours. Decisions and incidents route to them without ambiguity.

02 · Explainability in context

Explanations users can act on.

Outputs that affect a customer or employee carry the inputs and sources used. Reviewers see why the model said what it said before they sign off.

03 · Fairness assessment

Disparate-impact testing before launch.

Workflows that touch hiring, lending, customer pricing, or service routing run a structured fairness assessment against protected attributes relevant to the market.

04 · Safety & refusals

Refuse rather than guess.

Models are tuned to refuse when grounding is insufficient, when the request is out of scope, or when policy applies. Refusals are logged so the rate can be monitored.

05 · Privacy by default

Minimum data, regional storage.

We collect only the personal data the workflow needs, redact at the boundary where possible, and store in the client's elected region. Retention windows are set per data class.

06 · Security through the lifecycle

Threat modelling before code.

Every workflow gets a threat model covering data flows, prompt injection, model-output abuse, and supply-chain risk. The model is reviewed at every major change.

07 · Continuous monitoring

A model in production is a system, not a project.

We instrument quality, cost, latency, refusal rate, and grounding rate from day one. A weekly review surfaces drift early so the client can act before users notice. A quarterly governance review pulls everything together for the executive owner.

Incident response & breach disclosure

Notification SLAs differ by regulator. Our internal targets meet or beat each market's statutory window so a client never learns about an incident from the news.

RTO

4 hours

Recovery Time Objective for tier-1 production services.

RPO

1 hour

Recovery Point Objective for tier-1 client data.

Initial client notice

24 hours

From confirmed incident to the client's named security contact.

Statutory incident-notification deadlines and AIMenta internal targets by market and regulator.
Market Regulator Statutory notification Our internal target
Hong Kong PCPD Voluntary, as soon as practicable Within 48 hours of confirmation
Singapore PDPC 72 hours (notifiable breach) Within 48 hours of confirmation
Japan PPC Promptly; report within 30 days Initial notice within 72 hours
Korea PIPC 72 hours Within 48 hours of confirmation
Mainland China CAC Within 8 hours for major incidents Within 6 hours of confirmation
EU subjects (where applicable) Lead supervisory authority 72 hours (GDPR Art. 33) Within 48 hours of confirmation

Statutory references summarised for reading; full obligations are set out in the engagement DPA. The security pack download contains our full incident-response runbook and contact tree.

Recognition

We will list awards here only after they issue. Placeholders below show where we expect early recognition based on submitted entries.

Asia Mid-Market AI Adoption Partner of the Year — TBD

Pending — placeholder

Industry analyst recognition program (target 2026)

Submission window opens Q3 2026. Listed here as a placeholder until results publish; we will not display a badge until issued.

Top 50 AI Services Firms in APAC — TBD

Pending — placeholder

Regional technology media program (target 2026)

Eligibility met from Q1 2026. Inclusion list publishes annually in Q4. Held as a placeholder pending result.

Singapore SME Innovation Recognition — TBD

Pending — placeholder

Singapore Business Federation program (target 2026)

Application drafted for the 2026 cycle. Held as a placeholder until results are published by SBF.

Hong Kong Smart Business Awards — TBD

Pending — placeholder

HK industry awards program (target 2026)

Targeted for the 2026 round. We will replace this entry with the issuer link once results are confirmed.

Japan Enterprise AI Adoption — TBD

Pending — placeholder

Domestic Japanese trade association (target 2027)

Eligibility window opens Q1 2027 once Tokyo office passes the 12-month operating threshold.

Related reading and contacts

Where governance meets delivery

Security and compliance posture connects to the services we deliver and the markets we deliver in. Use these links to dig deeper.

In their words

What clients say about working with AIMenta.

“AIMenta's team understood our regulatory constraints from day one. We were live with the first AI workflow in eleven weeks — faster than our internal IT had estimated for just the scoping phase.”

Chief Digital Officer, HK Insurance Group

“We had tried two vendors before AIMenta. The difference was that they built what we specified, not what they wanted to sell us. The implementation team actually read our ops manuals.”

VP Operations, Singapore Logistics Co.

“The model accuracy was not the impressive part — it was how quickly the team identified the root cause when it drifted. Two hours to diagnosis, not two weeks. That is what operational AI looks like.”

Head of Manufacturing IT, Japan Industrial Group

“The model card deliverable for each workflow made our audit committee comfortable in a way that no consulting deck ever had. AIMenta gave us governance artefacts, not slide decks.”

CISO, Korea Fintech Company

“Sixty staff completed the AI for Financial Services program in six weeks. Three months later, two of them had shipped internal tools to production without IT involvement. That is the outcome we paid for.”

Head of Learning & Development, Malaysian Commercial Bank

“We operate across seven business units with very different infrastructure maturity. AIMenta built a reference architecture that the weakest unit could adopt, not just the most advanced one. That is what enterprise-wide AI actually requires.”

Chief Technology Officer, Indonesia Conglomerate

Talk to our security team.

Procurement reviewing AIMenta? We will pre-fill your security questionnaire, walk through the architecture, and align on residency and DPA terms before any engagement starts.