AIMenta

Algorithmic Fairness

A research area concerned with whether ML systems produce equitable outcomes across protected groups, and the mathematical and policy choices involved.

Algorithmic fairness has produced a precise vocabulary for asking 'fair to whom, and in what way?' The major formal definitions are: demographic parity (equal acceptance rates across groups), equal opportunity (equal true-positive rates), equalized odds (equal true-positive AND false-positive rates), calibration within groups (predicted probabilities mean what they say in each group), and individual fairness (similar individuals receive similar predictions).
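A minimal sketch of how the first three group criteria are measured in practice: compute acceptance rate, true-positive rate, and false-positive rate per group, then compare across groups. The data and group labels here are invented for illustration.

```python
# Illustrative sketch: per-group rates behind demographic parity,
# equal opportunity, and equalized odds. Data is made up.

def group_rates(y_true, y_pred, groups, group):
    """Return (acceptance rate, TPR, FPR) for one group."""
    rows = [(t, p) for t, p, g in zip(y_true, y_pred, groups) if g == group]
    accept = sum(p for _, p in rows) / len(rows)   # demographic parity compares this
    pos = [p for t, p in rows if t == 1]
    neg = [p for t, p in rows if t == 0]
    tpr = sum(pos) / len(pos)                      # equal opportunity compares this
    fpr = sum(neg) / len(neg)                      # equalized odds adds this
    return accept, tpr, fpr

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(group_rates(y_true, y_pred, groups, "a"))
print(group_rates(y_true, y_pred, groups, "b"))
```

Demographic parity holds if the first number matches across groups; equal opportunity compares the second; equalized odds requires both the second and third to match.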

The inconvenient truth: these definitions are mutually incompatible in most realistic settings. The impossibility theorems of Chouldechova and of Kleinberg, Mullainathan, and Raghavan show that calibration and the error-rate criteria cannot all hold simultaneously unless the groups share the same base rate or the predictor is perfect. Fairness becomes a values-laden choice, not a technical optimization, and that choice should be documented with stakeholder input.
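The tension can be seen in a toy example: take a single score that is well calibrated for both groups and apply one shared threshold. Whenever the groups' base rates differ, acceptance rates differ too, so demographic parity fails. The numbers below are invented purely to show the mechanism.

```python
# Toy illustration of the impossibility tension: one calibrated score,
# one shared threshold, unequal base rates -> unequal acceptance rates.
# Among people scored s, a fraction s are truly positive (calibration).
# Each entry is (score, fraction of the group with that score).

group_a = [(0.2, 0.5), (0.8, 0.5)]   # base rate = 0.2*0.5 + 0.8*0.5 = 0.50
group_b = [(0.2, 0.8), (0.8, 0.2)]   # base rate = 0.2*0.8 + 0.8*0.2 = 0.32

def accept_rate(dist, threshold=0.5):
    """Fraction of the group accepted at the given score threshold."""
    return sum(frac for score, frac in dist if score >= threshold)

print(accept_rate(group_a))  # 0.5
print(accept_rate(group_b))  # 0.2 -> demographic parity violated
```

Forcing equal acceptance rates here would require group-specific thresholds, which in turn breaks calibration or the error-rate criteria: the choice of which definition to satisfy is unavoidable.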

Production fairness work involves disparate-impact testing on training data, bias mitigation techniques (pre-processing, in-processing, post-processing), continuous monitoring across slices, and a written rationale for the chosen fairness criterion.
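One common disparate-impact test is the "four-fifths rule" heuristic: flag the system if the lowest group acceptance rate falls below 80% of the highest. A minimal sketch, with invented group names and rates:

```python
# Sketch of a disparate-impact check (four-fifths rule heuristic).
# Group names and acceptance rates are illustrative only.

def disparate_impact_ratio(accept_rates):
    """Ratio of the lowest to the highest group acceptance rate."""
    return min(accept_rates.values()) / max(accept_rates.values())

rates = {"group_a": 0.50, "group_b": 0.30}
ratio = disparate_impact_ratio(rates)
flagged = ratio < 0.8  # four-fifths rule threshold
print(f"ratio = {ratio:.2f}, flagged = {flagged}")
```

A flagged result would then feed the mitigation step: pre-processing reweights the data, in-processing adds a fairness constraint to training, and post-processing adjusts decision thresholds per group.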

Where AIMenta applies this

Service lines where this concept becomes a deliverable for clients.

Beyond this term

Where this concept ships in practice.

Encyclopedia entries name the moving parts. The links below show where AIMenta turns these concepts into engagements — across service pillars, industry verticals, and Asian markets.

Continue with All terms · AI tools · Insights · Case studies