Equalized Odds

Equalized Odds is a fairness metric requiring that an AI system's error rates, both false positives and false negatives, be equal across protected groups. Formalized by Hardt, Price, and Srebro in 2016, it is a more stringent criterion than demographic parity because it conditions on the true outcome: groups are compared only among individuals who share the same actual label. For example, an equalized-odds-compliant medical AI would miss a condition at the same rate in male and female patients who actually have it. The metric is most appropriate when accurate diagnosis or prediction matters equally for every group.

Like all fairness metrics, equalized odds cannot always be satisfied alongside other fairness goals: the well-known COMPAS analysis showed that calibration and equal error rates cannot hold simultaneously when base rates differ between groups. Toolkits such as Fairlearn, AI Fairness 360, and Themis ML implement equalized-odds analysis. AI governance and compliance frameworks recommend documenting which fairness criterion applies and why, supporting AI risk management and responsible AI through transparent metric selection across enterprise AI.
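The check described above can be sketched in a few lines: compute the false-positive and false-negative rate separately for each group, then compare the gaps. This is a minimal illustration on made-up toy data (the labels, predictions, and group names are hypothetical), not a substitute for a full audit toolkit such as Fairlearn.

```python
from collections import defaultdict

def group_error_rates(y_true, y_pred, groups):
    """Per-group false-positive rate (FPR) and false-negative rate (FNR)."""
    counts = defaultdict(lambda: {"fp": 0, "tn": 0, "fn": 0, "tp": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 0 and p == 1:
            counts[g]["fp"] += 1      # predicted positive, actually negative
        elif t == 0 and p == 0:
            counts[g]["tn"] += 1
        elif t == 1 and p == 0:
            counts[g]["fn"] += 1      # predicted negative, actually positive
        else:
            counts[g]["tp"] += 1
    rates = {}
    for g, c in counts.items():
        neg = c["fp"] + c["tn"]       # all actual negatives in group g
        pos = c["fn"] + c["tp"]       # all actual positives in group g
        rates[g] = {
            "fpr": c["fp"] / neg if neg else 0.0,
            "fnr": c["fn"] / pos if pos else 0.0,
        }
    return rates

# Hypothetical toy data for two groups "a" and "b"
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = group_error_rates(y_true, y_pred, groups)
# Equalized odds holds only if both gaps are (close to) zero
fpr_gap = abs(rates["a"]["fpr"] - rates["b"]["fpr"])
fnr_gap = abs(rates["a"]["fnr"] - rates["b"]["fnr"])
print(rates)
print(fpr_gap, fnr_gap)
```

Here group "a" has FPR = FNR = 0.5 while group "b" has zero error on both, so both gaps are 0.5 and the classifier violates equalized odds on this toy data.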

Centralpoint Logs the Evidence Fairness Audits Require: Oxcyon's Centralpoint AI Governance Platform captures full per-interaction context across OpenAI, Gemini, Llama, and embedded models, so equalized-odds analyses become possible. Centralpoint meters consumption, keeps prompts and skills on-prem, and embeds audit-ready chatbots into your portals with a single line of JavaScript.


Related Keywords:
Equalized Odds