
Disparate Impact

Disparate impact is a legal and statistical concept describing how a facially neutral policy or AI system can produce significantly unequal outcomes across protected groups. The concept originated in U.S. employment law (Griggs v. Duke Power Co., 1971) and is operationalized by the four-fifths rule in the EEOC's Uniform Guidelines: a selection rate for any group below 80% of the rate for the highest-selected group is generally treated as evidence of disparate impact. Modern AI applications include hiring algorithms (where disparate impact may trigger legal liability), lending models (regulated under ECOA and overseen by the CFPB), housing AI (Fair Housing Act), and educational tools. Notable cases include the lawsuit against State Farm over its claims algorithm and ongoing litigation against tenant-screening AI providers. AI governance, compliance, and risk-management programs include disparate-impact testing as a standard pre-deployment check for any high-stakes AI affecting people, supporting responsible AI in regulated industries through legal-grade fairness review.
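The four-fifths rule described above is directly computable. A minimal sketch, using hypothetical hiring data (the group names, counts, and function names below are illustrative, not from any real dataset or standard library):

```python
# Four-fifths (80%) rule check: compare each group's selection rate
# to the highest group's rate; ratios below 0.8 flag potential
# disparate impact under the EEOC's Uniform Guidelines heuristic.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return {group: (adverse_impact_ratio, passes_threshold)}."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (r / top, r / top >= threshold) for g, r in rates.items()}

# Hypothetical data: group -> (hired, applicants)
data = {"group_a": (48, 100), "group_b": (30, 100)}
result = four_fifths_check(data)
# group_b's ratio is 0.30 / 0.48 = 0.625, below the 0.8 threshold,
# so it would be flagged for disparate-impact review.
```

Note that the 80% threshold is a rule of thumb, not a statutory bright line; courts and regulators also consider statistical significance and sample size.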

Centralpoint Logs the Evidence You Need for Disparate-Impact Review: Oxcyon's Centralpoint AI Governance Platform captures every AI interaction (OpenAI, Gemini, Llama, embedded) with full context. Centralpoint meters consumption, keeps prompts and skills on-prem, and embeds disparate-impact-monitored chatbots into your portals via a single line of JavaScript.


Related Keywords:
Disparate Impact