
Algorithmic Bias

Algorithmic Bias is systematic, repeatable unfairness in an AI system's output that disadvantages certain groups — often along lines of race, gender, age, disability, or socioeconomic status. Well-documented cases include Amazon's experimental recruiting AI that systematically downgraded women's resumes (scrapped in 2018), racial disparities in the COMPAS criminal recidivism tool, healthcare risk-scoring algorithms that under-prioritized Black patients, and facial-recognition systems with dramatically higher error rates on darker-skinned women (documented in the Gender Shades study). Bias can enter through training data, labels, feature engineering, optimization choices, or the deployment context itself. Detecting it requires testing model outputs across demographic groups using fairness metrics such as demographic parity, equalized odds, and disparate-impact ratios. AI governance, AI compliance, and AI ethics frameworks make bias detection and mitigation core responsibilities of every responsible AI program — and the EU AI Act, NYC's automated hiring law (Local Law 144), and other regulations impose specific bias-review obligations on high-risk enterprise AI deployments.
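As a minimal sketch of how two of the metrics named above can be computed, the following uses illustrative toy predictions for two hypothetical demographic groups (the data and function names are assumptions for this example, not from any real system):

```python
# Illustrative sketch: demographic parity difference and disparate-impact
# ratio for a binary classifier's outputs across two groups.
# Data is toy data; 1 = favorable outcome (e.g., resume advanced).

def selection_rate(preds):
    """Fraction of favorable (positive) predictions in a group."""
    return sum(preds) / len(preds)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate = 6/8 = 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate = 3/8 = 0.375

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# Demographic parity difference: absolute gap in selection rates (0 = parity).
dp_diff = abs(rate_a - rate_b)

# Disparate-impact ratio: lower rate divided by higher rate.
# The common "four-fifths rule" flags ratios below 0.8 for review.
di_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"demographic parity difference: {dp_diff:.3f}")  # 0.375
print(f"disparate-impact ratio: {di_ratio:.3f}")        # 0.500
```

Here the ratio of 0.5 falls well below the four-fifths (0.8) threshold, so this toy model would be flagged for a bias review; equalized odds would additionally require comparing true-positive and false-positive rates per group against ground-truth labels.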

Centralpoint Helps You Detect Bias Patterns Across Models: Oxcyon's Centralpoint AI Governance Platform logs every AI interaction (OpenAI, Gemini, Llama, embedded), giving teams the visibility they need to catch bias before it scales. Centralpoint meters consumption, keeps prompts and skills on-prem, and embeds bias-aware chatbots into your portals via a single JavaScript line.


Related Keywords:
Algorithmic Bias