Selection Bias
Selection Bias occurs when the process by which data is collected systematically distorts outcomes, undermining the validity of any AI system trained on that data. Classic forms include survivorship bias (studying only successful outcomes), volunteer bias (including only those who self-select to participate), and exclusion bias (filtering out cases in a way that harms generalizability). A famous example is Abraham Wald's WWII analysis of bullet holes in returning planes: he correctly reasoned that the damage on surviving aircraft marked places a plane could be hit and still return, so armour belonged where the survivors showed no holes. Modern AI failures follow the same pattern when models trained on filtered, idealized data break down in messy production environments. In machine learning, selection bias appears when training data filters out hard cases, when test sets are easier than the inputs real users actually produce, or when monitoring flags only certain types of errors. AI governance frameworks therefore require careful documentation of inclusion and exclusion criteria, supporting AI compliance, AI risk management, and responsible AI evaluation by making selection-bias risks visible to reviewers and auditors.
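To make the machine-learning case concrete, the short Python sketch below simulates a pipeline whose data collection keeps only "clear-cut" cases and shows how a filtered test set overstates real-world accuracy. The synthetic data, the selection threshold, and all variable names are illustrative assumptions, not drawn from any particular system.

```python
# Hypothetical sketch of selection bias in an ML pipeline.
# All data is synthetic; thresholds and names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Full population: two features; the label depends on both plus noise.
X = rng.normal(size=(10_000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=10_000) > 0).astype(int)

# Selection step: data collection keeps only "clear-cut" cases far from the
# decision boundary -- analogous to filtering out hard examples upstream.
margin = np.abs(X[:, 0] + 0.5 * X[:, 1])
selected = margin > 1.0

# Train and evaluate on the filtered sample (what the pipeline sees) ...
model = LogisticRegression().fit(X[selected], y[selected])
filtered_acc = accuracy_score(y[selected], model.predict(X[selected]))

# ... then evaluate on the unfiltered population (what production sees).
production_acc = accuracy_score(y, model.predict(X))

print(f"accuracy on filtered (selected) data: {filtered_acc:.3f}")
print(f"accuracy on full population:          {production_acc:.3f}")
# The gap between the two numbers is the cost of the selection bias:
# the filtered test set overstates how well the model generalizes.
```

Running the sketch typically shows near-perfect accuracy on the filtered sample but noticeably lower accuracy on the full population, which is exactly the kind of gap that documented inclusion and exclusion criteria are meant to surface.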
Centralpoint Surfaces Real-World AI Behaviour, Not Just Test-Set Performance: Oxcyon's Centralpoint AI Governance Platform observes every model interaction in production across OpenAI, Gemini, Llama, and embedded options. Centralpoint meters all consumption, keeps prompts and skills on-prem, and embeds behaviour-monitored chatbots into your portals with a single line of JavaScript.
Related Keywords:
Selection Bias