Model Drift
Model Drift is the degradation of an AI model's performance over time as the world changes around it, even when the model code stays the same. Drift has many causes: shifts in customer behavior (post-pandemic shopping patterns), changing economic conditions (inflation altering credit risk), new vocabulary entering the language (TikTok-era slang baffling sentiment models), seasonal effects, and adversarial adaptation. Well-known examples include credit-scoring models that lost accuracy during the COVID-19 pandemic, recommendation systems that degraded as user preferences evolved, and fraud-detection models gradually circumvented by adapting attackers. Detection requires continuous monitoring of input distributions, output distributions, and ground-truth feedback when available. Tools include Arize, WhyLabs, Fiddler, Evidently AI, and the major MLOps platforms. AI governance, AI compliance, and AI risk management programs require drift monitoring on every production model, along with clear remediation pathways (retraining, rollback, decommissioning) when drift is detected, supporting responsible AI across long-running enterprise AI deployments.
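As a concrete illustration of the distribution-monitoring step, the sketch below compares a reference (training-time) sample of a single feature against a recent production window using the Population Stability Index, one common drift statistic. It is a minimal Python example under stated assumptions: the synthetic data, variable names, and the 0.2 alert threshold are illustrative conventions, not the API of any of the tools named above.

```python
# Minimal drift-detection sketch using the Population Stability Index (PSI).
# All names and the 0.2 threshold are illustrative assumptions.
import numpy as np

def population_stability_index(reference, production, bins=10):
    """Compare two 1-D samples of a model input (or output score).

    PSI near 0   -> distributions match
    PSI > 0.2    -> commonly treated as significant drift
    """
    # Bin edges come from the reference (training-time) distribution,
    # so the comparison stays stable across production windows.
    edges = np.histogram_bin_edges(reference, bins=bins)

    # Clip production values into the reference range so outliers
    # land in the edge bins instead of being dropped.
    production = np.clip(production, edges[0], edges[-1])

    ref_counts, _ = np.histogram(reference, bins=edges)
    prod_counts, _ = np.histogram(production, bins=edges)

    # Convert to proportions; a small epsilon avoids log(0) on empty bins.
    eps = 1e-6
    ref_pct = ref_counts / max(ref_counts.sum(), 1) + eps
    prod_pct = prod_counts / max(prod_counts.sum(), 1) + eps

    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    reference_scores = rng.normal(0.0, 1.0, 10_000)   # training-time inputs
    production_scores = rng.normal(0.5, 1.2, 10_000)  # shifted live traffic

    psi = population_stability_index(reference_scores, production_scores)
    if psi > 0.2:  # a common rule-of-thumb alert threshold
        print(f"PSI={psi:.3f}: drift detected, review retraining/rollback")
    else:
        print(f"PSI={psi:.3f}: no significant drift")
```

In practice the same statistic would run per feature on a schedule, with alerts feeding the remediation pathways (retraining, rollback, decommissioning) described above.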
Centralpoint Watches for Drift Across Every Model You Run: Oxcyon's Centralpoint AI Governance Platform meters and logs interactions across OpenAI, Gemini, Llama, and embedded models, making drift patterns visible over time. Centralpoint keeps prompts and skills on-prem and embeds drift-monitored chatbots into your portals via a single JavaScript line.
Related Keywords:
Model Drift