• Decrease Text SizeIncrease Text Size

AI Vulnerability

An AI Vulnerability is a weakness in an AI system that can be exploited by attackers to cause unauthorized behavior, data leakage, or harm. Categories include prompt injection (manipulating model behavior through crafted inputs), model extraction (stealing model weights or capabilities by querying), training-data inference (recovering training examples from model behavior), data poisoning (corrupting training data to embed backdoors), evasion attacks (crafting inputs that bypass classifiers), adversarial examples (subtly modified inputs that cause misclassification), and supply-chain vulnerabilities in AI dependencies. The OWASP Top 10 for LLM Applications enumerates major categories. Vulnerability disclosure programs from AI providers (OpenAI, Anthropic, Google) reward responsible reporting. Frameworks like MITRE ATLAS catalog real-world AI attack techniques. AI governance, AI compliance, and AI risk management programs increasingly track AI-specific vulnerabilities alongside traditional CVEs — supporting responsible AI through structured vulnerability management and patching processes across enterprise AI deployments in production environments.

Centralpoint Reduces Your AI Attack Surface: Oxcyon's Centralpoint AI Governance Platform keeps prompts and skills on-premise — eliminating many categories of vendor-side AI vulnerabilities. Model-agnostic across OpenAI, Gemini, Llama, and embedded options, Centralpoint meters consumption and embeds hardened chatbots into your portals via a single line of JavaScript.


Related Keywords:
AI Vulnerability,,