
Prohibited AI Practice

Prohibited AI Practices are AI uses banned outright under the EU AI Act because they pose unacceptable risk. The banned practices include:

- Manipulative AI that exploits vulnerabilities to cause harm
- Social scoring of individuals based on behaviour or personal characteristics
- Untargeted scraping of facial images to build recognition databases
- Emotion recognition in workplaces and schools (with limited exceptions)
- Biometric categorization based on sensitive characteristics
- Predictive policing based solely on profiling
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrowly defined exceptions)

Penalties for prohibited practices sit in the highest tier under the Act: up to EUR 35 million or 7% of global annual turnover, whichever is higher. Enforcement of these prohibitions began on 2 February 2025. Other jurisdictions have introduced similar prohibitions on specific high-risk practices. AI governance, AI compliance, and AI risk management programs must therefore screen every AI use case against the prohibited categories, making early-stage use-case review essential to responsible AI deployment in any global enterprise AI environment.
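The early-stage screening step described above can be sketched as a simple keyword check against the prohibited categories. This is an illustrative sketch only: the category names, keywords, and matching logic below are simplified assumptions for demonstration, not legal criteria from the Act's text.

```python
# Illustrative sketch of an early-stage use-case screen against the EU AI Act's
# prohibited categories. Keywords and matching logic are simplified assumptions,
# not a substitute for legal review of Article 5.

PROHIBITED_CATEGORIES = {
    "manipulative_ai": ["exploit vulnerabilities", "subliminal manipulation"],
    "social_scoring": ["social scoring"],
    "facial_scraping": ["untargeted scraping", "facial image database"],
    "emotion_recognition": ["emotion recognition in workplace",
                            "emotion recognition in school"],
    "biometric_categorization": ["biometric categorization"],
    "predictive_policing": ["predictive policing by profiling"],
    "realtime_biometric_id": ["real-time biometric identification in public"],
}

def screen_use_case(description: str) -> list[str]:
    """Return the prohibited categories that a use-case description matches."""
    text = description.lower()
    return [
        category
        for category, keywords in PROHIBITED_CATEGORIES.items()
        if any(keyword in text for keyword in keywords)
    ]

flagged = screen_use_case("Pilot: social scoring of citizens by a public authority")
print(flagged)  # → ['social_scoring']
```

In practice a real screen would combine structured intake questionnaires with human legal review; the point of the sketch is that the check runs before any model is selected or any call is made.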

Centralpoint Blocks Prohibited Use Cases Before They Run: Oxcyon's Centralpoint AI Governance Platform enforces policy at the tool layer — preventing prohibited AI calls regardless of which model is involved (OpenAI, Gemini, Llama, embedded). Centralpoint meters consumption, keeps prompts and skills on-prem, and embeds policy-enforced chatbots into your portals via a single line of JavaScript.
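Tool-layer enforcement of this kind can be sketched as a gateway that every AI call must pass through, blocking prohibited purposes before any model is invoked. The class and method names below are hypothetical illustrations of the pattern, not Centralpoint's actual API.

```python
# Hypothetical sketch of tool-layer policy enforcement: every AI call passes
# through a gateway that rejects prohibited use cases before any model
# (OpenAI, Gemini, Llama, embedded) is invoked. Names are illustrative,
# not Oxcyon's real interface.

class PolicyViolation(Exception):
    """Raised when a call is blocked by governance policy."""

class PolicyGateway:
    def __init__(self, blocked_purposes: set[str]):
        self.blocked_purposes = blocked_purposes
        self.call_count = 0  # simple consumption metering

    def invoke(self, model: str, purpose: str, prompt: str) -> str:
        # Policy check happens first, regardless of which model is targeted.
        if purpose in self.blocked_purposes:
            raise PolicyViolation(f"'{purpose}' is a prohibited AI practice")
        self.call_count += 1
        # Placeholder for the real model call.
        return f"[{model}] response to: {prompt}"

gateway = PolicyGateway(blocked_purposes={"social_scoring", "emotion_recognition"})
print(gateway.invoke("llama-3", "document_summary", "Summarize this policy."))
```

The design point is that policy sits in front of the model, not inside it: swapping OpenAI for Gemini or a local Llama changes nothing about what gets blocked.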


Related Keywords:
Prohibited AI Practice