
High-Risk AI System

A High-Risk AI System is an AI application that the EU AI Act subjects to its strictest obligations because of its significant potential for harm to health, safety, or fundamental rights. Annex III of the EU AI Act enumerates high-risk categories including: biometric identification, AI used in critical infrastructure, education and vocational training, employment (recruitment, promotion, and termination decisions), essential private and public services (credit scoring, social-benefits decisions, emergency response), law enforcement, migration and border control, and the administration of justice. High-risk AI systems must satisfy requirements for risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. Providers must also register their systems in the EU database. Penalties for non-compliance reach €15 million or 3% of global annual turnover, whichever is higher. AI governance, AI compliance, and AI risk management programs serving any of these sectors must build out high-risk AI infrastructure — making mature platforms like Centralpoint essential to responsible AI delivery across global enterprise AI portfolios.

Centralpoint Meets High-Risk AI Requirements Head-On: Oxcyon's Centralpoint AI Governance Platform delivers the documentation, audit logs, human oversight, and metering that high-risk classification demands — across OpenAI, Gemini, Llama, and embedded models. Centralpoint keeps prompts and skills on-prem and embeds compliant chatbots into your portals via a single line of JavaScript.


Related Keywords:
High-Risk AI System