Hallucination
Hallucination occurs when a generative AI model produces confident but false information: invented facts, fake citations, imaginary entities, or fabricated quotations. Famous examples include a New York attorney sanctioned in 2023 for citing six entirely fictitious cases generated by ChatGPT in a court filing, as well as AI chatbots inventing academic references, hallucinating product features, or misattributing historical events. Hallucinations stem from the way LLMs predict plausible-sounding text rather than retrieve verified facts. Mitigation strategies include retrieval-augmented generation (RAG) to ground responses in trusted documents, citation requirements, lower sampling temperature, structured output validation, and human review for high-stakes outputs. Hallucination is one of the top AI risk management concerns in enterprise AI today, particularly in legal, medical, financial, and journalistic applications. AI governance frameworks require mitigation strategies, such as grounding and human review, as part of AI compliance and responsible AI deployment plans for any system that produces factual claims.
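The grounding and low-temperature mitigations described above can be combined in a single request. The sketch below is a minimal illustration, not a production implementation: it assumes the OpenAI Python SDK, and retrieve_documents is a hypothetical placeholder for whatever vector store or search index supplies your trusted passages.

# Minimal sketch: ground the model in retrieved passages and lower the
# sampling temperature to reduce speculative, hallucinated answers.
# Assumes the OpenAI Python SDK; retrieve_documents is a hypothetical
# stand-in for your own vector store or search index.
from openai import OpenAI

client = OpenAI()

def retrieve_documents(question: str) -> list[str]:
    # Placeholder: return passages from a trusted corpus.
    # Hard-coded here purely for illustration.
    return [
        "Policy 4.2: Refunds are issued within 14 days of a returned item.",
    ]

def grounded_answer(question: str) -> str:
    context = "\n\n".join(retrieve_documents(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.1,  # lower temperature discourages speculative wording
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer ONLY from the provided context. "
                    "If the context does not contain the answer, say you do not know. "
                    "Cite the passage you relied on."
                ),
            },
            {
                "role": "user",
                "content": f"Context:\n{context}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(grounded_answer("How long do refunds take?"))

The system prompt instructs the model to answer only from the supplied context and to admit when the context does not contain the answer; restricting responses to retrieved, citable passages is what makes fabricated claims easier to detect, audit, and correct.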
Centralpoint Helps You Detect and Reduce Hallucination: Oxcyon's Centralpoint AI Governance Platform meters and logs every LLM call across OpenAI, Gemini, Llama, and embedded models, making hallucination patterns easier to spot. Centralpoint keeps prompts and skills on-premise and lets you embed grounded chatbots across your portals via a single line of JavaScript.
Related Keywords:
Hallucination