Stop Sequence
A Stop Sequence is a string that, when generated by an LLM, causes the inference service to halt output immediately. Stop sequences are used to enforce response boundaries, prevent over-generation, and structure outputs precisely. Common examples include role markers ("\nHuman:", to stop the model from continuing as the user), section markers ("\n###", to end at the next section), JSON delimiters ("}\n", to end JSON output), and custom application tokens.

All major LLM APIs (OpenAI, Anthropic, Google, Cohere, Mistral) accept stop sequences as a request parameter, commonly up to four stop strings per request, though limits vary by provider. The feature is particularly valuable for structured generation, agent loops, few-shot prompting (where the model should not keep producing additional examples), and any application requiring precise output boundaries.

Stop sequences interact with streaming: the generated text up to, but not including, the stop sequence is returned to the client, and generation then halts.

AI governance, AI compliance, and AI risk management programs document stop-sequence configurations as part of prompt-template specifications, supporting responsible AI through controlled output formatting in enterprise AI environments.
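The semantics above can be sketched in a few lines of Python. This is an illustrative client-side model, not any provider's actual implementation: `truncate_at_stop` shows the non-streaming rule (return text up to, but not including, the earliest stop sequence), and `stream_with_stops` shows the streaming subtlety — a small tail must be buffered so a stop sequence split across chunk boundaries is detected before any part of it is emitted. Both function names are hypothetical.

```python
def truncate_at_stop(text: str, stop_sequences: list[str]) -> str:
    """Return text up to (not including) the earliest stop sequence.

    Illustrative sketch of non-streaming stop-sequence handling;
    real providers apply this server-side during decoding.
    """
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1 and idx < cut:
            cut = idx
    return text[:cut]


def stream_with_stops(chunks, stop_sequences: list[str]):
    """Yield streamed text, halting before any stop sequence appears.

    Holds back a tail of (longest stop length - 1) characters so a stop
    sequence spanning two chunks is never partially emitted.
    """
    hold = max(len(s) for s in stop_sequences) - 1
    buf = ""
    for chunk in chunks:
        buf += chunk
        for stop in stop_sequences:
            idx = buf.find(stop)
            if idx != -1:
                # Stop sequence found: emit everything before it and halt.
                yield buf[:idx]
                return
        if len(buf) > hold:
            # Safe to emit all but the held-back tail.
            yield buf[: len(buf) - hold]
            buf = buf[len(buf) - hold :]
    yield buf  # Stream ended without hitting a stop sequence.
```

For example, `truncate_at_stop("Answer: 42\nHuman: next question", ["\nHuman:"])` returns `"Answer: 42"`, and streaming the chunks `["Hello ", "wor", "ld\nHu", "man: hi"]` with the same stop sequence emits exactly `"Hello world"`, even though the stop string straddles a chunk boundary.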
Centralpoint Enforces Stop Sequences Consistently: Oxcyon's Centralpoint AI Governance Platform applies stop sequences across OpenAI, Gemini, Claude, Llama, and embedded models, providing uniform behavior across providers. Centralpoint meters every token, keeps prompts and skills on-prem, and embeds precise-output chatbots into your portals via a single line of JavaScript.
Related Keywords:
Stop Sequence