Self-Ask
Self-Ask is an agentic prompting pattern introduced by Press et al. (2022) that improves multi-hop question answering by having the
LLM explicitly decompose a complex question into simpler sub-questions, answer each (potentially with tool use), and then compose the intermediate answers into a final response. The pattern is particularly effective for compositional questions whose answers require combining facts the model needs to look up or compute separately. Self-Ask is commonly paired with retrieval or web search to handle questions whose component facts aren't in the model's training data: for example, "Who was the president of the country with the largest population in 2010?" decomposes into "What country had the largest population in 2010?" followed by "Who was the president of that country in 2010?". The technique works with any reasoning-capable
LLM and has been adopted in LangChain, LlamaIndex, and academic question-answering frameworks. AI governance teams document Self-Ask use in
RAG pipelines because the decomposition affects retrieval patterns and citation accuracy. The pattern has been somewhat displaced by stronger native multi-hop reasoning in frontier models but remains useful with smaller open-source models.
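The loop described above can be sketched in a few lines: the model emits either a "Follow up:" sub-question, which the orchestrator answers with a tool, or a "So the final answer is:" line, which terminates the loop. The prompt markers follow the Press et al. paper; `fake_search` and the scripted model responses are hypothetical stand-ins for a real retrieval backend and LLM.

```python
FOLLOW_UP = "Follow up:"
FINAL = "So the final answer is:"

def fake_search(question: str) -> str:
    # Stand-in for a web-search / retrieval tool (hypothetical fact table).
    facts = {
        "What country had the largest population in 2010?": "China",
        "Who was the president of China in 2010?": "Hu Jintao",
    }
    return facts.get(question, "unknown")

def self_ask(question: str, llm, search, max_steps: int = 5):
    # Build up a transcript; the model sees all prior sub-questions
    # and intermediate answers on each call.
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        line = llm(transcript)
        transcript += line + "\n"
        if line.startswith(FINAL):
            # Model composed the final answer from the intermediate ones.
            return line[len(FINAL):].strip()
        if line.startswith(FOLLOW_UP):
            # Model asked a sub-question: answer it with the tool and
            # append the result so the next call can build on it.
            sub_q = line[len(FOLLOW_UP):].strip()
            transcript += f"Intermediate answer: {search(sub_q)}\n"
    return None  # step budget exhausted without a final answer

# Scripted model outputs simulating a real LLM's Self-Ask trace.
scripted = iter([
    f"{FOLLOW_UP} What country had the largest population in 2010?",
    f"{FOLLOW_UP} Who was the president of China in 2010?",
    f"{FINAL} Hu Jintao",
])
answer = self_ask(
    "Who was the president of the country with the largest population in 2010?",
    llm=lambda transcript: next(scripted),
    search=fake_search,
)
print(answer)  # Hu Jintao
```

In a real pipeline the `llm` callable would be a model API call with a few-shot Self-Ask prompt, and `search` would hit a retriever or search engine; the transcript doubles as the sub-question audit log.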
Self-Ask agents with Centralpoint: Centralpoint orchestrates Self-Ask-style decomposition with any LLM and retrieval backend in a model-agnostic stack with full sub-question audit logs. Tokens are metered per skill, prompts stay local, both generative and embedding models are supported, and chatbots deploy through one line of JavaScript on any portal.
Related Keywords:
Self-Ask