
text-embedding-3-large

text-embedding-3-large is OpenAI's premium embedding model, released in January 2024. It produces higher-quality vector representations than the smaller text-embedding-3-small variant, at higher cost, and was priced at $0.13 per million tokens at launch. By default the model outputs 3072-dimensional vectors, which can be shortened via the API's dimensions parameter; shortened vectors trade some accuracy for lower storage and compute cost. Performance on the MTEB benchmark substantially exceeded both text-embedding-3-small and the older text-embedding-ada-002, particularly on multilingual benchmarks and complex retrieval tasks. The 3072-dimensional output captures more nuanced semantic distinctions than smaller models, supporting more accurate retrieval in semantic search and RAG applications. The model supports up to 8K tokens of input.

Real-world deployments include high-quality enterprise search, scientific literature retrieval, multilingual recommendation systems, and precision-critical RAG applications. text-embedding-3-large is often paired with text-embedding-3-small in tiered deployments, using the small variant for high-volume initial retrieval and the large variant for re-ranking. AI governance, AI compliance, and AI risk management programs document embedding choices in retrieval-system architectures supporting responsible AI in enterprise AI deployments.
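As a rough sketch of the dimension-reduction behavior described above: OpenAI documents that a shortened embedding is approximately equivalent to truncating the full vector and re-normalizing it to unit length. The helper below illustrates that truncate-and-normalize step on a stand-in vector (the real API call, shown in a comment, requires the OpenAI SDK and an API key; the function name here is our own, not part of any SDK).

```python
import math

def truncate_and_normalize(embedding, dims):
    """Truncate an embedding to its first `dims` components and rescale
    to unit length - approximating the API's `dimensions` parameter."""
    truncated = embedding[:dims]
    norm = math.sqrt(sum(x * x for x in truncated))
    return [x / norm for x in truncated]

# A real call would look roughly like (not executed here):
# resp = client.embeddings.create(model="text-embedding-3-large",
#                                 input="query text", dimensions=1024)

# Stand-in for a full 3072-dim embedding:
full = [0.5, 0.5, 0.5, 0.5]
reduced = truncate_and_normalize(full, 2)
# reduced has 2 components and unit length
```

Because shortened vectors remain unit-normalized, cosine similarity between them stays meaningful, which is what makes the smaller dimensions usable for cheaper first-pass retrieval.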

Centralpoint Brokers text-embedding-3-large for Quality Retrieval: Oxcyon's Centralpoint AI Governance Platform routes high-quality retrieval to text-embedding-3-large alongside Cohere, Voyage, BGE, and other embedding models. Centralpoint meters every call, keeps prompts and skills on-premises, and embeds chatbots into your portals via a single line of JavaScript.

