
Cosine Similarity

Cosine Similarity measures the angle between two vectors, producing a value between -1 and 1 that captures how similar two embeddings are in direction, regardless of magnitude. It is the most common similarity metric in semantic search and embedding-based retrieval; because modern text embeddings are typically normalized to unit length, cosine similarity is then equivalent to the dot product. A cosine similarity of 1.0 means identical direction (most similar), 0 means orthogonal (unrelated), and -1 means opposite. The metric is supported across every major vector database and embedding library. Real-world examples include matching customer queries to FAQ entries, finding similar products in e-commerce, deduplicating documents, and grouping related news articles. Other distance metrics include Euclidean distance and Manhattan distance, though cosine dominates in NLP. AI governance and AI compliance documentation should record the chosen similarity metric, since it affects retrieval behavior — a small but important detail in any responsible AI and AI risk management evidence file.
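The definition above can be sketched in a few lines of plain Python: divide the dot product of two vectors by the product of their norms. This is a minimal illustration, not a production implementation (real systems use a vector database or a library such as NumPy):

```python
import math

def cosine_similarity(a, b):
    # Dot product captures how much the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    # Dividing by the norms removes the effect of magnitude,
    # leaving only the angle between the vectors.
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))   # same direction  -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 3.0]))   # orthogonal     -> 0.0
print(cosine_similarity([1.0, 0.0], [-1.0, 0.0]))  # opposite       -> -1.0
```

Note that the result ignores vector length: `[1, 0]` and `[2, 0]` score 1.0, which is exactly why cosine is preferred when only direction (semantic content) matters.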

Centralpoint Captures Every Similarity Metric in Its Audit Trail: Centralpoint by Oxcyon logs every retrieval operation across OpenAI, Gemini, Llama, and embedded models. The platform meters consumption, keeps prompts and skills on-prem, and embeds retrieval-grounded chatbots into your portals via one JavaScript line. Reproducibility and governance, side by side.


Related Keywords:
Cosine Similarity