
Elastic now supports Cohere’s text embedding models


Elastic, the company behind Elasticsearch, announced that the Elasticsearch open Inference API now supports Cohere’s text embedding models, enabling developers to achieve immediate performance gains without impacting search quality.
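As a rough sketch of what this integration looks like to a developer, an inference endpoint backed by a Cohere model can be created through the Inference API. The endpoint name, model ID, and settings keys below follow the 8.13 preview documentation and should be verified against the current docs before use:

```json
PUT _inference/text_embedding/cohere-embeddings
{
  "service": "cohere",
  "service_settings": {
    "api_key": "<COHERE_API_KEY>",
    "model_id": "embed-english-v3.0",
    "embedding_type": "int8"
  }
}
```

Once created, the endpoint can be referenced from ingest pipelines and semantic search queries so that embedding generation happens inside Elasticsearch rather than in application code.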

This includes native Elasticsearch support for efficient int8 (quantized) embeddings, which optimize performance and reduce memory costs for semantic search across the large datasets commonly found in enterprise scenarios.
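To see why int8 support matters, here is a minimal, self-contained sketch of scalar quantization (illustrative only, not Elastic's or Cohere's actual implementation): each float32 dimension (4 bytes) is mapped to a signed 8-bit integer (1 byte), roughly a 4x memory reduction at a small cost in precision.

```python
def quantize_int8(vector, limit=1.0):
    """Map floats in [-limit, limit] to signed 8-bit ints (toy example)."""
    scale = 127.0 / limit
    # Scale, round, and clamp each dimension into the int8 range.
    return [max(-128, min(127, round(v * scale))) for v in vector]

embedding = [0.12, -0.98, 0.5, 1.0]   # toy float embedding
quantized = quantize_int8(embedding)  # -> [15, -124, 64, 127]
# Memory per dimension drops from 4 bytes (float32) to 1 byte (int8),
# so a 1024-dim vector shrinks from 4096 bytes to 1024 bytes.
```

At search time, distances computed over int8 vectors closely track those over the original floats, which is why quantization can cut memory without a meaningful drop in retrieval quality.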

“We’re excited to collaborate with Elastic to bring state-of-the-art search solutions to enterprises,” said Jaron Waldman, chief product officer at Cohere. “Elasticsearch delivers strong vector retrieval performance on large datasets, and their native support for Cohere’s Embed v3 models with int8 compression helps unlock gains in performance, efficiency, and search quality for enterprise-grade deployments of semantic search and retrieval-augmented generation (RAG).”

Developers who want to build more intuitive and accurate semantic search experiences for enterprise use cases should look to Elasticsearch and Cohere, explained Shay Banon, founder and chief technology officer at Elastic.

“Innovation is rarely insular, and our work with the great team at Cohere showcases how we bring developers the best of both worlds,” Banon added. “The Cohere and Elastic communities now have great models to generate embeddings with support for inference workloads and seamless integration into the leading search and analytics platform that has invested in creating the best vector database.”

Support for Cohere embeddings is available in preview in Elastic 8.13 and will be generally available in an upcoming Elasticsearch release.

For more information about this news, visit www.elastic.co.
