
Cohere introduces Command-R for LLM workloads


Cohere is releasing Command-R, a new LLM aimed at large-scale production workloads. It targets the emerging “scalable” category of models, which balance high efficiency with strong accuracy, enabling companies to move beyond proof of concept and into production.

According to the company, Command-R is a generative model optimized for long-context tasks such as retrieval-augmented generation (RAG) and the use of external APIs and tools. It is designed to work in concert with Cohere’s Embed and Rerank models to provide best-in-class integration for RAG applications and to excel at enterprise use cases.

As a model built for companies to implement at scale, Command-R offers the following benefits:

  • Strong accuracy on RAG and tool use
  • Low latency and high throughput
  • A longer 128K-token context window and lower pricing
  • Strong capabilities across 10 key languages
  • Model weights available on Hugging Face for research and evaluation

Command-R is available immediately on Cohere’s hosted API, with availability on major cloud providers coming soon.

Command-R is the first in a series of model releases advancing capabilities crucial to enterprise adoption at scale, according to the company.

Even without leveraging Cohere’s Embed and Rerank models, Command-R outperforms others in the scalable category of generative models, according to the vendor. When the models are used together, the lead expands significantly, enabling higher performance in more complex domains.
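The combined workflow the company describes, retrieving candidate passages, narrowing them with Rerank, then grounding Command-R's answer in the survivors, can be sketched with Cohere's Python SDK. The pipeline wrapper, corpus, query, and the specific rerank model name below are illustrative assumptions, not details from the announcement.

```python
"""Minimal sketch of a rerank-then-generate RAG flow, assuming Cohere's
Python SDK (pip install cohere) and a COHERE_API_KEY in the environment."""
import os


def to_chat_documents(passages):
    # The chat endpoint accepts grounding documents as a list of dicts;
    # "snippet" is a conventional text field name (an assumption here).
    return [{"snippet": p} for p in passages]


def answer(query, passages, top_n=3):
    import cohere  # third-party; an API call requires a valid key

    co = cohere.Client(os.environ["COHERE_API_KEY"])
    # Stage 1: Rerank orders the candidate passages by relevance to the
    # query, so only the best few are passed on (model name is assumed).
    ranked = co.rerank(
        model="rerank-english-v3.0",
        query=query,
        documents=passages,
        top_n=top_n,
    )
    best = [passages[r.index] for r in ranked.results]
    # Stage 2: Command-R generates an answer grounded in those passages.
    resp = co.chat(
        model="command-r",
        message=query,
        documents=to_chat_documents(best),
    )
    return resp.text


if __name__ == "__main__" and "COHERE_API_KEY" in os.environ:
    corpus = [
        "Command-R supports a 128K-token context window.",
        "Embed and Rerank serve over 100 languages natively.",
    ]
    print(answer("How long is Command-R's context window?", corpus))
```

In this two-stage design, the cheap reranking pass trims the prompt before the generative call, which is one way the efficiency-plus-accuracy balance described above plays out in practice.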

Command-R is designed to serve as many people, organizations, and markets as possible. The model excels at 10 major languages of global business: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, and Chinese. In addition, Embed and Rerank models serve over 100 languages natively.

Command-R features a longer context length, supporting up to 128K tokens in this initial release. The upgrade also comes with a price reduction on Cohere’s hosted API and significant efficiency improvements for Cohere’s private cloud deployments.

Cohere works with all major cloud providers, as well as on-prem deployments for regulated industries and privacy-sensitive use cases, to make its models universally available.

For more information about this news, visit https://txt.cohere.com.
