Adopting RAG while addressing its complexities with Shelf, Coveo, and Progress Semaphore

Retrieval-augmented generation (RAG) is shaping up to be the de facto standard for improving the performance, accuracy, and precision of large language models (LLMs) and their GenAI applications. Despite its promise, RAG carries a variety of complexities, from increased data traffic and information overload to data quality risks and security and compliance issues.

Experts joined KMWorld’s webinar, Unlocking the Power of RAG, to speak to the nuance of extracting value from RAG at scale while mitigating its complications.

Jan Štihec, director of data and analytics at Shelf, explained that RAG “has almost become synonymous with generative AI.”

RAG’s popularity stems from a persistent gap: while AI has the capacity to transform business for the better, according to Reynold Xin, co-founder of Databricks, “less than 10% of GenAI projects have reached the production phase.” According to Gartner, that is because poor data quality is the foremost obstacle for enterprises implementing GenAI.

Answer accuracy, then, is a significant factor in whether GenAI succeeds. Štihec pointed to five key strategies to ensure that RAG is successful within an enterprise-scale implementation:

  1. Enrich data to create greater context for GenAI with controlled vocabularies and accompanying glossaries, metadata enrichment, topic modeling for topic assignment, and a knowledge graph that utilizes metadata and data relationships.
  2. Identify data risks before they’re fed into RAG, including duplicate and redundant data, inaccurate and outdated information, conflicting and inconsistent information, and privacy and compliance risks.
  3. Control RAG outputs by setting up quality filters, such as filters for contradicting content, private content, toxic and biased content, and duplicate content (a minimal sketch of such filtering follows this list).
  4. Monitor RAG answers to spot hallucinations and inaccuracies by incorporating user feedback mechanisms, setting up regular conversation quality reviews, implementing a document audit trail, and automating answer evaluation.
  5. Fix poor RAG answers at the source with human-in-the-loop transparency and governance, monitorable data health during changes and updates, clear data ownership, and ticketing systems for issue resolution.
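
To make the third strategy concrete, here is a minimal sketch of filtering retrieved chunks before they reach the LLM. The field names (text, visibility, last_reviewed), the staleness threshold, and the sample chunks are assumptions for illustration only, not part of Shelf’s product.

```python
# Hypothetical quality filter applied to retrieval results before generation.
# Chunk fields and the one-year staleness cutoff are illustrative assumptions.
import hashlib
from datetime import date, timedelta

def filter_chunks(chunks, max_age_days=365):
    """Drop duplicate, private, or stale chunks from a retrieval result."""
    seen_hashes = set()
    kept = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk["text"].encode("utf-8")).hexdigest()
        if digest in seen_hashes:                    # duplicate content
            continue
        if chunk.get("visibility") == "private":     # private content
            continue
        if date.today() - chunk["last_reviewed"] > timedelta(days=max_age_days):
            continue                                 # outdated information
        seen_hashes.add(digest)
        kept.append(chunk)
    return kept

recent = date.today() - timedelta(days=30)
stale = date.today() - timedelta(days=800)
chunks = [
    {"text": "Reset your VPN token in the self-service portal.", "visibility": "public", "last_reviewed": recent},
    {"text": "Reset your VPN token in the self-service portal.", "visibility": "public", "last_reviewed": recent},  # duplicate
    {"text": "Payroll records live in the HR-FIN-02 database.", "visibility": "private", "last_reviewed": recent},  # private
    {"text": "Submit expenses through the old Lotus Notes form.", "visibility": "public", "last_reviewed": stale},  # outdated
]
print(len(filter_chunks(chunks)))  # 1: only the first, current, public chunk survives
```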

The digital workplace is in the middle of a knowledge crisis, and “RAG is the key to overcoming it,” said Emma Zernask-Cebek, product marketing manager at Coveo.

“Tools like GenAI can optimize workflows, but it's not just about generating insights. It's about ensuring those insights are grounded in accurate, relevant data through precise retrieval,” Zernask-Cebek continued.

Turning to RAG grounds AI models in a curated source of truth, keeps information fresh, secures content access, cites sources, and delivers a myriad of other advantages. Yet RAG alone is not enough, noted Zernask-Cebek. Poor retrieval setups can lead to inefficiencies that impede the business rather than propel it toward success.

Choosing the RAG approach that best suits your organizational needs is key to its ability to drive positive business outcomes: will you use a vector database for retrieval or a RAG API? These choices may dictate the overall success of RAG within your unique enterprise, according to Zernask-Cebek.
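
As a rough illustration of that decision, the sketch below puts a self-managed vector search and a call to a managed retrieval API behind the same retrieve interface. The class names, the embed callable, the endpoint, and the payload and response fields are placeholders and do not reflect Coveo’s actual API.

```python
# Hypothetical contrast of two retrieval back ends; names and payloads are
# illustrative only and do not reflect any vendor's real API.
import json, math, urllib.request

class VectorStoreRetriever:
    """Self-managed option: you own embedding, indexing, and similarity search."""
    def __init__(self, embed, documents):
        # embed is a user-supplied function mapping text to a vector of floats
        self.embed = embed
        self.docs = [(doc, embed(doc)) for doc in documents]

    def retrieve(self, query, k=3):
        q = self.embed(query)
        ranked = sorted(self.docs, key=lambda dv: self._cosine(q, dv[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

    @staticmethod
    def _cosine(a, b):
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return sum(x * y for x, y in zip(a, b)) / norm if norm else 0.0

class ManagedRagApiRetriever:
    """Managed option: retrieval, ranking, and security trimming happen server-side."""
    def __init__(self, endpoint, api_key):
        self.endpoint, self.api_key = endpoint, api_key

    def retrieve(self, query, k=3):
        payload = json.dumps({"query": query, "numberOfResults": k}).encode("utf-8")
        request = urllib.request.Request(
            self.endpoint, data=payload,
            headers={"Authorization": f"Bearer {self.api_key}",
                     "Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:
            return [item["excerpt"] for item in json.load(response)["results"]]
```

Either back end can feed the same generation step; the trade-off is the control and tuning effort of running your own index versus the built-in relevance, permissions, and maintenance that a managed API handles for you.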

Coveo’s Relevance Generative Answering (RGA), for instance, is a formidable selection for effectively implementing RAG within a business. By combining RAG with AI relevancy, RGA is a powerful, enterprise-ready solution that generates answers that are relevant, personalized, and secure, according to Coveo.

Steve Ingram, senior sales engineering manager at Progress Semaphore, centered his conversation on the role that semantic context plays in the world of RAG. Echoing the concerns that Štihec and Zernask-Cebek raised about RAG, Ingram argued that context and understanding remain the largest challenge facing AI and LLM implementation.

“The context of a piece of information is as important as the information itself,” said Ingram. To highlight this importance, Ingram offered the example of the word “gas,” the everyday name in the U.S. for the fuel that powers cars. Yet “gas” goes by a multitude of other names depending on geography (“petrol”), industry (“petroleum” or “3YE/1270”), and purpose (its chemical formula, “CnH2n+2”).

With context playing such a crucial role in retrieval, semantic enrichment is fundamental for ensuring that RAG delivers on its slew of efficiency, accuracy, and performance promises. Infusing semantic enrichment into both the data and the questions themselves improves answer accuracy, model adaptability, compliance, timeliness, customization, efficiency, and more, according to Ingram.
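
What query-side enrichment might look like is sketched below, using Ingram’s “gas” example. The synonym table is a stand-in for a governed controlled vocabulary or knowledge model; the entries and the function are illustrative assumptions, not Progress Semaphore’s implementation.

```python
# Minimal sketch of semantic enrichment at query time. The vocabulary entries
# are illustrative and would normally come from a managed knowledge model.
CONTROLLED_VOCABULARY = {
    "gas": {"petrol", "petroleum", "3YE/1270", "CnH2n+2"},
}

def enrich_query(query: str) -> str:
    """Append alternative labels so retrieval matches every variant of a term."""
    terms = set(query.lower().split())
    expansions = set()
    for term in terms:
        expansions |= CONTROLLED_VOCABULARY.get(term, set())
    if not expansions:
        return query
    return f"{query} ({' OR '.join(sorted(expansions))})"

print(enrich_query("gas prices by region"))
# -> gas prices by region (3YE/1270 OR CnH2n+2 OR petrol OR petroleum)
```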

Progress Semaphore captures subject matter expertise alongside private content, aligning both the implicit and explicit information associated with enterprise data. This alignment creates a rich system of metadata, enabling the LLM to deliver answers that truly encapsulate the dynamic contexts and semantics interwoven within an enterprise infrastructure.

For the full, in-depth webinar, featuring examples, a Q&A, and more, you can view an archived version here.
