Boosting LLM reliability with RAG

In an ideal world, search is the smooth vehicle that brings knowledge consumers to the information they need, when and where they need it. In reality, search has an ugly side: an arena rife with hallucinations, false information, and fabricated data, a problem that has been exacerbated by the proliferation of large language models (LLMs).

AI and search experts joined KMWorld’s webinar, Optimizing LLMs with RAG: Key Technologies and Best Practices, to discuss how search reliability can be maximized through retrieval augmented generation (RAG), the AI framework that improves the quality and relevance of generated text.

Adam Oliner, CEO and founder of Graft, offered valuable context around foundation models, of which LLMs are one type. A way to think about foundation models, Oliner explained, “is as a kind of database that does compression…these models are capable of storing much more information…but it has the property that you don’t get out [the amount] that you put in.” In other words, from the vast body of information the model consumes, it delivers a smaller, statistically plausible answer to a user’s query.

However, “statistically plausible” answers leave something to be desired, noted Oliner. Enterprises want the utmost accuracy and reliability from their generated text; fortunately, RAG works to solve this pain point.

RAG pairs the model with an external data store, searching that secondary storage for relevant data to use in conjunction with what is compressed in the model. Access to factual, uncompressed information from an enterprise’s own data storage grounds query responses better than an LLM alone can.
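
To make the mechanics concrete, here is a minimal, illustrative retrieve-then-ground sketch in Python. The document store, the naive keyword scoring, and the prompt format are assumptions made for illustration; they are not taken from Graft’s pipeline or any other vendor’s.

# Minimal retrieval-augmented generation (RAG) sketch: retrieve relevant
# passages from an external store, then ground the prompt in them before
# the LLM generates an answer. Everything here is illustrative.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank documents by naive keyword overlap with the query (a stand-in
    # for real lexical, semantic, or hybrid retrieval).
    query_terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    # Ground the answer in retrieved, uncompressed source text.
    context_block = "\n".join(f"- {passage}" for passage in context)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context_block}\n"
        f"Question: {query}\n"
    )

documents = [
    "The warranty covers parts and labor for two years.",
    "Support tickets are answered within one business day.",
    "The onboarding guide is available on the company intranet.",
]
question = "How long does the warranty last?"
prompt = build_prompt(question, retrieve(question, documents))
print(prompt)  # This grounded prompt would then be passed to the LLM of choice.

In a real deployment, the keyword overlap above would be replaced by the stronger retrieval techniques discussed below, and the grounded prompt would be sent to the model rather than printed.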

According to Oliner, though RAG already offers a way to enhance query responses, it can be further improved with the following processes that are available through technologies like the Graft Intelligence Layer:

  • Allow the model to support larger contexts, pulling more information from uncompressed data storage
  • Perform better retrieval through lexical, semantic, and hybrid search, knowledge graphs, and more (a simplified sketch of hybrid scoring follows this list)
  • Improve the data itself, treating it as more than a static artifact
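
To illustrate the hybrid retrieval in the second bullet, the sketch below blends a lexical term-overlap score with a character-trigram similarity that stands in for embedding similarity; the scoring functions and the 50/50 weighting are assumptions for illustration only, not Graft’s actual implementation.

# Hybrid retrieval sketch: combine a lexical score with a "semantic" score.
# The trigram similarity is only a stand-in for cosine similarity between
# dense embeddings; the alpha weighting is an illustrative default.

def lexical_score(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def trigrams(text: str) -> set[str]:
    text = text.lower()
    return {text[i:i + 3] for i in range(len(text) - 2)}

def semantic_score(query: str, doc: str) -> float:
    # Stand-in for embedding similarity.
    q, d = trigrams(query), trigrams(doc)
    return len(q & d) / len(q | d) if (q | d) else 0.0

def hybrid_score(query: str, doc: str, alpha: float = 0.5) -> float:
    # Weighted blend of lexical and semantic relevance.
    return alpha * lexical_score(query, doc) + (1 - alpha) * semantic_score(query, doc)

docs = [
    "Refund policy for hardware purchases",
    "Return window for company devices",
    "Office parking rules and permits",
]
query = "device return policy"
ranked = sorted(docs, key=lambda d: hybrid_score(query, d), reverse=True)
print(ranked)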

Sebastian Gabler, chief customer officer of PoolParty, argued that semantic RAG—a specialization of advanced RAG—can help to elevate search at the enterprise level.

Semantic RAG seamlessly integrates knowledge models and retrieved information into the generation process, enabling more informed and contextually grounded answers and mitigating hallucinations. It also facilitates a user experience that is not only accurate but also aligned with underlying domain knowledge, according to Gabler.

With technologies that employ semantic RAG—such as the PoolParty Semantic Suite—organizations can shorten time-to-insight, democratize querying, lower implementation and maintenance costs, and address the limitations and complications of LLMs.

In the context of PoolParty’s solution and methodology, this is accomplished with knowledge graph technology. Knowledge graphs, when used to support AI and generative AI (GenAI) initiatives, can help to accelerate LLM integration and ensure regulatory compliance.
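
As a simplified illustration of how a knowledge graph can ground retrieval, the sketch below expands a user query with related concepts from a toy concept graph before any search runs; the graph, its labels, and the one-hop expansion are hypothetical and do not represent PoolParty’s API.

# Toy concept graph: each concept maps to related or broader concepts.
# The contents are invented purely for illustration.
concept_graph = {
    "GDPR": ["data protection", "privacy regulation"],
    "privacy regulation": ["compliance"],
}

def expand_query(query: str, graph: dict[str, list[str]]) -> list[str]:
    # Add one hop of related concepts to the original query terms, so
    # retrieval can match documents that never use the user's exact words.
    terms = query.split()
    expanded = list(terms)
    for term in terms:
        expanded.extend(graph.get(term, []))
    return expanded

print(expand_query("GDPR retention rules", concept_graph))
# ['GDPR', 'retention', 'rules', 'data protection', 'privacy regulation']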

For the full discussion of optimizing LLMs with RAG, you can view an archived version of the webinar here.
