
Generative AI search: preventing defects and creating an answer engine

The conversation around hallucinations is a hurdle that any organization attempting to leverage the power of large language models (LLMs) must clear.

Colin Harman, head of technology, Nesh, and Ryan Welsh, founder and CEO, Kyndi, led KMWorld 2023’s session, “Generative AI Search: Preventing Defects and Creating an Answer Engine,” to explore the shortcomings and potential successes of generative AI (GenAI) search.

Harman posed a troubling scenario: The LLM integrated into your search system shows users a hallucinated, non-factual answer, even though your relevance system worked perfectly and you grounded the model with search results. All the trust your organization has carefully cultivated is thereby eradicated.

These hallucinations are also known as defects, where “a defect is any difference between what a user expects from your system and what your system is able to provide,” explained Harman. “The reason you care about defects is because it directly impacts your user satisfaction and affects your brand reputation.”

Defects fall into several categories: generation defects, capability-expectation mismatches, user preference issues, and retrieval defects.

For the purposes of the presentation, Harman focused on generation defects.

Generation defects, aka hallucinations, are either unfaithful or incorrect. Unfaithful defects are fabricated answers unsupported by the available context, while incorrect defects occur when the model simply cannot produce the correct response.

Harman warned against having LLMs use or generate numbers, explaining that LLMs are fundamentally not designed to handle quantities: they will often assume that two similar numbers are the same, leading to obvious query failures.

Generation defects can be mitigated by:

  • Prompting the model to use the provided content only
  • Instructing the model to say when there is not enough information
  • Setting the model temperature to 0.0
  • Including passage metadata in the prompt context
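The mitigations above can be sketched as a small prompt-assembly helper. This is a minimal illustration, not a vendor API: the passage field names (`source`, `date`, `text`), the `build_grounded_prompt` function, and the commented client call are all assumptions for the sake of the example.

```python
def build_grounded_prompt(question, passages):
    """Assemble a grounded prompt that (a) restricts the model to the
    provided content, (b) tells it to admit when information is missing,
    and (c) carries passage metadata into the prompt context."""
    context_blocks = []
    for p in passages:
        # Include metadata alongside each passage so the model can
        # anchor (and cite) its answer rather than inventing sources.
        context_blocks.append(
            f"[source: {p['source']} | date: {p['date']}]\n{p['text']}"
        )
    context = "\n\n".join(context_blocks)
    return (
        "Answer the question using ONLY the passages below. "
        "If the passages do not contain enough information, reply "
        "exactly: 'Not enough information in the provided sources.'\n\n"
        f"Passages:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# When sending this prompt to an LLM, deterministic decoding is usually
# exposed as a temperature parameter, per the list above, e.g.:
# response = client.complete(build_grounded_prompt(q, hits),
#                            temperature=0.0)  # hypothetical client
```

Setting temperature to 0.0 makes decoding (near-)deterministic, which does not eliminate hallucination but removes sampling variance as a source of defects.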

Harman also emphasized the importance of source quality: generation ultimately depends on the metadata consumed during ingestion. Without this metadata, the answers generated will likely be wrong.

“The more noise you feed into the model, the more complex a task it’s given, and the more likely it is to make a mistake,” said Harman.

You can’t completely mitigate defects, as people will attempt to use the LLM system for things it simply cannot do. With UX affordances and guardrails, however, you can reduce the number of defects and create a more trustworthy search system, according to Harman.
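One common form such a guardrail can take is refusing to generate when retrieval confidence is low, instead of letting the model improvise. The sketch below is a generic illustration under assumed names; the `score` field, the 0.5 threshold, and `guarded_answer` are not from the session, just one plausible shape for Harman's point.

```python
FALLBACK = "I couldn't find a reliable answer in the knowledge base."

def guarded_answer(results, generate, min_score=0.5):
    """Pass retrieval results to the generator only when the best
    retrieval score clears a threshold; otherwise return a safe
    fallback instead of a potentially hallucinated answer."""
    if not results or max(r["score"] for r in results) < min_score:
        return FALLBACK
    return generate(results)

# Usage with a stubbed generator, for illustration:
hits = [{"score": 0.82, "text": "..."}]
answer = guarded_answer(hits, lambda rs: f"Answer based on {len(rs)} passages")
```

The design choice here is to treat "no good retrieval" as a UX outcome (a clear fallback message) rather than a generation task, which aligns user expectations with what the system can actually provide.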

“Anyone heard of generative AI?” Welsh joked, emphasizing the massive proliferation of GenAI technologies on the market.

Despite this growth, Welsh pointed out that many current GenAI search solutions yield poor search outcomes, owing to siloed knowledge management, duplicate and inconsistent content, and poor visibility into users’ needs.

Welsh echoed Harman’s earlier point, explaining that “the quality of the information that comes out of the LLM and the likelihood of hallucination is dependent on the input information.”

Ultimately, you can’t throw an LLM at everything and expect it to “fix” something; if you don’t have quality input, you won’t have quality output.

Welsh then introduced the Kyndi Generative AI Answer Engine, which takes knowledge from human experts and encodes that information into its representation. This approach offers explainability and domain-specific answers in minutes, all at a lower cost.

“We’ve [Kyndi] thought about all those defects that Harman just spoke about,” said Welsh. “When you’re using our complete, end-to-end network, the experts at Kyndi are thinking about all of those defects for you so you don’t have to.”

Kyndi maps the vectors it is given into a graph structure, capturing semantic and contextual information that vectors alone cannot. This neuro-symbolic approach to AI provides explainability in the results, with summarized answers rooted directly in enterprise content to ensure there are no hallucinations.

KMWorld returned to the J.W. Marriott in Washington, D.C., on November 6-9, with pre-conference workshops held on November 6.

KMWorld 2023 is a part of a unique program of five co-located conferences, which also includes Enterprise Search & Discovery, Enterprise AI World, Taxonomy Boot Camp, and Text Analytics Forum.
