Avoiding messy GenAI implementation with Catalynk and SearchUnify
Data and information freshness is increasingly crucial for knowledge management (KM): today’s fast-paced, information-overloaded world is rendering traditional, hierarchical KM obsolete. With the advent of generative AI (GenAI), organizations may be able to transform their knowledge infrastructures completely, automating a variety of tasks to keep data as relevant and current as possible.
Beth Coleman, lead consultant and KCS trainer at Catalynk, and Brian Corcoran, KCS certified practitioner and cognitive search evangelist at SearchUnify, joined KMWorld’s webinar, From Struggle to Success: Knowledge Management Made Easy With GenAI, to illustrate how to best leverage GenAI to drive KM success and overcome rampant knowledge challenges.
Many organizations find themselves in a knowledge “mess,” where difficulties with knowledge capture and creation, identifying and visualizing content gaps, and more prevent a successful knowledge management architecture from taking shape. Turning to GenAI is a common response, yet using GenAI to automate a mess will only make it messier, faster, according to Corcoran. Without accessible knowledge, as well as the change management skills to embark on transformation, any application of GenAI will be fundamentally useless.
Similarly, large language models (LLMs) are data-dependent; without having a robust data architecture, LLMs may:
- Perpetuate bias
- Lack controllability
- Lack common-sense reasoning
- Have limited contextual understanding
- Disrespect copyright policy
- Hallucinate
- Introduce privacy risk
Curated content, Coleman stated, is crucial for GenAI initiatives to succeed without introducing these risks. Many KM practitioners are applying GenAI to five key use cases:
- Knowledge sourcing: The specific mechanisms by which an employee accesses another’s knowledge, including those recently proposed in the KM literature (knowledge repositories, virtual communities of practice, and more) and well-established organizational practices (meetings, memos, etc.)
- Knowledge generation: Using GenAI to create new knowledge from existing data sources, which may be risky if using an open or public LLM that will contribute content to overall training datasets
- Knowledge structuring: The transformation of information, data, or existing knowledge into new insights, ideas, concepts, or solutions
- Knowledge discovery and presentation: The process of querying a dataset or repository to find specific pieces of information that match the user’s criteria, personalized to the user’s maturity, context, and journey
- Knowledge optimization: Processes and practices such as updating outdated information, identifying knowledge gaps, analyzing content health, verifying the accuracy of knowledge, adding new insights or findings, and organizing knowledge for efficient retrieval (a simple sketch of spotting stale content follows this list)
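To make the optimization use case concrete, the sketch below flags knowledge base articles that are overdue for review or have been reported as inaccurate. This is an illustrative example only, not SearchUnify’s or Catalynk’s implementation; the Article fields, the 180-day review window, and the find_stale_articles helper are all assumed for demonstration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Article:
    title: str
    last_reviewed: datetime
    flagged_inaccurate: bool

def find_stale_articles(articles, max_age_days=180):
    """Return articles that are overdue for review or reported as inaccurate."""
    cutoff = datetime.now() - timedelta(days=max_age_days)
    return [a for a in articles if a.last_reviewed < cutoff or a.flagged_inaccurate]

if __name__ == "__main__":
    # A tiny, hypothetical knowledge base used only to exercise the helper.
    kb = [
        Article("Reset a password", datetime(2023, 1, 10), False),
        Article("Configure SSO", datetime(2024, 11, 2), False),
        Article("Legacy API limits", datetime(2024, 6, 15), True),
    ]
    for article in find_stale_articles(kb):
        print(f"Needs review: {article.title}")
```

In practice, the same health checks would draw on usage analytics and verification workflows rather than a hand-built list, but the principle of routinely surfacing content that needs attention is the same.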
For each of these use cases, fine-tuning an LLM for a specific task or enterprise domain is critical. Despite its significance, fine-tuning LLMs is cost- and resource-intensive, especially at scale; introduces challenges in managing permissions; requires a continual supply of labeled training datasets; produces hallucinations that are hard to control; and carries a level of complexity that demands specialized expertise.
This fundamental challenge can be addressed with retrieval-augmented generation (RAG) pipelines, made possible via SearchUnify FRAG, a solution that helps unlock the value of continuously changing content without the need to constantly retrain the LLM, noted Corcoran. By federating (the “F” in “FRAG”) multiple sources into the RAG pipeline, enterprises can enable the following (a minimal sketch of the pattern appears after the list):
- Dynamic real-time information handling based on permissions
- Greater flexibility across the response generation process
- Reduced risk of hallucinations
- Elimination of the need for frequent retraining
- Permissions respected based on user access to source content
- Ease of maintenance
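As a rough illustration of how a federated RAG pipeline can respect permissions and ground responses in source content, the Python sketch below gathers candidates from multiple stubbed sources, filters them by the user’s roles, and builds a grounded prompt. This is a hypothetical sketch of the general pattern, not SearchUnify FRAG’s actual architecture; the Document fields, the keyword-overlap scoring, and the kb_search/crm_search connectors are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str          # e.g., a knowledge base or CRM connector name
    text: str
    allowed_roles: set   # roles permitted to see this document

def federated_retrieve(query, sources, user_roles, top_k=3):
    """Gather candidates from every connected source, then keep only those
    the user may see. Relevance scoring is stubbed with keyword overlap."""
    candidates = []
    for fetch in sources:  # each source exposes a search callable
        candidates.extend(fetch(query))
    permitted = [d for d in candidates if d.allowed_roles & user_roles]
    scored = sorted(
        permitted,
        key=lambda d: len(set(query.lower().split()) & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Ground the LLM's answer in retrieved content instead of retraining it."""
    context = "\n\n".join(f"[{d.source}] {d.text}" for d in documents)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

# Two stubbed sources standing in for real connectors.
def kb_search(query):
    return [Document("kb", "Reset your password from the account settings page.", {"customer", "agent"})]

def crm_search(query):
    return [Document("crm", "Internal escalation notes for password lockouts.", {"agent"})]

docs = federated_retrieve("how do I reset my password", [kb_search, crm_search], {"customer"})
print(build_prompt("how do I reset my password", docs))
```

In a production pipeline, the stubbed search callables would be replaced by connector APIs and the assembled prompt would be passed to an LLM; because answers are grounded in freshly retrieved, permission-filtered content, the model itself does not need frequent retraining.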
Coleman and Corcoran then introduced SearchUnify’s Knowbler—a machine learning (ML) and GenAI-based KM solution designed to help enterprises achieve up to 90% faster knowledge curation, 100% agent participation, and over 80% reduction in time to create new knowledge—as well as Knowledge-Centered Service (KCS)—a refined methodology that helps companies capture knowledge, solve problems faster, enable self-service, improve customer and employee satisfaction, and build organizational learning.
For the full, in-depth discussion of using GenAI to improve KM, you can view an archived version of the webinar here.