The evolution of knowledge graph solutions
Knowledge graphs have evolved to power modern AI, progressing into many varieties such as Knowledge Graph Embeddings (KGE), Domain Knowledge Graphs (DKG), and Retrieval-Augmented Generation (RAG) solutions. Even with RAG, there are variations.
These different approaches enhance knowledge discovery, contextual understanding, and AI-driven applications.
KMWorld recently held a webinar, Understanding Knowledge Graphs - KGE, DKG and RAG, with experts who discussed how to improve search, reasoning, and generative AI performance throughout the enterprise using knowledge graphs.
Victor Lee, PhD, head of product marketing, TigerGraph, said knowledge graphs provide clear, interconnected facts that enable reasoning. Graphs can represent all types of knowledge relevant to your domain and your business, including:
- Industry jargon and knowledge
- Product catalog
- Components of products
- Features of products
- User manuals
- Support manuals
- Customer purchase history
- Payment details (for fraud analytics)
GraphRAG can be used for relevant and responsible AI, Lee said. It can improve LLM prompts with a knowledge graph describing your data domain. It can also offer scalability and real-time updates for large and changing knowledge bases.
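The prompt-improvement idea Lee describes can be sketched in a few lines: retrieve facts about an entity from a knowledge graph and prepend them to the LLM prompt as grounding context. The toy triple store, entity names, and prompt format below are illustrative assumptions, not TigerGraph's actual API.

```python
# Minimal GraphRAG-style sketch: enrich an LLM prompt with facts pulled
# from a toy in-memory knowledge graph of (subject, predicate, object) triples.
GRAPH = [
    ("WidgetPro", "is_a", "product"),
    ("WidgetPro", "has_feature", "auto-calibration"),
    ("WidgetPro", "documented_in", "WidgetPro User Manual v2"),
]

def retrieve_facts(graph, entity):
    """Return triples whose subject or object matches the entity."""
    return [t for t in graph if entity in (t[0], t[2])]

def build_prompt(question, facts):
    """Prepend retrieved graph facts as grounding context for the LLM."""
    context = "\n".join(f"- {s} {p} {o}" for s, p, o in facts)
    return f"Known facts:\n{context}\n\nQuestion: {question}"

prompt = build_prompt(
    "What features does WidgetPro have?",
    retrieve_facts(GRAPH, "WidgetPro"),
)
print(prompt)
```

In a real deployment the triple lookup would be a query against a graph database, but the shape of the prompt-enrichment step is the same.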
The TigerGraph solution combines a hybrid graph with a vector database, Lee explained. It offers hybrid search for smarter, agentic AI. According to Lee, there are several questions to ask when evaluating a knowledge graph/GraphRAG solution for better AI, including:
- Does it scale to handle all your knowledge/data?
- Does it have the speed to handle your workload?
- Is it usable and understandable as a standalone graph, without AI?
- Is there integration for building the graph from unstructured documents?
- Is there an integration between the graph database and vector database?
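The graph-plus-vector integration raised in the last two questions can be sketched as a two-step hybrid search: a vector similarity lookup finds the closest documents, then graph edges expand each hit with related neighbors. The toy embeddings and edge list below are stand-ins; a real system would use a graph database and a vector index.

```python
# Hedged sketch of hybrid search: vector similarity plus graph expansion.
import math

DOCS = {
    "doc_manual":  [0.9, 0.1],   # toy 2-d embeddings
    "doc_faq":     [0.7, 0.3],
    "doc_pricing": [0.1, 0.9],
}
# Graph edges linking documents to related documents.
EDGES = {"doc_manual": ["doc_faq"]}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def hybrid_search(query_vec, top_k=1):
    # Step 1: vector search for the closest documents.
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    seeds = ranked[:top_k]
    # Step 2: expand each hit with its graph neighbors for extra context.
    results = list(seeds)
    for doc in seeds:
        results.extend(n for n in EDGES.get(doc, []) if n not in results)
    return results

print(hybrid_search([1.0, 0.0]))  # vector hit plus its graph neighbor
```

The payoff of the integration is visible even at this scale: the vector step alone would return only the closest document, while the graph step pulls in related material the embedding missed.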
The key to building AI-ready data is having a semantic data foundation that is safe, accurate, and scalable, according to Steve Hedden, product lead at TopQuadrant.
A knowledge graph solution should leverage unique business models and inferred relationships as the underlying "source of truth" for the CMS, Hedden noted. He recommended aligning key terms in the CMS to centralized, controlled vocabularies to increase accuracy, applying AI to generate tags for content that is relevant yet not explicitly linked, and ensuring data usage complies with regulatory requirements using built-in guardrails and policy-as-code metadata in real time.
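The vocabulary-alignment step Hedden recommends amounts to mapping free-text CMS tags onto canonical terms. The controlled vocabulary and synonym table below are illustrative assumptions (a production system would typically manage these in a SKOS-style taxonomy):

```python
# Sketch: align raw CMS tags to a centralized controlled vocabulary.
CONTROLLED_VOCAB = {"machine learning", "knowledge graph", "data governance"}
SYNONYMS = {  # free-text variants mapped to canonical terms
    "ml": "machine learning",
    "kg": "knowledge graph",
    "knowledge-graphs": "knowledge graph",
}

def normalize_tag(tag):
    """Map a raw tag to its controlled-vocabulary term, or None if unmapped."""
    t = tag.strip().lower()
    t = SYNONYMS.get(t, t)
    return t if t in CONTROLLED_VOCAB else None

raw_tags = ["ML", "knowledge-graphs", "misc notes"]
aligned = [normalize_tag(t) for t in raw_tags]
print(aligned)  # canonical terms; unmapped tags become None
```

Unmapped tags surface as gaps for a taxonomist (or an AI tagging pass) to review rather than silently entering the graph.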
Charles Lagor, principal biomedical informaticist, Semedy, outlined the requirements for a knowledge graph ecosystem, including:
- Different “actors” must be supported
- Entities must have their own lifecycle states
- Data must meet the specifications of an ontology
- Data must comply with specific constraints
- Knowledge must be interoperable
- Versions and dependencies must be managed
- Provenance and changes must be tracked
- Validation rules must ensure integrity
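The last requirement, validation rules that ensure integrity, can be sketched as simple constraint checks over entities. The schema and rules below are illustrative assumptions (production knowledge systems typically express such constraints in a language like SHACL):

```python
# Sketch: constraint checks enforcing lifecycle states and required fields.
VALID_STATES = {"draft", "in_review", "published", "retired"}
REQUIRED_FIELDS = {"id", "state", "version"}

def validate_entity(entity):
    """Return a list of constraint violations (empty means valid)."""
    errors = []
    missing = REQUIRED_FIELDS - entity.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if entity.get("state") not in VALID_STATES:
        errors.append(f"invalid lifecycle state: {entity.get('state')!r}")
    return errors

good = {"id": "e1", "state": "published", "version": 3}
bad = {"id": "e2", "state": "archived"}  # missing version, unknown state
print(validate_entity(good))  # []
print(validate_entity(bad))
```

Running every entity through checks like these before it enters the graph is what keeps downstream reasoning trustworthy.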
For the full webinar, featuring a more in-depth discussion, Q&A, demo, and more, you can view an archived version here.