
The future of knowledge management: Talking to documents with generative AI


Peering into the future is always problematic, but it's clear that generative AI will have a profound effect on KM, pulling capabilities that once seemed years away into the present. Generative AI is built on large language models (LLMs) trained on vast text corpora, but multimodal foundation models, which handle images, audio, and other media alongside text, also play a large role. Generative AI supplies a natural language interface to data and KM systems, while foundation models can generate code, visualizations (including images and video), and data models with which to view business domains.

Multimodal foundation models, driven by advanced machine learning (ML), have made it possible for humans to talk to documents—and for those documents to talk back. They’ve transformed search into question-answering, automated much of the legwork for crafting taxonomies and ontologies, and culled knowledge from an endless number of sources in almost no time. The consequences of applying this statistical approach to generating content and natural language are vast.

According to SAS VP of advanced analytics Udo Sglavo, “This is the beauty of these models, that they can behave like humans. I don’t want to overstress this because a large language model, of course, is not a human. But, it now does tasks which humans have been better at in the past.”

With a battery of approaches for mitigating the highly publicized “hallucinations” of language models and overcoming the data privacy issues of accessing them via publicly available APIs, organizations may very well realize the future of KM now.
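One widely used approach to mitigating hallucinations is to ground the model's answer in retrieved passages from an organization's own documents. The sketch below is illustrative only: the function names are invented, and the toy word-overlap scoring stands in for the embedding-based retrieval that production systems typically use.

```python
# Minimal sketch of retrieval-augmented prompting, one common way to
# mitigate hallucinations: ground the model's answer in retrieved text.
# Function and variable names here are illustrative, not a vendor API.

def score(query: str, passage: str) -> int:
    # Toy relevance score: count of shared lowercase words.
    # Real deployments use embedding similarity instead.
    q = set(query.lower().split())
    return len(q & set(passage.lower().split()))

def build_grounded_prompt(query: str, passages: list[str], top_k: int = 2) -> str:
    # Keep only the most relevant passages, then instruct the model
    # to answer strictly from that context.
    ranked = sorted(passages, key=lambda p: score(query, p), reverse=True)
    context = "\n".join(ranked[:top_k])
    return (
        "Answer using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

passages = [
    "Loan approvals in 2022 required a credit score above 680.",
    "The cafeteria menu changes every Tuesday.",
    "Risk reviews for credit decisions are documented quarterly.",
]
prompt = build_grounded_prompt(
    "What credit score was required for loan approval?", passages
)
```

Because the prompt confines the model to supplied context and tells it to admit when that context falls short, the model has far less room to invent answers.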

A natural language interface

Language models’ capacity to make complex information technology systems accessible via natural language matters to KM professionals as much as to anyone else. This characteristic is both generative AI’s premier benefit and a launching point for a host of future deployments. “Generative AI helps engage, on a human level, between whatever services you’re accessing through it and the individual consuming that, on a more natural level,” pointed out John Riewerts, SVP of engineering at Acoustic. “That will continue to evolve and not go away anytime soon. It will continue to get better.” Natural language, of course, is the preferred means of communicating for humans; it’s simply how we talk. The strides language models have made in this medium are formidable.

The models are also extremely consequential for KM applications. Sglavo referenced a financial services institution that has a database “of decisions they made for providing loans, taking risks for credit, information about past behavior, how they treated customers, and which decision they made under which circumstances.” Implementing language models as an interface with this system will allow that institution to have conversational interactions with it. The models won’t predict future behavior; instead, they will match present decisions against comparable past ones (including their outcomes) and issue recommendations “like a lawyer would,” Sglavo said.
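The precedent-style pattern Sglavo describes can be sketched in a few lines: find past decisions comparable to the present case and surface them, with outcomes, for a language model to reason over. The record fields and the field-matching similarity rule below are hypothetical, invented purely for illustration.

```python
# Illustrative sketch: rank past decision records by similarity to the
# present case so their outcomes can be cited like legal precedents.
# The fields and matching rule are invented for this example.

past_decisions = [
    {"purpose": "mortgage", "credit_band": "good", "approved": True,
     "outcome": "repaid on schedule"},
    {"purpose": "mortgage", "credit_band": "poor", "approved": True,
     "outcome": "defaulted after 14 months"},
    {"purpose": "auto", "credit_band": "good", "approved": True,
     "outcome": "repaid early"},
]

def comparable(case: dict, record: dict) -> int:
    # Toy similarity: count the fields the present case shares with a record.
    return sum(1 for k in ("purpose", "credit_band") if case.get(k) == record.get(k))

def precedents(case: dict, records: list[dict], top_k: int = 2) -> list[dict]:
    # Return the past decisions most comparable to the present case.
    return sorted(records, key=lambda r: comparable(case, r), reverse=True)[:top_k]

case = {"purpose": "mortgage", "credit_band": "good"}
matches = precedents(case, past_decisions)
```

The retrieved records, together with their outcomes, would then be placed in the model's prompt so its recommendation is anchored in what actually happened before.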

Ontologies, data models, and natural language queries

Language models’ responses are considerably improved by inserting domain knowledge, such as that found in ontologies and domain-specific data models, as a buffer between them and databases. Often contained in knowledge graphs, these data models serve as an “interface between the messy data and the language model,” remarked Molham Aref, RelationalAI CEO. “It becomes much easier for the language model to give you the right answer because it can formulate the query of the underlying dataset through the knowledge graph, which abstracts over all this complexity.”

This advantage includes not only a heightened understanding of the concepts in the query, prompt, and underlying data sources, but also the natural language query capacity of language models. Moreover, it’s horizontally applicable to any business domain. According to Aref, this advantage is realized with even “a relatively small ontology, a set of concepts that a knowledge graph typically has, around 200. That tends to be the upper-bound. In smaller organizations, it’s maybe dozens, as opposed to hundreds.”
