

Get your game on: KM skills needed for reliable use of LLMs


Every academic, journalist, lawyer, or, for that matter, industry analyst knows the importance of fact-checking sources. For some, it’s a laborious manual task; others outsource the work, relying on trusted and proven mega-sources such as LexisNexis or Bloomberg to curate and verify the information. Knowledge workers in critical positions understand the importance of source verification, but that doesn’t stop people from taking shortcuts or being fooled by accurate-looking yet misleading information. In this era of generative AI, there has never been a greater need for KM skills, but the outlook for KM is, at best, mixed.

Let’s first look at the positives. Every flavor of AI needs good data and good management of its output to work accurately. Lacking either, results from AI will never be optimal and can, on a bad day, be disastrous. As the old saying goes: “garbage in, garbage out.” What constitutes good and accurate data, however, is subjective; hence the interest and investment in building out industry-specific large language models (LLMs). An excellent example is Oracle’s LLM, built specifically to meet the needs of first responders. The key word here is “built,” because these systems don’t build themselves; they require information management professionals and subject matter experts (SMEs) to design, architect, and manage them.

Generative AI needs KM experts

Billions of dollars are being invested in narrowly focused LLMs, so this should be a boom time for KM experts. Like it or not, generative AI will play a key role in the future of KM and has the potential to deliver rich, accurate, timely, and insightful knowledge to the workers and customers who need it. Yet while generative AI is booming, there does not appear to be a parallel boom in demand for skilled knowledge managers. This is worrying.

On the sunny side of the street, it may be that many more knowledge managers are being hired for work related to generative AI than the numbers suggest, since they are often not given the title of knowledge manager. But my team wonders whether those building generative AI systems have put too much faith in the AI itself and too little into hiring the experts needed to do the job correctly.

We know that a critical strength of generative AI is also one of its most significant weaknesses: It is designed to give plausible, excellent-sounding answers based on patterns, not facts, and, as such, it gets things wrong. In theory, you can get a system such as ChatGPT to provide the sources it used for its answers. The problem is, first, that few users would bother, or know how, to do that, and second, that ChatGPT has already been shown to invent sources. We can only hope that future narrow, business-focused LLMs will be designed with complete accuracy in mind, and the only way to achieve that is with human supervision of the data collection. ChatGPT, in contrast, was trained largely without human supervision on open web sources such as Wikipedia and even Reddit.
