
Get your game on: KM skills needed for reliable use of LLMs


The challenge here is simple: to fill a generative AI system with trusted, verified, and sourced data, you need knowledge managers and SMEs to select and feed that data. But doing so is expensive and takes time. Don't get me wrong: that is precisely what credible AI vendors are doing, but it is not always easy or practical to scale. In our conversations with technology vendors in the KM space, almost all are building in or integrating with generative AI, mainly on the premise that you, the customer, will access and process only your internal corporate data, possibly supplemented with a high-quality, narrowly focused LLM. It makes good sense, on paper at least. Still, as any KMWorld reader knows, most corporate data mountains are not in excellent condition; they come complete with redundancy and unintentional errors.
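For readers who want to picture what "your internal corporate data, supplemented with a narrowly focused LLM" looks like in practice, here is a minimal sketch of that pattern, often called retrieval-augmented generation. Every name in it (INTERNAL_DOCS, retrieve, call_llm) is a hypothetical placeholder rather than any vendor's actual API, and the keyword-matching retrieval stands in for the vector search a real product would use.

```python
# Minimal sketch of grounding a model in curated internal content.
# All names here are hypothetical placeholders, not a specific vendor's API.
from typing import List

# Stand-in for SME-verified, sourced internal corporate content.
INTERNAL_DOCS: List[dict] = [
    {"id": "policy-017", "text": "Travel expenses above $500 require VP approval."},
    {"id": "faq-003", "text": "Support tickets are triaged within one business day."},
]

def retrieve(query: str, docs: List[dict], k: int = 2) -> List[dict]:
    """Naive keyword-overlap scoring; real systems use vector search."""
    terms = set(query.lower().split())
    return sorted(
        docs,
        key=lambda d: len(terms & set(d["text"].lower().split())),
        reverse=True,
    )[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a narrowly focused LLM; swap in a real client."""
    return f"[model answer grounded in a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    # Ground the model in verified sources and keep the source ids,
    # so a knowledge manager can audit where an answer came from.
    sources = retrieve(query, INTERNAL_DOCS)
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in sources)
    prompt = (
        "Answer using only the sources below and cite their ids.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("Who approves travel expenses over $500?"))
```

The point of the sketch is not the code but the dependency it exposes: the quality of the answer is capped by the quality of the curated content feeding it, which is exactly where knowledge managers and SMEs come in.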

The way forward

What is to be done? There is no questioning that generative AI is here to stay, but its use in mission-critical work has some way to go before it can be trusted and let loose. The technology may improve over time; the KM community should use this time to resurge, engage, and educate. I have said it before in this column: The advances in AI and unstructured data automation over the past decade have brought us further than anyone could have imagined. But without professional knowledge and information managers to architect, run, and manage these advanced technologies, they won’t deliver on their promises. Indeed, they will end up delivering unintended and unwelcome outcomes.

My gut feeling is that the teams behind OpenAI, ChatGPT, and their ilk have been surprised by just how many errors (aka hallucinations) these systems make. High accuracy in closed test settings is seldom replicated in the real world, and that seems to be the case here. Examples of the errors ChatGPT generates are now a core component of my conference presentation slides. Just a year ago, I used the disastrous and thankfully short-lived Microsoft Tay as my example of how AI can show such promise yet backfire so quickly; that, too, was AI that worked perfectly in tests and fell flat when exposed to the real world. The thing is, the cat is now out of the bag, and generative AI will be with us for a long time, but much work remains to make it truly trustworthy, and the KM community should be leading that work.
