You may have noted that I didn’t reference AI, particularly generative AI, until paragraph three, even though it seems to be the law that every column written in 2023 must lead with the topic. But I couldn’t avoid it forever: generative AI is at the peak of its hype cycle and dominates the tech landscape. Moreover, it’s a tech advancement that directly impacts the world of KM. Technology vendors that sell into the KM market are almost rabid in their enthusiasm to embrace generative AI, and, it’s fair to say, with good reason overall. But to invoke a phrase from my homeland, it’s “horses for courses.” In other words, what runs well in one place may not run so well in another.
A 100% accuracy rate is a worthy goal for KM, but it’s not realistic. Experts who manually curate a specialized knowledge base will make errors, not intentionally, but because it is not, and never will be, feasible to validate every data point in their information sources. Generative AI likewise cannot, and never will, deliver 100% accurate results; it is simply not feasible to validate every data point before processing it or every output before it reaches the user. Mistakes will happen. But the criticality and burden of those mistakes vary enormously.