
The trust problem with GenAI

During the last 20 years, mid-September through November has always been a busy time for me. It's the height of the technology conference season. This year, I attended nine different events, and under the covers, they all shared two common themes. The first and most apparent was the promotion of generative AI (GenAI). Whether in the form of ChatGPT, Microsoft Copilot, or OpenText Aviator, everyone had something to sell; it's all the rage right now! However, another pervasive theme emerged from talking with attendees at these events: the question of, "What can I trust?" It's not that deep philosophical discussions were underway; rather, it was a developing concern. Trust and truth are central to technology and to the work of information and knowledge management professionals, and both are, at times, in short supply.

Almost everyone knows by now that GenAI is prone to what its makers like to call "hallucinations." This is simply a polite way of saying that GenAI convincingly provides mistruths. It's not so much that it makes mistakes (we all make mistakes) but rather that it is designed to be plausible and convincing even when it simply makes things up. So, in many ways, the mistakes themselves are less of a concern than the fact that, when it makes them, it essentially tries to cover them up. And that goes to the root of the issue, something to which we can likely all relate.

Truth will out (maybe)

For many years, I volunteered in the prison system, first in the U.K. and later in the U.S. During that time, I heard many incredible stories, some heartbreaking and some barely believable. As a prison visitor, you are taught not to judge, only to listen. To this day, I have no idea who was telling the truth and who was not, nor does it matter. But on a few occasions, I visited with accomplished liars and con artists, people who had used and defrauded many others before finally being locked up. They were remarkable liars who had the process down to a fine art, ensuring that most of what they said was verifiable and plausible before working in the untruths. We all like to think we would never be the ones to fall victim to such people. However, it's not the naivety of the conned but the skill of the con artist that wins out.

Back to generative AI. Imagine that it has been used to create a summary and analysis of a set of complex legal documents. The likelihood is that it will do an excellent job, so good that any mistakes or hallucinations are convincing enough to go unnoticed. And for what it's worth, you don't have to imagine that; it's already happened. One software company I know well ran more than a dozen proofs of concept with law firms and large corporate legal departments earlier this year to show the power of GenAI. Not one went into production. The reason? After initially being wowed by the technology, someone spotted a small error, and the fact that the error was wrapped so convincingly within the broader summary set off the alarms: How many other errors had it made that had gone unnoticed, and how can we trust it after this? "Lie to me once, shame on you. Lie to me twice, shame on me," as the saying goes.
