
Truth, lies, and large language models

What to do about this?

What can we do? I would be fine with requiring chat AI responses to include some type of digital watermark indicating that they were machine-made. Several proposals have been floated, including a clever one from Google for the output of its own AI products. None of them has the buy-in from competing AI companies that would make it a reliable enough tool. But I doubt we’re done innovating on this front.
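
To make the watermarking idea concrete, here is a toy Python sketch of how a statistical text watermark might be detected. It is loosely modeled on published “green list” schemes, not on Google’s or anyone else’s actual method; the shared key, the word-level tokens, and the 50/50 split are all simplifying assumptions, since real detectors work on model token IDs and apply proper statistical tests.

    import hashlib

    SECRET_KEY = "demo-key"  # assumption: a key shared by generator and detector

    def is_green(prev_token: str, token: str) -> bool:
        # Deterministically assign roughly half of all tokens to a "green list"
        # seeded by the preceding token and the secret key.
        digest = hashlib.sha256(f"{SECRET_KEY}:{prev_token}:{token}".encode()).digest()
        return digest[0] % 2 == 0

    def green_fraction(tokens: list[str]) -> float:
        # Text generated with green tokens boosted should score well above
        # the roughly 0.5 expected from unwatermarked text.
        pairs = list(zip(tokens, tokens[1:]))
        hits = sum(is_green(prev, tok) for prev, tok in pairs)
        return hits / max(len(pairs), 1)

    print(green_fraction("the quick brown fox jumps over the lazy dog".split()))

A generator that nudged its sampling toward green tokens would push this fraction toward 1.0, which is what makes the mark detectable without being visible to readers.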

Unless and until watermarking is reliable, these bots could, even more annoyingly, remind us of their detachment from reality every time we ask a question. At the stronger end, they could simply refuse to answer questions that sound like requests for factual knowledge. That latter approach would cut down on the chat AIs’ popularity, but since they so frequently give false answers, is that such a bad thing?
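
As a thought experiment, the refusal approach might look like the Python sketch below. The keyword heuristic and the call_chat_model stub are purely illustrative assumptions; a real system would use a trained intent classifier rather than pattern matching.

    FACTUAL_CUES = ("who", "what", "when", "where", "how many", "is it true")

    def call_chat_model(prompt: str) -> str:
        return "(model-generated answer)"  # stand-in for a real chat API call

    def respond(prompt: str) -> str:
        # Crude intent check: decline anything phrased as a knowledge question.
        if prompt.lower().strip().startswith(FACTUAL_CUES):
            return ("I generate plausible text rather than verified facts. "
                    "Please consult a reliable source for this question.")
        return call_chat_model(prompt)

    print(respond("When was the Eiffel Tower built?"))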

Or they could get better at knowledge. Plugins like Wolfram|Alpha’s bestow genuine mathematical knowledge upon ChatGPT; Wikipedia is experimenting with its own plugin.
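
The plugin pattern itself is simple to sketch: route the questions the base model is bad at, such as arithmetic, to a deterministic tool, and fall back to the model for everything else. The sketch below is a hedged illustration, not Wolfram|Alpha’s or OpenAI’s actual plugin interface; safe_eval stands in for the external computation engine, and call_chat_model for a real chat API.

    import ast
    import operator as op

    OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
           ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

    def safe_eval(expr: str) -> float:
        # Evaluate a plain arithmetic expression without the risks of eval().
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.UnaryOp):
                return OPS[type(node.op)](walk(node.operand))
            raise ValueError("not a pure arithmetic expression")
        return walk(ast.parse(expr, mode="eval"))

    def call_chat_model(question: str) -> str:
        return "(model-generated answer)"  # stand-in for a real chat API call

    def answer(question: str) -> str:
        expr = question.strip().rstrip("?").removeprefix("What is").strip()
        try:
            return str(safe_eval(expr))       # tool path: exact
        except (ValueError, SyntaxError, KeyError):
            return call_chat_model(question)  # fallback: the model's best guess

    print(answer("What is 3**7 + 12?"))  # -> 2199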

AI chatbots for reliable knowledge

Sites that use AI chatbots for reliable knowledge could put in place processes that check the bots’ responses against sources that actually are reliable. Bing already seems to run its ChatGPT responses through its search engine to identify sources. And if you preface a Google Bard prompt with “Using only sources you can cite,” it gives you actual links for what it says.
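
Here is a minimal sketch of that checking step under stated assumptions: we have the model’s answer split into sentences and a handful of trusted source passages, say from a search index, and we flag any sentence with no lexical support. Real systems use entailment or citation models rather than word overlap; this only shows the shape of the idea.

    import re

    def tokens(text: str) -> set[str]:
        return set(re.findall(r"[a-z0-9]+", text.lower()))

    def unsupported_sentences(answer: str, sources: list[str], threshold: float = 0.5):
        # Yield sentences whose content words mostly fail to appear in any source.
        source_vocab = [tokens(s) for s in sources]
        for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
            words = tokens(sentence)
            if not words:
                continue
            best = max(len(words & vocab) / len(words) for vocab in source_vocab)
            if best < threshold:
                yield sentence

    sources = ["The Eiffel Tower is 330 metres tall and located in Paris."]
    answer = "The Eiffel Tower is 330 metres tall. It was moved to Lyon in 1999."
    print(list(unsupported_sentences(answer, sources)))  # flags the second sentence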

Or perhaps chat AI could simply refuse to answer factual questions and instead use its incredible power to turn prompts into well-defined queries against relatively reliable sources of knowledge. What could be easier than chatting with ChatGPT and having it use its admirable ability to summarize what those queries return, which in my experience produces far fewer hallucinations? In response to a query, it could summarize Wikipedia and the pages returned by a plain old web search. In fact, given how difficult to read many Wikipedia articles have become, I sometimes ask ChatGPT to summarize an article as if I were in high school. It seems to do quite a good job at that.
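
That pattern is easy to sketch. Below, the Wikipedia REST summary endpoint is real, but summarize_for_students and call_chat_model are hypothetical stand-ins for whatever chat-completion API you use; the point is that the model is handed vetted text to restate rather than asked to recall facts on its own.

    import requests

    def wikipedia_extract(title: str) -> str:
        # Fetch the lead extract of an article from Wikipedia's public REST API.
        url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        return resp.json()["extract"]

    def call_chat_model(prompt: str) -> str:
        return "(model-generated summary)"  # stand-in for a real chat API call

    def summarize_for_students(text: str) -> str:
        prompt = ("Summarize the following passage for a high-school reader, "
                  "using only the information it contains:\n\n" + text)
        return call_chat_model(prompt)

    print(summarize_for_students(wikipedia_extract("Large_language_model")))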

The good news is that chat AI’s proclivity for hallucinating is well-recognized by the organizations creating these marvels, and they realize it is a danger to the world and to their success, not necessarily in that order of priority. Until that problem is solved, chat AI engines need to lose their self-confidence and make it crystal clear that they are the most unabashed and charming liars the world has ever seen.
