
The ChatGPT ways of knowledge

I am a huge fan of AI-based chatbots—ChatGPT, Google Bard, and the like—because they are mind-bogglingly implausible, but also because, when used carefully, they can teach us so much about the world. At the same time, I worry about what they’re going to teach us about how we know the world.

There’s a huge difference between the two. Knowing about the world means understanding how particular things operate—from sand crowding through an hourglass to poor defendants being crowded through a legal system. Knowing how we know the world means seeing the sometimes hidden ways beliefs get formulated, established as reliably true, and propagated throughout a culture.

These two types of knowing—understanding the world and understanding knowledge—are, in some important ways, at odds in AI-based chatbots.

Of course, AI-based chatbots are extremely young as an end-user tool, so it’s impossible to know how they will be put to work over the long or even short term. However, they seem on their way to replacing web search engines. After all, why ask a search engine for a list of sources that might address your issue when you can ask AI-based chatbots to write a little essay answering the very question you asked?

Just answer the question

Sure, search engines these days already often give you the answer to your question followed by a list of links, but AI-based chatbots can answer more complex questions. For example, Google does a fine job finding places I can buy sunflower seeds, but Bard does way better at recommending good flowers to plant at the base of those sunflowers here in Massachusetts.

Google Search can instantly remind me who candidate Mike Dukakis’ running mate was in 1988—Lloyd Bentsen—but Google Bard can give me a useful report on the effect Bentsen had on the Dukakis campaign … useful if you ignore these two sentences at the end of its response: “In the end, the Democratic ticket of Michael Dukakis and Lloyd Bentsen won the election by a comfortable margin. Dukakis won 46% of the popular vote and 43 states, while Bush won 37% of the popular vote and 11 states.” I had forgotten that Dukakis had won so many of the 54 American states!

Hallucinations are an obvious problem with AI-based chatbots. However, it’s such an important and obvious problem that the big tech companies are highly motivated to address it. Thus, it will probably become less and less of a problem in the future. Still, it’s worth noting that the problem isn’t that the large language models (LLMs), to which AI-based chatbots are a front end, make things up: Everything these systems say is made up, based on the statistical relationships among words. Everything AI-based chatbots say is a hallucination. Most of them just happen to be true hallucinations.

So what’s the problem with what AI-based chatbots are teaching us about knowledge itself? Just about everything.
