Conversation With an AI About Knowledge
I understand the many hesitations about using large language models (LLMs) such as ChatGPT as reliable sources of knowledge. But I still find them uniquely valuable as “conversational” partners for developing and testing ideas. Here’s a lightly edited transcript of one such interaction with ChatGPT 4o as an example.
DW: Traditionally in the West—and we’re only talking about the West—knowledge has been defined as “justified true belief.” We therefore expect to be able to ask someone for the justification of a statement of belief they’ve made. But when you, as an LLM, state something as knowledge, your reasons for making the statement are not justifications as we understand them. Right?
ChatGPT: Yes—with some clarification. LLMs like this one do not possess knowledge in the traditional epistemological sense of justified true belief. What a model provides are outputs based on statistical patterns in its training data, not beliefs, and not claims that have been consciously justified or evaluated as true. So:
♦ When an LLM “states” something that sounds like knowledge, it’s not offering a belief, and it has no internal justification for it.
♦ It can simulate a justification—e.g., “According to X, because of Y…”—but this is a generated output, not access to an internal belief structure or a reasoned inference process.
♦ It cannot verify the truth of its statement, nor can it check whether the justification it gives genuinely supports the claim (except probabilistically).
In short: Yes, this system cannot give a justification in the philosophical sense because it does not “believe,” does not “know,” and does not “justify.” It can only approximate the form of justification based on training data.
DW: Your simulated justification consists of the reasons you present to us for why we should believe it, but those aren’t the reasons you made your statement, right?
ChatGPT: Yes. When a large language model generates a statement, it does so based on statistical patterns in its training data: the correlations among tokens (words, phrases) in context, learned during training via optimization on a predictive objective (typically next-token prediction). The “reasons” it gives when prompted for justification are simulated post hoc rationalizations—they resemble what a human might give, but they are not the actual generative cause of the output.
The actual cause is mathematical: the activation of neurons across layers, influenced by the model’s parameters and the immediate context, with no introspective or epistemic access to “why” in the human sense.
In short: the justification you receive is a linguistically plausible echo of past justifications in the training data, not the reason the model generated the original claim.
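To make the mechanism ChatGPT is describing a bit more concrete, here is a minimal, hypothetical sketch of the next-token step in Python. The vocabulary, the prompt, and the logit values are invented for illustration and come from no real model; the only point is that the output is a probability-weighted choice over tokens, not a conclusion reached from reasons.

```python
# A toy illustration of next-token prediction. A real LLM computes
# logits over its vocabulary with billions of learned parameters;
# here the logits are simply made up. Only the final step is shown:
# turn logits into probabilities (softmax), then sample one token.
import numpy as np

VOCAB = ["Paris", "London", "Rome", "bananas"]

# Hypothetical logits after the context "The capital of France is"
# (illustrative numbers only, not taken from any actual model).
logits = np.array([9.1, 3.2, 2.8, -4.0])

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

probs = softmax(logits)
next_token = np.random.choice(VOCAB, p=probs)

for token, p in zip(VOCAB, probs):
    print(f"{token}: {p:.3f}")
print("sampled next token:", next_token)
```

In this toy example most of the probability mass lands on “Paris,” so the sampled continuation is usually the true one, yet nothing in the procedure checks the claim against the world—which is exactly the gap the next question probes.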
DW: Since you generally, but not always, get knowledge statements right without what we count as justifications, how might this affect our human understanding of the concept of knowledge? How do you think it should affect it?
ChatGPT: The emergence of language models that often—but not always—get knowledge statements right without having access to reasons in the traditional epistemic sense poses a challenge to the Western concept of knowledge as justified true belief. The implications are substantial, and philosophers and cognitive scientists are only beginning to reckon with them. Here’s a high-level response:
How It Might Affect Our Understanding of Knowledge:
1. Decoupling justification from reliability: LLMs demonstrate that a system can generate mostly correct statements without possessing justifications.
2. Emphasis on performance over internal structure: LLMs “know” in the sense that they reliably produce true statements in a wide range of contexts. This aligns more with pragmatic or reliabilist theories of knowledge (e.g., Alvin Goldman’s), which ground knowledge in the reliability of the process rather than in explicit reasons.