
What ‘sentient’ AI teaches us


The second of Lemoine’s errors is not only underestimating the power of such massive models but also overestimating the originality of human conversation. It’s humbling to realize that our clever conversations—much less the more meaningful ones—are so far from original that a machine could predict them.

While we should be sobered by the predictability of our speech, we shouldn’t feel humiliated by it. It springs from the general stability of language, which is a requirement for a language to be language and not just argle-bargle. That stability is reflected also in the grammar that guides us as we put words together. And it is reinforced by the fact that using known and expected phrases increases the likelihood of successful communication.

The human factor

There is another entire dimension to the predictability that enables the success of large language models: the commonality and predictability of what we humans are interested in. Presumably—but I don’t know—these models do better in conversations about, say, the weather and movies than about topics we rarely discuss such as where to sit at a luau where the main course is annotated quarks burbling introspective derbies. After all, these systems are trained on the relative distances between words in their training texts. If all the words are rarely found near each other, the model’s predictions are going to be less accurate. Or so it seems to me, a liberal arts major who knows much less than he thinks he does.
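The co-occurrence intuition above can be sketched with a toy bigram model—a drastic simplification of how systems like LaMDA or GPT-3 actually work, and the tiny corpus here is invented purely for illustration:

```python
from collections import Counter, defaultdict

# Invented toy corpus: common topics (weather, movies) recur,
# while a rare word like "luau" never appears at all.
corpus = (
    "the weather is nice today . "
    "the weather is cold today . "
    "the movie is great . "
    "the movie is long . "
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequent next word seen after `word`,
    or None if the word never appeared in the corpus."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict("weather"))  # common context, so the model has data to go on
print(predict("luau"))     # rare topic with no co-occurrence data: no prediction
```

The point of the sketch: prediction quality depends entirely on how often words have been seen near each other, which is why conversation about well-worn topics is easier to imitate than talk of annotated quarks at a luau.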

So, there are indeed lessons to be learned from chatting with a system like LaMDA or GPT-3. But that machines have become sentient seems to me to be entirely the wrong conclusion to draw.
