Artificial intuition

Over the past five years or so, a few people have somewhat proudly told me that they like to think of AI as artificial intuition. I’ve nodded and pursed my lips in appreciation, although I’ve written off the phrase because intuition is often such a terrible guide. But I’ve been too quick to dismiss it. I think machine learning may be changing my mind about this, not by giving me evidence and reasons, but by reshaping my (and perhaps our) model of knowledge.

My failure to take the idea seriously enough comes most directly from my experience teaching the philosophy of ethics. Moral intuitionism is one of the West’s standard frameworks, although it doesn’t get much love because we have such clear evidence of moral intuitions being disastrously wrong. A lot of racists have thought racism is intuitively obvious, and most of us think that taking care of our family in preference to strangers (“Family first!”) is intuitively obvious.

Intuition

Intuitionism of this sort is offensive to most moral philosophy because intuitions have two primary characteristics. First, saying something is intuitive is saying that you can’t—or don’t feel you need to—provide reasons justifying it. But moral philosophy is all about discovering the sorts of reasons that justify moral decisions.

The second characteristic of intuition is expressed by the immortal line from Ring Lardner’s short story, “The Young Immigrunts”: “‘Shut up,’ he explained.” Claiming something is intuitive shuts down the conversation before it begins, and no philosopher likes that.

But neither of my examples is a good one. The racist will give you plenty of reasons why the object of their hatred deserves it: Those other people are lazy, or cheap, or smell bad. The parent who assumes that family comes first might tell you about the role they suppose family-centric ethics plays in evolution. Those might be good or bad reasons (and I’m going to go out on a limb and say that the racist’s reasons are bad), but if the person can give you any reasons, then they don’t think they’re acting on intuition.

So, I’ve dismissed intuition on the basis of bad examples of it. Real cases of intuition aren’t like that, and usually are far from pernicious. For example, someone might have an intuition that you're a vegetarian, or an intuition that a third-grade class will respond well to a particular activity. These count as intuitions if the person can’t put their finger on exactly what led them to those conclusions, but they have an internal sense that it’s not just a guess.

This could seem to be simply fuzzy or ungrounded thinking, but machine learning is giving us a different type of model.

Sometimes the output from machine learning models is both correct and inexplicable. There are a number of related types of machine learning inexplicability, but a common one is the multidimensional nature of the relationships it finds. In a one-dimensional relationship, you’re comparing things according to a single property: The paints are either water- or oil-based. If you're simultaneously comparing them in terms of price, color, wall coverage, customer ratings, manufacturer, and how much they’ll glare, you're putting them in a multidimensional relationship. Machine learning doesn't care how many dimensions it needs in order to usefully sort matters out or to make accurate predictions. It will happily consider millions of dimensions if the machine has enough power. Try wrapping your puny earthling brain around a million-dimensional model!
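To make the multidimensional point concrete, here is a minimal sketch in Python. The paints, the five features, and every number in it are made up purely for illustration; it simply shows how a model treats each paint as a point in a multidimensional space and measures similarity across all dimensions at once. The same code would run unchanged with five thousand columns instead of five.

```python
# A toy sketch of a multidimensional comparison, using made-up paints.
# All feature names and values here are hypothetical illustrations.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Each row is one paint; each column is one dimension of comparison:
# [price, wall coverage (sq ft), customer rating, glare, oil-based? (0/1)]
paints = np.array([
    [32.0, 400, 4.5, 0.2, 0],
    [45.0, 350, 4.8, 0.7, 1],
    [28.0, 420, 3.9, 0.1, 0],
    [51.0, 380, 4.2, 0.6, 1],
])

# Put the dimensions on a common scale so no single one dominates.
scaler = StandardScaler().fit(paints)

# The algorithm is indifferent to the number of columns: five here,
# but nothing about it changes at five thousand.
model = NearestNeighbors(n_neighbors=2).fit(scaler.transform(paints))

# Which paints are most similar to a new one, across all dimensions at once?
query = np.array([[30.0, 410, 4.4, 0.15, 0]])
distances, indices = model.kneighbors(scaler.transform(query))
print(indices[0])  # likely the two water-based paints in rows 0 and 2
```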
