AI’s new type of knowledge
For almost all rule sets, you can’t tell what you’ll get until you apply the rules over and over, each time to the previous result. Unless the output settles into an obviously repeating pattern, the only way to know what a particular row will look like is to compute every row before it. Wolfram calls this characteristic “computational irreducibility” because, unlike Newton’s sort of rules, you can’t just drop numbers into a formula and get your answer: Newton lets us compute how long the coin will take to fall without having to calculate its position at every moment of its descent.
Both Newtonian laws and cellular automata aim for simplicity and, at least sometimes, end up with complexity. Newton’s laws are simple, but the universe to which they apply is complex. Cellular automata’s 256 rules and the single binary line they apply to are both simple, yet the results can be computationally irreducible.
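The irreducibility is easy to see in code. Here is a minimal sketch of an elementary cellular automaton in Python, using Rule 30 (one of the 256 rules Wolfram catalogs); the rule number and starting row are illustrative choices. Note that the loop has no shortcut: to learn what row n looks like, it must compute every row before it.

```python
# Elementary cellular automaton, Rule 30 (one of the 256 possible rules).
# Each cell's next value depends only on itself and its two neighbors,
# yet there is no formula for row n: you must compute rows 1 .. n-1 first.

RULE = 30  # the rule number's 8 bits encode the output for each neighborhood

def step(cells):
    """Apply the rule once to a row of 0/1 cells (fixed 0 boundary)."""
    padded = [0] + cells + [0]
    return [
        # neighborhood (left, center, right) read as a 3-bit number
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

# Start from a single black cell and iterate, printing the familiar
# irregular Rule 30 triangle.
row = [0] * 15 + [1] + [0] * 15
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Changing `RULE` to another of the 256 values (say 110 or 90) shows the whole range of behaviors, from dull repetition to patterns with no apparent shortcut.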
Machine learning (ML) assumes computational irreducibility. In fact, its value comes from not having to know the rules ahead of time: ML discovers the rules (the patterns) in the data it’s trained on. Those patterns may intersect and compound, producing complexity beyond our understanding. ML is also frequently asked to apply those complex rules to starting states that are themselves complex, from global weather to molecular interactions across cell walls. In short, ML succeeds at complex events by embracing the complexity of our universe in both its rules and its state.
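A toy sketch can make “discovering the rules in the data” concrete. This is not real ML (no statistics, no generalization), just the simplest possible learner: it watches rows produced by a hidden elementary-CA rule and recovers the rule purely by tabulating which neighborhood produced which output. The hidden rule (110) and grid size are illustrative choices.

```python
# Toy "learning from data": recover a hidden cellular-automaton rule
# by observing its output, without ever being told the rule.

HIDDEN_RULE = 110  # the learner below never reads this directly

def step(cells, rule):
    """Apply an elementary CA rule once (fixed 0 boundary)."""
    padded = [0] + cells + [0]
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

# Generate training data with the hidden rule.
row = [0] * 8 + [1] + [0] * 8
history = [row]
for _ in range(20):
    row = step(row, HIDDEN_RULE)
    history.append(row)

# "Learn": for each observed neighborhood, record the output it produced.
table = {}
for prev, nxt in zip(history, history[1:]):
    padded = [0] + prev + [0]
    for i in range(1, len(padded) - 1):
        table[(padded[i - 1], padded[i], padded[i + 1])] = nxt[i - 1]

# Reassemble the rule number from the learned table.
learned = sum(out << (a * 4 + b * 2 + c) for (a, b, c), out in table.items())
print(learned)  # recovers 110, provided all 8 neighborhoods occurred in the data
```

The point of the sketch is the asymmetry: the rule is trivial to recover here because it is small and deterministic, but real ML faces rules it can only approximate, from data that never covers every case.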
Knowledge derived from machine learning
But what type of knowledge does ML produce? Does it even count as knowledge? Is it knowledge if it’s a probability distribution of outcomes that (when it works) is correct, but we don’t always know why it’s correct, exactly how ML came up with it, what the salient factors were, or why they’re salient?
If so, it’s knowledge that lacks traditional roots in method and authority. It’s knowledge that does not necessarily yield explanations. It’s knowledge that may be impossible to check or verify except by how well it performs.
Is this knowledge at all? I would say yes, even if we decide to give it its own name. But perhaps we shouldn’t give it a new name, for calling it knowledge helps remind us of the unknowable complexity of our world’s rules and of the situations to which those rules are applied. ML knowledge is what usable knowledge looks like when it does not turn away from the ultimate unknowability of our world.