
From just-in-time to just-ahead-of-time

Looking to nature for clues

Much of the current thinking in anticipatory systems is based on the work of the late Robert Rosen, a brilliant theoretical and mathematical biologist. His book, Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations, 2nd edition (2012), is the definitive resource.

One of Rosen’s key insights was that living, biological organisms and systems are caught in a temporal field in which the past and future each has a causal effect on current behavior. This happens primarily through modeling what is, what is possible and what is probable.

This opens the door to an entirely new approach to AI: shifting from reactive to anticipatory behaviors. The reactive approach attempts to remedy a problem by looking back and deconstructing the events and conditions that led up to it. Neural networks work this way, attempting to link all possible past conditions to a single present outcome.

The anticipatory approach looks in both directions. It formulates a recommended course of action not only by starting in the past and working forward, but also by postulating a possible future state and working back to the present.
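The "postulate a possible future state and work back to the present" idea can be sketched as a backward search over a transition model. The states, actions, and transitions below are invented purely for illustration:

```python
# Hypothetical transition model: action -> (from_state, to_state).
# All names are illustrative, not from any real system.
TRANSITIONS = {
    "add_capacity": ("rising_load", "stable"),
    "throttle_intake": ("overload_warning", "stable"),
    "ignore": ("rising_load", "overload_warning"),
}

def plan_backward(goal, current, transitions, depth=5):
    """Work back from a postulated future state to the present,
    collecting the actions that connect them (None if no path)."""
    if depth == 0:
        return None
    if goal == current:
        return []
    for action, (src, dst) in transitions.items():
        if dst == goal:
            prefix = plan_backward(src, current, transitions, depth - 1)
            if prefix is not None:
                return prefix + [action]
    return None

# Postulate the desired future ("stable") and work back to now.
print(plan_backward("stable", "rising_load", TRANSITIONS))  # -> ['add_capacity']
```

The search starts at the desired outcome rather than the current state, which is the structural difference from a purely reactive lookup of "what went wrong last time."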

Rosen maintained that anticipatory behaviors can be found in living things at all levels—from the cellular to the individual to the ecosystem, including socio-economic systems. That’s not to say living systems always get it right. Anticipation of future events or outcomes often results in poor decisions based on hype (excessive optimism) or fear (excessive pessimism). Automating anticipatory processes can provide the structure and consistency often lacking in human decision-making. This fits perfectly with our goal of computers working with humans rather than replacing them.

Getting started

Successful anticipation means paying attention to weak signals: seemingly minor disparities that, if left unchecked, can grow into a catastrophic event. This is where the first two levels of our next-generation AI architecture come into play. The first provides the computational capacity to handle the large volume, variety and velocity of weak signals. The second links those signals to an ontology rich enough to provide sufficient meaning and context to plan a sequence of possible responses long before the actual events begin to unfold.
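A minimal sketch of that second level might look like the following, where the signal types, ontology entries, and candidate responses are all hypothetical:

```python
# Hypothetical ontology fragment linking weak-signal types to
# components, dependencies, and candidate responses. Every entry
# here is invented for illustration.
ONTOLOGY = {
    "vibration_spike": {
        "component": "bearing",
        "depends_on": ["lubrication", "load_balance"],
        "possible_responses": ["schedule_inspection", "reduce_load"],
    },
    "temp_drift": {
        "component": "coolant_loop",
        "depends_on": ["pump", "heat_exchanger"],
        "possible_responses": ["flush_loop", "check_pump"],
    },
}

def contextualize(signal):
    """Enrich a raw weak signal with enough meaning and context
    to plan responses before the event unfolds."""
    entry = ONTOLOGY.get(signal["type"])
    if entry is None:
        return None  # unknown signal type; route to a human analyst
    return {
        "signal": signal,
        "component": entry["component"],
        "at_risk": entry["depends_on"],
        "plan": entry["possible_responses"],
    }

weak_signal = {"type": "vibration_spike", "magnitude": 0.07}
print(contextualize(weak_signal)["plan"])  # -> ['schedule_inspection', 'reduce_load']
```

The point of the ontology layer is that a raw reading by itself carries little meaning; it is the web of dependencies around it that makes a response plannable in advance.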

In their book, Managing the Unexpected (2007), Karl Weick and Kathleen Sutcliffe found that successful anticipation in resilient organizations means avoiding three key mistakes: 1) fear of failure, 2) over-simplification and 3) operational disconnect. The first requires a subtle change in mindset: actively seeking indications of potential failure rather than avoiding them. This is standard practice in the medical profession, where weak signals are treated with suspicion until ruled out through differential diagnosis; hence the relentless focus on early detection.

In the second aspect, the drive toward simplicity often results in lumping potentially critical parameters into broad, generic categories, which greatly dilutes their anticipatory properties. To prevent this, a machine-readable ontology can keep track of the myriad seemingly minor details, along with the complex web of interdependencies. This allows decision-makers to be alerted when things start to drift in the wrong direction. Note that “start to drift” is a far cry from crossing an arbitrary threshold, as is often done in computationally intensive approaches.
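One way to make the "start to drift" distinction concrete is a cumulative-drift statistic such as CUSUM, which fires on a sustained trend long before any single reading crosses a hard threshold. The parameter values below are illustrative, not tuned for any real system:

```python
# Minimal sketch contrasting drift detection with a hard threshold,
# using a one-sided CUSUM statistic. All values are illustrative.

def cusum_alert(readings, target, slack=0.5, limit=3.0):
    """Return the index at which cumulative upward drift from the
    target first exceeds the decision limit, or None if it never does."""
    s = 0.0
    for i, x in enumerate(readings):
        # Accumulate only the excess beyond the allowed slack.
        s = max(0.0, s + (x - target - slack))
        if s > limit:
            return i
    return None

# A slow upward drift in which no single reading crosses a naive
# threshold of 15, yet the trend is clearly in the wrong direction.
readings = [10.0, 10.2, 11.1, 11.0, 11.8, 12.1, 12.5, 12.9, 13.2, 13.6]

naive = next((i for i, x in enumerate(readings) if x > 15.0), None)
print(naive)                               # -> None (threshold never fires)
print(cusum_alert(readings, target=10.0))  # -> 5 (drift caught early)
```

The hard threshold stays silent for the entire series, while the cumulative statistic raises an alert at the sixth reading, which is exactly the gap between reacting to a crossed limit and anticipating where the trend is headed.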

In the third aspect, operational errors are minimized by intelligently linking the ontology to specific processes where the necessary interventions can take place. This includes learning by projecting forward, postulating what might go wrong and formulating structured yet flexible ways to detect and prevent it. Adaptability is key.

We’ve barely scratched the surface of this exciting yet critical area of research. Embodying anticipatory behaviors in artificial systems entails building rich, adaptive models of both the system and its environment. If you’re serious about implementing AI in your future enterprise, you need to start anticipating how to make anticipatory systems an integral part of your overall strategy.

For more anticipation-related war stories and their application to business, see Extreme Ownership: How U.S. Navy SEALs Lead and Win, by Jocko Willink and Leif Babin, St. Martin’s Press (2017).
