AI? Or cognitive computing?

The past year has catapulted artificial intelligence into the public consciousness in dramatic fashion, utterly eclipsing the boomlet of inflated expectations that AI experienced in the 1980s. The remaining snow drifts of skepticism left over from the “AI winter” that followed that earlier round of enthusiasm are melting fast—you can hardly find them now, perhaps only on shady edges of parking lots where the mountains left by snowplows of doubt back in the early ’90s are now reduced to a few quickly shrinking snowballs.

Everyone is talking about AI. Only a few years ago, the only people who would utter the term were Ray Kurzweil and a few other courageous zealots from the “old days” who couldn’t give up on the vision; now we see it discussed on local TV news, on the covers of general-audience magazines, and in The New York Times. Presidents, captains of industry, technology gurus, scientists, doctors, university presidents, TV personalities, and even our most high-profile tech executives are engaging in a loud and often divisive conversation about how AI will change our world.

Cognitive business

But for all the attention that the progress and promise of the technology have received in the media, the resulting cloud of hype has created more confusion than understanding around the terms it uses, such as “machine learning” and “deep learning,” and most especially around “AI” versus “cognitive computing.”

As recently as 2011, the year IBM launched its Jeopardy Challenge campaign with Watson, IBM itself would not use the term AI, choosing instead to describe Watson’s natural language understanding and learning methodologies as cognitive computing. As IBM ratcheted up its Watson marketing, the term cognitive computing became part of its market definition program. IBM did not want to be viewed as taking its enterprise customers into the risky and uncharted waters of AI; rather, it was shepherding those enterprises toward better leveraging their digital assets in the new practices of cognitive business. IBM didn’t want executives worrying about machine overlords; it wanted them to understand that there were technologies in operation today that could have highly profitable impacts on their businesses.

The AI arms race

While IBM was busy branding cognitive computing, the new tech behemoths Google, Amazon, and Facebook were busy reintroducing the terminology of AI. They hired bright graduates of leading academic computer science programs, such as those at Stanford, Carnegie Mellon, and MIT, whose graduate programs had kept alive the legacy of the early AI work of the ’60s, ’70s, ’80s, and ’90s. Those companies wound up creating AI research teams that are orders of magnitude larger than earlier efforts, and they took a new, open source-based approach to disseminating the products of that research. Opening up TensorFlow, Facebook AI Research (FAIR), or Microsoft’s Cognitive Services gave AI tools away for free to anyone with the know-how to work with them. By 2016, due largely to those efforts and the strategic importance they were given by the executive suites of the respective competitors, the AI arms race had become the next grand battlefield in the global tech wars.

So now we have AI terminology dominating the headlines, while cognitive computing remains in common usage in enterprise conversations. The question becomes: How can we talk intelligently about developments in our industry when any exchange of views requires a sense-making exercise to unpack the meaning of the words we use?

The Cognitive Computing Consortium and Babson College have been undertaking a joint research program aimed at providing industry with a framework for characterizing the new portfolios of learning-based applications. One of the results of the work so far is a simple four-cell array (see the chart from page 4 of KMWorld, Nov/Dec 2017, Vol. 26 Issue 9, shown below) that helps explicate the purpose and the technical approach of any specific application. As a side benefit, this model offers a way to differentiate and clarify what we mean when we say “AI” and when we say “cognitive computing.”


Chart from page 4, KMWorld, Nov/Dec 2017, Vol. 26 Issue 9.

The horizontal axis of this matrix takes on the historical debate in the AI community between those who feel that AI should stand for true “artificial general intelligence” and those who feel that the realities of our slow progress in the research and delivery of learning-based systems dictate that we should speak of AI only as “augmented human intelligence.” In our view, it is a mistake to insist on a chasm between those interpretations; in reality, they form a continuum of machines’ abilities to render human-like behaviors.

The vertical axis of the matrix posits two poles representing a fundamental division in the purposes or functional capabilities of these learning-based systems. One primary purpose of these systems is to serve as advisors. That is the expert system tradition long studied in knowledge management automation. These systems work best when we want help finding something we can articulate, or a reminder of something we think we already know but can’t bring into focus at the moment; they help us manage what we know. Today they are critical for that single function: “help me find what I know (or suspect) is there somewhere.” Big data has made them an essential tool of business. But they may also be helpful in surfacing problems or insights that we suspect might be there but haven’t learned about yet; they help us surface “known unknowns” within the guidance offered by specific, learned domains.
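To make the two axes more concrete, here is a minimal illustrative sketch in Python of how an application might be tagged against them. The class name, field names, and values are assumptions made for illustration only, not the Consortium’s actual schema, and only the “advisor” pole of the vertical axis is covered in this excerpt.

from dataclasses import dataclass

@dataclass
class ApplicationProfile:
    name: str
    # Horizontal axis: treated as a continuum rather than a binary choice,
    # from 0.0 (purely "augmented human intelligence") to 1.0
    # ("artificial general intelligence").
    intelligence_scope: float
    # Vertical axis: the system's primary purpose; "advisor" is the pole
    # described above (the expert system tradition).
    purpose: str

# A hypothetical enterprise search assistant: near the "augmented" end of
# the horizontal axis, serving an advisor purpose.
search_assistant = ApplicationProfile(
    name="enterprise search assistant",
    intelligence_scope=0.2,
    purpose="advisor",
)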
