
Cognitive computing: A definition and some thoughts

Computers are one of those artifacts of modern life that we love to hate. They are powerful, pervasive, intrusive and, let’s face it, clumsy to use. Today’s applications require us to break down complex, subtle ideas into simplistic statements. We must learn arcane codes to speak their language. They are incapable of assisting us in an evolving knowledge voyage because their understanding breaks down as our context or intentions change.

Quietly, though, researchers have been attacking some of those barriers. Using tools like natural language understanding, search and categorization, visualization, data analysis, psychology, statistics, research in how the brain works and studies in human information interaction, they began to construct a new type of computing system. The work has been evolving for years, and it also draws on research in artificial intelligence, game theory and message understanding. IBM’s Watson changed the game, though, when it won Jeopardy! in 2011. With that event, “cognitive computing” grabbed the spotlight, spawning books, articles, conferences, speeches, hopes and fears.

Like most emergent technology phenomena, however, cognitive computing is something everyone talks about but no one can define. We know it’s different from current systems, but how? That’s the problem our working group on cognitive computing decided to attack. The definition that follows was developed by a cross-disciplinary group of experts from the IT industry, academic institutions and analyst firms. It will be used as a benchmark to determine whether an application can be considered cognitive or not, and it will evolve as others comment on it. Feel free to use it, and to comment on it on wikipedia.org or at synthexis.com/cognitive computing. Our future work will be to develop a set of categories of cognitive computing problems. More on that at KMWorld 2014 (kmworld.com/Conference/2014) in November in Washington, D.C.

Making context computable

Cognitive computing makes a new class of problems computable. It addresses complex situations that are characterized by ambiguity and uncertainty; in other words, it handles human kinds of problems. In these dynamic, information-rich and shifting situations, data tends to change frequently, and it is often conflicting. The goals of users evolve as they learn more and redefine their objectives. To respond to the fluid nature of users’ understanding of their problems, the cognitive computing system offers a synthesis not just of information sources but also of influences, contexts and insights. To do this, systems often need to weigh conflicting evidence and suggest an answer that is “best” rather than “right.”
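
To make the idea of weighing conflicting evidence concrete, here is a minimal Python sketch, with invented candidates and weights and not drawn from any particular product: it aggregates evidence that may support competing answers and suggests the best-supported one with a confidence, rather than a single “right” verdict.

```python
# Hypothetical sketch: weighing conflicting evidence and suggesting a "best"
# answer with a confidence score. Candidates and weights are invented.
from collections import defaultdict

def best_answer(evidence):
    """evidence: list of (candidate, weight) pairs; different pairs may
    support conflicting candidates."""
    scores = defaultdict(float)
    for candidate, weight in evidence:
        scores[candidate] += weight
    total = sum(scores.values()) or 1.0
    best, score = max(scores.items(), key=lambda item: item[1])
    return best, score / total   # an answer plus a confidence, not a verdict

# Two hypotheses, each with some support; the better-supported one is
# suggested along with how confident the system is in it.
print(best_answer([("hypothesis A", 0.6), ("hypothesis B", 0.5),
                   ("hypothesis A", 0.3)]))   # -> ('hypothesis A', ~0.64)
```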

Cognitive computing systems make context computable. They identify and extract context features such as hour, location, task, history or profile to present an information set that is appropriate for an individual or for a dependent application engaged in a specific process at a specific time and place. They provide machine-aided serendipity by wading through massive collections of diverse information to find patterns and then apply those patterns to respond to the needs of the moment.
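
As an illustration of making context computable, the following hypothetical Python sketch extracts a few context features (time of day, location, task) and uses them to rank an information set for one person at one moment; the feature names, weights and items are assumptions made for the example, not part of the definition.

```python
# Hypothetical sketch of ranking an information set by context features.
# The features, weights and items are invented for illustration only.
from datetime import datetime

def rank_for_context(items, context):
    """items: dicts with 'topic', 'place' and 'best_hours' keys.
    context: dict with 'task', 'location' and 'time' (a datetime)."""
    def score(item):
        s = 0.0
        if item["topic"] == context["task"]:
            s += 2.0          # match the task the person is engaged in
        if item["place"] == context["location"]:
            s += 1.0          # prefer locally relevant material
        if context["time"].hour in item["best_hours"]:
            s += 0.5          # time-of-day relevance
        return s
    return sorted(items, key=score, reverse=True)

items = [
    {"topic": "commute", "place": "Washington", "best_hours": [7, 8, 9]},
    {"topic": "dinner",  "place": "Washington", "best_hours": [18, 19, 20]},
]
context = {"task": "commute", "location": "Washington",
           "time": datetime(2014, 11, 5, 8, 30)}
print(rank_for_context(items, context))   # the commute item ranks first
```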

Cognitive computing systems redefine the nature of the relationship between people and their increasingly pervasive digital environment. They may play the role of assistant or coach for the user, or they may act virtually autonomously in many problem-solving situations. The boundaries of the processes and domains these systems will affect are still elastic and emergent. Their output may be prescriptive, suggestive, instructive or simply entertaining.

In order to achieve this new level of computing, cognitive systems must be:

  • Adaptive. They must learn as information changes, and as goals and requirements evolve. They must resolve ambiguity and tolerate unpredictability. They must be engineered to feed on dynamic data in real time, or near real time.
  • Interactive. They must interact easily with users so that those users can define their needs comfortably. They may also interact with other processors, devices and cloud services, as well as with people.
  • Iterative and stateful. They must aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They must “remember” previous interactions in a process and return information that is suitable for the specific application at that point in time, as the sketch following this list illustrates.
  • Contextual. They must understand, identify and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulations, user’s profile, process, task and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory or sensor-provided).
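
The sketch below, again hypothetical Python rather than any vendor’s implementation, illustrates the iterative, stateful and contextual characteristics: the session remembers earlier turns, carries a user profile, and asks a clarifying question when the problem statement is incomplete.

```python
# Hypothetical sketch: a session that is iterative and stateful. All class,
# slot and field names are invented for illustration.
class CognitiveSession:
    REQUIRED_SLOTS = ("goal", "location", "time")  # assumed context elements

    def __init__(self, profile):
        self.profile = profile   # user profile carried across interactions
        self.history = []        # previous turns, remembered for later steps
        self.slots = {}          # contextual elements gathered so far

    def handle(self, utterance, **slots):
        self.history.append(utterance)
        self.slots.update(slots)
        missing = [s for s in self.REQUIRED_SLOTS if s not in self.slots]
        if missing:
            # The problem statement is incomplete: ask rather than guess.
            return f"Can you tell me your {missing[0]}?"
        return (f"Working on '{self.slots['goal']}' for {self.profile['name']} "
                f"near {self.slots['location']} at {self.slots['time']}.")

session = CognitiveSession({"name": "Ana"})
print(session.handle("I need to plan a trip", goal="conference travel"))
# -> asks for the missing location
print(session.handle("Washington, D.C., in November",
                     location="Washington, D.C.", time="November"))
# -> answers using everything remembered so far
```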

Cognitive systems differ from current computing applications: They move beyond tabulating and calculating based on preconfigured rules and programs. They already infer and even reason based on broad objectives.

Many of today’s applications (e.g., search, e-commerce, e-discovery) exhibit some of those features, but it is rare to find all of them fully integrated and interactive.

Cognitive systems will coexist with legacy systems into the indefinite future. But the ambition and reach of cognitive computing are fundamentally different. Leaving the model of computer-as-appliance behind, it seeks to bring computing into a closer, fundamental partnership in human endeavors.

What’s next?

So far, we have identified two different types of cognitive computing systems. The first delivers highly relevant information within the context of the individual’s information needs of the moment. The second finds patterns and surprises—a smart serendipity machine. What’s ahead for cognitive computing is both already evident and frustratingly opaque. Undoubtedly we will see smarter Siris, a conversational, self-controlled transport vehicle from Google and a Watson-powered medical diagnostics advisor from IBM. Perhaps we will adopt personal avatars whose talents in some areas put our “real” ones to shame.

As the future uses of these systems come to light, we hope to help ground the discussion of cognitive computing in a verifiable framework of capabilities and technologies.

Note: This definition was developed by representatives from BA Insight, Babson College, Basis Technology, Cognitive Scale, CustomerMatrix, Decision Resources, Ektron, Google, Hewlett-Packard, IBM, Microsoft/Bing, Next Era Research, Oracle, Pivotal, SAS, Saxena Foundation, Synthexis and Textwise/IP.com. It was sponsored by CustomerMatrix and Hewlett-Packard, with additional support from IBM.

 
