
Understanding cognitive computing: How much is enough?

Because cognitive computing is a highly technical and mathematical field, understanding the enabling technology can be difficult for the business users and executives who often, despite their lack of expertise in the field, have a role in deciding whether to adopt it. They may also need to provide at least some input on which software solutions to select. So how much technical knowledge does a decision-maker need in order to make intelligent choices about the future of the company?

At a conceptual level, the idea of a computer that thinks like a human is both understandable and appealing. At the procedural level, the idea that the system ingests information and performs calculations of some kind is also understandable. In between those high-level and low-level extremes, however, lies a lot of room for confusion.

The terminology alone can be daunting—machine learning, deep learning, neural networks and random forests are just a few of the terms likely to come up in discussions of cognitive computing. More generic terms such as algorithm and model are also part of the vocabulary, but understanding what they mean in this context can be difficult.

As with many emerging fields, cognitive computing is defined in different ways by different organizations. The Cognitive Computing Consortium, for example, has a detailed definition that addresses many dimensions, while Accenture condenses it to three words: “sense,” “comprehend” and “act,” and then explains each one.

Some definitions focus on the underlying technologies, which generally include machine learning, natural language processing (NLP), data mining and pattern recognition. Cognitive computing systems may also include ontologies that provide facts about entities and the relationships among them, whether general or specific.

Training the system

Because cognitive computing aims to emulate human thinking, the computer needs to be informed about that thinking so it can match its responses to those in the human model. “When people say they are training their model or their algorithm,” says Patrick Hall, machine learning engineer and data scientist at SAS, a provider of business intelligence solutions, “they mean that they will provide many examples of the data labeled with correct responses—sometimes millions of examples—so that the system will be able to make decisions about new data when it is presented.” That training process is referred to as supervised learning, and it is a mainstay of machine learning and contemporary cognitive computing approaches.
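
To make that concrete, here is a minimal sketch of supervised learning in Python. The scikit-learn library and synthetic data are illustrative assumptions, not the tooling any of the vendors quoted here use; the point is simply that a model is fit on labeled examples and then asked to decide about data it has never seen.

```python
# A minimal sketch of supervised learning: train on labeled
# examples, then make decisions about new, unseen data.
# Library, data, and model choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Synthetic "labeled examples": X holds the features, y the correct responses.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # "training": learn from the labeled data

predictions = model.predict(X_new)   # decide about data the model has not seen
print(predictions[:10])
```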

Algorithms are the mathematical procedures that direct the analyses as the computer makes its way through the data. Because algorithms are written by programmers, it is safe to say that a machine learning system is only going to be as good as the algorithm on which it is based. Anyone evaluating the performance of a cognitive computing system should validate it by comparing its output against a known set of inputs and expected results. To some extent, the process is always going to be a black box because it is not possible to watch an algorithm at work, but good results on a test sample are one indicator of how well the system is working.
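
In the same hedged vein, the sketch below shows that kind of black-box validation: the model trains on one sample and is judged only by comparing its output on a held-out set against the known answers. The synthetic data, the random forest, and the confusion-matrix metric are all illustrative choices.

```python
# Sketch: treat the trained model as a black box and judge it
# by its output on a known test sample it never trained on.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_train, y_train)

# Compare known answers (rows) with the black box's output (columns).
print(confusion_matrix(y_test, model.predict(X_test)))
```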

Machine learning dates back more than half a century, but only recently has computing power been available to make the best use of it. “A major difference between machine learning and traditional statistical analyses is that fewer assumptions need to be made about the underlying population,” Hall explains. “In many statistical analyses, the goal is to determine whether an outcome is different from the one that would be expected from that population, whereas with machine learning, the data can be analyzed with fewer burdensome assumptions about data samples that describe business problems.” The computing power allows processing of very large data sets that include hundreds or thousands of variables.

“The number of variables marks another differentiator between machine learning and traditional statistical analyses,” Hall says. Regression analysis, for example, is most commonly used with a modest number of variables selected to test a hypothesis, and the results are interpreted by a human practitioner. In contrast, machine learning models tend to be difficult for humans to interpret, and the algorithms thrive on the abundant supply of examples available in large data sets. If the analysis scenario involves a relatively small number of variables, machine learning may not be the best match, so an important factor in decision-making is knowing the data and its characteristics.
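
A small, hypothetical side-by-side may help: a regression on a handful of chosen variables produces coefficients a practitioner can read directly, while a machine learning model such as a random forest digests hundreds of variables but resists that kind of interpretation. The synthetic data and model choices below are assumptions made purely for illustration.

```python
# Sketch of the contrast: a few hypothesis-driven variables with
# readable coefficients vs. many variables in a harder-to-interpret
# machine learning model. All data here is synthetic.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Traditional analysis: four chosen variables, interpretable output.
X_small, y_small = make_regression(n_samples=200, n_features=4,
                                   n_informative=4, noise=5.0, random_state=0)
reg = LinearRegression().fit(X_small, y_small)
print("Interpretable coefficients:", np.round(reg.coef_, 2))

# Machine learning: hundreds of variables, many examples, opaque model.
X_big, y_big = make_regression(n_samples=2000, n_features=200,
                               n_informative=50, noise=5.0, random_state=0)
forest = RandomForestRegressor(n_estimators=20, random_state=0).fit(X_big, y_big)
print("Forest fit score:", round(forest.score(X_big, y_big), 3))
```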

The ability to analyze data with unknown distributions opens the door to other opportunities as well; one of the many newer approaches is unsupervised learning. In such processes there is no assumed correct answer, such as whether or not a credit card charge was fraudulent. The analyses are run, the data is mined, patterns are detected, and information is categorized. Unsupervised learning is used for discovery in large data sets that contain many possible variables, to find patterns or relationships among them.
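
As a rough sketch of unsupervised learning, the snippet below clusters unlabeled data with k-means: no correct answers are supplied, and the categories are discovered rather than taught. The synthetic data, the choice of algorithm, and the assumed number of clusters are all illustrative.

```python
# Sketch of unsupervised learning: no labeled "correct answers,"
# just pattern discovery in unlabeled data. The cluster count
# is an assumption, not something the data announces.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Unlabeled data: features only, no outcome column.
X, _ = make_blobs(n_samples=500, centers=3, n_features=10, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)   # categories discovered, not taught
print("Discovered cluster sizes:", [int((labels == k).sum()) for k in range(3)])
```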

SAS’s statistical, data mining, fraud and risk solutions have used machine learning for many years, and the company has made a major R&D commitment to machine learning and cognitive computing.

Going deeper into learning

Machine learning has the advantage of being able to process large amounts of data and match it to a model, but it is not adaptive. Rather, it is trained over and over on a set of examples so that it can match them to an outcome. Cognitive computing seeks to model the adaptive capability of the brain, and it uses multiple AI techniques, including deep learning, to accomplish that.

“Deep learning is a more modern approach to neural networks,” says Matt Sanchez, founder and CTO of CognitiveScale, “leveraging advances in computing infrastructure to build deeper and more complex neural networks and using them to learn structure and meaning of complex data. This moves cognitive computing into the realm of inferential learning, when non-obvious conclusions can be drawn.” CognitiveScale delivers cognitive computing as an API that is used in applications for commerce, financial services, compliance and healthcare for chronic illnesses.
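
As a loose illustration only (this is not CognitiveScale's API, and production deep learning typically uses far larger networks built in specialized frameworks), the sketch below stacks several hidden layers in scikit-learn's small neural network classifier to show what "deeper" means in practice.

```python
# Sketch of a (small) deep neural network: multiple hidden
# layers learning structure from raw features. Real deep
# learning systems are far larger; this is illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=40,
                           n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Three hidden layers make this "deeper" than a classic
# single-hidden-layer neural network.
net = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("Held-out accuracy:", round(net.score(X_test, y_test), 3))
```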
