

Understanding cognitive computing: How much is enough?


Cognitive computing enables a “learn-as-you-go” approach; rather than being presented with right and wrong answers up front, the system reinforces the right ones as they are uncovered. A strong case can be made for the importance of adaptability and the ability to learn on the fly. “When a shopper starts looking for a shirt for work, one set of recommendations is appropriate,” Sanchez says, “but if the search is for a gift for his wife, that set would not be relevant. The system needs to detect the change.” Most recommendation engines are still based on a “people who bought this also bought that” model, which is a broad segmentation approach.
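To make the idea concrete, here is a minimal sketch of a “learn-as-you-go” recommender in Python: a contextual epsilon-greedy bandit that reinforces whichever items the shopper actually responds to, keeping separate estimates per context so that a shift from “shirt for work” to “gift” changes the recommendations. The contexts, items, and click signal are illustrative assumptions, not a description of Sanchez’s system.

```python
# Sketch of a "learn-as-you-go" recommender: a contextual epsilon-greedy bandit.
# Contexts, items, and the click-based reward are assumptions for illustration.
import random
from collections import defaultdict

class LearnAsYouGoRecommender:
    def __init__(self, items, epsilon=0.1):
        self.items = items
        self.epsilon = epsilon
        # Per-context tallies of how often each item turned out to be "right".
        self.counts = defaultdict(lambda: defaultdict(int))
        self.rewards = defaultdict(lambda: defaultdict(float))

    def recommend(self, context):
        # Mostly exploit what has worked in this context, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(self.items)
        return max(
            self.items,
            key=lambda item: self.rewards[context][item]
            / max(self.counts[context][item], 1),
        )

    def reinforce(self, context, item, clicked):
        # Reinforce only the answers that prove to be right (the shopper clicks).
        self.counts[context][item] += 1
        self.rewards[context][item] += 1.0 if clicked else 0.0

rec = LearnAsYouGoRecommender(["dress shirt", "casual shirt", "silk scarf"])
choice = rec.recommend("work wardrobe")        # one context...
rec.reinforce("work wardrobe", choice, clicked=True)
print(rec.recommend("gift for spouse"))        # ...does not bias another
```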

“Computers still do not understand context the way humans do,” Sanchez explains. “If you spent just a moment with a good salesperson in the store, they would quickly find out what you needed.” However, in combination with a human, cognitive computing can leverage information and help scale up the human part of the process. “Our software has been used in wealth management to put information at the fingertips of the wealth adviser,” Sanchez adds, “to allow more consistent and data-driven decisions.”

Role of ontologies

Ontologies give computers something to think about by providing information about the world. “Inference reasoners and ontologies take mathematical learning and allow it to be encoded in software that permits human-like reasoning,” says Stephen DeAngelis, president and CEO of Enterra Solutions. “An ontology might include the fact that an individual owns a certain type of car of a particular color, but it would also know that a car is a motor vehicle and understand the characteristics of vehicles,” he explains. “An inference engine would therefore be able to predict that if an individual is hit by that car, they will be injured and that an ambulance should be called.”
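DeAngelis’s car example can be sketched in a few lines of Python: a toy class hierarchy stands in for the ontology, and a single hand-rolled rule plays the part of the inference engine. The entities and the rule are assumptions made for illustration, not Enterra’s actual knowledge model.

```python
# Toy ontology: a class hierarchy plus a few facts about individuals.
ontology = {
    "is_a": {"sedan": "car", "car": "motor_vehicle"},
    "facts": {
        ("owns", "alice", "red_sedan"),
        ("instance_of", "red_sedan", "sedan"),
        ("struck_by", "pedestrian_1", "red_sedan"),
    },
}

def is_motor_vehicle(entity):
    """Walk the is_a hierarchy to see whether an instance is a motor vehicle."""
    cls = next((c for rel, inst, c in ontology["facts"]
                if rel == "instance_of" and inst == entity), None)
    while cls is not None:
        if cls == "motor_vehicle":
            return True
        cls = ontology["is_a"].get(cls)
    return False

# Inference rule: anyone struck by a motor vehicle is presumed injured,
# so an ambulance should be dispatched.
for rel, person, thing in ontology["facts"]:
    if rel == "struck_by" and is_motor_vehicle(thing):
        print(f"Infer: {person} is injured -> call an ambulance")
```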

The combination of machine learning, inference engines and ontologies leads to the powerful capabilities that are emerging, and in some cases have already emerged, from cognitive computing. Enterra’s software combines those elements with representational learning, a technique that facilitates the extraction of information useful in classification. “The Enterra Enterprise Cognitive System finds the combination of variables that explains what is happening with the data,” DeAngelis says.
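As a rough illustration of that idea, the sketch below learns a lower-dimensional representation of the raw variables and then classifies on top of it. PCA is used here only as a familiar stand-in for representational learning; it is an assumption made for the example, not the technique inside the Enterra Enterprise Cognitive System.

```python
# Learn a compact representation of the raw variables, then classify on it.
# PCA is a simple stand-in for representational learning, used for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn a 10-component representation, then fit a classifier on top of it.
model = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```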

“Algorithms reflect a set of rules that define a sequence of operations,” DeAngelis continues, “and they vary from the very simple to the wildly sophisticated. At the simple level, the algorithm might say, ‘When the light is green, move forward,’ and at the more complex level, the algorithm would be directing an analysis that will detect which parts of an anti-cancer drug molecule are having an effect on the disease.”

Enterra’s software conducts both forward chaining (given the data, what might the potential outcomes be?) and backward chaining (given a certain outcome, what possible paths through the data would explain it?) to allow an understanding of the data. The process is intended to be as transparent as possible so that humans can interpret the results. “Being able to figure out complex relationships requires a lot of math and a lot of algorithms to fully understand what is happening,” DeAngelis says. “But this process has huge potential not only to make businesses run better, but to improve people’s quality of life.”
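A toy version of the two strategies might look like the following in Python; the rules and facts are invented for the example and are not Enterra’s data.

```python
# Forward and backward chaining over a toy rule base (illustrative assumptions).
RULES = [
    ({"low_inventory", "high_demand"}, "stockout_risk"),
    ({"stockout_risk"}, "expedite_shipment"),
]
FACTS = {"low_inventory", "high_demand"}

def forward_chain(facts, rules):
    """Given the data, what outcomes follow? Fire rules until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def backward_chain(goal, facts, rules):
    """Given an outcome, what path through the data would explain it?"""
    if goal in facts:
        return [goal]
    for premises, conclusion in rules:
        if conclusion == goal:
            path = []
            for p in premises:
                sub = backward_chain(p, facts, rules)
                if sub is None:
                    break
                path.extend(sub)
            else:
                return path + [goal]
    return None

print(forward_chain(FACTS, RULES))                        # outcomes implied by the data
print(backward_chain("expedite_shipment", FACTS, RULES))  # explanation of an outcome
```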

As for “Random Forests,” the term refers to an analytic technique developed by Leo Breiman and Adele Cutler and licensed to Salford Systems. The technique is claimed by some highly respected statisticians and data scientists to improve the accuracy of predicted outcomes. Decision trees are used in machine learning and statistics to train a system in classification so it can predict a certain outcome based on variables that have led to that outcome in the past. With random forests, the predicted outcomes from many decision trees are combined, hence the term “forest.” Randomness is introduced by training each decision tree in the forest on a random sample of the entire input data set and by considering a random subset of variables at each branch (decision point) of every tree. In numerous studies, classification error was reduced substantially.
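The short example below shows those same ingredients using scikit-learn’s random forest implementation (not the Salford Systems version): each tree is trained on a bootstrap sample of the input data, a random subset of variables is considered at each split, and the trees’ predictions are combined.

```python
# Random forest illustration with scikit-learn: many decision trees, each trained
# on a random sample of the data, with random variable subsets at every split.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(
    n_estimators=200,      # number of decision trees in the forest
    max_features="sqrt",   # random subset of variables tried at each split
    bootstrap=True,        # each tree sees a random sample of the input data
    random_state=0,
)
forest.fit(X_train, y_train)
print("Classification accuracy:", forest.score(X_test, y_test))
```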

The bottom line for non-technical decision-makers is that while you don’t have to be able to write an algorithm or make your way through a random forest, it helps to understand the fundamental concepts of cognitive computing. It is a dynamic, fast-growing field, and taking the time to understand it will be a good investment.

 

 
