Cognitive computing: Big data and cognitive computing, Part 1
Quoting the definition of cognitive computing from a previous article in KMWorld (kmworld.com/Articles/News/News-Analysis/Cognitive-computing-A-definition-and-some-thoughts-99956.aspx), its role as a human problem solver is at the forefront: "Cognitive computing makes a new class of problems computable. It addresses complex situations that are characterized by ambiguity and uncertainty; in other words, it handles human kinds of problems. In these dynamic, information-rich and shifting situations, data tends to change frequently, and it is often conflicting. The goals of users evolve as they learn more and redefine their objectives. To respond to the fluid nature of users' understanding of their problems, the cognitive computing system offers a synthesis not just of information sources but of influences, contexts and insights."
The technology foundation of cognitive computing is not fundamentally about programming, processing or storage paradigms, or about data flows and stream handling, but rather about the broad-ranging data analysis technologies addressing discovery, disambiguation, contextual understanding, inference, recommendation, probabilistic reasoning and human/machine communications. So instead of MapReduce, Hadoop, NoSQL, Pig, Hive, Spark, Sqoop and other big data tools and technologies, cognitive computing relies on technologies like voice recognition, text-to-speech, language recognition, natural language processing in its many forms, machine learning in its many forms, neural networks, Bayesian statistics and inference, support vector machines, voting algorithms and many kinds of statistical analysis, along with a heavy dependence on human interaction and visualization design. We can layer cognitive computing on a big data foundation, if one is available, in order to understand, infer or reason about the evidence the data contains.
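To make the probabilistic reasoning mentioned above a little more concrete, here is a toy sketch, not drawn from any particular cognitive platform, of how Bayesian updating can combine conflicting evidence sources of differing reliability into a single degree of belief. The reliability figures are invented for illustration.

```python
# Hypothetical illustration: Bayesian updating over conflicting evidence,
# one of the probabilistic reasoning techniques cognitive systems draw on.

def bayesian_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1.0 - prior))

# Three sources weigh in on the same hypothesis: two support it,
# one conflicts. Reliability = P(source reports correctly).
belief = 0.5  # neutral prior
for supports, reliability in [(True, 0.8), (True, 0.7), (False, 0.6)]:
    if supports:
        belief = bayesian_update(belief, reliability, 1.0 - reliability)
    else:
        belief = bayesian_update(belief, 1.0 - reliability, reliability)

print(round(belief, 3))  # belief remains high despite the dissenting source
```

The point of the sketch is that conflicting inputs are not discarded; each one shifts the system's degree of belief in proportion to its assumed reliability, which is how such systems cope with the ambiguity and contradiction described earlier.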
After presenting an argument here for the distinctness of big data and cognitive computing at both the mission and the technology levels, I want to close by pointing out the valuable relationship between the two trends. The most important symbiosis between them is that the availability of big data-scale quantities of data is tremendously helpful for the kinds of machine learning algorithms and methodologies on which cognitive computing depends for the accuracy and contextual appropriateness of its answers or solution recommendations. The flip side of that increased analytic power for cognitive applications is the new kind of analytic value these applications offer those who are trying to make some kind of sense out of the petabytes, exabytes or zettabytes of data accumulating in their enterprise big data "lakes," black holes or other kinds of repositories.
Big data and cognitive computing will continue to be interrelated and will continue to be spoken about together as if they were all of a piece. In fact, they are not, and in the next issue, I will look at further distinctions, as well as interrelationships, between the trends and the terms at the functional and symbolic levels.