
Getting started with cognitive computing


Cognitive computing is real. It is here now, and as time goes on, it is only going to become more widely adopted and provide competitive advantages that will allow some enterprises to leapfrog their competition. And getting started may not be as difficult as you imagine.

Most organizations already have applications in which cognitive computing technologies and approaches can be leveraged, and they likely already have capabilities that can be effectively applied when solving problems with those technologies. The elements required to make cognitive computing meaningful and effective are the same ones needed to make any information retrieval project meaningful and effective.

New tools and methodologies can provide greater functionality and capabilities, but it is best to think of them as an extension or evolution rather than a revolution. I am sure there are plenty of industry pundits and marketers who will breathlessly exclaim how revolutionary this is, and yes, there will be vastly different ways to work in the near future. But organizations don’t deploy revolutions, and even if there were a magic application that would revolutionize work, implementing it and integrating it into business processes, business practices and the organizational culture would take time. Therefore, companies are better off working incrementally, applying the skills they have while moving toward a more sophisticated approach over the long term.

The eternal KM challenge of customer self-service

One application that is getting a great deal of attention is customer self-service. It seems like an obvious choice for cognitive computing and has interesting ramifications. Content can be put into task context when a user is perusing a product list or shopping in a store. Salespeople ask, “How can I help you?” and online chat agents (live or automated) ask the same question. Content needs to be associated with a task or product purchase path, and “help” is what users ask for when they cannot locate what they need or find the answer while on their journey.

An intelligent virtual assistant (IVA) is a mechanism for accessing content in a conversational manner where the assistant can provide a specific answer, rather than a long list of responses. Cognitive computing can help optimize performance of virtual assistants, and although some aspects of the technology may seem out of reach, a number of them will be familiar to knowledge practitioners.

Foundational components of cognitive computing

Development of scenarios and use cases: It is important to define the problem to be solved and the boundaries of the problem space—general-purpose artificial intelligence is a challenging problem to solve. The most successful virtual assistant applications target narrow problems where the use cases can be articulated and the process for developing the solution to user problems can be laid out in a predictable fashion. Help requests that are repeatable, or complex but predictable processes that follow a set of procedures, can be structured for access by an intelligent system. The system can then provide the input to the virtual assistant, which is the interface between the customer who is seeking an answer and the underlying knowledgebase.
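As a hedged illustration only (the intent names, utterances and resolution steps below are invented, and the article does not prescribe any particular format), a narrow, repeatable help request might be captured as a simple structure the system can match a customer question against:

```python
# Hypothetical sketch: structuring a narrow, repeatable use case so an
# intelligent system can match a customer question to a known procedure.
# All intent names, utterances and steps are illustrative only.

USE_CASES = [
    {
        "intent": "reset_password",
        "example_utterances": [
            "I forgot my password",
            "How do I reset my login?",
        ],
        "resolution_steps": [
            "Verify the customer's identity",
            "Send a password-reset link to the registered email",
            "Confirm the customer can sign in",
        ],
    },
]

def match_use_case(question: str):
    """Very naive matcher: pick the use case sharing the most words with the question."""
    q_words = set(question.lower().split())
    best, best_overlap = None, 0
    for case in USE_CASES:
        for utterance in case["example_utterances"]:
            overlap = len(q_words & set(utterance.lower().split()))
            if overlap > best_overlap:
                best, best_overlap = case, overlap
    return best

case = match_use_case("how can I reset my password?")
print(case["intent"] if case else "escalate to a human agent")
```

In a real deployment the matching would be handled by the platform’s own language understanding rather than word overlap; the point is that the use case, not the technology, defines the boundaries of the problem.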

Knowledgebase, corpus and data sources: Cognitive computing applications operate on a knowledgebase, corpus or content source, potentially combined with other sources of structured and unstructured data. Content can range from highly curated and structured material to unstructured sources such as call transcripts, chat logs and user-generated content in discussion forums and postings. Data can be transactional data, customer attributes and profiles and other structured sources, or less well-formed clickstreams and streaming sensor data. A degree of curation, while not mandatory with some tools, will improve the effectiveness of the application when it processes those sources.

Domain-specific terminology: While many tools contain concept relationships developed over years of development and evolution of the tool, domain-specific terminology is typically layered on top of those vocabularies to help identify the core concepts, issues, products, problems or solutions specific to the company or industry. Those specialized vocabularies are used to improve identification of themes, facts and features.
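A hedged sketch of what that layering can look like in practice: a small mapping from company- or industry-specific shorthand to preferred concept labels, applied before themes and features are identified. The terms below are invented for illustration.

```python
# Illustrative only: a small domain-specific vocabulary layered on top of a
# tool's general-purpose concepts, mapping industry shorthand to preferred
# concept labels before themes and features are identified.
DOMAIN_VOCABULARY = {
    "fnol": "first notice of loss",
    "prem": "premium",
    "ded": "deductible",
    "adjuster": "claims adjuster",
}

def normalize_terms(text: str) -> list[str]:
    """Replace recognized domain terms with their preferred concept labels."""
    tokens = text.lower().replace("?", "").replace(".", "").split()
    return [DOMAIN_VOCABULARY.get(tok, tok) for tok in tokens]

print(normalize_terms("The adjuster opened an FNOL and waived my ded."))
# ['the', 'claims adjuster', 'opened', 'an', 'first notice of loss',
#  'and', 'waived', 'my', 'deductible']
```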

Feature definition: Features are characteristics of data sets or content that allow for identification of patterns. Patterns are used to either match content (e.g., for a recommendation engine) or make predictions (e.g., about a purchase motivated by a promotion). Features can also be used to reduce the volume of data for processing (imagine putting data in buckets and processing the buckets) and to develop training sets with which to build hypotheses (consider default rates on credit cards, household income and debt load—correlating those variables can predict a borrower’s chance of default).
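As a minimal, hedged sketch of that credit card example (the data values and the choice of model are invented, not taken from the article), household income and debt load can serve as features in a training set used to fit a simple default-prediction model:

```python
# Purely illustrative sketch of the credit-default example: household income
# and debt load are treated as features, past default outcomes as labels, and
# a simple model learns the correlation. Values and model choice are invented.
from sklearn.linear_model import LogisticRegression

# Features: [household_income_thousands, debt_load_thousands]
X = [
    [35, 40], [42, 38], [50, 10], [95, 20],
    [28, 35], [120, 15], [60, 55], [80, 12],
]
# Labels: 1 = defaulted, 0 = did not default
y = [1, 1, 0, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Predicted probability of default for a new borrower (hypothetical values)
new_borrower = [[45, 30]]
print(model.predict_proba(new_borrower)[0][1])
```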

Features can be defined in advance, and they can also emerge from the data. When features are predefined, they can be analogous to facets and attributes. For example, features can be represented by clusters or groupings of content that identify the types of questions or complaints in a customer service or voice-of-the-customer application. The theme can then be used to narrow the selection of potential responses from a broader set of use cases. A customer support analyst can define the categories of questions and problems, which can then be used to group both issues and potential solutions.
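A hedged sketch of how such groupings might be produced automatically: cluster a handful of support questions by vocabulary and treat each cluster as a candidate theme. The questions, the cluster count and the use of TF-IDF with k-means are assumptions for the example, not a prescribed method.

```python
# Illustrative sketch: grouping customer-service questions into themes so a
# narrower set of candidate answers can be selected.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

questions = [
    "How do I file a claim for hail damage?",
    "What documents do I need to submit a claim?",
    "Why did my premium go up this year?",
    "Can I lower my monthly premium?",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(questions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for question, label in zip(questions, labels):
    print(label, question)   # questions sharing a label form one theme
```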

Dealing with ambiguity

One advantage of using cognitive computing is that it deals more effectively with ambiguity than typical information access approaches. A thesaurus structure used in query expansion (for example, mapping “SOW” as equivalent to “proposal” so that a search on either term returns documents tagged with or containing the other) handles situations in which multiple terms can mean the same thing. When one term can have multiple meanings, we need to disambiguate its use to understand what the user intended. Ambiguity requires an interpretation based on contextual clues, so the answers are not absolute but are assessed as more or less likely. When terms have multiple meanings, one mechanism for determining the intended meaning is natural language processing of the user query. In an insurance processing application, a user could type any of the following phrases, and “claim” would have a different meaning in each case:

  • “The adjuster claimed that my house was damaged by rain.”
  • “I want to file a claim for damage to my house.”
  • “My ex-wife has a claim to my house and we need to have it insured. How can we divide responsibility for payment?”

In the first case, the adjuster made a statement that the homeowner disputes; in the second, an insured individual wants to submit information about damage; and in the third, the concept of claim is related to a right in a property. Those differences require processing to interpret the intent of the user.
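The sketch below illustrates both situations in miniature: expanding a query using the “SOW”/“proposal” equivalence, and routing the word “claim” to different meanings based on surrounding context. The pattern rules are invented stand-ins for genuine natural language processing.

```python
# Hedged sketch of the two situations described above. The synonym mapping and
# the pattern rules are stand-ins for a real thesaurus and real NLP.

# 1) Query expansion: multiple terms, one meaning ("SOW" ~ "proposal").
THESAURUS = {"sow": ["proposal"], "proposal": ["sow"]}

def expand_query(term: str) -> list[str]:
    """Return the term plus any equivalents, so a search matches documents tagged with either."""
    return [term] + THESAURUS.get(term.lower(), [])

print(expand_query("SOW"))      # ['SOW', 'proposal']

# 2) Disambiguation: one term ("claim"), multiple meanings, resolved from context.
def interpret_claim(utterance: str) -> str:
    text = utterance.lower()
    if "file a claim" in text or "submit a claim" in text:
        return "start a new claim for damage"
    if "claimed that" in text:
        return "disputed statement by another party"
    if "claim to" in text:
        return "legal right or interest in property"
    return "unclear: ask a clarifying question"

print(interpret_claim("I want to file a claim for damage to my house."))
```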
