Cognitive computing and AI: allies or adversaries?
Cognitive computing and AI are often players in the same conversation. In one version of that conversation, the terms are used virtually interchangeably, as slightly uncomfortable synonyms. One person may say that she has “an AI” that she is starting to rely on to help her do certain analytical tasks. Another may refer to that same software as one of several cognitive computing tools she uses.
In another version of the conversation, however, the two terms come out of opposite corners, fighting it out round after round for top position in the marketplace. The AI advocates are convinced that their machines will achieve a level of intelligence that surpasses humans in accuracy and insight, perhaps even in strength and agility. The advocates for cognitive computing take a far more conservative approach, positioning cognitive “assistants” as wonderful tools for handling volumes of data and exhaustive rounds of analytics, but they see humans remaining firmly in charge of the process for the foreseeable future.
It is clear that accepting “cognitive” and AI as synonyms is too simplistic an approach for today’s rapidly moving field. So what can we learn by studying the two adversarial perspectives on those trends?
It is always helpful to pose a challenge problem that can help shed light on the core issues packed into opposing views. For example, consider a situation in which you will be making an important life decision. Let’s assume that you will be making a decision about a major career change. Today you have a number of options from which to choose as to how you might proceed with that decision. You could play out your decision journey the old-fashioned way, using your own vision, wits, connections, references and serendipity alone. That will be Scenario 1. But you could also use an AI to guide you in your career direction. That will be Scenario 2. And in Scenario 3, you could use a cognitive computing assistant to analyze and organize relevant information that will help you weigh the elements of your decision.
A series of elements
Scenario 1 carries forward the legacy of the old world in which all our decisions were made “manually.” Those were the days when computers were for counting and 4-function arithmetic and creating reports delivered on deep stacks of folded paper. They were housed in chilled rooms where normal people were not allowed entry. You would not consider engaging with one to contribute to your career change decision. You could not imagine that some kind of computer could become an integral part of your social life.
In that old world, and in our new computer-filled one too, a major decision is made up of a series of elements, typically playing out in serial fashion: First you need a trigger to get started—getting really mad at your boss, for example; then you need a high level of motivation; then you need to develop a vision of where you might be headed; then you enter a discovery period during which you gather all kinds of information from people, published sources, intuitions and hunches; and finally you resolve to take the plunge into the new career phase, or not. In Scenario 1, we would accomplish that decision process using networking, classified ads, professional associations, new educational degrees, multiple in-person interviews, etc. How might we imagine things going differently in an AI-based scenario?
Scenario 2 would begin with an AI approach that might already be taking place without our conscious direction being involved in any way. After all, when we have an AI machine as an alter ego, our alter ego is more intelligent than we are, and it can understand our discomfort with the current job situation much earlier than the trigger point of our getting really mad at the boss. By ingesting our e-mails, text messages, phone calls, Web explorations and social media activities, our AI alter ego may well already know that we should consider a major career change.
Too good to be true?
Our personal AI also knows that we have a degree in statistics, say, and have drifted into a business intelligence analyst role at a large retail firm. By keeping up with trends in the economy, it knows that a new field is being created for flexible and inventive analysts to extract valuable insights from the big data being generated by all kinds of corporate operations and customer engagement. Those “data scientists” are making far more money than we currently make in our BI analyst role, but with both statistics acumen and computer-based data skills, we may already have the qualifications we need to upgrade our career.
At this point, is there any need for us to get involved with the old-world stages of motivation and vision and discovery and finally an action decision? Perhaps at the first conscious trigger point for us—say, a particularly uncomfortable episode during which we feel really mad at the boss and text something about it to a trusted colleague—our personal AI simply lights up the screen of our smartphone and advises us that we now have a new job in the data science group at a global social networking firm, at twice our current annual take-home plus stock options. Our AI has been negotiating with the firm’s AI for at least the last several seconds, and together they have determined that our professional qualifications, personality type, geographical preferences, curiosity and predilection for innovation make a superior match with the firm’s hiring profiles. Decision accomplished. The plane ticket is already booked, our 401(k) already transferred.
Maybe something as effortless as Scenario 2 sounds a bit too good to be true. But the prognostications of many AI advocates point to the imminent arrival of just such a radically different relationship between machines and our human processes—a radically different life. The big problem with that thinking is the depth of the deficits at virtually every level required to deliver even the relatively simplistic features of Scenario 2. In everything from powerful enough hardware processors to fast enough communications networks to smart enough data integration and learning systems to accurate enough voice interfaces, we have a very long way to go in both science and execution before the flip to a machine-driven society takes place.
The cognitive future
So responsible voices in cognitive computing are opting for a Scenario 3 approach, holding back on the AI hype. We can craft helpful cognitive tools today that can identify data science as a potential career path, locate data science job openings all over the world, surface other possible careers such as actuarial science, find educational opportunities, and bring all of that back from the big data available on the Web, perhaps in a digestible, human-friendly form. That kind of intelligent online service will no doubt appear as the framework for the future “job board.” At the end of the day, though, it will still be our responsibility to supply the motivation and make the judgments that can make this kind of major decision the right one.
When we hear leaders like Ginni Rometty say that every business has a cognitive future, let’s hear that as a practical perception about how we can make machines smarter, not a call to have AI machines making the calls in our decision paths.