
Automating cognitive tasks: fact or fiction?

What is intelligence?

There is a long-standing debate in philosophical, psychological and educational circles about how to understand and measure intelligence. Is intelligence actually a singular thing that can be pointed to and measured, for example, by an IQ test? Or are there multiple kinds of intelligence whose existence and behaviors only come to light when individuals confront specific kinds of context in life?

Howard Gardner, a well-respected Harvard professor of education, proposed seven distinguishable “intelligence modalities,” or frames of mind: musical, visual/spatial, verbal, logical/mathematical, bodily/kinesthetic, interpersonal and intrapersonal. (In later work, he proposed up to three additional “intelligences.”) But in his view, any expression of these intelligences is conditioned by the context experienced by the individual. He uses the example of famous chess champion Bobby Fischer and questions whether his particular spatial or logical intelligence would have been apparent had he grown up in a culture without chess.

So, the way we approach the strategy for machine intelligence and its work is going to make a huge difference in the kinds of systems we end up with. If we take the “singular thing” view of intelligence, like recreating a HAL that can run all aspects of the business, we might target an IQ of 150+ and start working hard on artificial general intelligence. However, virtually no one, including the AI research community, believes that this kind of system will be achievable for decades, if ever.

A fragmented process

So, it appears that a more fragmented approach, one that reflects multiple kinds of intelligences, is the primary practical strategy available today. This strategy must understand and incorporate the dependencies of intelligence on context, something we tend to take for granted when working with human beings but which needs rethinking for the Intelligence Economy. If we want to use artificial “intelligence” to build a bot to accomplish work, we will have to focus on understanding the intelligence or intelligences required to accomplish specific cognitive tasks and then engineer a machine that can have a positive impact on the business.

In an incredibly fragmented way, this movement to automate cognitive tasks is already all around us in the consumer environment. Amazon recently announced that the number of Alexa “skills,” developed both by Amazon itself and by thousands of third-party developers using Amazon’s open toolkit, had passed 25,000. Speaking to the Alexa bot through one of Amazon’s Echo devices, you can already order up news briefings from many media outlets, for example. You can also interact with a spectrum of automated systems: home security systems, refrigerator apps that report the shelf life of the food inside, travel services, sports statistics feeds and on and on. Each one of those elements is of necessity a microscopic component, but the entire collection of them has made Alexa a far more valuable home assistant than Amazon could have programmed on its own.
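To make concrete how narrow a single “skill” can be, here is a minimal sketch in Python. It does not use the real Alexa Skills Kit SDK; the intent shape, the shelf-life skill and the sample inventory are illustrative assumptions, meant only to show that one cognitive task, answered well, is a very small piece of software.

```python
from datetime import date

# Hypothetical stand-in for one narrow assistant "skill": answer
# "what should I use up first?" from a small inventory record.
# A sketch only; not the actual Alexa Skills Kit interface.

INVENTORY = {
    "milk": date(2025, 3, 20),
    "spinach": date(2025, 3, 18),
    "yogurt": date(2025, 3, 25),
}

def fridge_status_skill(intent: dict, today: date) -> str:
    """Handle a single intent: report the item closest to expiring."""
    if intent.get("name") != "ShelfLifeStatus":
        return "Sorry, I can only report on food shelf life."
    soonest, when = min(INVENTORY.items(), key=lambda kv: kv[1])
    days_left = (when - today).days
    return f"Use the {soonest} first; it expires in {days_left} days."

if __name__ == "__main__":
    # A voice platform would parse the spoken utterance into an intent like this.
    print(fridge_status_skill({"name": "ShelfLifeStatus"}, date(2025, 3, 15)))
```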

In the enterprise, that proprietary, device-centered, ecosystem-enhanced strategy for generating value will not be viable, but some similar practice of combining components with complementary capabilities into an intelligent “agent” is a likely path to creating intelligent machines.
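One way to picture that composition pattern is sketched below, again in simplified Python and with hypothetical component names: a thin “agent” keeps a registry of narrow components, each handling one kind of cognitive task, and routes incoming work to whichever component is capable of it.

```python
from typing import Callable, Dict

# Illustrative sketch of a composite "agent": narrow components with
# complementary capabilities registered behind a single routing layer.
# The component names and task types are hypothetical examples.

class Agent:
    def __init__(self) -> None:
        self._components: Dict[str, Callable[[dict], str]] = {}

    def register(self, task_type: str, component: Callable[[dict], str]) -> None:
        """Add a narrow component that handles one kind of cognitive task."""
        self._components[task_type] = component

    def handle(self, task: dict) -> str:
        component = self._components.get(task["type"])
        if component is None:
            return f"No component can handle task type '{task['type']}'."
        return component(task)

# Two deliberately narrow components standing in for distinct "intelligences".
def classify_invoice(task: dict) -> str:
    return f"Invoice {task['id']} routed to accounts payable."

def summarize_report(task: dict) -> str:
    return f"Summary of {task['title']}: {task['text'][:40]}..."

if __name__ == "__main__":
    agent = Agent()
    agent.register("invoice", classify_invoice)
    agent.register("summary", summarize_report)
    print(agent.handle({"type": "invoice", "id": "INV-0042"}))
    print(agent.handle({"type": "summary", "title": "Q1 review",
                        "text": "Revenue grew modestly while costs held flat."}))
```

The value, as with Alexa, comes less from any single component than from the breadth of the collection assembled behind one interface.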

Innovation offices, data science teams and lines of business will need to add the next phase of business imperatives to Drucker’s model: we now need to understand the meaning of cognitive tasks and design machines to streamline their accomplishment in particular contexts for the business.
