
From computer to centaur:
Cognitive tools turn the rules upside down


The reigning world chess champion met his ultimate vanquisher in 1997, when an IBM computer called Deep Blue defeated Garry Kasparov in a dramatic match. For the first time, a machine had beaten the best human chess player.

That event was widely seen as the first major milestone in simulating hard human intelligence. Optimists predicted that super-intelligent machines would soon populate office desks, retail stores and management floors. But the promise went unfulfilled.

After conquering chess, the intellectual crown of cognitive games, engineers achieved nothing close to “general intelligence”—nothing that delivered on the broad range of knowledge work. Machine intelligence remained niche intelligence, and it stayed that way for decades. Chess could be reduced to a few dozen rules implemented in software; the rest rested on overwhelming processor power. No one had the slightest idea how such an architecture could become the prototype of a machine acting as a general cognitive tool that interacts with knowledge workers in a human way.

A paradigm shift in human-machine interaction

But Kasparov, having recovered from his embarrassing defeat, was already pursuing another idea: Would it be possible to pair the best humans with the best machines to play superior chess? Each would benefit from the other’s particular strengths. The computer would contribute speed and unlimited memory for analyzing chess moves. The human would contribute insight derived from long experience and intuition.

In June 1998, Kasparov played the first “advanced chess” game against Veselin Topalov, a top-rated chess grand master. Both players were allowed to bring along whatever hardware and software might help them win. They retrieved historical games and requested candidate moves from off-the-shelf chess software whenever they thought it would increase their chances of winning.

A month before that game, Kasparov had beaten Topalov 4-0. Now it turned out that Topalov had learned to exploit computer power far better than his opponent. He fought Kasparov to a 3-3 draw, and that was no singular case.

In 2005, an advanced chess tournament took place that allowed any combination of humans and computers. Steven Cramton and Zackary Stephen, amateurs by Elo rating (the system named after physics professor Arpad Elo), entered with ordinary desktop computers that they had tuned for their purposes. They won the tournament against chess masters with superior ratings and even superior hardware and software, because they had learned how to align computing power with their own judgment. They formed a superior human-machine team; in essence, a new form of chess intelligence had emerged. Kasparov concluded, “Human strategic guidance combined with the tactical acuity of a computer was overwhelming.”
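For readers unfamiliar with the Elo rankings mentioned above, here is a minimal sketch of the standard formulas; the function names are my own, not from the article or any official rating body.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (between 0 and 1) for player A against player B.

    Standard Elo logistic curve: a 400-point rating advantage
    corresponds to roughly a 10:1 expected scoring ratio.
    """
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))


def update_rating(rating: float, expected: float, actual: float,
                  k: float = 16) -> float:
    """Move a rating toward the observed result.

    `actual` is 1 for a win, 0.5 for a draw, 0 for a loss;
    `k` controls how fast ratings adjust.
    """
    return rating + k * (actual - expected)


# Evenly matched players each expect half a point per game:
print(expected_score(1500, 1500))  # 0.5

# A 200-point favorite expects roughly 0.76 points per game:
print(round(expected_score(2800, 2600), 2))  # 0.76
```

This is why a draw against a much stronger opponent, like Topalov’s 3-3 result against Kasparov, still counts as a ratings gain: the weaker player scores above expectation.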

Kasparov coined the term “centaur” to describe that human-machine combination. The Greek mythological creature consists of a human torso with head and arms and the body and legs of a horse.

The rise of knowledge work centaurs

We can expect that this human-machine trajectory will not stop at chess or remain limited to a single cognitive skill domain. With a new class of computing architectures called “cognitive computers” emerging, we see strong evidence that such cognitive assistance will soon spread to many commonplace knowledge domains: retail shops, banking, marketing, consulting, etc.

And that will undoubtedly call into question an iron law of knowledge work: the assumption that humans will outperform machines in cognitive skills for the foreseeable future. We have to change perspective: Human-machine combos will emerge in many domains that seemed beyond the reach of machine intelligence a decade ago: engineering, medicine, law.

A clever cognitive symbiosis of human and machine will outperform the best humans by combining their respective qualities: Machines are accurate and precise; humans are intuitive and creative. Together they operate at a level of cognitive and intuitive performance that neither could achieve alone. That idea will shake the foundations of knowledge work for years and decades to come.

From “transactive memory” to “transactive cognition”

In 1985, psychologist Daniel Wegner coined the term “transactive memory,” defining it as a mechanism through which people collectively encode, store and retrieve knowledge. Initially studied in couples and families, transactive memory was later extended to teams, groups and organizations. What Wegner and his colleagues learned was that people often turn to others to retrieve knowledge that is not present in their own minds. Close interaction with others helps them compensate for, and even substitute for, the limits of their own memory. In that way, a transactive memory system provides its members with more and better knowledge than any individual could access alone. With the rise of social networks and search engines over the last 20 years, machines have increasingly extended, and taken over, that partner role.

If we push Wegner’s idea a little further, it is only a matter of time before “transactive cognition” constitutes a new quality of symbiosis between humans and digital systems, one that helps humans achieve superior thinking skills. It will turn the rules of knowledge work upside down.
