
Cognitive Computing: What can self-driving cars teach about cognitive computing?

When the world learned this spring of the untimely death of a Tesla driver in Florida who crashed his vehicle into a semi-trailer while relying on the company's Autopilot feature, a shudder went through the driverless car community. It should have gone through the cognitive computing community as well.

Despite the tragedy, Google, a longtime driverless-car researcher, and other technology and auto industry leaders such as Uber, Amazon, Ford, Mercedes and even Apple have since made highly public investments, acquisitions and program announcements signaling their intention to take advantage of the shift to driverless cars sooner rather than later. In September, the federal government, through the U.S. Department of Transportation, issued a set of voluntary guidelines for manufacturers and announced plans to create a premarket approval process for driverless systems, one that would both certify the safety of new products and, presumably, speed market acceptance of the new technology.

The issue of trust

That rush to driverless highways (and the government's legitimate concerns) raises the question: Can we trust the computers, cameras, radars and layers upon layers of software code that stand behind those systems?

Trust has always been the 500-pound gorilla in the room when people decide whether they will "buy" a new technology trend. In the early days, when computers mostly did accounting and other kinds of straightforward number crunching, many people offered the machines a kind of blind trust: If the computer calculated it, it must be right. (It must be better than accountants with adding machines.) But that period ended rather quickly, as people found mistakes in their bank statements, absurdities in tax calculations and other egregious errors that "any fool" human would have seen immediately, but that the computer had no context or ability to catch.

A similar dynamic drove the early internet. The first users, researchers and scientists exchanging information over a government and academic network, initially assumed that any information in circulation was high quality. That trust fell apart with the net's rapid growth and the beginnings of commerce: now anybody could make a buck out there, and plenty of people were scammed before they even knew what a scam looked like. One of the biggest challenges for commerce pioneers like eBay, Amazon and PayPal was delivering environments in which people felt safe to participate, trusting that the information and transactions would be high quality and secure.

So given how fraught trust can be, and how much it shapes people's willingness to adopt new technologies, where do we stand with people's trust in cognitive computing applications as they begin to come online?

We are still in the earliest days of getting enough applications into the market to begin to measure aspects of the trust relationship. At this stage, we are not certain what guarantees of the quality and accuracy of systems will be necessary for people to feel safe enough to embrace the new kind of relationship with computers that cognitive computing presumes.

Risk/reward dynamics

But we can look to the most famous public example to date to consider how the trust issue played out: the 2011 Jeopardy challenge that pitted IBM's Watson against the two leading human champions, Ken Jennings and Brad Rutter. As the games progressed and the points accumulated, the human contestants found that playing against a machine created new kinds of challenges compared with playing against other humans. One big competitive problem was that Watson could answer questions far faster than normal human response times, and it frequently answered correctly. As the games went on, Watson built bigger and bigger leads by leveraging its fast responses and impressive accuracy. The humans could not score unless they hit the buzzer fast enough to get a chance to answer, and they were in danger of being shut out by taking too much time to think.

Under that pressure, Jennings and Rutter in effect came to "trust" that Watson's response times were so fast that they, too, would have to hit the buzzer instantly, before knowing whether they even had the answer, strictly in the hope that they could think one up in the few seconds between the buzzer and the moment they had to start giving their best guess. They had to change their perception of risk in order to stay in the game. That is an example of a cognitive system's capabilities turning the risk/reward dynamics of a situation in an entirely new direction. The humans had to change their game simply to give themselves a chance of keeping up with the cognitive machine.

It's important to point out, however, that while this was going on, Watson was making a few famous and egregious errors, the most infamous being its identification of Toronto, Canada, as a U.S. city. IBM researchers could explain immediately why the gaffes occurred, because they understood the assumptions built into the question-answering and game-playing code. But they could not dispel, for many of the humans watching, the sense that Watson, for all its speed, accuracy and game-changing capability, was still a deeply flawed competitor, capable of making mistakes that "any fool" human would recognize in an instant.

The gap dilemma

Attending to that "idiot savant" gap will be a first-order challenge for cognitive applications seeking to build trust between extraordinarily capable computer systems and their human clientele.

The McKinsey Global Institute has been conducting a major research program analyzing the spread of smart automation into a broad range of professions. It has reported that as much as 78 percent of the activities in some types of work could be automated using technologies available today. Even in the knowledge-based professions, which are among those with the lowest potential for automation, some 18 percent of activities are good candidates for automation in the future.

In most examples of computer automation applied to knowledge-based work, computers don't replace humans; instead, they challenge humans to assume a collaborative role. The cognitive systems are there, and they are changing the game. Each job context in each industry will have its own characteristics that color system design and the balance between human and machine activity. But the pressure to automate will be relentless, much like the current pressure to move toward driverless vehicles.

Whether the inevitable transition to a shared cognitive future between human and computer decision-makers is smooth or bumpy will be determined largely by the factor of trust. No one wants to be in the back seat when the autopilot mistakes a semi-trailer for the open road. On the other hand, no one wants to bet the business on a cognitive recommendation that insists that Toronto is a U.S. city.
