Anticipate a bumpy ride
In December, the California Department of Motor Vehicles revoked the registration of Uber’s fleet of 16 self-driving cars in the state. The move came after the company had quietly begun a program for testing the vehicles in San Francisco. Local residents soon noticed, however, that the autonomous vehicles were not necessarily abiding by the rules of the road, even with human driver “monitors” at the wheel. Running red lights and ignoring bicycle lanes were two of the transgressions observed multiple times by pedestrians on the scene.
It perhaps didn’t help the optics of the situation that Uber had “on principle” refused to apply for the $150 permits that California regulations required of all self-driving test vehicles. So the state shut down the test drives, and Uber packed its cars onto flatbed trailers and rolled down the road to Arizona, where local governments expressed a willingness to be more supportive than the San Franciscans (and the terrain is notably flatter).
The self-driving car model
When we try to understand the emerging dynamics of the market for cognitive computing, and what it will look like as products begin to arrive, what better place to look for a model than the field of autonomous vehicles? Self-driving cars, trucks, tractors, buses, etc., have captured the imagination of the technology industry and the transportation industry alike. Like many cognitive computing applications, the innovations in autonomous vehicles are bridging industries, disrupting long-held business models, integrating technologies into new kinds of packages and generally posing a challenge to the established ways of doing things.
We believe that there are at least six areas in which the stutter-start world of self-driving machines can show us interesting things about the development of cognitive computing. First, consider the "consumerization of IT" trend. It has been years already since Gartner and others noted that the IT industry's practice of inventing and delivering whatever it deemed best for the business and its employees had become obsolete. The "users" had switched to taking their cues from the software and devices of their consumer lives. Imagine search that actually works, as close as Google. Imagine phones that are beautiful and useful in new ways, and tablets that are suddenly cool and effective.
Fast forward to the present, and we see that self-driving car innovations are starting with one of the most intensely consumer-oriented platforms around. Consumers have high demands and personal preferences around their vehicles, and you can already observe in the concept cars coming out of Mercedes and others that automakers expect a battle over future design standards for the new functionality arriving in the self-driving environment. The lesson for cognitive computing: The consumer context will be an important, if not the primary, determining factor for innovation in each application area. So much of the cognitive world revolves around human-computer interaction issues that the consumer/user will be in the driver's seat.
Another key area where autonomous vehicles are showing the way is the phenomenon that IBM likes to refer to as “embodied cognition.” In many cognitive computing applications, the “smarts” of the system will be expressed through a device of some kind, whether that device is a smart car, a refrigerator that makes shopping lists (and communicates with the grocer to arrange a drone delivery perhaps?) or a surgical robot device whose vision systems and precision of movement can help the human surgeon reduce the risk of complex surgical procedures.
Demand for digital assistants
Many of the health-oriented cognitive applications bring into high relief an age-old debate in AI circles about the desirability of a "strong" artificial intelligence versus a "weak" intelligence augmentation. We have discussed this tension between full autonomy and assistance/augmentation in earlier columns in relation to trust. The uneven performance of self-driving vehicles, even after years of road testing, is a living demonstration that full autonomy is a very high bar, one that no system has so far reached. In the case of cars, everyone recognizes that an autonomy-seeking machine (like a Tesla running in Autopilot mode, for example) can be as much a threat as a promise. Cognitive computing proposals are largely centered around the augmentation model. Digital assistants, for example, are the applications in highest demand in these early days. We expect that augmentation will be the foundational mode for cognitive computing for the foreseeable future. We also expect that it will be a long time before the autonomous vehicle industry achieves a workable and safe autonomy.
The problems that the car innovators are encountering with autonomy bring us to the next "lesson" for the developers of cognitive computing: contending with the issue of risk. Tesla paid a high price in May 2016 when a crash in Florida killed one of its enthusiastic supporters and brought to the forefront the risk involved in operating a supposedly autonomous system that turns out not to be. Beyond the obvious risk to life and limb, another unresolved issue lurks that could turn out to be an equally important factor in the success of the new industry: liability, or "who is responsible?" If the car (or the surgical assistant or the portfolio manager assistant, etc.) is supposed to be the smart expert, making decisions that propel a process toward a successful conclusion, doesn't the car itself bear some of the blame when it smashes itself under a semitrailer? Can the manufacturer of the car claim that it is free of responsibility? Can the developer of the cognitive application sell the cognitive assistant to professionals on a "buyer beware" basis, as software has always been licensed to date? What happens when injured patients sue the surgeons and the manufacturer because of a robot error? Watch this space.
The issue of risk is inseparable from the issue of trust. We can see in the self-driving car business the pent-up demand on the part of drivers interested in operating a vehicle that could do a lot to take care of them. But by choosing to deliver systems with limited effectiveness, the car marketers are balancing on a razor’s edge where loss of trust on the part of drivers could sink their businesses. Cognitive applications face a similar high bar. While the effectiveness of this generation of digital assistants will inevitably be limited, they need to perform well enough to convince their users that they are doing a lot to take care of them.