Cognitive computing and AI begin to grow together
How much is “good enough”?
Several factors shape our use of technology, and each is at once a threat and a promise. A device that promises to behave to perfection, augmenting our lives rather than intruding on them, is welcome. As long as it remains quietly in the background, volunteering only when summoned, it is deemed useful. But if it intrudes (its volume drowns out human activity) or, worse, disrupts an activity or a human interaction, then it must be adjusted. In the worst cases, a device or technology may actually threaten human welfare. The Boeing 737 MAX is a prominent example: its automated flight-control software acted on its design alone, repeatedly overriding the pilots, and the plane took lives because the humans in the loop had no effective opportunity to modify the outcome.
This is an extreme example, but it is also a clear lesson: humans and machines must both contribute to the functioning of safe human-technology environments. And that’s the problem. What boundaries on device design and human response must be built into an innovation? This is no mean determination to make. How do you mitigate risk in self-driving cars? How far should we override the next technology when it intrudes on human activities? How much reliance on smart machines should we risk in unpredictable conditions? These are questions we should ask now, before preventable disasters intrude on our lives. We need guidance on how to perform the risk-benefit analysis.