
Ethical issues in AI and cognitive computing

Sloppy thinking

Algorithms and biased training sets are not the only culprits in our quest to develop better information systems. We see a growing reliance on accepting AI and cognitive computing recommendations without applying human judgment to test whether those recommendations hold up in the context in which the user is seeking information. Context is a new concept in developing cognitive applications, and we are still experimenting with how to filter results by the user's context without invading their privacy. Another danger in our interaction with information systems is the system's tendency to deprecate or eliminate exceptions rather than, perhaps, highlighting them.
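
As a rough illustration only, and not a description of any particular product, a recommendation pipeline could surface exceptions and low-confidence results to a person rather than silently dropping them. Every name and threshold in this sketch (Recommendation, CONFIDENCE_FLOOR, the 0–1 scores) is invented for the example:

```python
# Hypothetical sketch: route outliers and low-confidence recommendations
# to a human reviewer instead of discarding them.
from dataclasses import dataclass

@dataclass
class Recommendation:
    item_id: str
    score: float        # assumed model confidence on a 0.0-1.0 scale
    is_outlier: bool    # assumed flag from an upstream anomaly check

CONFIDENCE_FLOOR = 0.7  # illustrative cutoff, not a recommended value

def triage(recommendations):
    """Split recommendations into auto-accepted and human-review buckets."""
    auto_accept, review_queue = [], []
    for rec in recommendations:
        # Exceptions are highlighted for a person, not eliminated.
        if rec.is_outlier or rec.score < CONFIDENCE_FLOOR:
            review_queue.append(rec)
        else:
            auto_accept.append(rec)
    return auto_accept, review_queue

accepted, flagged = triage([
    Recommendation("doc-001", 0.92, False),
    Recommendation("doc-002", 0.55, False),
    Recommendation("doc-003", 0.88, True),
])
print(f"{len(accepted)} auto-accepted, {len(flagged)} routed to a human reviewer")
```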

Complexity, context, and AI challenges

While human interaction with information systems has always posed problems for users, new quandaries have arisen because of the volume and availability of uncurated data. These stumbling blocks are again societal as well as legal. Social media, for instance, coupled with a lack of governance, invites manipulation by users, by organizations, and by governments. The drive to regulate malefactors is admirable, but who will have the power to decide whether a social message is dishonest, inaccurate, or evil? The same question hangs over challenges such as the poorly designed software in Boeing's 737 MAX, the uncritical acceptance of bail and sentencing recommendations, election interference, and the spread of ideologies of hate and violence. In yesterday's more homogeneous societies, especially those with non-porous borders, it might have been possible to enforce social norms. That is no longer a possibility.

The complexity of risks and choices has flummoxed technologists as well as regulators. What is the correct decision for a self-driving car to make when faced with the dilemma of injuring one person or a group of bystanders? Autonomous car designers tell us that there will be far fewer traffic deaths with more autonomous vehicles on the road, but this means ceding decisions about whom to injure to a vehicle. Do we want to do this?

Privacy (or lack thereof)

Discussions of privacy issues are perhaps the most prevalent in the press. Do we need to trade privacy for the results and benefits of AI and cognitive computing? Will we resolve issues of data ownership, data (and device) access, and data control? Copyright ownership plays into this area as well, as do transparency and the right to know.

These are rarely technical problems, nor are they likely to spawn effective technical solutions that appeal to all interested parties. Rather, they invite legal or regulatory solutions. And yet, most technologists debate these issues fiercely as if they could arrive at technical solutions.

Perfection eludes us all, every day. Instead, we must develop coping strategies for dealing with imperfection: selecting better training sets and designing vehicles and systems in which disastrous consequences remain a possibility. The trick will be to achieve "good enough" solutions that give humans the chance to work with machines in order to catch egregious errors.
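
One minimal way to picture such a "good enough" arrangement, offered purely as a sketch with invented names and thresholds, is a guardrail in which the machine acts on routine, low-impact decisions but hands anything potentially disastrous, or anything it is unsure of, to a person:

```python
# Illustrative sketch only: escalate high-stakes or low-confidence decisions
# to a human so egregious machine errors can be caught before they act.
HIGH_IMPACT_THRESHOLD = 0.8   # assumed 0-1 scale of potential harm
LOW_CONFIDENCE = 0.6          # assumed 0-1 scale of model confidence

def decide(action, machine_confidence, estimated_impact, human_review):
    """Apply the machine's choice only when both stakes and uncertainty are low."""
    if estimated_impact >= HIGH_IMPACT_THRESHOLD or machine_confidence < LOW_CONFIDENCE:
        # Egregious-error check: a human gets the final say.
        return human_review(action)
    return action

# Example: a consequential recommendation is always escalated,
# no matter how confident the machine is.
final = decide(
    action="deny_bail",
    machine_confidence=0.9,
    estimated_impact=0.95,
    human_review=lambda a: f"escalated for human sign-off: {a}",
)
print(final)
```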
