AI’s ways of being immoral
This issue arises regardless of the context in which machine learning is used. It is a risk baked into ML’s nature and logic. After all, we use ML to find correlations that we would have missed because they’re too obscure or too complex: ML’s haystack of data can be the size of Nebraska, and the needle is the way some stalks in the cities of Omaha, Lincoln, and Bellevue are pointing at one another. While we’re getting better at investigating how models work, the complexity that gives rise to this challenge is indeed essential to machine learning.
This inexplicability is directly tied to moral questions because to judge whether something is moral—say, the way a machine learning app is deciding who gets an interview for a job, or even how many pedestrian crossings a city needs—we want to know either the effects of the answers or how those answers were derived … and preferably both. If a company is sorting job applications manually and hiring too few women, we’d want to know what its applicant pool looks like, and whether the company is doing enough to attract female candidates. If the same problem exists when the company uses machine learning to sort applications, we’d have the same questions but would also wonder whether the AI has learned an anti-woman bias. It can be very difficult to assess whether biases have snuck into the model despite the developers’ best intentions.
Of course, the same is true for humans: Hiring committees made up of people genuinely committed to gender equity may still produce inequitable results because of biases the members don’t know they have, as well as biases embedded in the hiring methodology itself. But that parallel only makes clear how unusual machine learning systems’ moral challenges are: these systems can be black boxes just as we ourselves are. That similarity is an important reason why this new technology is raising moral anxieties the way no prior technology has.