The intricacies of responsible AI at KMWorld 2022
Working with and developing AI necessitates socio-technological awareness from its inception to its application. It boils down to a critical question: What is the kind of relationship we ultimately want to have with AI?
At KMWorld 2022, Phaedra Boinodiris, principal consultant for trustworthy AI at IBM, discussed the ethical approaches to AI design and application during her workshop, “What the Titanic Taught Us About Responsible AI (and What We Need to UNLEARN).”
Boinodiris explained that “organizations that have even the best of intentions can cause a serious amount of harm, in terms of AI.”
Trusting machines and developing ethical AI is not a simple process, Boinodiris stated. Yet, however difficult, implementing best practices that shape how AI is treated throughout its lifecycle determines whether it causes harm. Boinodiris listed the guiding principles IBM formulated in response to AI’s propensity for bias:
- The purpose of AI is to augment human intelligence, not to displace humans.
- Data and insights belong to the creator.
- New technologies, including AI systems, must be transparent and explainable.
Though imperative, these tenets only scratch the surface of what AI must be built upon.
Earning trust for AI is a socio-technological challenge, Boinodiris stressed. The intersection of technology and social ethics is often overlooked by the developers responsible for building it; ethics becomes an afterthought rather than a constant guide throughout development.
“Technology alone cannot solve this problem,” continued Boinodiris. “There is no easy button; it's hard work. You can’t just install and configure a single tool to resolve it.”
Instead, it requires a holistic approach—from people to culture, process, governance, and tools—one that acknowledges that the humans who create data, and who build the machines that generate it, are innately biased.
Data science, Boinodiris noted, plays a significant role in remediating data and the bias of its creators.
“What needs to be unlearned is that 100% of the effort it takes to create AI is coding, when in fact, over 70% of the effort is finding the right data,” she said. “We have to have really talented people who are trained to ask hard questions about data.”
Questions such as, “Who or what generated this data?”; “Who or what is responsible for managing this data?”; “Does it reveal judgements against a group of people?”; and “Is this data inclusive?” are the sorts of inquiries that begin to integrate ethics with technology.
Ultimately, Boinodiris pointed to holistic approaches that target inclusivity and accessibility, both in the AI itself and in the environment that cultivates it.
“When you talk about the right culture to curate AI responsibly, you have to start with a foundation of diversity and inclusivity,” said Boinodiris.
Embracing a truly multidisciplinary approach to expertise—ensuring no group is excluded at the level of an organization’s teams—invites its products to be just as diverse and socially encompassing.
Boinodiris concluded that considering who gets invited to conversations about AI ethics, both within an enterprise and in wider education, also strengthens its ethical footing. At the root, inclusivity and accessibility must be inalienable from technological development and application in order to create and sustain responsible AI.
KMWorld returned in-person to the J.W. Marriott in Washington D.C. on November 7-10, with pre-conference workshops held on November 7.
KMWorld 2022 is a part of a unique program of five co-located conferences, which also includes Enterprise Search & Discovery, Office 365 Symposium, Taxonomy Boot Camp, and Text Analytics Forum.