Understanding the ethical risks associated with different types of AI at KMWorld 2025
In a rapidly evolving digital landscape driven by AI agents and autonomous systems, traditional governance models are being reimagined.
At KMWorld 2025, Phaedra Boinodiris, global leader for responsible AI at IBM Consulting and author of AI for the Rest of Us, led the session “Governance in an Agentic Age: Accelerating Value?” Drawing on her expertise in responsible AI, organizational design, and AI education, she discussed how enterprises can strike a balance between innovation and oversight to thrive in the agentic era.
“If you haven’t started playing around with AI, let today be the day,” Boinodiris said. “Play is the important word.”
The three common plays when it comes to investing in AI are defend, extend, and upend. However, many AI investments are failing to bring a return, and the number one reason there’s no ROI is that the AI isn’t solving a business problem.
“How can we design, create, procure, and govern AI that is responsible?” Boinodiris asked.
All forms of AI present risks, she said. Part of the battle is understanding that there’s more than one type of artificial intelligence: predictive AI, generative AI, and agentic AI. These three types can also incorporate components of one another.
“What we’re describing here isn’t strictly just a technical challenge,” Boinodiris said. “It’s a social-technological challenge.”
You don’t have to look far for compelling stories to understand why this is important, she explained. Establishing AI ethics is forcing people to confront what they truly value across cultures, systems, and time.
“My favorite definition of data is: Data is an artifact of the human experience,” she said. “AI is like a mirror that reflects our biases, but we have to be brave enough to look and see if it reflects our values.”
Data plus context, plus relationships, plus stories equals wisdom, she offered.
“Context is everything,” Boinodiris said.
Agentic AI refers to goal-driven software that acts on an organization’s behalf. It keeps the loop going without us, she explained.
“With greater autonomy comes greater risk,” she said.
She demonstrated how a prompt injection can make AI misbehave, whether by producing biased or harmful material. The University of Finland created a game called Breakable Machines that shows how AI can be hijacked, she said.
For anyone to be held accountable, there needs to be “value alignment” throughout the organization. Everyone needs to be on the same page about how important it is that AI has ethical guardrails.
“It requires power and it requires a funded mandate,” Boinodiris said.
KMWorld returned to the J.W. Marriott in Washington, D.C., on November 17-20, with pre-conference workshops held on November 17.
KMWorld 2025 is a part of a unique program of five co-located conferences, which also includes Enterprise Search & Discovery, Enterprise AI World, Taxonomy Boot Camp, and Text Analytics Forum.