Building Trustable AI
One hazard of using agentic AI is that mistakes also scale. “An error rate of 1% in an agentic AI of 100 steps compounds to be a 63% error rate for the overall operation,” explained Miller. “Adding governance rules and including a human in the loop for validation are critical for developing a trustable AI.” After proving that the rules for generating responses are being followed and the correct process is being reliably repeated, manual checking can be eliminated. “The rules provide a means for scaling human intelligence, but they need to be tested initially to validate them,” Miller noted.
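Miller's figure follows from compounding independent per-step errors: if each of 100 steps succeeds with probability 0.99, the chance that the whole run contains at least one error is 1 − 0.99¹⁰⁰ ≈ 63%. A minimal sketch of that arithmetic, assuming independent, identically distributed step errors:

```python
# Probability that a multi-step agentic workflow contains at least one
# error, assuming each step fails independently at the same rate.
def compound_error_rate(per_step_error: float, steps: int) -> float:
    return 1 - (1 - per_step_error) ** steps

rate = compound_error_rate(0.01, 100)
print(f"{rate:.1%}")  # roughly 63%, matching Miller's figure
```

The same formula shows why governance matters: driving the per-step error rate down to 0.1% brings the 100-step failure rate to under 10%.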
In order to remain current, knowledge models must be adaptive. Progress Semaphore can flag users’ inquiries that relate to concepts that are not in the knowledge model. The system can then send a notice to the subject matter expert (SME) that a concept is missing, and the SME can provide the correct placement for this concept. “The AI can then do a ‘reverse query’ and notify the original user that the new information is now available,” Miller pointed out. “Applied at scale, this means the knowledge platform learns as it takes in more data.” The feedback loop creates confidence in users that the AI system is providing current data rather than static information.
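The flag-notify-reverse-query loop Miller describes can be sketched as a small workflow. All names here (`KnowledgeModel`, `sme_adds_concept`, and so on) are hypothetical illustrations, not the Progress Semaphore API:

```python
# Illustrative sketch of the adaptive feedback loop: unknown concepts are
# flagged for an SME, and waiting users are notified ("reverse query")
# once the SME places the concept in the model. Hypothetical names only.
class KnowledgeModel:
    def __init__(self):
        self.concepts = {"invoice", "purchase order"}
        self.pending = {}  # missing concept -> users awaiting it

    def query(self, user: str, concept: str) -> str:
        if concept in self.concepts:
            return f"answer about {concept}"
        # Concept missing: flag it and queue the user for notification.
        self.pending.setdefault(concept, []).append(user)
        return f"'{concept}' flagged for SME review"

    def sme_adds_concept(self, concept: str) -> list:
        """SME places the missing concept; returns users to notify."""
        self.concepts.add(concept)
        return self.pending.pop(concept, [])

km = KnowledgeModel()
print(km.query("alice", "credit memo"))    # flagged for SME review
print(km.sme_adds_concept("credit memo"))  # ['alice'] gets the reverse query
print(km.query("alice", "credit memo"))    # now answered
```

Applied at scale, each pass through this loop grows the model, which is what lets users trust that they are getting current rather than static information.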
Respecting Customer Data
AI systems rely on customer data for numerous purposes, but one of the primary uses is providing personalized experiences. Customer data also enables analysis of purchasing patterns and predictive analyses, as well as other metrics such as churn likelihood, net promoter scores, and voice of the customer. In many cases, however, customers have grown wary of sharing data because they don’t believe it will be protected. According to Deloitte’s Connected Consumer report (www2.deloitte.com/us/en/pages/about-deloitte/articles/press-releases/increasing-consumer-privacy-and-security-concerns-in-the-generative-ai-era.html), customers have become more worried about security over the last several years. Yet without robust underlying customer data, AI systems cannot provide reliable information.
Founded 25 years ago to facilitate data exchange, Adeptia delivers self-service data integration through Adeptia Connect, which is used to process incoming data of all kinds. Guided by the principle that “AI models are only as good as the data powering them,” Adeptia focuses on first-mile data. “No company works in isolation,” said Deepak Singh, chief innovation officer for Adeptia. “A great deal of external data comes in, whether from vendors, suppliers, or prospective customers. We provide data mapping to make this data consistent and useful as it comes in from disparate sources to resolve differences in how it is labeled.” Citizen developers can integrate data from these sources without writing code.
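The data mapping Singh describes, reconciling differently labeled fields from disparate sources into one consistent schema, can be illustrated with a generic sketch. The field names and mapping table below are invented for illustration and are not Adeptia Connect's implementation:

```python
# Generic first-mile data mapping sketch: records from different sources
# use different labels for the same fields; a mapping table normalizes
# them to one canonical schema. Field names are hypothetical.
FIELD_MAP = {
    "cust_name": "customer_name",     # label used by one vendor feed
    "CustomerName": "customer_name",  # label used by a supplier feed
    "amt": "amount",
    "total_amount": "amount",
}

def normalize(record: dict) -> dict:
    """Rename known fields to canonical names; pass others through."""
    return {FIELD_MAP.get(key, key): value for key, value in record.items()}

vendor_record = {"cust_name": "Acme", "amt": 120.0}
supplier_record = {"CustomerName": "Acme", "total_amount": 120.0}
print(normalize(vendor_record) == normalize(supplier_record))  # True
```

In a no-code tool, citizen developers would build the equivalent of `FIELD_MAP` through a visual interface rather than in code, but the underlying normalization is the same.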
Although accuracy in data is important to all analytical systems and to AI in particular, customer data needs to be treated with particular respect. “First, companies should be transparent about why they are collecting customer data and what value it has,” commented Singh. “They also need to back up their collection initiatives with the proper security measures in order to fulfill the assurances they give.” In addition, their practices should be summarized clearly. “They should not hide their practices in complex notices, or make it difficult to opt out,” Singh advised.
Adeptia helps its users build trust by enabling them to maintain data quality while complying with privacy regulations. With these assurances, customers will have greater trust in sharing data with AI applications that are delivering products or services. Customers are more likely to respond to requests when the use of their data is made transparent and the opt-out options are clear. Singh also pointed to bias as an important issue in trustability. “If your training data is biased, then the AI will be too,” he noted. “That’s another reason why it’s important to have a human in the loop to verify responses such as recommendations. Inappropriate responses will reflect poorly on a company, so it’s important to be aware.”