
Cognitive computing: Building blocks


Arriving at decisions

The model described earlier for loan applications is a logistic linear model; the score is calculated simply by adding up the weighted factors. “Several other classes of models are getting widespread attention, such as tree models and deep neural networks,” said Datta, “because they provide more nuanced analyses, but the decision can be more difficult to explain.” Truera offers a built-in capability to assess model quality and explainability for these more sophisticated models.
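The scoring step of such a logistic linear model can be sketched in a few lines. The feature names, weights, and bias below are invented for illustration; a real lender's model would learn these values from historical data.

```python
import math

# Hypothetical weights for illustration only; positive weights raise the
# approval score, negative weights lower it.
WEIGHTS = {"income": 0.8, "debt_to_income": -1.5, "years_employed": 0.3}
BIAS = -0.2

def approval_score(applicant: dict) -> float:
    """Add up the weighted factors, then squash through a sigmoid
    so the result is a score between 0 and 1."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# Example applicant with feature values normalized to a 0-1 scale.
score = approval_score(
    {"income": 0.9, "debt_to_income": 0.4, "years_employed": 0.5}
)
print(round(score, 3))
```

Because the score is just a weighted sum, each factor's contribution can be read directly off its weight, which is what makes this class of model comparatively easy to explain.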

Tree models contain multiple decision points that together determine the outcome. In the case of the loan application, individuals with incomes above a certain level would be routed down one branch, and those below it, down another. A further breakdown would then send individuals with low debt-to-income levels in one direction and those with high levels in another. “The result would be four risk categories,” said Datta. “Those with high incomes and low debt ratios would be in the lowest risk category, and those with low incomes and high debt ratios would be in the highest risk category.” The algorithm for a loan decision becomes more complex, and so does the explainability. This is especially true because an application can combine many trees (referred to as a forest), so the recommended decision is a combination of many factors.
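The four-category tree Datta describes can be written out directly. The split thresholds here are made up for illustration; in practice a tree-learning algorithm chooses them from the training data.

```python
# Illustrative split points; real trees learn these thresholds from data.
INCOME_SPLIT = 60_000       # annual income, in dollars
DTI_SPLIT = 0.35            # debt-to-income ratio

def risk_category(income: float, debt_to_income: float) -> str:
    """Two decision points yield the four risk categories described above."""
    if income >= INCOME_SPLIT:
        if debt_to_income <= DTI_SPLIT:
            return "lowest risk"        # high income, low debt ratio
        return "moderate risk (high income, high debt)"
    if debt_to_income <= DTI_SPLIT:
        return "moderate risk (low income, low debt)"
    return "highest risk"               # low income, high debt ratio

print(risk_category(85_000, 0.20))  # -> lowest risk
print(risk_category(40_000, 0.50))  # -> highest risk
```

Even with only two decision points the explanation is no longer a single weighted sum; with a forest of many such trees, each vote along a different set of splits, the explanation grows harder still.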

Neural networks present similar challenges in that they typically have multiple layers that affect both the learning process and the recommendation process. Therefore, the explanations are also more complex. “If used in an image recognition application, for example, the system may learn that lesions are an indicator of diabetic retinopathy,” Datta observed, “but surfacing the neural network’s reasoning is not easy.” Truera can look deep into the network, identify important network units that drive model predictions, and help with interpreting the concepts that those units have learned. For example, it might surface the fact that an important network unit (such as a neuron) has learned the concept that lesions in images are indicators of diabetic retinopathy.
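Truera's actual technique is proprietary, but the underlying idea of unit interpretation can be illustrated crudely: record each hidden unit's activation across a set of labeled images, then rank units by how strongly their activations correlate with a concept label such as "lesion present." All of the data below is fabricated for the sketch.

```python
# Toy sketch: find the hidden unit whose activations best track a concept.
# Activations and labels are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Rows: images; columns: activations of three hidden units.
activations = [
    [0.9, 0.1, 0.3],
    [0.8, 0.2, 0.5],
    [0.1, 0.7, 0.4],
    [0.2, 0.9, 0.6],
]
has_lesion = [1, 1, 0, 0]  # concept label per image

scores = [
    abs(pearson([row[u] for row in activations], has_lesion))
    for u in range(3)
]
best_unit = max(range(3), key=lambda u: scores[u])
print(best_unit)  # the unit whose activations track the lesion concept
```

A unit that fires almost exclusively on lesion images is a candidate for having "learned" that concept, which is the kind of finding a tool can surface to help explain a prediction.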

A multitude of organizations, both public and private, are exploring the issue of trustworthiness in AI applications. In a study by IBM’s Institute for Business Value, more than 80% of respondents wanted to move ahead with AI, but 60% were worried about liability. In another study, by PwC, 76% of respondents said that concerns about bias and a lack of transparency were impeding the adoption of AI.

Among the organizations trying to determine what is required to engender trust are the National Academies Board on Human-Systems Integration (BOHSI) and the Defense Advanced Research Projects Agency (DARPA). As the use of AI increases and is applied to a broader range of activities, greater focus will need to be placed on ensuring trustworthiness so that all stakeholders will feel confident in the outcomes.
