One of the most pervasive deployments of knowledge graphs in AI is the data engineering required for machine learning models. Because they harmonize data across sources, structures, and formats, such graphs are well suited to the foundational data integration required to ascertain which datasets can best inform the training of cognitive analytics models. “Being able to use a single capability or platform, like a knowledge graph platform, to represent, prepare, model, integrate, and connect all of the data is critical,” Clark said. “Here we want to be really ambitious because the more data you can get access to connect and bring to bear to the analytics stack, the easier the analytics stack gets.” This benefit is enhanced by a data fabric, which naturally joins all enterprise data, regardless of physical location.
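The integration Clark describes can be pictured as merging attributes from separate silos under a shared entity identifier. The sketch below is illustrative only; the source names, identifiers, and attributes are assumptions, not any vendor's schema:

```python
# Two hypothetical silos describing the same customer under a shared identifier.
crm = {"cust:42": {"name": "Acme Corp", "segment": "enterprise"}}
billing = {"cust:42": {"arr": 120000}}

def integrate(*sources):
    """Merge per-entity attributes from multiple sources into one unified view."""
    merged = {}
    for source in sources:
        for entity, attrs in source.items():
            # Entities sharing an identifier are connected; their attributes combine.
            merged.setdefault(entity, {}).update(attrs)
    return merged

unified = integrate(crm, billing)
# unified["cust:42"] now holds name, segment, and arr in one record
```

A real knowledge graph platform does far more (ontologies, inference, provenance), but the core payoff is the same: once entities resolve to shared identifiers, data from any source connects automatically.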
The knowledge foundation of graphs is also invaluable for creating effective training datasets. “When the data’s not containing any information to make predictions more precise because there’s some kind of background knowledge missing, or the data cannot be interpreted in a consistent manner because it’s really too noisy, you cannot go any further with your machine learning algorithms,” Blumauer said. When integrating data to train machine learning models, the capacity of semantic graphs to standardize any type of data is pivotal for surmounting the data silo dilemma, the systemic data problem Blumauer previously mentioned that data fabrics dispel. Otherwise, even the richest array of metadata containing tags and classifications for business objectives that could potentially influence predictive model weights is useless. The only way out of this problem is to create knowledge graphs; otherwise, you never understand the context of the semantic metadata, Blumauer explained.
Implementing knowledge graphs is either a direct requirement or a best practice for the most effective AI techniques, many of which have significant knowledge management consequences. The knowledge populating these graphs further enhances the creation of machine learning models by producing the labels necessary for supervised learning deployments, which makes building these models cheaper and less labor-intensive than it would otherwise be.
Conversely, machine learning models can drastically accelerate the engineering of the enterprise knowledge that populates knowledge graphs, which also saves costs. The two approaches thus accelerate each other's deployment. “There’s a big turnaround in the market for eventually fusing symbolic AI, which is represented by knowledge models, and statistical AI, which is represented by machine learning,” Blumauer explained. Martin also described a number of “graph algorithms” that either invoke statistical AI or affect applications of it. These include “clustering, for example, nearest neighbor, and shortest path,” Martin said. “There are also algorithms you can increasingly run that are either statistical or driven by machine learning.”
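Of the graph algorithms Martin names, shortest path is the easiest to sketch. Below is a minimal breadth-first search over a toy adjacency list; the node names are invented for illustration:

```python
from collections import deque

# Toy directed graph as an adjacency list; nodes and edges are illustrative.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
    "E": [],
}

def shortest_path(graph, start, goal):
    """Breadth-first search: returns one shortest path from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

shortest_path(graph, "A", "E")  # ["A", "B", "D", "E"]
```

Clustering and nearest-neighbor queries follow the same pattern of pure graph traversal; the newer algorithms Martin mentions layer statistical or learned scoring on top of it.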
Knowledge graphs’ machine reasoning is critical for automating human rules, the rule-based systems that were arguably the inception of AI. For example, machine logic can identify whether a loan is risky. “People typically want to reason about that in terms of rules,” Clark said. “Like, here are the conditions that define what a risky loan is for this bank, given the regulatory environment it operates in, given its risk profile, given everything it knows.” By acting on human rules, which incorporate enterprise knowledge, terminology, and its hierarchy, knowledge graphs represent one of the best means by which humans can interact with AI, improving the productivity of both.
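Clark's loan example can be made concrete as a small rule function. The thresholds and field names below are assumptions for illustration, not any bank's actual policy:

```python
# Illustrative rule set; thresholds and field names are hypothetical.
def is_risky_loan(loan, max_ltv=0.8, min_credit_score=620):
    """Apply human-authored rules to flag a risky loan."""
    # Rule 1: loan-to-value ratio exceeds the bank's risk appetite.
    if loan["loan_to_value"] > max_ltv:
        return True
    # Rule 2: borrower's credit score falls below the bank's floor.
    if loan["credit_score"] < min_credit_score:
        return True
    return False

is_risky_loan({"loan_to_value": 0.9, "credit_score": 700})   # True
is_risky_loan({"loan_to_value": 0.5, "credit_score": 700})   # False
```

In a knowledge graph deployment these conditions would be expressed declaratively against graph concepts (the bank, its regulatory environment, its risk profile) rather than hard-coded, which is what lets domain experts author and audit them directly.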