Next-generation natural language technologies: The deep learning agenda

Practical caveats

Implicit in these examples of deep learning’s ability to improve the effectiveness of natural language technologies is its effect on knowledge workers. In each of these use cases, it enables workers to concentrate on the business utility that data-driven processes supply rather than sifting through that data themselves. It is therefore imperative to ensure that the statistical foundations of deep learning models function properly in production settings, which is a critical challenge for this dimension of AI. Realizing the technology’s operational benefits requires consistently accounting for the following:

♦ Model lifecycle management: At a high level, lifecycle management for any deployed machine learning model involves a continuous cycle of “building a model: Initially you start with the data you know is directly relevant,” reflected Nexsan CTO Surya Varanasi. “But the second order effect is you add more data. It may be a little more complicated to model, but the model gets more and more accurate.” It’s essential to evaluate models in production and refine them in an iterative process that often involves adding different types of data, which helps reduce bias.

♦ Model validation: Validating models in deployment is a critical facet of managing them and ensures they consistently produce the desired outputs. Varanasi mentioned a FinTech use case in which traders were using machine learning to optimize stock market predictions. Relevant datasets included not only market trends spanning several years but also daily ones, which required updating models nightly (a minimal sketch of this retrain-and-back-test loop appears after this list). “It’s continuously feeding [the model], validating it, and back-testing it against time,” Varanasi said.

♦ Scale: One of the most pertinent concerns about cognitive statistical models is ensuring they’re able to scale for enterprise needs, which is a major consideration when operationalizing models as opposed to running them in training settings. “If you’re going from 2 terabytes to 10 petabytes [in production], the same solution couldn’t be used,” Varanasi said.

♦ Explainability: “Explainability” is important for deploying any machine learning model, particularly in highly regulated verticals. According to Wilde, competitive deep learning solutions have mechanisms by which organizations can upload sample documents to see what models do with them, as well as graphical techniques that illustrate explanations of model outputs (a simple illustration of this idea also appears after this list).
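
What follows is a minimal sketch, not drawn from the article, of the retrain-and-back-test loop Varanasi describes: each night the model is refit on all data observed so far and then validated against the next day before that day is folded into training. The names here (walk_forward_backtest, daily_frames) are hypothetical, and scikit-learn's Ridge regressor simply stands in for whatever model a team actually uses.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

def walk_forward_backtest(daily_frames, min_history=30):
    """daily_frames: chronologically ordered list of (X_day, y_day) arrays."""
    errors = []
    for day in range(min_history, len(daily_frames)):
        # Retrain on everything observed up to, but not including, this day.
        X_train = np.vstack([X for X, _ in daily_frames[:day]])
        y_train = np.concatenate([y for _, y in daily_frames[:day]])
        model = Ridge().fit(X_train, y_train)

        # Validate against the held-out day: "back-testing it against time."
        X_test, y_test = daily_frames[day]
        errors.append(mean_absolute_error(y_test, model.predict(X_test)))
    return np.mean(errors)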

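And here is a simple, generic illustration of the kind of explanation Wilde refers to, not the vendor tooling itself: fit a linear text classifier, then for any uploaded sample document report which terms pushed the model toward its prediction. The documents, labels, and the explain() helper are hypothetical placeholders.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["contract renewal is overdue", "invoice was paid in full"]  # tiny training sample
labels = [1, 0]                                                     # 1 = needs review

vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(docs), labels)

def explain(document, top_k=5):
    # Per-term contribution = the term's tf-idf weight times the model coefficient.
    weights = vectorizer.transform([document]).toarray()[0] * clf.coef_[0]
    terms = vectorizer.get_feature_names_out()
    ranked = np.argsort(np.abs(weights))[::-1][:top_k]
    return [(terms[i], float(weights[i])) for i in ranked if weights[i] != 0]

print(explain("renewal of the contract is overdue"))
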
Understanding human language

The capacity for machines to understand human language, courtesy of natural language technologies fortified by deep learning, yields enterprise value across nearly any use case and industry. Whether those uses involve creating content, querying it, or acting on it, this combination makes IT far more accessible, and far more useful, than it has ever been.
