• September 6, 2019
  • By Marydee Ojala, Conference Program Director, Information Today, Inc.
  • Article

Everything Old Is New Again

I’m entranced by old technologies being rediscovered, repurposed, and reinvented. Just think, the term artificial intelligence (AI) entered the language in 1956 and you can trace natural language processing (NLP) back to Alan Turing’s work starting in 1950. Text analytics has its antecedents in data mining. Data mining itself has a long history, all the way back to Thomas Bayes, who died in 1761, and his eponymous theorem that still informs algorithms regarding inference, probability, and predictions.

Even when the concept and the phrases are the same, the increased processing power, bandwidth, and storage available today change the applications significantly. Whenever I read about the pattern-matching feats of NLP, machine learning (ML), and AI, I’m reminded of a student I knew at university. He manually counted how many times certain words appeared in a Shakespeare play. As you can imagine, this consumed an enormous amount of his time—and I’ve never been convinced that his professor thought it was worth the effort. With today’s technology, this task would take maybe a minute, maybe even less time.
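For a sense of just how trivial that student's task has become, here is a minimal Python sketch; the snippet of play text is a placeholder standing in for a full script:

```python
from collections import Counter
import re

# Placeholder input: in practice this would be the full text of a play.
play_text = "To be, or not to be, that is the question"

# Normalize to lowercase words, then tally every occurrence in one pass.
words = re.findall(r"[a-z']+", play_text.lower())
counts = Counter(words)

print(counts["to"])           # frequency of a single word: 2
print(len(counts))            # number of distinct words
```

Running this over an entire play takes a fraction of a second, which is the point of the anecdote.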

Data Quality Matters

Jen Snell, VP of Product Marketing at Verint, understands that the fundamental purpose of text analytics and NLP remains constant: “They help get the right information to employees at the right time.” Sounds simple, right? It’s certainly an admirable goal, but one too often thwarted by the amount of information available. Sifting through zettabytes of data is not a task for humans; it has to be done by computers.

For computers to do this well, however robust their text analytics and NLP capabilities, the quality of the underlying data needs to be assessed and optimized. It doesn’t matter whether that data is structured or unstructured; its quality determines the success or failure of a KM project.

Snell also cautions that not all NLPs are the same. Two approaches to using an NLP engine are statistical and symbolic. In the former, you train the system to identify patterns, generate a model, and predict word meanings based on a large data corpus. The latter uses hard-coded linguistic rules, which originate with people and are then taught to machines. Neither, according to Snell, is sufficient by itself. Regardless of how you deploy NLP, data quality makes all the difference.
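The distinction can be seen in miniature with a toy example. This sketch is purely illustrative, not any vendor's implementation: the symbolic tagger applies human-written keyword rules, while the statistical tagger learns word-to-label associations from a (tiny, invented) training corpus:

```python
from collections import Counter

# Symbolic approach: hard-coded linguistic rules written by people.
RULES = {"refund": "billing_intent", "password": "account_intent"}

def symbolic_tag(sentence):
    for keyword, tag in RULES.items():
        if keyword in sentence.lower():
            return tag
    return "unknown"

# Statistical approach: learn word/label associations from labeled examples.
corpus = [
    ("i want my money back refund please", "billing_intent"),
    ("reset my password now", "account_intent"),
    ("refund my last charge", "billing_intent"),
]

def train(corpus):
    model = {}
    for text, label in corpus:
        for word in text.split():
            model.setdefault(word, Counter())[label] += 1
    return model

def statistical_tag(model, sentence):
    votes = Counter()
    for word in sentence.lower().split():
        votes.update(model.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

model = train(corpus)
print(symbolic_tag("Can I get a refund?"))         # billing_intent
print(statistical_tag(model, "please refund me"))  # billing_intent
```

The symbolic tagger is predictable but only as good as its rules; the statistical one generalizes from data but only as well as its corpus allows—which is why, as Snell notes, neither suffices alone.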

Ask NLP

One major change from the early days of NLP is the increasing reliance on more informal language rather than the written word of structured letters or memos. Customers expect to interact with companies in a conversational way, either by actual phone calls or by text messages. As Susan Kahler, SAS’s Global AI Product Marketing Manager, points out, you can use NLP engines to teach and guide machines to examine these types of audio and text data.

Identifying relationships and patterns in the data puts you in a better position to meet customer needs and deliver better experiences, personalized to the individual customer. Additionally, pulling together data from all your customer communications channels and feeding it into an NLP engine gives you insights that will streamline future customer communications.
