Quintessentially search, no matter what the terminology
Insight engines, cognitive search and knowledge discovery are among the terms that have evolved as new descriptors because the label of “search engine” was no longer adequate to describe what those solutions can now do. The techniques now reaching fruition after years of development include the ability to integrate fully with other enterprise applications, explore and proactively present relevant information, accept natural language queries, use cognitive search and machine learning to continuously improve and personalize results, and retrieve and analyze information in real time from repositories throughout the enterprise.
Search engages on many levels
Coveo’s intelligent search software has been built out as a platform over the past year, so rather than serving only as a standalone search tool, it can also be embedded much more easily in such environments as intranets, community sites and contact centers. “The product is fundamentally the same,” says Mark Floisand, CMO of Coveo, “but now it is designed to be accessed from within other applications.”
In the broader context of information access, search engines have developed features that take into account much more information about users. “Coveo uses machine learning to learn from what has been proven successful for previous users, which allows it to continuously improve results for those who are exhibiting similar behavior,” Floisand says.
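Coveo’s actual models are proprietary, but the idea Floisand describes—improving results based on what proved successful for earlier, similar users—can be illustrated with a minimal click-feedback re-ranking sketch. All names here are hypothetical; this is a toy blend of index score and click popularity, not Coveo’s method:

```python
from collections import defaultdict

class BehavioralReranker:
    """Toy re-ranker: boost documents that earlier users chose for the same query."""

    def __init__(self):
        # clicks[query][doc_id] -> how often that doc was chosen for that query
        self.clicks = defaultdict(lambda: defaultdict(int))

    def record_click(self, query, doc_id):
        self.clicks[query][doc_id] += 1

    def rerank(self, query, base_scores):
        """base_scores: dict of doc_id -> relevance score from the index.
        Blend each index score with the share of past clicks the doc received."""
        total = sum(self.clicks[query].values()) or 1
        boosted = {
            doc: score + self.clicks[query][doc] / total
            for doc, score in base_scores.items()
        }
        return sorted(boosted, key=boosted.get, reverse=True)
```

In practice a production system would use far richer signals (dwell time, user segment, query context); the point is only that behavioral feedback shifts the ordering away from the raw index score.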
Coveo indexes disparate content from multiple data sources, securely ranks the results by relevance to a search topic and is able to automatically compare and contrast documents in the repository. “The functionality is less dependent now on taxonomies and maintaining a thesaurus,” Floisand explains. “Coveo’s software uses proprietary techniques to explore the content for the most relevant set of responses based on user inquiries.” Machine learning allows Coveo to identify patterns as they change in real time. “Instead of worrying about specific queries, we think about the model to refine it and improve relevance,” Floisand says.
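The article does not disclose how Coveo scores relevance, so as a stand-in, here is a toy sketch of the general pattern it describes: one unified index built over documents from several repositories, scored with a standard TF-IDF measure. The source names and data shapes are assumptions for illustration only:

```python
import math
from collections import Counter

def build_index(sources):
    """sources: {source_name: {doc_id: text}} -- documents drawn from
    several repositories, merged into a single index of term counts."""
    docs = {}
    for source, items in sources.items():
        for doc_id, text in items.items():
            docs[f"{source}/{doc_id}"] = Counter(text.lower().split())
    return docs

def rank(docs, query):
    """Score each document against the query terms with TF-IDF.
    (Recomputing document frequency per document is inefficient,
    but keeps the sketch short.)"""
    n = len(docs)
    terms = query.lower().split()
    scores = {}
    for doc_id, tf in docs.items():
        score = 0.0
        for t in terms:
            df = sum(1 for d in docs.values() if t in d)
            if df:
                score += tf[t] * math.log((n + 1) / df)
        if score > 0:
            scores[doc_id] = score
    return sorted(scores, key=scores.get, reverse=True)
```

A real engine would also enforce per-source security trimming at query time, which this sketch omits entirely.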
As with so many other enterprise software products, development is being driven by user expectations in the consumer market. “People have a much lower tolerance now for user experiences that are not personalized,” Floisand says. Personalization is generally something that must be developed over time, both because the applications need time to learn and because users need time to mature in their understanding and acceptance of the technology.
“Search needs to be delivered in three progressive stages,” Floisand advises. “First, if the user wants something specific, the information needs to be delivered efficiently—we call this being responsive. Then the system can begin to suggest other relevant information proactively that relates to the topic at hand. Finally, it can begin to deliver predictively, using machine learning to detect the user’s true intent and anticipate what they need next, having observed the behavior of others before them.”
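The third, predictive stage Floisand describes—anticipating what a user needs next based on the behavior of others before them—can be sketched very simply: look at what earlier users searched for after issuing the same query. The session logs and function below are invented for illustration; Coveo’s actual models are far more sophisticated:

```python
from collections import Counter

def predict_next(sessions, current_query):
    """sessions: list of past query sequences, e.g. [["q1", "q2"], ...].
    Return the most common follow-up query that earlier users issued
    after the current query, or None if there is no history."""
    followups = Counter()
    for session in sessions:
        for prev, nxt in zip(session, session[1:]):
            if prev == current_query:
                followups[nxt] += 1
    return followups.most_common(1)[0][0] if followups else None
```

This corresponds to the “predictive” stage only; the “responsive” and “suggestive” stages would be ordinary retrieval and related-content lookup, respectively.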
Another example of the increased sophistication of search software products is that some now include advanced analytics. Sinequa was developed from the beginning with content analytics integrated with the search function. As a cognitive search solution, Sinequa uses natural language processing (NLP) to interact with users and machine learning to continuously improve results. It provides out-of-the-box capability to extract elements from unstructured text, configurable to incorporate terms and phrases specific to particular businesses or domains.
Those text-mining agents (TMAs) are integrated into Sinequa’s indexing engine and allow detection of either standard expressions or complex “shapes” that represent the likely meaning of terms and phrases. “Once defined,” says Scott Parker, senior product marketing manager of Sinequa, “they can be normalized and used throughout the enterprise to extract relationships and concepts.” Natural language processing and machine learning capabilities further support its advanced search.
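Sinequa’s TMA internals are not public, but the general notion of detecting defined “shapes” in text can be loosely illustrated with a regex-based extractor. The pattern names and definitions below are invented for the sketch and bear no relation to Sinequa’s actual agents:

```python
import re

# Hypothetical "shape" definitions: named patterns whose matches are
# collected and lightly normalized, loosely mirroring the TMA idea.
SHAPES = {
    "money": re.compile(r"\$\s?\d[\d,]*(?:\.\d+)?\s*(?:million|billion)?"),
    "part_number": re.compile(r"\b[A-Z]{2,4}-\d{3,5}\b"),
}

def extract(text):
    """Return a dict of shape name -> list of normalized matches."""
    found = {}
    for name, pattern in SHAPES.items():
        matches = [m.group(0).strip() for m in pattern.finditer(text)]
        if matches:
            found[name] = matches
    return found
```

Real TMAs operate at indexing time and handle linguistic variation that plain regular expressions cannot, but the extract-then-normalize flow is the same in spirit.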
TMAs can sift through large volumes of text and data to identify authors and concepts even if the queries and documents do not include the exact terms. “They can be used to verify whether an author has been consulted on certain topics via email,” Parker points out, “and determine the volume of publications and correspondence.” In this way, the TMAs can map implicit networks of people and topics, creating links between them.
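The implicit-network mapping Parker describes can be sketched as a simple co-occurrence graph: link each author to the topics found in their correspondence, then link authors who share topics. The email format and matching below are stand-in assumptions, far cruder than Sinequa’s extraction:

```python
from collections import defaultdict

def map_network(emails, topics):
    """emails: list of (author, body) pairs. Connect authors to topics
    mentioned in their mail, then connect authors through shared topics."""
    author_topics = defaultdict(set)
    for author, body in emails:
        lower = body.lower()
        for topic in topics:
            if topic in lower:
                author_topics[author].add(topic)
    links = set()
    authors = list(author_topics)
    for i, a in enumerate(authors):
        for b in authors[i + 1:]:
            if author_topics[a] & author_topics[b]:
                links.add(frozenset((a, b)))
    return author_topics, links
```

Even this crude version shows how relationships emerge without anyone tagging them explicitly: two people become linked simply because their correspondence touches the same concept.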