Why reducing review time can revolutionize enterprise content search

It typically takes a search engine one to five seconds to return results. Reviewing those results and finding specific information within documents, however, is often tedious and time-consuming. This review time is the most significant stumbling block to faster search.

A McKinsey Global Institute (MGI) report found that 19% of an employee’s work is consumed by searching for and gathering information. That equates to nearly one day out of a five-day work week. According to an article in Forbes, “Numerous studies of ‘knowledge worker’ productivity have shown that we spend too much time gathering information instead of analyzing it.” Gathering means searching, and search must vastly improve.

Though there have been significant advances in AI-driven search with a focus on speed and relevance of results, there have been practically no advances in display methodology to speed up the review process.

The human brain can underpin search

The solution, surprising in its simplicity, is to leverage the human brain’s innate ability to store information as images rather than text. The brain is a powerful visual pattern recognition engine, saving data as generalized snapshots rather than a series of words.

A study by leading neuroscientists from MIT, published in the journal Attention, Perception, & Psychophysics and referenced in MIT News, found that “the human brain can process entire images that the eye sees for as little as 13 milliseconds.”

The study specifically addressed humans’ innate review and selection capabilities. In it, subjects were asked to “look for a particular type of image, such as ‘picnic’ or ‘smiling couple,’ as they were presented a series of images for only 13-80 milliseconds apiece.”

Photo recognition

Searching images is hundreds of times faster than reading text. A classic example of this capability is how quickly someone can locate “just the right” photo from hundreds of images on a mobile phone to share with someone. Even when presented with an enormous number of images while scrolling through a device, the brain is able to immediately identify the photo being sought because of its innate pattern recognition abilities.

Recall how companies operated in the paper-centric world of the not-so-distant past. A person walked to the file room and located the needed file. Searching for information consisted of opening the file, flipping from one stapled document to the next, recognizing the document’s content from the image of its first page, quickly checking the title and date, and estimating the number of pages by feel. Once the person located the needed document, they flipped through the pages quickly, looking for a particular page of interest before settling in to read the contents in detail.

Adding pictures to search

In this analog user experience with paper documents, people unknowingly used their memory and innate pattern recognition capability to find documents and information.

When it comes to the world of digital documents, however, search engines do the exact opposite: they display results as a list of metadata or as textual summaries, forcing people to read first. Reading is hundreds of times slower than inferring content and context from an image, such as a snapshot of a document’s first page.
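As a rough sketch of what an image-first result could look like, a first-page snapshot can be rendered directly from the document itself. The example below assumes PDF sources and the open-source pdfjs-dist library; the article names no particular tooling.

    import * as pdfjsLib from "pdfjs-dist";

    // pdf.js renders pages in a web worker; point it at the bundled worker script.
    pdfjsLib.GlobalWorkerOptions.workerSrc = "pdf.worker.min.js";

    // Render the first page of a PDF as a small thumbnail (a PNG data URL),
    // suitable for displaying next to a search hit instead of bare metadata.
    async function firstPageThumbnail(url: string, scale = 0.2): Promise<string> {
      const pdf = await pdfjsLib.getDocument(url).promise; // load the document
      const page = await pdf.getPage(1);                   // pages are 1-indexed
      const viewport = page.getViewport({ scale });        // shrink to thumbnail size

      const canvas = document.createElement("canvas");
      canvas.width = viewport.width;
      canvas.height = viewport.height;

      await page.render({
        canvasContext: canvas.getContext("2d")!,
        viewport,
      }).promise;

      return canvas.toDataURL("image/png"); // usable directly as an <img> src
    }

Attaching such a snapshot to each search hit lets a reviewer infer a document’s content at a glance, before reading a single word.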

Leveraging the brain’s superior ability to recognize patterns, a next-generation viewer uses three progressive panels to display search results and document pages as visual thumbnails, with the final panel displaying the entire page the user wants to read at length. This simultaneous display recreates the analog experience of working with paper, reducing the inordinate amount of time currently needed to review digital documents online.
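The article stops short of describing an implementation, but a minimal sketch of the three-panel flow, with fetchPageThumbnails and the /render endpoint as hypothetical placeholders, might look like this:

    // Panel 1: first-page thumbnails of matching documents.
    // Panel 2: page thumbnails of the selected document.
    // Panel 3: the full page the user settles on reading.
    interface SearchHit { docId: string; thumbnailUrl: string; }

    // Hypothetical back-end call returning one thumbnail URL per page.
    declare function fetchPageThumbnails(docId: string): Promise<string[]>;

    const resultsPanel = document.getElementById("results")!; // panel 1
    const pagesPanel   = document.getElementById("pages")!;   // panel 2
    const readerPanel  = document.getElementById("reader")!;  // panel 3

    function showResults(hits: SearchHit[]): void {
      resultsPanel.replaceChildren(...hits.map((hit) => {
        const img = document.createElement("img");
        img.src = hit.thumbnailUrl;               // first-page snapshot
        img.onclick = () => showPages(hit.docId); // drill into the document
        return img;
      }));
    }

    async function showPages(docId: string): Promise<void> {
      const urls = await fetchPageThumbnails(docId);
      pagesPanel.replaceChildren(...urls.map((url, i) => {
        const img = document.createElement("img");
        img.src = url;
        img.onclick = () => showFullPage(docId, i + 1); // open that page
        return img;
      }));
    }

    function showFullPage(docId: string, pageNo: number): void {
      const img = document.createElement("img");
      img.src = `/render/${docId}/page/${pageNo}`; // hypothetical full-size endpoint
      readerPanel.replaceChildren(img);
    }

All three panels stay visible at once, so a reviewer can skim thumbnails in the first two panels the way they once flipped through a paper file, reading in full only when a page looks right.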
