
Search Engine Scoring for Dummies


Search relevancy is probably the number one concern in search applications, whether you are the end user or the application developer.

It is a key goal in enterprise search – even as the number of data sources and query use cases grows. It is important in the publishing industry, especially if you are concerned with user retention.  And it’s business-critical in e-commerce and recruiting search applications.

But how do you know if your search application is doing a good job?  And if you tune your search engine, how do you know if you’ve improved it?  Or made it worse?

Search Engine Scoring has been a hot topic lately as an approach to addressing this problem.  What I hope to do in this article is demystify the topic and make it seem less daunting.

Gather Historical Results

  • There is a treasure trove of insight in your search logs.
  • What queries did your users type in the most?  How often was the best result displayed at the top?  How often did the best result at least appear on the first page?  If not the first page, how far down was the ideal result?
  • Using this historical data, you can make tuning adjustments to your search engine and run simulations to see if the results improve or worsen, overall. 
  • Consistent use of the same data in simulations helps ensure valid conclusions about whether you are really improving results.
  • The more data you have the better.
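The kind of log mining described above can be sketched in a few lines of Python. The log records here are hypothetical (a query paired with the rank of the result the user ultimately clicked); your own logs will have their own schema.

```python
from collections import Counter

# Hypothetical log records: (query, rank of the result the user clicked)
log = [
    ("laptop", 1), ("laptop", 3), ("laptop", 1),
    ("usb cable", 12), ("usb cable", 2),
    ("monitor", 1),
]

# Which queries did users type in the most?
query_counts = Counter(q for q, _ in log)
print(query_counts.most_common(2))  # [('laptop', 3), ('usb cable', 2)]

# How often was the best (clicked) result displayed at the top?
top_hits = sum(1 for _, rank in log if rank == 1)
print(f"clicked result ranked #1: {top_hits}/{len(log)}")

# How often did it at least appear on the first page (top 10)?
first_page = sum(1 for _, rank in log if rank <= 10)
print(f"clicked result on first page: {first_page}/{len(log)}")
```

Even this crude tally gives you a baseline sense of how far down the page your ideal results are landing.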

Normalize the Score

  • Many search engines provide you with some sort of search relevancy “score” for a set of queries and content. But the scoring is not the same across different search tools and there is usually little explanation.
  • What does a score of 1.78 mean?  What about 0.82?
  • With a little bit of coding, experimentation, and analysis, you can “normalize” the scoring in your search engine to a scale of 0 to 100, or 0.00 to 1.00.  This is critical for consistency in your evaluation of results.
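One simple way to do this normalization is min-max scaling. This is a sketch, not a prescription; the raw scores below are made-up stand-ins for whatever your engine returns.

```python
def normalize(scores):
    """Min-max normalize raw engine scores to the 0.00-1.00 range."""
    lo, hi = min(scores), max(scores)
    if hi == lo:                 # all scores identical: no spread to normalize
        return [1.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

raw = [1.78, 0.82, 0.15, 2.4]    # raw relevancy scores from the engine
print([round(s, 2) for s in normalize(raw)])  # [0.72, 0.3, 0.0, 1.0]
```

Now a “1.78” and a “0.82” from different engines, or from before and after a tuning change, can be compared on the same footing.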

Design Search Tuning “Experiments”

  • There are many “buttons”, “levers” and “knobs” you can use to tune your search engine.
  • Adding the concept of phrases (vs. words); adding dictionaries; incorporating synonyms – the methods are almost endless.
  • In order to make sure you understand the impact of each change, you have to lay out a plan or set of “experiments” on the incremental improvements.

Simulate, Iterate and Measure

  • Once you’ve accomplished the first three steps, it’s time to roll up your sleeves and get to work.
  • Run a simulation to get a baseline “score” for your current search engine setup.
  • Get a sense as to how long the simulation took; this may limit how many improvements or “tweaks” you have time to try before your deadline (if you have one).
  • Iterate through your different tuning “experiments” and measure and document the normalized score for your results.  An annotated graph is always a good idea.
  • If you see significant improvements at any given step, you may decide to deploy those changes into production even before you finish all your “experiments”.
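The measurement loop above needs a single normalized number per simulation run. One common choice (an assumption here, not the only option) is mean reciprocal rank: 1.0 if the ideal result tops every query, falling toward 0.0 as it slips down the page. The rank lists below are invented simulation output.

```python
def mean_reciprocal_rank(ranks):
    """Score a simulation run on a 0.00-1.00 scale.

    Each entry is the rank of the ideal result for one historical
    query; None means the ideal result was not found at all.
    """
    return sum(1.0 / r for r in ranks if r) / len(ranks)

# Hypothetical ranks of the ideal result for five historical queries,
# before and after one tuning "experiment"
baseline_ranks   = [1, 3, 10, None, 2]
experiment_ranks = [1, 1, 4, 5, 2]

baseline_score   = mean_reciprocal_rank(baseline_ranks)
experiment_score = mean_reciprocal_rank(experiment_ranks)
print(f"baseline:   {baseline_score:.2f}")    # 0.39
print(f"experiment: {experiment_score:.2f}")  # 0.59
```

Recording this one number for each experiment (and plotting it on that annotated graph) makes it obvious which changes helped and which to roll back.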

Summary

Search can be found in many different use cases in many different industries – often as a critical component to important business processes.  Search engine relevancy (along with performance) is critical to virtually all of these use cases.

The Search Engine Scoring process presented here represents a scientific and methodical approach to answering the simple question: “How well is search performing in my critical search application?”

You can find more details and actual results in this blog by Paul Nelson, Chief Architect at Search Technologies.
