An Introduction to Results Utility Positioning

When a user enters a search query and lands on a search engine results page, the probability of clicking on a result is a function of three factors:

  1. The density of the searched-for terms found in the search snippet. Ideally, these are highlighted in some way – a different color, italics, bolding – to show the user that the search term is indeed found in the target link.
  2. The actual ranking of the result. The higher the result appears, the more likely the user is to click on it.
  3. How the search engine's result compares to what the content manager thinks should occupy that position.

This article will cover the first two of those aspects: the keyword density and the relative placement.

First off, let's understand why this is an important concept for content managers and IT-based search engine managers within the enterprise. A search engine's algorithm determines the relevancy of results within the index of a website's corpus of documents and materials. This relevancy is usually based on keyword and concept frequency; however, it is possible, particularly with Solr, to tune a search engine so that relevancy is based on other factors, such as recent sales or recent clicks on results. While a marketing manager may, for example, want the product with the highest profit to appear at the top of a search engine results page (SERP), this can be disastrous to the overall profitability and sales of an e-commerce enterprise.

Let's examine why this is the case. Imagine a user searches for the term “red ball.” A red ball itself sells for $1.00 and has a $0.10 profit. If the user sees the red ball as the top result, he will eventually buy it 50% of the time, so the expected value of a search for “red ball” is $0.05. Now suppose a marketing manager decides that blue balls, which have a sales price of $2.00 but a profit of $0.50, should appear first. Given past clickthrough rates (CTR), we know that if the user searches for “red ball” but sees the blue ball first, there is a 1% chance of a purchase, for an expected value of $0.005. We also know that if the red ball is second, there is a 20% chance of a purchase, which gives the second position an expected value of $0.02. Combine the two, and you have an expected value of $0.025 – half of what the relevance-ordered results would earn.
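The expected-value arithmetic above can be sketched in a few lines; the prices, profits, and purchase probabilities are the hypothetical figures from the example, not measured data:

```python
# Expected value of a SERP ordering = sum over positions of
# P(purchase | product shown at that position) * profit per unit.

red_profit = 0.10   # red ball: $1.00 price, $0.10 profit
blue_profit = 0.50  # blue ball: $2.00 price, $0.50 profit

# Relevance-based ordering: red ball first, 50% purchase rate.
ev_relevant = 0.50 * red_profit

# Profit-forced ordering: blue ball first (1% purchase rate),
# red ball second (20% purchase rate).
ev_forced = 0.01 * blue_profit + 0.20 * red_profit

print(f"relevant: ${ev_relevant:.3f}, forced: ${ev_forced:.3f}")
```

Even with the cross-sell traffic included, the forced ordering earns half the expected value of simply showing the red ball first.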

This is a very simplistic analysis, but it provides the framework for understanding why putting less relevant but more profitable items above more relevant but less profitable ones reduces expected value. Someone searching for “red ball” usually wants a red ball and, on rare occasion, can be convinced of the value of a blue ball as a cross-sell. However, the place to suggest alternative items may be a shopping cart page or a checkout page (both of which are easily accomplished with Solr) rather than manually forcing a different result set.

To evaluate the quality of a SERP quantifiably, we must evaluate both the relative position of a result and the keyword density of the search term within the result snippet.

First, we look at the probability of a click given the relative ranking of a result within the search engine results page. All things being equal, we know that the top position gets more clicks than any other position. Short of having a test and control set to measure the delta created by altering a search relevancy algorithm, we can use existing click probability data as a proxy for a relative score, setting the top result to the highest score and normalizing other positions against it.
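Normalizing published click probability data against the top position might look like the sketch below; the CTR figures are illustrative placeholders, not measured values:

```python
def position_scores(ctr_by_position):
    """Normalize click-through rates so the top position scores 1.0
    and every other position is scored relative to it."""
    top = ctr_by_position[0]
    return [ctr / top for ctr in ctr_by_position]

# Illustrative CTRs for positions 1-5 (placeholders, not measured data).
ctrs = [0.30, 0.15, 0.10, 0.07, 0.05]
print(position_scores(ctrs))  # top position -> 1.0, others relative to it
```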

Then, we look at the keyword density of the SERP result snippet to determine how much signposting we are providing the user to signal that the result has appropriate relevancy for the search term. We use a discounted cumulative gain function of the keyword density to create a scoring model for the density, with a top score ceiling. We believe that there is a rapidly declining utility of incremental density beyond a certain set point and do not score additional value for density above that value.
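The exact scoring function is not published here, but a capped, diminishing-returns density score in the spirit described might be sketched as follows; the 10% saturation set point and the logarithmic discounting are assumptions for illustration:

```python
import math

def density_score(density, saturation=0.10):
    """Score keyword density on [0, 1] with rapidly diminishing returns.

    Density at or above `saturation` (an assumed set point: 10% of
    snippet terms) earns the full score; additional density above it
    adds no value. Below it, gain is discounted logarithmically so
    early density matters most.
    """
    if density >= saturation:
        return 1.0
    # log1p gives steep early gains that flatten toward the ceiling;
    # at density == saturation this evaluates to ln(e) == 1.0.
    return math.log1p(density / saturation * (math.e - 1))

print(density_score(0.02), density_score(0.10), density_score(0.25))
```

The key property is the ceiling: stuffing a snippet with keywords past the set point earns nothing extra, mirroring the rapidly declining utility the scoring model assumes.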

We then combine the two values and provide a normalized score from 0 (low) to 100 (high). We believe that a score of 70 or higher is excellent and provides appropriate content-based signposting for users' search terms.

Ideally, you should be looking at your top 100 search terms and computing a Results Utility Positioning (RUP) score for each of them. Low scores will point either to the need to adjust the relevancy algorithm within your search engine or to guide your content managers to adjust the content of their posts to provide a higher keyword density for the top search terms. In either instance, IT or content management teams will have a quantifiable metric to point to in order to drive change and to measure the success of subsequent changes.

If you need metrics on the viability of your search engine results pages, please contact us or e-mail us at talktous at opensourceconnections dot com.