Bad Patterns in Tuning Search Results

This week at LuceneRevolution, we’re talking to customers about the challenges they face in returning highly relevant search results. I was really happy when, during the opening keynote, Grant Ingersoll, CTO of LucidWorks, explicitly called out examples of negative behaviors that we engage in when working on search relevancy. These are behaviors that even I have personally engaged in, and many are the drivers that led us to create our search relevancy tuning dashboard, Quepid.

Negative Behaviors when Tuning

Get Your Head out of the Weeds

The first behavior he brought up was locally optimizing your search relevancy, what I call the “whack-a-mole” problem. It’s very common to fixate on a specific search use case and improve it as best you can, only to discover that you’ve negatively impacted the rest of your search queries. The “Rena and Doug” comic is an example of this negative behavior.

Doug and Rena discussing search relevancy

With Quepid, you will catch yourself when you start “locally” optimizing, as your individual query score will go up, but your overall Q Score across all queries will go down.
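
To make this concrete, here is a minimal sketch of how an aggregate score across a whole query set exposes whack-a-mole tuning. The scoring choices are assumptions for illustration only: per-query quality is measured as precision@10 and the overall score is a plain average of the per-query scores; Quepid’s actual Q Score calculation may differ.

```python
# A minimal sketch of why an aggregate score catches "whack-a-mole" tuning.
# Assumption (not from the post): per-query quality is precision@10 and the
# overall score is a plain average -- Quepid's actual Q Score may differ.

def precision_at_10(ratings):
    """ratings: 0/1 relevance judgments for the top 10 results of one query."""
    return sum(ratings[:10]) / 10.0

def score_query_set(query_ratings):
    """Return per-query scores and their average across the whole set."""
    per_query = {q: precision_at_10(r) for q, r in query_ratings.items()}
    return per_query, sum(per_query.values()) / len(per_query)

# Before tuning: three representative queries, each moderately good.
before = {
    "lawn mower":    [1, 1, 0, 1, 0, 1, 0, 0, 1, 0],
    "snow blower":   [1, 0, 1, 1, 0, 0, 1, 0, 0, 1],
    "hedge trimmer": [1, 1, 1, 0, 0, 1, 0, 1, 0, 0],
}

# After "fixing" only the lawn mower query: it improves, the others regress.
after = {
    "lawn mower":    [1, 1, 1, 1, 1, 1, 1, 0, 1, 1],
    "snow blower":   [1, 0, 0, 0, 0, 0, 1, 0, 0, 0],
    "hedge trimmer": [0, 1, 0, 0, 0, 1, 0, 0, 0, 0],
}

for label, ratings in (("before", before), ("after", after)):
    per_query, overall = score_query_set(ratings)
    print(label, per_query, "overall:", round(overall, 2))

# The tuned query's score rises (0.5 -> 0.9), but the overall score drops
# (0.5 -> ~0.43) -- the signal that you've started optimizing locally.
```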

Pet Peeve Queries

Ahh! The query that the CEO comes up with in the middle of the night and stumps your team. It’s the very odd query that is rarely, if ever, entered by your users, and yet leads to the perception that “search sucks”.

Without data, it’s very hard to convey to your (insert person with clout) that the bad results aren’t actually a problem.

With Quepid, you can show your CEO metrics around search relevancy. We establish a Q Score for relevancy, and you can show that fixing the pet peeve query negatively impacts your overall score.

I like to solve Pet Peeve queries not by changing my search algorithm, but instead by using Solr’s Query Elevation Component to match the exact pet peeve query and return a hand-crafted list of perfect results ;-).
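
For the curious, here’s a rough sketch of what that can look like. The core name (“products”), the /elevate handler name, the query text, and the document IDs are all hypothetical; the hand-crafted results live in an elevate.xml file that the Query Elevation Component reads.

```python
# A minimal sketch of pinning hand-picked results for a pet peeve query with
# Solr's Query Elevation Component. The core name ("products"), the /elevate
# handler, the query text, and the document IDs are all hypothetical.
#
# The component is configured in solrconfig.xml and reads a hand-edited
# elevate.xml along these lines:
#
#   <elevate>
#     <query text="that one pet peeve query">
#       <doc id="DOC-123" />
#       <doc id="DOC-456" />
#     </query>
#   </elevate>

import requests

SOLR = "http://localhost:8983/solr/products"

params = {
    "q": "that one pet peeve query",
    "enableElevation": "true",   # turn the elevator on for this request
    "forceElevation": "true",    # keep the pinned docs on top even if the sort disagrees
    "fl": "id,[elevated]",       # the [elevated] transformer marks the pinned docs
    "wt": "json",
}

# /elevate is the handler name used in the Solr examples; the elevator component
# can just as well be attached to your regular /select handler.
response = requests.get(SOLR + "/elevate", params=params)
for doc in response.json()["response"]["docs"]:
    print(doc.get("[elevated]", False), doc["id"])
```

The nice side effect is that your actual relevancy algorithm stays untouched, so the fix for the one-off query can’t degrade the rest of the query set.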

Note: while Quepid can show you the impact of trying to solve pet peeve queries, it isn’t a query log analytics tool. It doesn’t track frequency of queries for example. There are a number of great products out there for doing that.

Was it you, or was it Oprah?

The Oprah Effect refers to the almost magical power of a recommendation from Oprah Winfrey to drive huge amounts of sales. Having your book be a Book of the Month choice is worth an additional million dollars in sales!

In the world of relevancy tuning, what this means is that it can be hard to figure out whether things improved (or got worse!) because of the changes you’ve made to the search algorithm, or because of something else that happened. This is especially likely if you do your relevancy tuning in a different environment than production and then ship it as part of a big release in conjunction with other changes, like a new UI. So don’t pat yourself on the back unless you know it was your changes.

With Quepid, you’re ideally working in your production environment. You get immediate feedback on the impact of your changes, so you’re confident that it’s your work that’s making the difference, not the new metadata that was added to the index.

The Theme of 2015 LuceneRevolution

Every LuceneRevolution has a different topic that seems to float to the top. And this year it’s…. Relevancy! Stay tuned for more posts from LuceneRevolution.