Imagine a search for “laptop”. Your search engine, carefully tuned for maximum relevance, returns over 100 candidates, all of which are relevant to that query. How can you rank these results according to other criteria: the laptop the user likes most, that the business most wants to sell, and that’s in stock and nearby for delivery or collection?
Search result quality is more than just relevance. Whether your search engine is indexing computer hardware, music tracks or internal documents, you need to consider how to put the very best results at the top of the list.
But how can you tell which results are the ‘very best’ ones? Answering this question goes far beyond search relevance and tracking user behavior. In this new training, the OpenSource Connections team will provide you with a systematic approach to this problem, more comprehensive than any other training in this field to date.
The class will cover these areas:
- A model for understanding search result quality: what defines search result quality, and how can we identify its indicators in user interviews?
- Understanding feedback on search results: how can we distinguish feedback about search result quality from other feedback, taking different types of bias into account?
- Collecting explicit feedback: How to design feedback collection? How to interpret explicit feedback and ensure its quality? Which technologies to use for feedback collection?
- Search result metrics: understanding their intuition and calculation, and knowing when to use which metric
- Collecting implicit feedback: understanding different approaches to deriving search result quality judgments from tracked user behavior. How to mitigate biases such as position bias or device bias?
- A/B testing: understanding different approaches to designing A/B tests and interpreting their results, including concepts such as interleaving, significance testing, and Bayesian A/B testing
- Understanding the specifics of how to evaluate trending AI-related technologies such as Large Language Models (LLMs) with respect to search and retrieval quality
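To give a flavour of the metrics topic above (an illustrative sketch, not material from the training itself), here is how one widely used search result quality metric, NDCG, can be computed from graded relevance judgments:

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain: rewards relevant results near the top,
    discounting each result by the log of its rank (ranks start at 1)."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(relevances):
    """Normalised DCG: DCG divided by the best achievable DCG,
    i.e. the DCG of the same judgments in ideal (descending) order."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Graded judgments (0 = irrelevant .. 3 = perfect) for one ranked result list:
judgments = [3, 2, 3, 0, 1]
print(round(ndcg(judgments), 3))  # → 0.972
```

A score of 1.0 means the results are already in the ideal order; placing relevant results lower in the list pulls the score down.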
Read René Kriegler’s blog on Beyond Search Relevance: why we’re moving on to search result quality.
Who should attend this training?
Suitable for everyone with beginner to intermediate expertise in search who wants to approach search result quality systematically, based on a comprehensive model. Though some sections will use mathematical and statistical concepts, we will ensure that attendees without specialist knowledge can follow them.
Attendance at a previous OSC Think Like a Relevance Engineer training is not a prerequisite, but if you have already attended TLRE, this training will help expand and deepen your knowledge.
Get Notified about Upcoming Trainings