Buy Tickets

Does your team use Elasticsearch but struggle to deliver relevant, high-quality search results?

‘Think Like a Relevance Engineer’ (TLRE) is a two-part training that gives your team the skills it needs to tackle search relevance issues. The class teaches search teams how to measure search quality and how to iterate on relevance against those quality metrics, and surveys the common techniques used to improve relevance: from basic TF*IDF, to taxonomies, to learning to rank. TLRE is delivered online using a leading e-learning platform, with facilities for hands-on exercises and labs.

Who we are

OpenSource Connections believes in empowering search teams. We have worked with open source search and applied Information Retrieval since 2007. We wrote the book Relevant Search and have pioneered open source relevance tuning tools such as the Elasticsearch Learning to Rank plugin, Quepid, and Splainer.

Your trainer

Your OpenSource Connections trainer is a relevance thought-leader actively working on real-life relevance issues. Training is core to our mission of ‘empowering search teams’, so you get our best and brightest. We never send a trainer to just “read off slides”. We expect you to bring your hardest questions to our trainers. Our trainers expect to be challenged, and know how to handle unique twists on problems they’ve seen before.

What you’ll get out of it

You’ll learn how to:

  • Measure search quality against business metrics
  • Appropriately engage product and technical stakeholders
  • Steer your organization to a scientific hypothesis-driven, iterative mindset on relevance
  • Select and manipulate high-precision ranking signals in the search engine (see the sketch after this list)
  • Add semantic intelligence via synonyms and taxonomies
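
As a small taste of what manipulating ranking signals looks like in practice, here is a minimal sketch of an Elasticsearch query with per-field boosts, written in Python. The index name, field names, and query text (‘products’, ‘title’, ‘description’) are illustrative assumptions, not part of the course material.

    # Minimal sketch: weighting a high-precision field (title) above a broader
    # one (description). Index, field names, and query text are illustrative.
    import json

    query = {
        "query": {
            "multi_match": {
                "query": "wireless noise cancelling headphones",
                "fields": ["title^3", "description"],  # title matches count 3x
                "type": "best_fields",
            }
        }
    }

    # The body would be sent to Elasticsearch, e.g. with the Python client:
    #   es.search(index="products", body=query)
    print(json.dumps(query, indent=2))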

Agenda

Part One - Managing, Measuring, and Testing Search Relevance

(delivered online over two half-day sessions)

This part of the training helps the class understand why working on relevance requires different thinking from other engineering problems. We teach you to measure search quality, take a hypothesis-driven approach to search projects, and safely ‘fail fast’ towards ever-improving business KPIs:

  • What is search?
  • Holding search accountable to the business
  • Search quality feedback
  • Hypothesis-driven relevance tuning
  • User studies for search quality
  • Using analytics & clicks to understand search quality
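
To give a flavour of the measurement topics in this part, the sketch below computes one common offline quality metric, nDCG@k, from graded relevance judgments. The grades shown are made-up illustrative data, not results from any real query.

    # Minimal sketch of an offline search-quality metric (nDCG@k), assuming you
    # already have graded relevance judgments (0-3) for a query's ranked results.
    import math

    def dcg(grades):
        return sum(g / math.log2(rank + 2) for rank, g in enumerate(grades))

    def ndcg_at_k(grades, k=10):
        ideal = dcg(sorted(grades, reverse=True)[:k])
        return dcg(grades[:k]) / ideal if ideal > 0 else 0.0

    # Judged grades for the top five results of one query, in ranked order
    observed = [3, 2, 0, 1, 0]
    print(f"nDCG@5 = {ndcg_at_k(observed, k=5):.3f}")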

Part Two - Engineering Relevance with Elasticsearch

(delivered online over two half-day sessions)

This part of the training demonstrates relevance tuning techniques that actually work. Relevance can’t be achieved just by tweaking field weights, so we also cover boosting strategies, synonyms, and semantic search. This part closes with an introduction to machine learning for search (a.k.a. “Learning to Rank”):

  • Getting a feel for Elasticsearch
  • Signal Modeling (data modeling for relevance)
  • Dealing with multiple, competing objectives in search relevance
  • Synonym strategies that actually work
  • Taxonomy-based Semantic Search in Elasticsearch
  • Introduction to Learning to Rank
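
As a hint of the synonym and semantic search material, here is a minimal sketch of one common approach: applying synonyms at query time through the Elasticsearch analysis chain. The index name, synonym list, and field mapping are illustrative assumptions only.

    # Minimal sketch: a query-time synonym filter wired into a search analyzer.
    # Index name, synonyms, and mapping are illustrative, not prescriptive.
    import json

    settings = {
        "settings": {
            "analysis": {
                "filter": {
                    "product_synonyms": {
                        "type": "synonym_graph",
                        "synonyms": [
                            "tv, television",
                            "notebook => laptop",  # one-way mapping
                        ],
                    }
                },
                "analyzer": {
                    "search_with_synonyms": {
                        "tokenizer": "standard",
                        "filter": ["lowercase", "product_synonyms"],
                    }
                },
            }
        },
        "mappings": {
            "properties": {
                "title": {
                    "type": "text",
                    "analyzer": "standard",
                    "search_analyzer": "search_with_synonyms",
                }
            }
        },
    }

    # Applied when the index is created, e.g. with the Python client:
    #   es.indices.create(index="products", body=settings)
    print(json.dumps(settings, indent=2))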

Who should come to training?

This training is appropriate for all members of the search team:

  • Search Engineers
  • Data Engineers who use the search engine
  • Machine Learning Engineers
  • Relevance Engineers
  • Data Scientists focused on production search
  • Product team members wanting exposure to how the search engine can be manipulated

Quotes From Past Attendees:

“‘Think Like a Relevance Engineer’ has helped me think differently about how I solve Solr & Elasticsearch relevance problems.”

Matt Corkum, Disruptive Technology Director
Elsevier

“What a positive experience! We have so many new ideas to implement after attending ‘Think Like a Relevance Engineer’ training.”

Andrew Lee, Director of Engineering for Search
DHI

Buy Tickets

Want training for your team/town? Check this out!