Haystack Europe - The Search Relevance Conference!
2nd October 2018 - London. Sponsored by OpenSource Connections and Flax.
Haystack is the conference for improving search relevance. If you're like us, you work to understand the shiny new tools or dense academic papers out there that promise the moon. Then you puzzle over how to apply those insights to your search problem, in your search stack. But the path isn't always easy, and the promised gains don't always materialize.
Haystack is the no-holds-barred conference for organizations where search, matching, and relevance really matter to the bottom line. It's for search managers, developers and data scientists who are finding ways to innovate, seeing past the silver bullets, and sharing what has actually worked for their unique problems. Please come share and learn!
Doors Open, Registration
Keynote - SOLR-8542 - Doug Turnbull, OpenSource Connections
Adding Learning to Rank to Solr has provided tremendous benefit to the search relevance community. Why did Bloomberg spend so much to just give it all away? See this 'one cool trick' that creates real extensibility, reuse, and maintainability for your org's software initiatives.
Getting started with search tuning and search relevance - Karen Renshaw, Grainger Global Online
Search tuning and relevance can seem daunting. Search attracts a lot of noise from around the organization: everyone has an opinion, and everyone thinks it's easy, but it's a long-term investment. Drawing on her many years' experience managing search teams, Karen will cover how to get started, set objectives, manage expectations across the organization, consider the holistic search experience, measure and understand results, and create ongoing plans. More »
Visualizing search results - Sebastian Russ, Tudock
Demystifying onsite search by creating transparency for our clients is our main focus and motivation. Our clients face challenges such as a lack of transparency in onsite search, especially in terms of ranking and search result quality. A sudden change in the staff managing onsite search, a general lack of internal resources dedicated to search, and old, undocumented artifacts can all lead to confusion, frustration and wrong assumptions. More »
Search quality evaluation: tools and techniques - Alessandro Benedetti & Andrea Gazzarini, Sease
Every search engineer ordinarily struggles with the task of evaluating how well a search engine is performing. Improving the correctness and effectiveness of a search system requires a set of tools that help measure the direction in which the system is going. This talk describes the Rated Ranking Evaluator (RRE) from a developer's perspective. RRE is an open source search quality evaluation tool that can produce a set of deliverable reports and be integrated into a continuous integration infrastructure. More »
Learning Learning To Rank - Torsten Köster & Fabian Klenk (Shopping 24), René Kriegler (Freelancer)
At Shopping24, we have recently started to apply machine learning to the search result ranking on our Solr-based product search platform. We could easily train a ranking model using open source software and deploy it to Solr. However, we soon realised that this was only the easier part, and that we had to put our effort into the tasks and processes that empower us to train a successful model: gathering valid training data, preparing judgement lists, feature engineering, expectation management, computing offline search quality metrics, and connecting offline and online metrics through A/B testing. More »
From user actions to better rankings: Challenges of using search quality feedback for learning to rank - Agnes Van Belle, TextKernel
In this talk we'll describe how we used different types of user feedback (both implicit and explicit) to improve search products that match vacancies to CVs using Learning to Rank (LTR). We will focus on the pitfalls and surprising results we encountered when trying to leverage both types of feedback for LTR. Although there is plenty of literature on setting up systems for explicit annotations, and on modelling user click behaviour in search engines, the question of how to use such feedback to train a reranker is rarely addressed directly. More »
A visual approach to search strategy formulation - Tony Russell-Rose, UXLabs
Knowledge workers (such as healthcare information professionals, patent agents and legal researchers) need to undertake complex search tasks to identify relevant documents and insights within large domain-specific repositories and collections. The traditional solution is to use line-by-line query builders offered by proprietary database vendors. However, these offer limited support for error checking or query optimization, and their output can often be compromised by errors and inefficiencies. More »
Panel - Hosted by René Kriegler
Closing Remarks - Charlie Hull, Doug Turnbull
Drinks reception (venue to be announced)
Conference cost is £99.
Hilda Clark Suite
173 Euston Road, London, NW1 2BJ
2nd October 2018