Our mission is to empower the world’s best search teams. Search teams ultimately generate value for their organizations through better, smarter search. That is: relevance.
Sadly, relevance remains maddening! What are you even optimizing for? How do you know you have the right answer? And finally, how do you cut through the buzzword soup of “cognitive”, “semantic”, and “machine learning” to know what your search team should actually implement?
This is why we’re running Haystack, our search relevance conference!
Haystack is an extension of the knowledge sharing we do internally. Every Friday a team member leads a discussion on real technical solutions to search relevance. Often the talk centers on something seemingly mundane, like Solr/Elasticsearch synonyms or boosting strategies. Other times we dive into advanced topics: taxonomies, machine learning, learning to rank, intent classification, or incorporating NLP and personalization into search. Most importantly, we support each other as we face the toughest relevance challenges ourselves.
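To make the “mundane” concrete: below is a minimal sketch of what a synonyms-plus-boosting setup can look like in Elasticsearch, built as plain Python dicts (the request bodies you’d send to the index-creation and search APIs). The field names ("title", "description"), the synonym pairs, and the 3x title boost are all illustrative assumptions, not a recommendation.

```python
import json

# Illustrative index settings: a custom analyzer that lowercases terms
# and expands synonyms at analysis time. Field names and synonyms are
# made up for this example.
index_settings = {
    "settings": {
        "analysis": {
            "filter": {
                "product_synonyms": {
                    "type": "synonym",
                    "synonyms": ["tv, television", "couch, sofa"],
                }
            },
            "analyzer": {
                "synonym_text": {
                    "tokenizer": "standard",
                    "filter": ["lowercase", "product_synonyms"],
                }
            },
        }
    },
    "mappings": {
        "properties": {
            "title": {"type": "text", "analyzer": "synonym_text"},
            "description": {"type": "text", "analyzer": "synonym_text"},
        }
    },
}

# Illustrative boosting strategy: a multi_match query that weights
# title matches 3x more heavily than description matches.
query = {
    "query": {
        "multi_match": {
            "query": "flat screen tv",
            "fields": ["title^3", "description"],
        }
    }
}

print(json.dumps(query, indent=2))
```

Whether a field boost like `title^3` actually helps is exactly the kind of question that needs measurement rather than intuition.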
With Haystack we want to extend that invitation to practitioners from around the world who are similarly struggling with hard, meaty relevance problems. We want to hear and share what’s working and what isn’t – the real-life war stories of theory encountering practice. What did that paper actually say, and how did you implement it in your search stack? Was the silver bullet for your team some kind of taxonomy? Intent classification? Or perhaps learning to rank?
I see this as possibly the inception of a community of relevance practitioners. I’d love to build more open source tooling and infrastructure to support the hardest relevance problems. For example, why do so many organizations struggle to instrument A/B testing for search? Why isn’t generating search quality data from user events easier?
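As one hedged sketch of what “search quality data from user events” can mean in its simplest form: aggregate clicks per (query, document) pair into a crude click-through-rate label. Real pipelines need to correct for position bias (e.g. with click models), and the event schema here ('query', 'doc_id', 'clicked') is an assumption for illustration.

```python
from collections import defaultdict

def judgments_from_events(events):
    """Turn raw search events into naive graded relevance labels.

    events: iterable of dicts with keys 'query', 'doc_id', 'clicked'.
    Returns {(query, doc_id): click-through rate in [0, 1]}.
    """
    impressions = defaultdict(int)
    clicks = defaultdict(int)
    for event in events:
        key = (event["query"], event["doc_id"])
        impressions[key] += 1
        if event["clicked"]:
            clicks[key] += 1
    # CTR as a crude relevance grade; no position-bias correction.
    return {key: clicks[key] / impressions[key] for key in impressions}

# Tiny illustrative event log
events = [
    {"query": "tv", "doc_id": "d1", "clicked": True},
    {"query": "tv", "doc_id": "d1", "clicked": False},
    {"query": "tv", "doc_id": "d2", "clicked": False},
]
labels = judgments_from_events(events)
# labels[("tv", "d1")] == 0.5, labels[("tv", "d2")] == 0.0
```

Even a toy pipeline like this surfaces the hard parts quickly: sparse queries, presentation bias, and deciding what counts as a satisfied click.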