When Zero Search Results is the right answer (& how to measure this in Quepid)
Sometimes a query should produce zero search results – but how do we score this correctly in Quepid, the search relevance workbench?
Are your users searching for particular brands? Can we use LLMs for brand detection and use this to improve search & drive business?
Eric Pugh and Heather Halter discuss how we can improve OpenSearch documentation after a Lightning Talk at OpenSearchCon EU
Here’s what attendees at two recent conferences thought about UBI, an open source solution for tracking user behavior that we’re building with the OpenSearch team
A review of Haystack US 2024, the search relevance conference, by the three winners of the Hughes Scholarship.
Search conference veteran Charlie Hull lists the different kinds of search conference and how you can get the most from them
Announcing the Hughes Scholarship to assist those at an early stage of their search & AI careers to attend Haystack conferences.
Query Understanding helps you find out what users actually mean when they search – and LLMs can be used to detect this user intent, relax and even replace queries for better recall
New Quepid features include improved user interfaces, APIs and the ability to work with any search backend, plus new metrics including Jaccard similarity
How do you manage access control & data quality when building AI-powered enterprise search? A reality check
The challenges of compound nouns in search, how to tackle them successfully with decompounding and Query Rewriting, and the role of LLMs.
Continuous experimentation as part of your CI/CD pipeline can drive continuous improvement of search quality