Quepid Carries Your Caseload

Becky Billingsley, March 2, 2020

Now that you are using Quepid to collaborate by sharing cases with your team, you need a way to view basic information for *ALL* of your owned and shared cases and navigate to them within Quepid. That is why we have the Multi Case Dashboard.

Importing Legacy Search Results into Quepid

Doug Turnbull, February 24, 2020

Migrating search engines? Want to prove the migration succeeded without worrying about lost customers, revenue, and the like? With Quepid, you can take a snapshot of the state of your product's existing search results with a small amount of work. Then you'll always have this snapshot to compare your new search solution against.

Announcing Quepid 6.1.0

Eric Pugh, February 1, 2020

While Quepid has been open source for six months, this is the first release that you can deploy yourself!

Tesseract 3 and Tika

Eric Pugh, December 10, 2019

In which we deal with learning that sometimes you don't get to use the latest version of Tesseract...

Demystifying nDCG and ERR

Max Irwin, December 9, 2019

In this post, we unwrap the mystery behind two popular search relevance metrics, and discuss their pros and cons. Our subjects for this exercise are Normalized Discounted Cumulative Gain, and Expected Reciprocal Rank, commonly acronymified as nDCG and ERR. We'll start with some refresher background, visualize what these metrics actually look like, and paint a picture of how each can be either helpful or misleading, depending on the situation. Afterwards, you'll have a better understanding of their behavior and which ones to use when (and why).
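The full post walks through these metrics in detail; as a quick reference, here is a minimal sketch of the standard textbook definitions of nDCG (graded gains discounted by log rank, normalized by the ideal ordering) and ERR (a cascade model where a user stops at the first satisfying result). The graded-relevance example lists below are illustrative, not drawn from the post.

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain: graded gain (2^rel - 1) discounted by log2 of rank."""
    return sum((2 ** rel - 1) / math.log2(rank + 2)
               for rank, rel in enumerate(relevances))

def ndcg(relevances):
    """Normalize DCG by the DCG of the ideal (descending-sorted) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

def err(relevances, max_grade=3):
    """Expected Reciprocal Rank: probability the user stops at rank r, weighted by 1/r."""
    total, p_reach = 0.0, 1.0
    for rank, rel in enumerate(relevances, start=1):
        p_stop = (2 ** rel - 1) / 2 ** max_grade  # chance this result satisfies the user
        total += p_reach * p_stop / rank
        p_reach *= 1 - p_stop  # user continues only if not yet satisfied
    return total

# A perfectly ordered list scores nDCG = 1.0; a misordered one scores lower.
print(ndcg([3, 2, 1, 0]))  # 1.0
print(ndcg([0, 1, 2, 3]))  # < 1.0
print(err([3, 0, 0]))      # dominated by the highly relevant first result
```

Note how ERR's cascade assumption makes it far more top-heavy than nDCG: once a highly relevant result appears, everything below it contributes almost nothing.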

It's time for Tika Tuesdays!

Eric Pugh, November 22, 2019

It's time for Tika Tuesdays! Three years ago I started messing around with OCRing documents with Tika, and today that process is relatively straightforward. This weekly series will share what I've learned.

Understanding BERT and Search Relevance

Max Irwin, November 5, 2019

There is a growing buzz in search these days. The hype of BERT is all around us, and while it is an amazing breakthrough in contextual representation of unstructured text, newcomers to NLP are left scratching their heads wondering how and why it is changing the field. Many of the examples are tailored for tasks such as text classification, language understanding, multiple choice, and question answering. So what about just plain-old findin' stuff? This article gives an overview of the opportunities and challenges when applying advanced transformer models such as BERT to search.