
Solr vs Elasticsearch for Relevancy: Battle of the Query DSLs

This article summarizes parts of Relevant Search. Use discount code turnbullmu to get 38% off!

Last time on The Young and The RESTful, aka Elasticsearch vs Solr, we took a look at the two search giants' ability to solve search relevance problems. We discussed how relevance comes down to controlling:

  1. Matching – what should be in/out of the set of results?
  2. Ranking – how will the result set be ordered?

Last time we discussed matching, where Elasticsearch was the clear winner. This time, we'll take a look at controlling ranking. More specifically, we'll see what happens when the two search engines' Query DSLs duke it out! Next time, we'll discuss how deeply you can plug into each search engine to customize search relevance.

In this article, I avoid a blow-by-blow feature comparison. At this point in the genre of Elasticsearch vs Solr comparisons, the blow-by-blows quickly get out of date. These two search giants quickly catch up to one another at the feature level.

Rather, it's more useful to see the forest for the trees here. Think of each search engine's Query DSL as a search ranking programming language. This is what needs comparing. Each programming language has specific syntactic and semantic quirks you'll have to deal with in your work. Sure, both search engines interpret your queries into the same Lucene "machine code," but they can differ from each other as much as Ruby, Python, or Haskell do.

What you’ll find is that Solr is a terse, Perl-like experience. On the other hand, Elasticsearch can feel a bit more like Java’s verbosity mixed with Python’s emphasis on being explicit. To see what I mean, we’ll start with simple cases of individual query construction and move on to more complex formulations.

Individual Query Construction: Is Terse Just The Worst?

Let's begin to put together basic queries to introduce each search engine's query DSL. We'll start by looking at Solr, then compare roughly equivalent constructions in Elasticsearch's query DSL.

Above we compared Solr's Query DSL to Perl. Solr's Query DSL is like Perl for all the same reasons you love and hate Perl. For Solr diehards, nothing else feels as powerful. Compactness feels good when you're writing the code. And if you're steeped in its mysteries, glancing at a query conveys a lot of information without much scanning. On the other hand, just like Perl, developers not steeped in the language easily feel lost. If you don't work with it every day, Solr queries can quickly become "write only": hard to read and maintain.

Solr's terse syntax stems from its origins as an entirely URL-based query language. It came up at a time in the Web when APIs driven entirely through the URL's query parameters were in vogue. So Solr tries to fit everything you'd like to say into the URL bar. For simple searches, this makes a load of sense. A URL like http://solr.quepid.com/solr/statedecoded/select?q=text:(dog catcher law) seems fairly intuitive. Adding parameters to that search, such as &q.op=AND, also makes sense (here we set the default operator to AND, making the query dog AND catcher AND law). Yet Solr expands on this with its localparams syntax, which scopes parameters to a query. As an example, another way to write the last query might be {!lucene q.op=AND}text:(dog catcher law). If you know localparams, this snippet is readable and easy to scan. If you don't, it seems arcane.
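
To make the equivalence concrete, here are those two forms side by side (URL encoding of spaces and braces omitted for readability):

http://solr.quepid.com/solr/statedecoded/select?q=text:(dog catcher law)&q.op=AND

http://solr.quepid.com/solr/statedecoded/select?q={!lucene q.op=AND}text:(dog catcher law)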

Another aspect of Solr's query DSL comes into focus in that last bit of localparams code. Solr interprets queries using a query parser. Above, we specified the query parser for our query string as lucene using the bang syntax {!lucene…}. Solr comes with a broad range of query parsers, each with its own parameters and syntax. Let's look at an example using the "edismax" query parser. This example searches the provided keywords over two fields (catch_line and text), sums the relevance scores, and only returns results that match 50% of the query terms: {!edismax mm=50% tie=1 qf='catch_line text'}dog catcher law. Now that's quite terse! Each query parser, it should be added, tends to be contributed to Solr by a different author. So while this creates a broad library of query parsers, each one can have different syntax and parameters.
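
Dropped into a full request, that edismax query might look something like the following (a sketch reusing the host from the earlier example, again with URL encoding omitted):

http://solr.quepid.com/solr/statedecoded/select?q={!edismax mm=50% tie=1 qf='catch_line text'}dog catcher law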

Contrasting Solr, Elasticsearch opts for more verbosity and readability when building queries. Elasticsearch came up at a time when RESTful, JSON APIs were in vogue. Elasticsearch jibes more with our current API sensibilities. You build Elasticsearch queries as JSON objects, as follows:

{"query": {    "multi_match": {        "query": "dog catcher law",        "fields": ["text", "catch_line"],        "minimum_should_match": "50%",        "type": "most_fields"    }}}

This verbosity pays off. It's much easier for the uninitiated to look at the JSON and guess what's happening. It's clear that there's a query, of type "multi_match", being passed a query string "dog catcher law." You can see clearly the fields being searched. Without much knowledge, you could make guesses about what minimum_should_match or most_fields might mean.

It's also helpful that Elasticsearch always scopes parameters to the current query. There are no "local" vs "global" parameters, just the current JSON query object and its arguments. To appreciate this point, you have to appreciate an annoying Solr localparams quirk: localparams inherit the global query parameters. For example, say you issue the following query parameters (search for dog catcher law, boosted via bq by a 'cat' query):

q=dog catcher law&defType=edismax&q.op=AND&bq={!edismax mm=50% tie=1 qf='catch_line text'}cat

Your scoped localparams query unintuitively receives the outside parameter q.op=AND. More frustratingly, with this query you'll get a deeply befuddling "Infinite Recursion" error from Solr. Why? Because, hey, guess what: your localparams query in bq also inherits the bq from the outside (aka itself!). So in reality this query is

bq={!edismax mm=50% tie=1 q.op=AND bq='{!edismax mm=50% tie=1 q.op=AND bq='...' qf='catch_line text'} qf='catch_line text'}

Solr keeps filling in that bq from the outside bq, and therefore reports the not-so-intuitive:

org.apache.solr.search.SyntaxError: Infinite Recursion detected parsing query 'dog catcher law'

To avoid inheriting the external arguments, you need to be explicit in your localparams query. Here we set an empty bq and change q.op back to OR:

bq={!edismax mm=50% tie=1 bq='' q.op=OR qf='catch_line text'}

To me, at this atomic query-by-query construction level, Elasticsearch is the clear winner. Elasticsearch helps you create queries with few surprises. (Look at the text above: much of it is spent explaining Solr quirks.) However, if terseness is high on your list, you might prefer Solr once you've digested the quirks you'll encounter.

Composing Queries: Where Terse Wins The Purse

One area, however, where Solr shines is composing queries together. As you advance in your search knowledge, you'll eventually end up working on more than just one query. You'll string together multiple queries, boosts, and function queries (aka math) to come up with large, complex ranking solutions. Here Solr gives you more powerful, higher-level programming constructs. Elasticsearch, on the other hand, focuses on the common use cases. Because Elasticsearch queries are hierarchical JSON, you can end up repeating yourself quite a bit, say if you want to execute multiple queries at the same time.

What do we mean? Well, Solr's desire for terseness has created features like parameter substitution and dereferencing. These features let you reuse parts of queries in a fairly readable fashion. Moreover, Solr's function query syntax gives you an extremely powerful function, query(), that lets you combine relevance scoring and math more seamlessly than the Elasticsearch equivalents.

As an example, how would you multiply the relevance scores of two queries together? Let’s say a phrase query on your title field, and a normal term-by-term text score on your body field. Well in Solr, here’s how you could do it:

usersQuery=dog catcher law&phraseQuery={!field f=catch_line v=$usersQuery}&textQuery={!edismax qf=text v=$usersQuery}&q=_val_:"product(query($phraseQuery),query($textQuery))"

Now the q parameter here drives the relevance scoring. We place the user's text query into our usersQuery parameter (a parameter we create). We construct two additional parameters, phraseQuery and textQuery, to build the queries we want to run over our corpus using usersQuery. Finally, with q, we combine the queries together by multiplying them in a function query (the wonky _val_ syntax).

Now this is something you can't do in Elasticsearch. Elasticsearch's function queries sandbox you to a single query's relevance score. Within the query DSL, you don't have access to more than one query's score to work with mathematically. You can only combine text scores through prescribed formulations (Boolean queries, etc.).
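
For instance, the closest built-in escape hatch is the function_score query, which only hands your script the _score of the single query it wraps; any other input has to come from a document field, not from a second query's score. Here's a minimal sketch (the popularity field is hypothetical):

{
    "query": {
        "function_score": {
            "query": {"match": {"text": "dog catcher law"}},
            "script_score": {
                "script": "_score * doc['popularity'].value"
            }
        }
    }
}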

This is a rather big drawback for power users. The multiplication example above is a surefire way to have two relevance scores amplify each other very strongly. Elasticsearch's function queries, in their defense, try to cover the most important use cases where these power-user strategies might matter. Yet in the course of developing our book, we noted a couple of times where we wished we had that extra relevance score to play with to demonstrate interesting relevance strategies.

Solr also lets you reuse query parameters. By giving parameters names, you can restate any of them very easily without creating a giant, repetitive JSON query object. For example, if for our Solr query we also wanted to filter out anything that didn't match our textQuery, that's rather simple. We simply apply a filter query that refers to our main query:

usersQuery=dog catcher law&phraseQuery={!field f=catch_line v=$usersQuery}&textQuery={!edismax qf=text v=$usersQuery}&fq=${textQuery}&q=_val_:"product(query($phraseQuery),query($textQuery))"

Complex Elasticsearch queries, on the other hand, become copy-pasta. You end up with a complex JSON object like the one below (note this doesn't exactly replicate the Solr query above, as multiplying query scores isn't possible). The names you can give Solr subqueries help convey the larger intent of the programmer, even if the individual lego blocks are terser and harder to read.

{    "query": {        "filtered": {             "query": {                "bool": {                    "should": [                        {"match_phrase": {                            "catch_line": "dog catcher law"                        }},                        {"match": {                            "text": "dog catcher law"                        }}                    ]                }                            },            "filter": {               "query": {                    "match_phrase": {                        "catch_line": "dog catcher law"                    }                }                 }              }    }}

To me, Solr wins here. It's easier to compose queries together in many arbitrary mathematical scoring combinations. Naming parts of queries helps readability. You're not limited to using relevance scores in prescribed ways in function queries. Solr enables you to avoid repeating yourself, making for less verbose and easier-to-read relevance solutions at the "big picture" level.

Close to Lucene or Speak the Language of a Query Parser?

We noted earlier how Solr relies on query parsers to interpret user queries and their parameters. Query parsers translate the query parameters into underlying Lucene queries. What's interesting is that Elasticsearch works very differently. Instead, Elasticsearch's Query DSL exposes more Lucene primitives to the user. For example, in the last query you saw a lengthy Boolean query:

  "bool": {         "should": [            ...                      ]    }

Boolean and "SHOULD" clauses are very much a Lucene concern. They correspond directly to how Lucene puts queries together. Solr, on the other hand, often hides these details from you in its query parsers. For example, you might not realize that writing q=text:law&bq=catch_line:dog is effectively the same as running two Boolean SHOULD clauses. Solr says "use a boost query, the query parser will take care of you." Elasticsearch says "learn how SHOULD clauses score things, and use that primitive."
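
To spell out that equivalence, here's roughly what q=text:law&bq=catch_line:dog desugars to as explicit Elasticsearch SHOULD clauses (a sketch; the scoring won't be byte-for-byte identical):

{
    "query": {
        "bool": {
            "should": [
                {"match": {"text": "law"}},
                {"match": {"catch_line": "dog"}}
            ]
        }
    }
}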

This to me comes down to preference. In Solr, you tend to put more "application" search logic into a query parser. You tend to, as you'll see next time, write your own query parsers with domain-specific search semantics meaningful to your use case. Elasticsearch, on the other hand, encourages you to build big JSON queries that look more like Lucene-ese. You have pieces closer to the search-engine metal. But it can be challenging to convey semantics meaningful to your search task using such low-level features; those semantic components tend to live in your search application instead.

So Who Won?

Result: Tie (it Depends)

Hopefully you've noticed a lot of "it depends" in this discussion. To me there's no clear winner. If you want a readable search syntax that corresponds to Lucene's metal, you'll like Elasticsearch. It's more explicit. It's easy to build a mental model of what the search engine's doing. But if you like terse, abstracted semantics, or you build complex queries, the verbosity and low-level detail of Elasticsearch may put you off. Rather, you might prefer steeping yourself in Solr-ese.

What I tend to see is that Solr succeeds for more advanced relevance use cases. Elasticsearch tends to create fewer surprises, but it can be harder to "push the boundaries" simply through the query DSL. Looking at the trends in the community bears this out as well. At the last Lucene Revolution, Solr seemed to attract far, far more talks on relevance and information retrieval. Elasticsearch, on the other hand, still focuses heavily on the analytics side of search. You can deliver strong solutions in either, but perhaps the communities are reaching a fork in the road, with Solr remaining focused on advanced pure-search problems and Elasticsearch going after analytics?

So what did I get wrong? I’m sure I missed something, and I’d love to hear your feedback on this article!

Finally, if you need help choosing between Elasticsearch or Solr for your use case, don’t be shy about getting in touch!