Your Recommendation Systems Aren't As Cool As My Friends

Doug Turnbull, August 21, 2016

What is more important in a recommendation?

  1. That the recommendation is accurate based on my tastes OR
  2. That the person making the recommendation is someone I like and respect

The traditional regime of recommendation systems has been obsessed with (1). What’s the uplift of recommendation algorithm A vs recommendation algorithm B? Which is driving more click-thrus and conversions?

There’s something fundamentally broken in this way of thinking, though. I don’t care what the computer says. I care about what my friends say. The meaningful music I’ve discovered over the last 10 years has been music liked by someone I respect. Friends. Other musicians I like. Music critics. Online celebrities.

Because how do I decide when to really give music a chance? What music should I really listen deeply to? And which music do I listen to passively without much thought and consideration?

When a friend tells me I really should try out an album, my first interaction with the album might be clumsy and painful. I might not like it. But I tell myself “my friend Sean really knows music.” He’s a thoughtful guy, and I want to understand what interests him. So I keep listening. I give the album a second chance, and a third. I really want to like it. I put work into it. And eventually I do enjoy it.

When a computer recommends music (or movies, art…) I give it one fleeting chance. If I don’t like it I don’t put additional work into it. It’s either a catchy tune that I like on first listen, or I ignore the music. The music feels cheaper. Like top 50 radio. Despite the actual “accuracy” of the recommendation, I have a hard time disentangling my enjoyment of the music from my relationship with the thing doing the recommendation. Stupid computer, what does it know?

Recommendation systems treat our tastes as singular and pure. As if my interaction with an item highlights my inherent nature. Yet in social circles, making recommendations is akin to persuasion and influence. If you give a recommendation, you hope to convince your friends to like something you enjoy. Conversely, if you receive a recommendation from someone you like, you’re incentivized to like it. By liking the item, you can deepen your relationship with that person.

In other words, there’s often a deeper, human story to why we like the things we do. “Happiness is only real when shared”. Our tastes are fungible units operating in a social sea. It’s much more fun to enjoy something with a friend than by ourselves. How many of our deepest held music preferences arise out of a desire to fit into a clique in high school? Or because of romantic courtship?

I think we need to fundamentally rethink the recommendation interaction. How can we do that? Here are a few spitball ideas (tell me yours!). Each of these ideas probably deserves its own blog post in turn :).

  • Instead of finding anonymous users extremely similar to me, focus on similar friends even if they only share a handful of my tastes
  • Don’t make recommendations to users, make sharing easier. Prompt users with “Doug might like this — would you like to let Doug know?” And be sure you can link to anything.
  • Deanonymize recommendations and find users with other factors in common with the target user (Bob lives in Charlottesville and likes X)
  • Focus on what the musicians I like in turn like. When I learn Led Zeppelin is an influence on the Smashing Pumpkins (a band I like), I’m inclined to check out Led Zeppelin. In graph terminology, these influencers would be “supernodes.” We often discount them because of the Oprah Book Club problem. But perhaps there’s really an Oprah Book Club solution!
  • Understand users don’t value computer-generated recommendations. But use this to your advantage: focus on quantity and serendipity over accuracy. Focus on passive interaction modes that intentionally lack gravitas: radio stations, “channel flipping,” auto mixing after a chosen song, etc., where it’s OK if 90% of what a user sees is “meh” because sometimes 10% is fun.
  • Conversely: I wonder if many computer-generated recommendations help create conundrums of choice. Can this be avoided by putting more social weight into the recommendations? I always wanted Netflix to just show me what Roger Ebert likes, not predict what I might like
  • Understand expert-curated reviews by humans are more effective recommendations than collaborative filtering. AVClub, Roger Ebert, and The Wirecutter have done more to influence my buying decisions than the aggregated behavior of Amazon users similar to me. Hire a really good reviewer/editor and let them build a following; don’t spend a million bucks on a Kaggle competition.

Underlying some of these ideas is an assumption that we can know or figure out the user’s social network. How might we do that? Most obviously we do what Spotify does: explicitly invite the user to connect to their Facebook account.

Alternatively, we might attempt to gather user relationships implicitly. We could do this by making sharing easier and embedding a token in the URL being shared between user A and user B. Many sharing events between A and B could be taken to imply a relationship. Further, we can measure how much weight user B gives to shares from user A — in other words, the weight B gives to the relationship with A.
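As a rough sketch of the token idea: mint a random token when A shares an item, embed it in the URL, and count the event when B opens the link. The names, URL scheme, and in-memory stores here are all hypothetical — a real system would persist this in a database.

```python
import secrets

# Hypothetical in-memory stores; a real system would use a database.
share_tokens = {}   # token -> (sender, item)
share_counts = {}   # (sender, receiver) -> number of opened shares

def make_share_url(sender, item, base="https://example.com/item/"):
    """Create a share link with a token tying it back to the sender."""
    token = secrets.token_urlsafe(8)
    share_tokens[token] = (sender, item)
    return f"{base}{item}?share={token}"

def record_open(token, receiver):
    """When the receiver opens the link, count a sharing event A -> B."""
    sender, item = share_tokens[token]
    key = (sender, receiver)
    share_counts[key] = share_counts.get(key, 0) + 1
    return sender, item
```

Repeated `(sender, receiver)` counts above some threshold would then be treated as evidence of a relationship.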

  1. User B receives a shared item X from user A
  2. Relationship weight = B’s apparent preference for item X ÷ B’s predicted affinity for X based on collaborative filtering

If B really likes X (they watch the movie; listen to the music), but B really shouldn’t like X (X is completely off the radar of B’s machine-predicted tastes), then perhaps B really respects and listens to A. Perhaps B is really working to change their tastes to match A’s.
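The ratio above can be sketched in a few lines. This is a toy formulation, not an established metric: the function names are made up, and preferences/affinities are assumed to be on the same positive rating scale (say, 1–5 stars).

```python
def relationship_weight(observed_preference, predicted_affinity, eps=1e-6):
    """How much B's actual engagement with a shared item exceeds what
    collaborative filtering predicted. > 1.0 means B liked it more than
    the machine expected, i.e. the share from A carried weight."""
    return observed_preference / max(predicted_affinity, eps)

def average_relationship_weight(shared_items):
    """Average the ratio over all (observed, predicted) pairs for items
    A shared with B, to smooth out single-item noise."""
    weights = [relationship_weight(obs, pred) for obs, pred in shared_items]
    return sum(weights) / len(weights)
```

So if B rates a shared album 4 stars where collaborative filtering predicted 2, the weight for that share is 2.0 — B appears to be leaning toward A’s tastes.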

This could also be used to track “fandom” as a basis for recommendation. Here, instead of A and B sharing between themselves, we simply note that when B is presented items that “Roger Ebert” likes, they seem to work heavily against their natural tastes. I have no reason to like TP-Link routers based on my Amazon buying history, but one was on Wirecutter’s list of “routers everyone should buy.” You could measure how much stock I give to what Wirecutter says if I then give the router a really good review even though, by machine-based recommendation methods, I shouldn’t have liked it.
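The same ratio generalizes from a friend to a curator: aggregate it over every endorsed item the user actually tried. Again a hypothetical sketch, assuming ratings and predictions share a positive scale.

```python
def fandom_weight(ratings, eps=1e-6):
    """ratings: (user_rating, cf_predicted_rating) pairs for items a curator
    endorsed and the user tried. A result well above 1.0 suggests the user
    defers to the curator over their machine-predicted tastes."""
    if not ratings:
        return 1.0  # no evidence: assume a neutral weight
    return sum(u / max(p, eps) for u, p in ratings) / len(ratings)
```

A user whose fandom weight for Wirecutter sits near 2.0 could then be shown Wirecutter picks directly, rather than collaborative-filtering output.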

There are many veins of thought to keep following in this direction. These ideas of mine are blind gropes in a new direction – I suspect significant research is needed. I’m sure there’s a great deal already out there on measuring how human influence can change tastes. For example, I wonder – what are the features of a relationship where person A is likely to modify their tastes to match person B? Perhaps Doug Turnbull knows, I’ll have to ask him.

But the main takeaway is that the future of recommendations is more about fitting into existing systems of influence, and less about pure collaborative filtering. In some ways, I think we might need less machine learning and more machine-guided social interaction.

If you’re interested in exploring these ideas, do please get in touch to let me know how much you completely disagree with everything I’ve just said :). And check out our free lunch and learns if you want to chat about these ideas one on one with your team.
