Okay, so we’re not going to be matching the million-dollar-prize-winning algorithm that Netflix uses, but I will be showing you how to code your own movie recommendation engine from the ground up. This means we will take some data on how people rate films and try to find preference relationships between those people. Based on the strength of these relationships, we can then apply some weighted scores to work out which films individual people may well like.

You can get the full source code for this project on Bitbucket.

Collaborative filtering in movie recommender engines

The system we’ll be building is based on collaborative filtering. Collaborative filtering is something you’ve been doing yourself “manually” your whole life. In the case of what we’re looking at, films, there are certain people you’ve observed to have a similar taste in films to you. You may take their opinions more seriously than those of somebody you know has very different tastes when it comes to recommending what film to watch next. A collaborative-filtering-based recommendation engine allows you to do this at a much larger scale, with an accuracy that was not previously possible.

Collecting the data

Before we begin, we need a dataset to work with. Rather than copying and pasting my data, it can be more fun to make a survey with Google Docs and gather data from family and friends; the results will be more interesting!

I picked 20 films from the IMDB Top 250 rated films and asked my colleagues via a Google form to rate these films on a scale of 1 (hated it) to 10 (loved it), with instructions just to leave blank any film they hadn’t seen.

If you were doing this on a large scale, it would make sense to use a database, but since we’re only working with a tiny amount of data, we’ll just use nested Python dictionaries.

Create recommendations.py and add your critic data

Feel free to use the data below: a dictionary called critics, keyed by each critic’s name, where each name maps to a dictionary of films and that critic’s rating of each film.
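The survey results themselves aren’t reproduced here, so the snippet below is only an illustrative sketch of the structure; the names, films and ratings are placeholders (Pulp Fiction is a hypothetical entry), and you should substitute your own survey data.

# recommendations.py
# Illustrative placeholder data; replace with your own survey results.
critics = {
    'Jamie': {'Forrest Gump': 7.0, 'The Matrix': 9.0, 'Pulp Fiction': 8.0},
    'Emily': {'Forrest Gump': 10.0, 'The Matrix': 6.0, 'Pulp Fiction': 9.0},
    'Mark': {'Forrest Gump': 5.0, 'The Matrix': 8.0},
}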

Finding similar users

Before we can begin to think about recommending films, our first job is to find out which users have similar tastes to each other. This is the basis of our collaborative filtering; the assumption is that if Person A has scored a lot of films similarly to Person B, then if Person A likes a film Person B hasn’t seen, there is an above-average chance that Person B will also like it.

There are several ways to go about calculating what you might consider to be a similar user. Although we are not going to use it in our final engine because of drawbacks that we’ll discuss in a moment, it’s worth making a small diversion to discuss the Euclidean distance score.

Euclidean Distance Score Function

This is a quick and dirty way to see how similar two users’ scores are. The Euclidean distance score takes the items that two people have rated in common, uses those items as the axes of a chart, and plots the two people in “preference space”.

For instance, let’s see how Jamie and Emily’s rankings of Forrest Gump and The Matrix look:

[Chart: Jamie & Emily’s film ratings in preference space]

To calculate the distance between Jamie and Emily on the chart, take the difference on each axis, square each difference and add them together. Then take the square root of the sum.

We can do this from a Python interactive shell (launched from Windows PowerShell or any other terminal) by using pow(n, 2) to square a number and math.sqrt to take the square root.
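As a quick worked example, using the placeholder ratings from earlier (Jamie rated Forrest Gump 7 and The Matrix 9; Emily rated them 10 and 6), the calculation in the shell might look like this:

>>> from math import sqrt
>>> # difference on each axis, squared, summed, then square-rooted
>>> sqrt(pow(7 - 10, 2) + pow(9 - 6, 2))
4.242640687119285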

This gives us the distance between the two points, which will be a smaller number when there is a closer match. Ideally, we would like a function that gives higher values for people who are more similar. This can be done by adding 1 to the distance (so we avoid dividing by zero) and inverting it:

Add Euclidean function to recommendations.py

Let’s add a function to our code that finds the similarity score based on this working.
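The original listing isn’t reproduced here, so the following is a minimal sketch of what such a function could look like, assuming the critics-style dictionary from above is passed in as prefs:

from math import sqrt

def sim_distance(prefs, person1, person2):
    # Find the films both people have rated
    shared = [item for item in prefs[person1] if item in prefs[person2]]
    if not shared:
        return 0  # no films in common, so no basis for similarity
    # Sum of squared differences across the shared films
    sum_of_squares = sum(pow(prefs[person1][item] - prefs[person2][item], 2)
                         for item in shared)
    # Add 1 to avoid division by zero and invert, so closer people score higher
    return 1 / (1 + sqrt(sum_of_squares))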

We can now use this function with two names to get a similarity score. Running your shell from the directory of your source code, try the following:
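For example (the exact value depends on the ratings in your own critics dictionary):

>>> import recommendations
>>> recommendations.sim_distance(recommendations.critics, 'Jamie', 'Emily')
# prints a value between 0 and 1; higher means more similar tastes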

While this can provide us with some interesting results, which we can compare with other methods later, there are some drawbacks to using Euclidean distance scoring, especially in cases where the data you’re using isn’t well normalised.

For instance, in this case our Euclidean distance score could give the impression that two people with very similar tastes in films are actually quite far apart depending on their own interpretation of the 1 to 10 system.

If Jamie thinks that a “great” film is worth about a 7 and a “bad” film is worth a 2, whereas Emily thinks a “great” film is worth a 10 and a “bad” film is worth a 4, then despite them feeling the same about the films they are rating, the Euclidean distance score will show them as far apart.

Pearson Correlation Score

A method better suited to finding how close two people’s film tastes are, based on the data we have, is the Pearson correlation score (also known as the Pearson correlation coefficient). The correlation coefficient is a measure of how well two sets of data fit on a straight line.

[Chart: Jamie & Emily’s film tastes with a Pearson best-fit line]

This method, which produces a “best fit” line, allows us to correct for the grade inflation problem we just highlighted. Our Pearson coefficient will give a result between -1 and 1, with 1 being a “perfect” correlation where the line intersects all points, 0 meaning there is no correlation between the data points, and -1 being a perfect negative correlation, meaning critic A hates everything that critic B loves.

[Chart: how perfect, zero and negative correlation values would look when plotted]

Add Pearson function to recommendations.py

The formula we will be putting into our function is as follows:

r = (N∑xy − (∑x)(∑y)) / √([N∑x² − (∑x)²][N∑y² − (∑y)²])
Where:

N = number of pairs of scores
∑xy = sum of the products of paired scores
∑x = sum of x scores
∑y = sum of y scores
∑x² = sum of squared x scores
∑y² = sum of squared y scores

Following this formula, let’s create a new function:
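Again, the original listing isn’t reproduced here, so this is a sketch of how the formula above could be translated into a function over the same prefs dictionary (it reuses the sqrt imported at the top of recommendations.py):

def sim_pearson(prefs, person1, person2):
    # Find the films both people have rated
    shared = [item for item in prefs[person1] if item in prefs[person2]]
    n = len(shared)
    if n == 0:
        return 0
    # Sums required by the formula
    sum_x = sum(prefs[person1][item] for item in shared)
    sum_y = sum(prefs[person2][item] for item in shared)
    sum_x_sq = sum(pow(prefs[person1][item], 2) for item in shared)
    sum_y_sq = sum(pow(prefs[person2][item], 2) for item in shared)
    sum_xy = sum(prefs[person1][item] * prefs[person2][item] for item in shared)
    # Pearson correlation coefficient
    numerator = n * sum_xy - sum_x * sum_y
    denominator = sqrt((n * sum_x_sq - sum_x ** 2) * (n * sum_y_sq - sum_y ** 2))
    if denominator == 0:
        return 0
    return numerator / denominator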

With this function in place, we can now compare how similar two specified critics are:
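For instance (again, the actual value will depend on your own data):

>>> recommendations.sim_pearson(recommendations.critics, 'Jamie', 'Emily')
# returns a value between -1 and 1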

Add a ranking function to recommendations.py

Now that we can find out how similar two specific people are, the next step is to add the ability to compare one person to everyone else.

Our function will use a Python list comprehension to compare a given person to every other user in the dictionary using one of our distance metrics, and it will return the first n items of the sorted results.
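A minimal sketch of such a function, defaulting to the Pearson metric defined above, might look like this:

def top_matches(prefs, person, n=5, similarity=sim_pearson):
    # Score every other critic against the given person
    scores = [(similarity(prefs, person, other), other)
              for other in prefs if other != person]
    # Highest similarity first, then return the top n
    scores.sort(reverse=True)
    return scores[:n]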

You can now try calling this function to get the top three matches for a given person:
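For example:

>>> recommendations.top_matches(recommendations.critics, 'Jamie', n=3)
# returns a list of (similarity score, name) tuples, best match first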

Even at this very early stage, we are getting some interesting data! I ran this function for all users that took part and compiled the data by colour, with negative correlations in red, ranging up to positive correlations in green.

[Table: compiled critic data with Pearson coefficient scores]

With the dataset being so small, it is not uncommon for outliers to crop up and upset things. However, the value of this data increases over time and would allow you to spot trends occurring within these groups.

Add a recommendation function to recommendations.py

Our final step is adding a recommendation function that will hopefully help us pick films that a specified user will like. There are several ways we could accomplish this. Your first thought may be to simply look for the person(s) with the closest correlation and recommend films they have seen that the specified user hasn’t, but this can cause problems.

It may be the case that your closest-matched critic rated a film highly that was hated by almost everyone else, or that they haven’t reviewed some of the movies you might like. To solve these kinds of potential issues, you need to score items by producing a weighted score. This means modifying the ratings critics give films depending on how closely correlated their tastes are to yours.

Let’s look at an example of how we would score two films for Mark: The Matrix and Forrest Gump.

[Table: weighted recommendations based on similarity]

The table shows the correlation scores (similarity) and the 1-10 rating each of those critics gave the two films (The Matrix and Forrest Gump). The S*Matrix and S*Forrest Gump columns show the weighted scores, that is, each rating after it has been multiplied by that critic’s similarity. This means that critics whose film tastes are closely related to yours are treated as more important, whereas the ratings of critics with little correlation are dampened. As you can imagine, a critic with a negative correlation will actually lower a film’s ranking, and the higher the rating they give, the more they lower it.

The next step is to add up all of these weighted scores while taking care of the issue that films with more reviews would otherwise simply score higher. For this reason, we take the sum of all the weighted scores and divide it by the sum of all of the similarity scores (Sim. Sum). This produces our final film scoring.

We can now implement this weighted scoring in our recommendation engine, providing us with a list of movies we may like to watch next, ignoring ones we have already seen.
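The original listing isn’t included here, so the function below is a sketch of how this weighted scoring could be implemented, using the same prefs dictionary and the sim_pearson metric by default:

def get_recommendations(prefs, person, similarity=sim_pearson):
    totals = {}     # film -> sum of (similarity * rating)
    sim_sums = {}   # film -> sum of similarities of critics who rated it
    for other in prefs:
        if other == person:
            continue
        sim = similarity(prefs, person, other)
        for item, rating in prefs[other].items():
            # Only score films the person hasn't already seen
            if item not in prefs[person]:
                # Critics with negative correlation drag the weighted total down
                totals[item] = totals.get(item, 0) + rating * sim
                sim_sums[item] = sim_sums.get(item, 0) + sim
    # Normalise by the similarity sum so heavily reviewed films aren't favoured
    rankings = [(total / sim_sums[item], item)
                for item, total in totals.items() if sim_sums[item] != 0]
    rankings.sort(reverse=True)
    return rankings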

This code loops through every other person in the prefs dictionary. In each case, it calculates how similar they are to the specified person. It then loops through every film that critic has scored and accumulates a weighted score for each film the specified person hasn’t seen. The scores are then normalised by dividing each of them by the similarity sum, and the sorted results are returned.

Congratulations!

Test your recommendation engine
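To try it out (the name ‘Mark’ and the exact output depend on your own critics data):

>>> recommendations.get_recommendations(recommendations.critics, 'Mark')
# returns a list of (predicted rating, film) tuples, best recommendation first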

As a cool side-effect, not only do we get a ranked list of movies, we also get a prediction of what that person would actually rate those movies. This information can be fed back into the system to make further improvements. For instance, you could specify that if a movie lined up for recommendation has a predicted rating of less than 5/10, then don’t show it.

This kind of recommendation engine can be adapted and used for many different types of data or similarity metrics. Experiment and enjoy!

You can find this project and many more in the great book Programming Collective Intelligence.