
GiftCorp: A side-scrolling game for Windows & Web

Play it now

Play it in your web browser here (HTML5)
Download the Windows exe version (4.75Mb)

Built in 6 hours

The digital agency I currently work at has one day a month they call ‘Supercharged’. These are sort-of hack days where all of the agency team gets to work on something new.

The Christmas edition of Supercharged had the theme of ‘make a game’.  While we could have made any type of game, I was paired with Shaun and we’d previously been talking about collaborating on a computer-based game, so I jumped at the chance to try and make that a reality.


Our team ended up consisting of myself, Shaun (who is a developer in his day job) and Kate, the office manager. After some initial discussions at lunch about the possibilities, Shaun was keen to underline the fact that making a game in six hours was quite “optimistic”. Aside from time, work distribution would be a challenge: Kate had no programming experience, and both Shaun and I would need to work on the code simultaneously.

We quite hastily put together a game design document in the week before the Supercharged day from a template that Shaun found. This gave us at least a basic idea of what objects we would need in the game, as well as the asset requirements from an art and sound point of view. While I’m guilty of just ploughing ahead without a design document on personal projects, I can’t stress enough how important they are when working with a team. Sometimes even the core idea may differ from person to person unless you take the time to document it and discuss the detail. By getting everybody to contribute to this template, I think Shaun pre-empted several issues we would have had.

Shaun and I already had experience using GameMaker: Studio, which is excellent for fast prototyping. Another advantage of GameMaker is that you can develop once and easily export to Windows, Mac, Linux, HTML5, Android and iOS. For ease, we originally aimed just to get the project working for Windows and, if we had time, tweak it to work well in-browser. For simultaneous development we used a repo on Bitbucket with the SourceTree client. This actually turned out to be something of a nightmare, as GameMaker: Studio didn’t play that nicely with it. After several frustrating file overwrites during the day, we found that after pulling the latest changes you had to reload the project within GameMaker: Studio so that it was using the latest files. We probably lost around an hour of development time between us rewriting code and resolving various merge problems.

Prior to the day, I tried to write a few pieces of code that solved some of the challenges I could see us facing, such as a good method of infinitely and randomly generating the play area. Going into the day with the confidence that we wouldn’t run into any major problems coding the core mechanics of the game really helped.

We went into the day with the idea of making:

A fast-paced multiplayer party game where each player is a rival present delivery corporation. Players must attempt to drop as many gifts as they can into different size chimneys that whizz past the bottom of the screen. The game lasts 60 seconds and random powerups and mutators will spawn to help or hinder the player.


We got off to a quick start thanks to the design document, and Kate had a baptism of fire in pixel art, using Piskel to start producing the art assets we needed.

Kate using Piskel

While Kate was working on the “proper” art, I just made some beautiful MS Paint placeholder art that was the correct dimensions and allowed us to start building the layout of the play screen.

Basic test of an early version of GiftCorp

Within an hour we had something very basic: two player-controlled blocks to move around, and the illusion of movement created by a slow-scrolling, repeatable background and chimneys that spawned off-screen and moved slightly faster from right to left.

All of this was pretty straightforward, and we got the basic mechanics working: confining the players to the screen, adding the countdown timer and increasing the speed of the moving chimneys every 10 seconds to make the game harder.

Shaun was very keen to add controller support, which was incredibly quick and easy to do with GameMaker. As I worked on the timings for chimney generation, he very quickly got controllers working alongside the originally planned keyboard controls.

GiftCorp – Now with controller support

Neither Shaun nor I was particularly keen to tackle the collision detection, as it’s usually the hardest (and normally least fun) part to debug. After some discussion, we settled that I should do it, for reasons I have still yet to understand (:

We considered using a score that was adjusted for how accurately the player dropped the present into the chimney, similar to the accuracy code I wrote for the Jungle Joey project. However, with only two hours left to go, I made the unilateral decision that the collision detection would simply work on a “hit or miss” basis, with more points awarded for dropping presents into smaller chimneys.

To ensure this play mechanic worked, we added a “fire delay” mechanism to each player, meaning they could only drop one present per second. This meant that you couldn’t just hold down or hammer the “drop present” button, and it effectively forced the player to choose a target. Would they use their drop to get guaranteed points on a big target, or risk missing by attempting a more accurate shot at a small chimney?

The powerup challenge

We originally planned four powerups to be present in the game:

  • Christmas Pudding: Halves the movement speed of opposing player for 5 seconds
  • Selesti Logo: Doubles the rate at which you can drop presents for 5 seconds
  • Carrot: Doubles your movement speed for 5 seconds
  • Beer: Reverses your movement controls for 5 seconds

It turns out that powerups were slightly more difficult to implement than we first thought.

Some things that were not initially considered

  1. As powerups sometimes need to affect the player that collects them and other times the opposing player, we needed to take this into account when applying the effects.
  2. As each powerup is time-limited, we needed a way to track which powerups were currently affecting a player and for how long.

While these are pretty garden-variety problems, the new snags were taking their toll after five hours of coding. I passed the problem to Shaun, who eventually passed it back to me. In the end, we simply made a separate object that handled all the status effects by triggering a specific timer depending on the effect and the player. It was a dirty solution, but it worked – which was good enough for now!

How it all came together

Dirty fixes aside, things were coming together very nicely and it was very satisfying seeing Kate’s art overlaid onto the code we had produced.

The original idea was to have “60 second santa”, a red and blue santa competing to drop presents. This transformed into “Gift Delivery Corporation”, two rival companies delivering presents to try and win the yearly delivery contract. Somehow, this game ended up being Santa versus… Donald Trump. I’m pretty sure it was because Kate wanted to draw his “hair” flapping in the wind. Anyway, it was deemed that Donald Trump wouldn’t drop presents, so Kate had to finally hammer out some pixel grinches for him to drop.

Gift Corp Grinches

Because Donald Trump. That’s why.

All three of us worked together to source some sound effects from freesfx.co.uk, and I contacted a guitar instructor on YouTube to ask if he wouldn’t mind me using his metal version of a Christmas theme for the game music.

Play time!

While we had a playable game at the end of the day, I went on and made a few tweaks just before the office party to make it playable via a web browser. This simply involved a little bit of resizing and changing how we handled some variables to work better in JavaScript. It was fantastic seeing people enjoying the game during the Christmas party!


Build a Netflix like recommendation engine in Python

Okay, so we’re not going to be matching the one-million-dollar prize-winning algorithm that Netflix is based on, but I will be showing you how to code your own movie recommendation engine from the ground up. This means we will take some data on how people rate films and try to find preference relationships between these people. Based on the strength of these relationships, we can then apply some weighted scores to work out what films individual people may well like.

You can get the full source code for this project on Bitbucket.

Collaborative filtering in movie recommender engines

The system we’ll be building is based on a collaborative filtering system. Collaborative filtering is something you’ve been doing yourself “manually” your whole life. In the case of what we’re looking at here – films – there are certain people you’ve observed to have a similar taste in films to you. You may take their opinions more seriously than those of somebody you know you have very different tastes from when it comes to recommending what film to watch next. A collaborative-filtering-based recommendation engine allows you to do this at a much larger scale, with an accuracy that was not previously possible.

Collecting the data

Before we begin, we need a data set to work with. Rather than copying and pasting my data, it can be more fun to make a survey with Google Docs and gather data from family and friends – the results will be more interesting!

I picked 20 films from the IMDB Top 250 rated films and asked my colleagues via a Google form to rate these films on a scale of 1 (hated it) to 10 (loved it), with instructions just to leave blank any film they hadn’t seen.

If you were doing this on a large scale, it would make sense to use a database, but since we’re only working with a tiny amount of data, we’ll just use nested Python dictionaries.

Create recommendations.py and add your critic data

Feel free to use the data below: a dictionary called critics, keyed by each critic’s name, where each value is a dictionary of films and that critic’s rating of each film.
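For illustration, a structure like this would work (the critic names match people mentioned later in this post, but the films and ratings shown are placeholder values rather than my actual survey results):

# recommendations.py
# Nested dictionary: critic name -> {film title: rating out of 10}
# (The ratings below are illustrative placeholders, not the real survey data.)
critics = {
    'Jamie': {'Forrest Gump': 8, 'The Matrix': 9, 'Pulp Fiction': 7},
    'Emily': {'Forrest Gump': 7, 'The Matrix': 6, 'Pulp Fiction': 9},
    'Mark':  {'Forrest Gump': 9, 'Pulp Fiction': 6},
}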

Finding similar users

Before we can begin to think about recommending films, our first job is to find out which users have similar tastes to each other. This is the basis of our collaborative filtering; the assumption is that if Person A has scored a lot of films similarly to Person B, then if Person A likes a film Person B hasn’t seen, there is an above-average chance that Person B will also like that film.

There are several ways to go about calculating what you might consider to be a similar user. Although we are not going to use it in our final engine because of drawbacks that we’ll discuss in a moment, it’s worth making a small diversion to discuss the Euclidean distance score.

Euclidean Distance Score Function

This is a quick and dirty way to see how similar two users’ scores are. The Euclidean distance score takes the items that two people have ranked in common, uses them as the axes of a chart and plots the people in “preference space”.

For instance, let’s see how Jamie and Emily’s rankings of Forrest Gump and The Matrix look:

Jamie & Emily’s film ratings in preference space

To calculate the distance between Jamie and Emily on the chart, take the difference on each axis, square them and add them together. Then take the square root of the sum.

We can do this from Windows PowerShell by running Python and using pow(n, 2) to square a number and sqrt (from the math module) to take the square root.
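For example, using the illustrative ratings from earlier (Jamie rating Forrest Gump 8 and The Matrix 9, Emily rating them 7 and 6), the calculation in an interactive Python session would be:

>>> from math import sqrt
>>> sqrt(pow(8 - 7, 2) + pow(9 - 6, 2))
3.1622776601683795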

This gives us the distance between the two points, which will be a smaller number when there is a closer match. Ideally, we would like a function that gives higher values for people who are similar. This can be done by adding 1 to the distance (so we avoid dividing by zero) and inverting it:

Add Euclidean function to recommendations.py

Let’s add a function to our code to allow us to find the similarity score based on this working.
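A minimal version of such a function might look like the following (sim_distance is just my chosen name, and it assumes the nested critics-style dictionary described above):

from math import sqrt

def sim_distance(prefs, person1, person2):
    # Films both people have rated
    shared = [item for item in prefs[person1] if item in prefs[person2]]

    # No films in common: no similarity
    if len(shared) == 0:
        return 0

    # Sum of squared differences across the shared films
    sum_of_squares = sum(pow(prefs[person1][item] - prefs[person2][item], 2)
                         for item in shared)

    # Add 1 (avoiding division by zero) and invert, so closer = higher score
    return 1 / (1 + sqrt(sum_of_squares))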

We can now use this function with two names to get a similarity score. Running your shell from the directory of your source code, try the following:
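Something along these lines (again assuming the function and dictionary names used above):

>>> from recommendations import critics, sim_distance
>>> sim_distance(critics, 'Jamie', 'Emily')   # a value between 0 and 1; higher means more similar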

While this can provide us with some interesting results, which we can compare with other methods later, there are some drawbacks to using Euclidean distance scoring, especially in cases where the data you’re using isn’t well normalised.

For instance, in this case our Euclidean distance score could give the impression that two people with very similar tastes in films are actually quite far apart depending on their own interpretation of the 1 to 10 system.

If Jamie thinks that a “great” film is worth about a 7 and a “bad” film is worth a 2, whereas Emily thinks a “great” film is worth a 10 and a “bad” film is worth a 4, then despite them feeling the same about the films they are rating, the Euclidean distance score will show them as far apart.

Pearson Correlation Score

A better-suited method to find how close two people’s film tastes are, based on the data we have, would be to use the Pearson correlation score (or Pearson correlation coefficient, as it is also known). The correlation coefficient is a measure of how well two sets of data fit on a straight line.

Jamie & Emily’s film tastes with Pearson Coefficient


This method, which produces a “best fit” line, allows us to correct for the grade inflation problem we just highlighted. Our Pearson coefficient will give a result between -1 and 1, with 1 being a “perfect” correlation, where the line intersects all points, 0 meaning there is no correlation between the data points, and -1 being a perfect negative correlation, meaning critic A hates everything that critic B loves.

How these different values would look on a chart


Add Pearson function to recommendations.py

The formula we will be putting into our function is as follows:

r = (N·∑xy − ∑x·∑y) / √[(N·∑x² − (∑x)²) · (N·∑y² − (∑y)²)]

N = number of pairs of scores
∑xy = sum of the products of paired scores
∑x = sum of x scores
∑y = sum of y scores
∑x² = sum of squared x scores
∑y² = sum of squared y scores

Following this formula, let’s create a new function:
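A sketch of that function, following the formula above (sim_pearson is my chosen name; it assumes the same nested dictionary as before):

from math import sqrt

def sim_pearson(prefs, person1, person2):
    # Films both critics have rated
    shared = [item for item in prefs[person1] if item in prefs[person2]]
    n = len(shared)
    if n == 0:
        return 0

    # Sums of scores, squared scores and products of paired scores
    sum1 = sum(prefs[person1][item] for item in shared)
    sum2 = sum(prefs[person2][item] for item in shared)
    sum1_sq = sum(prefs[person1][item] ** 2 for item in shared)
    sum2_sq = sum(prefs[person2][item] ** 2 for item in shared)
    p_sum = sum(prefs[person1][item] * prefs[person2][item] for item in shared)

    # Pearson correlation coefficient
    num = n * p_sum - sum1 * sum2
    den = sqrt((n * sum1_sq - sum1 ** 2) * (n * sum2_sq - sum2 ** 2))
    if den == 0:
        return 0
    return num / den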

With this function in place, we can now compare how similar two specified critics are:
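For example:

>>> from recommendations import critics, sim_pearson
>>> sim_pearson(critics, 'Jamie', 'Emily')   # a value between -1 and 1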

Add a ranking function to recommendations.py

Now that we can find out how similar two specific people are, the next step is to add the ability to compare one person to all of the others.

Our function will use Python list comprehension to compare me to every other user in the dictionary using one of our distance metrics and it will return the first n items of sorted results.
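It might look something like this (top_matches is my naming; the similarity function is passed in as a parameter so you can swap between the Euclidean and Pearson metrics):

def top_matches(prefs, person, n=5, similarity=sim_pearson):
    # Score every other critic against the specified person
    scores = [(similarity(prefs, person, other), other)
              for other in prefs if other != person]

    # Sort so the most similar critics come first, then return the top n
    scores.sort(reverse=True)
    return scores[:n]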

You can now try calling this function and get the top 3 matches to that person:
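>>> from recommendations import critics, top_matches
>>> top_matches(critics, 'Mark', n=3)   # a list of (similarity, name) tuples, best match first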

Even at this very early stage, we are getting some interesting data! I ran this function for all users that took part and compiled the data by colour, with negative correlations in red, ranging up to positive correlations in green.

Compiled critic data with Pearson coefficient scores

With the dataset being so small, it is not uncommon for outliers to crop up and upset things. However, the value of this data increases over time and would allow you to spot trends occurring within these groups.

Add a recommendation function to recommendations.py

Our final step is adding our recommendation function, which will hopefully help us pick films that a specified user will like. There are several ways we could accomplish this. Your first thought may be to simply look for the person(s) with the closest correlation and recommend films they have seen that the specified user hasn’t, but this can cause problems.

It may be the case that your closest matched critic rated a film highly that was hated by almost everyone else or they haven’t reviewed some of the movies that you might like. To solve these kinds of potential issues, you need to score items by producing a weighted score. This means modifying the ratings critics give films depending on how closely correlated they are in tastes to you.

Let’s look at an example of how we would score two films for Mark: The Matrix & Forrest Gump

Weighted recommendations based on similarity

The table shows the correlation scores (similarity) and the 1-10 rating each of those critics gave the film (The Matrix and Forrest Gump). The S*Matrix and S*Forrest Gump columns indicate the weighted score, that is, the score after it has been multiplied by that critic’s similarity. This means that critics with closely related film tastes are treated as more important, whereas critics with little correlation have the importance of their ratings dampened. As you can imagine, a critic with a negative correlation will actually lower a film’s ranking – and the higher the rating they give, the more it is lowered.

The next step is to add up all of these weighted scores and take care of the issue of films that simply have more reviews being scored higher. For this reason, we take the sum of all the weighted scores and divide it by the sum of all of the similarity scores (Sim. Sum). This produces our final film scoring.

We can now implement this weighted scoring into our recommendation engine, providing us with a list of movies we may like to watch next, ignoring ones we have already seen.
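A sketch of that function (get_recommendations is my name for it; it follows the weighted-score approach described above and assumes the prefs/critics dictionary and the sim_pearson function from earlier):

def get_recommendations(prefs, person, similarity=sim_pearson):
    totals = {}     # film -> sum of (rating * similarity)
    sim_sums = {}   # film -> sum of similarities of critics who rated it

    for other in prefs:
        # Don't compare the person to themselves
        if other == person:
            continue
        sim = similarity(prefs, person, other)

        for item, rating in prefs[other].items():
            # Only score films the person hasn't already seen
            if item in prefs[person]:
                continue
            # Weighted score: the critic's rating scaled by their similarity
            totals[item] = totals.get(item, 0) + rating * sim
            sim_sums[item] = sim_sums.get(item, 0) + sim

    # Normalise by the similarity sum so films with more reviews aren't favoured
    rankings = [(total / sim_sums[item], item)
                for item, total in totals.items() if sim_sums[item] > 0]
    rankings.sort(reverse=True)
    return rankings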

This code loops through every other person in the prefs dictionary. In each case, it calculates how similar they are to the specified person. It then loops through every item for which they’ve given a score and calculates a weighted score. The scores are then normalised by dividing each of them by the similarity sum, and the sorted results are returned.


Test your recommendation engine
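Assuming the names used in the sketches above, calling it looks like this:

>>> from recommendations import critics, get_recommendations
>>> get_recommendations(critics, 'Mark')   # a list of (predicted rating, film) tuples, highest first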

As a cool side-effect, not only do we get a ranked list of movies, we also get a prediction of what that person would actually rate those movies. This information can be fed back into the system to make further improvements. For instance, you could specify that if a movie lined up for recommendation has a predicted rating of less than 5/10, then don’t show it.

This kind of recommendation engine can be adapted and used for many different types of data or similarity metrics – experiment and enjoy!

You can find this project and many more in the great book Programming Collective Intelligence.
