How did the model do this weekend?
What does last weekend mean for the leagues come May?
How does the model compare to bookies’ odds in the past?
Well, it was a bit of a mixed weekend for the model. Munster's appalling loss to Edinburgh and Ulster's last-minute draw meant I had a meagre return of £3.60 on £6 worth of bets. On the bright side, the Crystal Egg correctly predicted the outcome of nine of the twelve games at the weekend. Given that the model's average certainty in its picks was about 75%, this is exactly the level of accuracy you would expect.
Leinster narrowly losing to Glasgow was not particularly surprising, nor was Ulster's draw with Scarlets. 95% may have been an overconfident estimate of Munster's chances of winning, but that is as much a testament to just how poorly they played on Friday as anything else, and it's probably not unfair to suggest it was a one-in-twenty poor performance from a team that really should have won. Every other game – and in particular every game in the Premiership – went as predicted, even if Saints' thumping of Gloucester and Exeter Chiefs nilling London Welsh were rather more emphatic than their probabilities would have suggested.
It's important to remember that no model is going to predict every game correctly – ultimately, if the proportion of games called correctly matches the level of certainty in the predictions (hopefully a minimum of around 70–75%), I'll be a happy man. I'll continue to tinker with the parameters of the model in the hope of making it more accurate, and try to amend my betting strategy a little too…
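For a sense of what that check looks like in practice, here's a minimal sketch in Python – the probabilities and results below are invented for illustration, not the actual figures from the weekend:

```python
# Hypothetical calibration check: does the model's stated certainty
# roughly match how often its picks actually win?
predicted_certainty = [0.95, 0.80, 0.75, 0.70, 0.65, 0.60]   # model's confidence in its pick
pick_won            = [False, True, True, True, True, False]  # did that pick win?

average_certainty = sum(predicted_certainty) / len(predicted_certainty)
hit_rate = sum(pick_won) / len(pick_won)

print(f"Average certainty: {average_certainty:.0%}")  # ~74%
print(f"Picks correct:     {hit_rate:.0%}")           # ~67%
```

If those two figures stay close to each other over a full season, the model is at least well calibrated, whatever its headline accuracy.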
This week's round of games has also had a slight impact on the model's expectation of how the leagues will look come the playoffs. This is partly because teams are starting to actually accumulate points, but also because their performances are starting to affect how good the model thinks they are – hence Munster slip behind Ospreys, thanks to a combination of their poor performance on Friday and the fact that Ospreys now have a four-point head start on them from games that probably should have been five points apiece.
Pro 12:

| Pos | Team | Expected points |
| --- | --- | --- |
| 1 | Leinster | 80.3543 |
| 2 | Ulster | 73.0828 |
| 3 | Ospreys | 68.6608 |
| 4 | Munster | 67.7557 |
| 5 | Glasgow | 65.6481 |
| 6 | Scarlets | 55.8077 |
| 7 | Cardiff Blues | 49.1063 |
| 8 | Edinburgh | 42.4221 |
| 9 | Connacht | 38.7535 |
| 10 | Dragons | 35.3488 |
| 11 | Benetton Treviso | 33.5122 |
| 12 | Zebre | 18.8409 |
Premiership:

| Pos | Team | Expected points |
| --- | --- | --- |
| 1 | Saracens | 80.0582 |
| 2 | Leicester Tigers | 74.6077 |
| 3 | Northampton Saints | 73.8762 |
| 4 | Harlequins | 67.7086 |
| 5 | Bath Rugby | 58.3497 |
| 6 | Exeter Chiefs | 55.4413 |
| 7 | Gloucester Rugby | 47.9011 |
| 8 | London Wasps | 46.3152 |
| 9 | London Irish | 43.2159 |
| 10 | Sale Sharks | 37.4321 |
| 11 | Newcastle Falcons | 26.0724 |
| 12 | London Welsh | 23.1672 |
Mostly the predictions are similar (although you'll notice the expected points are higher across the board – this is due to a bug in last week's calculation that I have hopefully now corrected), but London Welsh's poor showing sees them drop below Newcastle Falcons to become 61% favourites to go back down to the Championship, while Edinburgh's fine performance sees them climb the expected league table to the giddy heights of eighth.
Something I may have to consider for the next round is how much weight the most recent games should carry – at the moment a game's weight in the model simply decreases linearly with its age. That weighting may need to be boosted for more recent games to make the model more responsive to sudden changes in form, but that is something I will look at in the future.
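To make that concrete, here's a rough sketch of the two options side by side – the current linear age weighting and an exponential alternative. The parameter values (a 52-week window, a 10-week half-life) are purely illustrative, not what the model actually uses:

```python
def linear_weight(age_in_weeks, max_age=52.0):
    """Weight falls off linearly with the age of the game (the current approach)."""
    return max(0.0, 1.0 - age_in_weeks / max_age)

def exponential_weight(age_in_weeks, half_life=10.0):
    """Alternative: a game loses half its weight every `half_life` weeks,
    so recent results dominate and the ratings react faster to changes in form."""
    return 0.5 ** (age_in_weeks / half_life)

for age in (0, 4, 12, 26, 52):
    print(age, round(linear_weight(age), 2), round(exponential_weight(age), 2))
```

The trade-off is responsiveness versus noise: decay the weights too aggressively and a single freak result (a one-in-twenty Munster performance, say) swings the ratings far more than it should.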
On another note, I promised last week to compare the Crystal Egg's predictions with bookies' predictions as another test of accuracy. I've done this for the 2013–14 season by comparing the average bookmaker's odds on each game with my model's prediction (disclaimer: these "predictions" were made by the model after the games in question had been played, so they shouldn't carry as much weight as predictions made before a game. Nevertheless, it's interesting to take a look while we're still waiting for the rest of this season).
Decimal odds can be converted into implied probabilities simply by inverting them: odds of 2.0 on an event imply a probability of 50%, odds of 1.5 imply a chance of about 67%, and so on. In the case of bookies, however, a further adjustment is needed, because they run what is known as a Dutch book: the implied probabilities across all outcomes add up to more than 100%, so that nobody can guarantee a profit by betting on every possible combination of outcomes. Rescaling these figures so that they sum to exactly 100% gives the effective prediction the bookmakers are making, and allows a direct comparison with the model.
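In code, that conversion is a one-liner plus a rescaling step. A minimal sketch (the odds below are made up for illustration):

```python
def implied_probabilities(decimal_odds):
    """Turn a set of decimal odds on the possible outcomes of a game
    into probabilities that sum to exactly 1, stripping out the bookie's margin."""
    raw = [1.0 / odds for odds in decimal_odds]   # naive implied probabilities
    book_total = sum(raw)                         # > 1.0 because of the margin
    return [p / book_total for p in raw]

# e.g. home 1.5, draw 21.0, away 3.1 (invented numbers)
print(implied_probabilities([1.5, 21.0, 3.1]))
# -> roughly [0.64, 0.05, 0.31]
```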
The results are shown in the graph below, which plots the probability of a home win for each game in last year's Pro 12 as predicted both by the bookies and by my model. The two are very highly correlated (r = 0.93, for those interested in that sort of thing), with only one or two outliers where the model was markedly more or less optimistic than the bookies. The average difference between the two was only 7%, and they had similar success rates too – 74% for the Egg, 73% for the bookies.
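For completeness, the two comparison figures quoted above (the correlation and the average gap) can be computed along these lines – again with placeholder numbers rather than the real 2013–14 data:

```python
import statistics

def compare_predictions(model_probs, bookie_probs):
    """Pearson correlation and mean absolute difference between two sets of
    home-win probabilities (statistics.correlation needs Python 3.10+)."""
    r = statistics.correlation(model_probs, bookie_probs)
    mean_gap = sum(abs(m - b) for m, b in zip(model_probs, bookie_probs)) / len(model_probs)
    return r, mean_gap

# Placeholder probabilities, one pair per game
model  = [0.72, 0.55, 0.90, 0.40, 0.65]
bookie = [0.68, 0.60, 0.85, 0.45, 0.70]
print(compare_predictions(model, bookie))
```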
All of this is fascinating in that two independent methods could arrive at about the same level of accuracy. Unfortunately it also means that getting an edge on the bookies may end up being a fool’s errand – surprise surprise! On that optimistic note, I’ll leave you until the next set of quixotic predictions on Friday.