Quote:
Originally Posted by JerryBoyle
For something like horse racing, where your selections are multi-class instead of binary, the Brier score is going to range from 0 to 2, with 0 being best and 2 being worst. It's best not to think of "good" vs "bad" in an absolute sense, but rather to ask: is this model's score better than the previous model's? That said, in my personal experience, I had a model which was about break-even over ~5k races with a Brier score of .773. One of the downfalls of the Brier score is that it doesn't score your predictions relative to other bettors, which is what we would want if we plan to bet.
Personally, for scoring my predictions without actually running a betting simulation, I prefer something called Bayesian Information Reward. It's a natural fit for gambling because it compares your estimates to the public's: it rewards you when your prediction was greater than the public's and the event happens (or when your prediction was less than the public's and the event does not happen), and penalizes you in the opposite cases. The actual formula and a discussion of its properties can be found here: http://users.monash.edu/~korb/shadowfax/pubs/ai02.pdf. The formula is Definition 3 on page 4.
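The reward the quoted poster describes can be sketched in code. This is an assumption based on the description above (reward when your probability beats the public's on events that happen, penalty otherwise), implemented here as a log-ratio form; check Definition 3 in the linked paper for the exact formula. The function name and argument layout are my own.

```python
import math

def bayesian_information_reward(preds, publics, outcomes):
    """Mean per-event reward of our probabilities against the public's.

    Assumed log-ratio form (not taken verbatim from the paper):
    log(p/q) when the event happens, log((1-p)/(1-q)) when it doesn't.
    Positive when we beat the public, negative when the public beats us.
    """
    total = 0.0
    for p, q, won in zip(preds, publics, outcomes):
        if won:
            total += math.log(p / q)       # we said more likely, and it happened
        else:
            total += math.log((1 - p) / (1 - q))  # we said less likely, and it didn't
    return total / len(preds)
```

For example, if you price a horse at 0.30, the public prices it at 0.25, and it wins, the reward log(0.30/0.25) is positive; had it lost, the same estimates would be penalized. Matching the public exactly scores zero, which is the "no edge" baseline.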
Thanks for the reply.
I must be doing something different from you. I followed the link the other poster provided and worked through the instructions. Here are the Brier scores I came up with:
.099
.116
.078
.065
I used 1 for the winner and 0 for the losers.
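That convention (outcome 1 for the winner, 0 for each loser) is how the multi-class Brier score from the quoted post is usually computed. A minimal sketch, with a made-up `races` input format of (probability list, winner index) pairs:

```python
def brier_score(races):
    """Mean multi-class Brier score over a list of races.

    Each race is (probs, winner_index), where probs sum to 1 across
    the runners. Per race: sum over runners of (p_i - o_i)^2, with
    o_i = 1 for the winner and 0 for each loser. The per-race score
    ranges from 0 (best) to 2 (worst), as in the quoted post.
    """
    total = 0.0
    for probs, winner in races:
        total += sum((p - (1.0 if i == winner else 0.0)) ** 2
                     for i, p in enumerate(probs))
    return total / len(races)
```

A perfect prediction scores 0; putting all probability on a loser scores 2. A uniform 0.25 on each of four runners scores (0.25-1)^2 + 3*(0.25)^2 = 0.75 per race, which gives a sense of where a sub-0.1 average sits relative to the 0-2 scale.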