Jeff,
I love the idea of local models. However, there are some real issues I've not been able to get past.
1. There are simply too many models to develop.
It is one thing to say, "Here's an AQU model." But, logically, the slicing and dicing continues after that.
Obviously, we're not going to handicap a 4f dash for 2yr olds the same way we'd handicap a Graded Stakes on the turf at 9f.
This slicing/dicing is what really expands the system/model list.
2. There are too many models to support.
We had a user a few years ago who was managing 1,300 "systems." Each one was built for a specific track-surface-distance.
The HSH software automatically handles which system fires for a given race. That part was easy.
The problem comes when you have to decide whether a model needs to be rebuilt. Is it a random downturn that is just a normal aberration, or is the model itself fundamentally flawed?
When you start tossing around system counts like 1,300, it becomes a full-time job just to manage those models and decide when something is really wrong.
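To make the "aberration or flaw?" question concrete, here's a minimal sketch of one way to triage it: compare a model's recent results against its long-run hit rate with a simple binomial tail check, and only flag models whose slump would be rare under normal variance. All names here are hypothetical illustrations, not anything from the actual HSH software.

```python
# Hedged sketch: is a model's recent downturn just variance, or worth a rebuild?
# Assumes you track win/loss outcomes per model; names are hypothetical.
from math import comb

def binomial_tail(n: int, k: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p): chance of doing this badly or worse."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def needs_review(long_run_hit_rate: float, recent_wins: int,
                 recent_plays: int, alpha: float = 0.05) -> bool:
    """Flag the model if a run this bad would be rare under the long-run rate.
    A flag means 'look closer', not 'the model is broken'."""
    return binomial_tail(recent_plays, recent_wins, long_run_hit_rate) < alpha

# A model that hits 30% long-run goes 3-for-40 recently: flagged.
print(needs_review(0.30, 3, 40))
# The same model goes 10-for-40: within normal variance, not flagged.
print(needs_review(0.30, 10, 40))
```

With 1,300 systems, even a crude screen like this at least tells you which handful to look at first, instead of eyeballing every equity curve.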
What most of us have done...
Most of our HSH users have gone to a completely dynamic, race-by-race modelling system. That is, when you open a race, HSH queries the database for races "like this one" (based upon whatever filtering parameters you've created).
Then the software builds a system from that data, based upon the factors you've selected. (There can be static factors involved as well.)
In this way, we really only have a single system to maintain!
Of course, we can still slice and dice the results to determine where our strengths and weaknesses are, and then address them across the entire approach.
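For anyone wanting to picture the dynamic approach, here's a minimal sketch: filter a race database down to races "like this one," then build a throwaway system from that sample on a chosen factor. Every name here (the `Race`/`Starter` fields, the distance tolerance, the speed-figure factor) is a hypothetical stand-in, not the actual HSH schema or filter set.

```python
# Hedged sketch of dynamic, race-by-race modelling: query for similar races,
# then build a one-off system from the matched sample. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Starter:
    speed_fig: int   # stand-in for whatever factor(s) you've selected
    won: bool

@dataclass
class Race:
    track: str
    surface: str
    distance_f: float
    starters: list = field(default_factory=list)

def similar_races(db, target, dist_tolerance=0.5):
    """'Races like this one': same surface, distance within a tolerance.
    Real filters would be whatever parameters the user has configured."""
    return [r for r in db
            if r.surface == target.surface
            and abs(r.distance_f - target.distance_f) <= dist_tolerance]

def build_model(races, top_n=3):
    """Build a throwaway 'system' from the matched sample: here, the win
    rate of starters ranked in the top N on one factor (speed figure)."""
    hits = plays = 0
    for r in races:
        ranked = sorted(r.starters, key=lambda s: s.speed_fig, reverse=True)
        for s in ranked[:top_n]:
            plays += 1
            hits += s.won
    return hits / plays if plays else 0.0
```

The point of the sketch is the shape, not the factor: one generic pipeline (filter, then fit) replaces 1,300 static systems, so maintenance collapses to tuning the filters and factors in one place.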
Dave