Your post suggests an interesting concept and contest possibility.
Your figures are year to date (excluding January). Betting public favorites lost $0.163 for every $1.00 wagered, which is roughly the national takeout average in the wake of fairly recent takeout reductions at major tracks (AQU, BEL, GP).
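To make the arithmetic behind that figure concrete, here is a minimal sketch of the flat-bet ROI calculation implied above. The dollar amounts in the example are hypothetical, chosen only to reproduce the cited loss of $0.163 per $1.00:

```python
def flat_bet_roi(total_wagered, total_returned):
    """Net gain or loss per $1.00 wagered (negative = loss)."""
    return (total_returned - total_wagered) / total_wagered

# Hypothetical season: $1,000 in $1 win bets on favorites returning $837
roi = flat_bet_roi(1000.0, 837.0)
print(roi)  # -0.163, i.e. a loss of $0.163 per $1.00 wagered
```

The same calculation applied to a software product's picks gives a directly comparable number, which is what makes it a natural yardstick.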
Isn't this the minimum baseline performance that users should expect from handicapping software? If a product doesn't equal or exceed the performance of the public favorite, adding user expertise will be an uphill battle. For the money, users should get at least what is freely available to anyone who looks at a toteboard.
So why wouldn't the exact type of statistics that you presented be the best yardstick for measuring baseline software performance?
If you entered your software, which beats the performance of public favorites, users would see that something of value was being offered without having to rely on marketing hype.