03-05-2014, 11:16 PM
#61
Registered User
Join Date: Nov 2011
Location: Lecanto, Florida
Posts: 740
Interesting Development
Using Mitchell's original method and adding a new factor to it, the results
are promising. I used the Bris 2nd-call pace number that was highest in a good
race among the last 3 races. If the horse had no good finishes in the last 3,
he gets a goose-egg: 0.
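The factor jerry-g describes can be sketched in a few lines, as a rough illustration rather than his actual spreadsheet logic. The dict keys (`pace2`, `finish`) and the definition of a "good" race as a finish of 3rd or better are my assumptions; the post doesn't spell out what counts as "good".

```python
def pace_factor(last_races, good_finish_max=3):
    """Highest Bris 2nd-call pace figure among the last 3 races
    in which the horse had a 'good' finish; 0 (the goose-egg) if
    none qualify. last_races is ordered most recent first, e.g.
    {"pace2": 92, "finish": 2}."""
    good = [r["pace2"] for r in last_races[:3]
            if r["finish"] <= good_finish_max]
    return max(good, default=0)
```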
The first choice was Backyard Kitten, who won and paid $10.80.
The second choice was Doherty, who ran second. The exacta paid $39.40.
The third choice was Piceance, who ran 3rd, and the trifecta paid $87.00.
The file for 3rd race Gulfstream is attached.
__________________
If at first you don't succeed....don't go Sky diving!
03-05-2014, 11:19 PM
#62
EXCEL with SUPERFECTAS
Join Date: Mar 2004
Posts: 10,206
Quote:
Originally Posted by jerry-g
Using Mitchell's original method and adding a new factor to it, the results
are promising. I used the Bris 2nd-call pace number that was highest in a good
race among the last 3 races. If the horse had no good finishes in the last 3,
he gets a goose-egg: 0.
The first choice was Backyard Kitten, who won and paid $10.80.
The second choice was Doherty, who ran second. The exacta paid $39.40.
The third choice was Piceance, who ran 3rd, and the trifecta paid $87.00.
The file for 3rd race Gulfstream is attached.
Funny, I had already decided, after testing DeD, to split the difference between Aqu and DeD, and test GP next - LOL. I'll try to do that tomorrow.
03-05-2014, 11:45 PM
#63
EXCEL with SUPERFECTAS
Join Date: Mar 2004
Posts: 10,206
Quote:
Originally Posted by classhandicapper
You might be able to improve the "hit rate" by insisting on good current form (like best last race of the top 3 or something like that).
My experience with testing potential methods is that current form analysis always improves results. The problem is in implementation: scaling the form number properly and weighting it against the other factors in the method. I have a form cycle number and an improve/decline/static indicator, but putting them into this method would be tough if the method is to stay entirely mechanical and automated.
Maybe, after I finish testing several more tracks with the core CCR method, I'll see what I can come up with on a finite form number and weighting.
03-06-2014, 09:29 AM
#64
Registered User
Join Date: Mar 2005
Location: Queens, NY
Posts: 20,610
Quote:
Originally Posted by raybo
My experience with testing potential methods is that current form analysis always improves results. The problem is in implementation: scaling the form number properly and weighting it against the other factors in the method. I have a form cycle number and an improve/decline/static indicator, but putting them into this method would be tough if the method is to stay entirely mechanical and automated.
Maybe, after I finish testing several more tracks with the core CCR method, I'll see what I can come up with on a finite form number and weighting.
One of the reasons I like to use some kind of current form metric is that many people intuitively compare the win% of the method they are testing to the win% of favorites or "top speed figure last out" and things like that. Then they get disappointed. But things like "top figure last out" or "best of last 2" already have some element of current form built into them. So you want to put them on equal footing to see how predictive the factor really is.
__________________
"Unlearning is the highest form of learning"
03-06-2014, 10:13 AM
#65
EXCEL with SUPERFECTAS
Join Date: Mar 2004
Posts: 10,206
Quote:
Originally Posted by classhandicapper
One of the reasons I like to use some kind of current form metric is that many people intuitively compare the win% of the method they are testing to the win% of favorites or "top speed figure last out" and things like that. Then they get disappointed. But things like "top figure last out" or "best of last 2" already have some element of current form built into them. So you want to put them on equal footing to see how predictive the factor really is.
Understand, we want to add factors that are as independent of the existing factors as possible. Fortunately, my own program doesn't use any recency or past finish position data in its methods, so I am free to use those as form cycle and improve/decline/static additional factors. This CCR method is not so lucky, as it already considers finish position and recency.
03-06-2014, 04:40 PM
#66
Registered User
Join Date: Mar 2005
Location: Queens, NY
Posts: 20,610
Quote:
Originally Posted by raybo
Understand, we want to add factors that are as independent of the existing factors as possible. Fortunately, my own program doesn't use any recency or past finish position data in its methods, so I am free to use those as form cycle and improve/decline/static additional factors. This CCR method is not so lucky, as it already considers finish position and recency.
How about if we change the weighting of the races and put more weight on the most recent race and progressively less as you go back?
So a win last out would be worth more than a win 4 or 5 races ago even if at the same class.
That's the way I think about it intuitively anyway. It wouldn't introduce a new factor but it would tend to push the result towards good recent form.
__________________
"Unlearning is the highest form of learning"
03-06-2014, 05:06 PM
#67
EXCEL with SUPERFECTAS
Join Date: Mar 2004
Posts: 10,206
Quote:
Originally Posted by classhandicapper
How about if we change the weighting of the races and put more weight on the most recent race and progressively less as you go back?
So a win last out would be worth more than a win 4 or 5 races ago even if at the same class.
That's the way I think about it intuitively anyway. It wouldn't introduce a new factor but it would tend to push the result towards good recent form.
That would work, as you are not introducing a new factor that duplicates what's already included in your existing factors; you're just re-weighting an existing factor.
03-06-2014, 07:53 PM
#68
Registered User
Join Date: Nov 2011
Location: Lecanto, Florida
Posts: 740
Mitchell weighted his method 40-30-30 percent of the SV (scaled value)
for ES, W%, and I$%. The SV is used so that he can get a number between
60 and 100 for the CCR; that's why the min/max fields are in the template. He
never used the last six races for W, P, S and stayed with what was in the
box. The last-six-races thing was what Pitlak came up with as an improvement.
One problem with the number is that horses are only a max of 40 points
apart. This compresses the ratings relative to the odds line and makes more horses tied.
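As a sketch of how that scaling plays out: min/max-scale each raw factor into 60-100 using the field's min/max, then take the 40-30-30 blend. The exact scaling formula here is my reading of the "SV" and the min/max fields, not something spelled out in the thread, and the dict layout is mine.

```python
def scale_value(x, lo, hi, out_lo=60.0, out_hi=100.0):
    """Min/max-scale a raw factor into 60-100 (the 'SV'); lo/hi are
    the field's min and max for that factor."""
    if hi == lo:
        return out_hi  # whole field identical on this factor
    return out_lo + (x - lo) / (hi - lo) * (out_hi - out_lo)

def ccr(es, win_pct, money_pct, ranges):
    """40-30-30 blend of the scaled ES, W%, and I$% values.
    ranges: {"es": (lo, hi), "w": (lo, hi), "i": (lo, hi)}."""
    return (0.40 * scale_value(es, *ranges["es"])
            + 0.30 * scale_value(win_pct, *ranges["w"])
            + 0.30 * scale_value(money_pct, *ranges["i"]))
```

Since each SV is bounded by 60 and 100, the blended CCR is too, which is exactly why the top and bottom horses can never be more than 40 points apart.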
It seems to me that the E/S is the problem, but when I open that box I get
on a sort of "TiltaWhirl" thinking about it. Horses that won their last out or
next-to-last out and are moving up in class deserve some recognition for class,
as this is the normal progression, so long as they are not moving up too much
just for conditioning purposes. Rating that type of win higher just makes sense to me.
I believe the answer is there, but it's not very apparent and will require some
think-tanking to get to a relevant weight.
__________________
If at first you don't succeed....don't go Sky diving!
03-06-2014, 10:09 PM
#69
EXCEL with SUPERFECTAS
Join Date: Mar 2004
Posts: 10,206
3rd Mitchell CCR track test. I decided to try to get in the middle of 2 extremes, Aqu and DeD, so GP sounded about right. I expected it to perform somewhere between the other 2 tracks, and it did. Again, results were pretty consistent throughout the test, with a slightly larger spread between the 1st pick hit rate and the 3rd pick hit rate, about 8 points versus 6 points for the other 2 tracks. That is probably best explained by the multitude of turf races run at GP.
The test ran from 2/1/2013 through 4/5/2013 and continued from 9/1/2013 through 3/5/2014. The method played 795 races (it passed any race in which 20+% of the field had no distance- or surface-qualified races).
The combined 3 horse hit rate ranged from 48% to 54% and the ROI ranged from 0.70 to 0.79, with the final accumulated hit rate at 52.08% and the ROI at 0.75.
The individual top 3 picks hit rates stayed fairly consistent throughout also, and ended the test at 22.01% for the top pick, 16.35% for 2nd pick, and 13.71% for 3rd pick - a spread from 1st to 3rd picks of about 8%.
I'm still a bit amazed at the consistency of this method, even though it is nowhere near profitable, but I'm starting to expect it now. The consistency should mean something worthwhile; I just haven't put my finger on it yet. IMO, consistency in handicapping (and wagering, of course) is a very valuable thing, if you can just figure out how to make it work for you. At least you're starting with a stable platform with CCR, which is more than can be said for most other methods or systems.
Here's the screenshot of the summary report:
Last edited by raybo; 03-06-2014 at 10:12 PM.
03-07-2014, 10:14 PM
#70
EXCEL with SUPERFECTAS
Join Date: Mar 2004
Posts: 10,206
One more test (actually 2 in 1), then I'll move on to the "improved" CCR stuff and test it.
This one is for Arlington Park, moving from the Northeast and Southeast to the northern Midwest. I didn't really know what to expect at AP, but thought it would be somewhere between Aqu and GP. And it was.
I tested the whole 2012 meet first, from 5/5/2012 through 9/30/2012. This was 83 cards and the method played a little over 500 races.
The consistency I saw at the other tracks was there again at AP. Combined hit rates hovered around 47-49% and ROI was between 0.72-0.78. The final hit rate for the meet was 49.50% and the ROI was 0.76. The individual picks' hit rates ended up at 20%, 19% and 11%, for 1st, 2nd, and 3rd picks respectively, a difference between 1st and 3rd picks of about 9%.
The 2nd test was for the whole 2013 meet, about the same number of cards and played races as 2012. The combined picks' hit rate was around 52-55% for most of the meet, and the ROI was around 0.81 to 0.85 throughout. The final hit rate was 54.09% and the ROI was 0.83. Individual picks' hit rates were 22%, 16%, and 16% respectively, and the spread between the 1st and 3rd picks' hit rates was about 6%.
Combining the 2 meets I got a combined hit rate of 51.8% and an ROI of 0.795, with the individual picks' hit rates at 21%, 17.5%, and 13.5%, a difference between the 3 picks of 7.5%.
Don't know why there was such a difference between 2012 and 2013, whether it was just normal variance or something "physically" changed at AP between the 2012 and 2013 meets: a surface change, weather differences, changes in the horses/trainers/jockeys coming from feeder tracks or leaving for other tracks. My gut feeling is that it was just normal variance in general conditions at the track.
03-17-2014, 06:51 PM
#71
Registered User
Join Date: Mar 2005
Location: Queens, NY
Posts: 20,610
Any more progress on this with weighting the recent races or anything like that?
__________________
"Unlearning is the highest form of learning"
03-18-2014, 12:47 PM
#72
EXCEL with SUPERFECTAS
Join Date: Mar 2004
Posts: 10,206
Quote:
Originally Posted by classhandicapper
Any more progress on this with weighting the recent races or anything like that?
I haven't yet added a current form metric to the original Mitchell CCR.
What we need is an algorithm that produces a "number" reflecting projected "improvement" off the last race, "decline" off the last race, or "static" (no change) off the last race. To do this fairly we need to look at distance changes, track changes and the resulting surface speed changes, and days off since the last race and whether or not that layoff is logical under the circumstances of that last race. Do we look at early speed at a longer distance in that last race, or late speed at a shorter distance, as indicators of projected improvement at the new distance? Do we consider the pace scenario in that last race, running style versus fast/slow pace, and project improvement after a less-than-ideal pace shape, or project decline after an ideal pace shape? I could go on and on, but the bottom line is that current form assessment is not easily turned into a number, and even if you accomplish that, you still have to decide how to weight that number when combining it with the existing base number (CCR).
Will a much more generalized form analysis hold up at different tracks, distances, surfaces, etc? I have doubts that it will, so we would then need to break out by track/surface and distance, which leads to much smaller samples.
I tend to think that all this will have to be track specific, at least, and that not doing so will not noticeably improve the hit rate of the core CCR numbers. I can test that in my program easily, but the larger problem is what factors we use in the form analysis.
Any ideas as to what those factors should be? Necessarily, they will have to be "hard" factors that are available in the public data. The easy thing would be to look at the last few races, hoping that they are truly recent, and do some "averaging" of finish positions and beaten lengths, etc. But is that going to be enough to significantly improve the core CCR hit rate?
In the program I am using for these tests, I have a form cycle number from 1 to 8, 1 being best and 8 being worst. I also have an indicator of improvement, decline, or no change in the last race ("+", "-", and "=" respectively). In order to use those 2 things with the CCR number, each would have to be weighted relative to the CCR (and the improve/decline/static indicator converted to a number first, maybe +1, -1, and 0 respectively).
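A minimal sketch of how those two numbers might bolt onto the CCR. The conversion of "+"/"-"/"=" to +1/-1/0 follows the post; the two weights and the re-centering of the 1-8 cycle are placeholder choices of mine that would have to be fitted by testing.

```python
def adjusted_ccr(ccr_value, form_cycle, indicator, w_form=2.0, w_ind=3.0):
    """Add the form cycle number (1 best .. 8 worst) and the
    improve/decline/static indicator ('+', '-', '=') to a base CCR.
    w_form and w_ind are placeholder weights, not thread values."""
    ind_num = {"+": 1, "-": -1, "=": 0}[indicator]
    form_score = 4.5 - form_cycle  # re-center: 1 -> +3.5, 8 -> -3.5
    return ccr_value + w_form * form_score + w_ind * ind_num
```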
03-18-2014, 01:47 PM
#73
Registered User
Join Date: Mar 2005
Location: Queens, NY
Posts: 20,610
To start, I would just rate each of the races in the same way you are now with earnings but with different weights for each to see if that adds any value at all.
Maybe start with just last 3 races weighted at 57, 29, and 14 and see if that moves the needle at all.
Ideally you'd have a very elaborate form analysis like you suggest, but that will take a lot of time.
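The 57/29/14 idea above, sketched as a simple weighted average. The per-race `race_ratings` inputs (most recent first) would be whatever earnings-based numbers the method already produces; the renormalization for horses with fewer than 3 rated races is my addition.

```python
def recency_weighted_rating(race_ratings, weights=(57, 29, 14)):
    """Weight the last 3 races' ratings 57/29/14, most recent first.
    zip() truncates at the shorter sequence, so horses with fewer
    rated races get the weights renormalized over what they have."""
    pairs = list(zip(race_ratings, weights))
    total_w = sum(w for _, w in pairs)
    return sum(r * w for r, w in pairs) / total_w
```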
__________________
"Unlearning is the highest form of learning"
03-18-2014, 03:26 PM
#74
EXCEL with SUPERFECTAS
Join Date: Mar 2004
Posts: 10,206
Quote:
Originally Posted by classhandicapper
To start, I would just rate each of the races in the same way you are now with earnings but with different weights for each to see if that adds any value at all.
Maybe start with just last 3 races weighted at 57, 29, and 14 and see if that moves the needle at all.
Ideally you'd have a very elaborate form analysis like you suggest, but that will take a lot of time.
Yeah, that sounds like a good place to start. I'll have to see how those weightings should be applied; from the earlier posts, I assume. I really didn't fully analyze what was in those earlier posts, as I was sticking with the basic Mitchell CCR stuff for track testing at the time.
12-22-2014, 11:06 AM
#75
Registered User
Join Date: Mar 2005
Location: Queens, NY
Posts: 20,610
Giving this one a bump because there are some good ideas in this thread.
__________________
"Unlearning is the highest form of learning"