
View Full Version : Quantum picks. A report


formula_2002
09-26-2006, 04:32 PM
Quantum picks. A report

176,606 horses are in the current database.
Quantum picks 6899 horses as plays.

The 6899 plays returned $17,506 in the win pool on $2.00 bets ($13,798 wagered), $14,153 in the place pool, and $13,329 in the show pool.
The ROIs were 1.26, 1.025, and 0.97 respectively.

There were 1239 actual winners and 1103 “expected” winners.
“Expected” winners = the sum of the booking percentages, 1/(odds + 1).
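As a sketch of the “expected winners” calculation defined above (Python used for illustration; the odds are hypothetical):

```python
def expected_winners(odds_list):
    """Sum of booking percentages, 1/(odds + 1), over a list of
    final win odds (e.g. 5.0 means 5-1)."""
    return sum(1.0 / (odds + 1.0) for odds in odds_list)

# Three hypothetical plays at 2-1, 3-1, and 9-2:
print(round(expected_winners([2.0, 3.0, 4.5]), 3))  # 0.765
```

Comparing that sum to the actual winner count (1239 vs. 1103 here) is a quick check of whether the picks beat the odds line.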

These 6899 plays occurred within 4264 races.
That works out to a winning-race percentage of 29% (1239/4264).

Note: this is an "old and tired" database.
Hopefully, the large quantity of plays has overcome any "back-fitting" bias!! ;)

formula_2002
09-26-2006, 06:04 PM
http://www.paceadvantage.com/forum/showthread.php?t=31229

formula_2002
10-01-2006, 10:58 AM


Worth a little follow up.
I posted picks in the "Selections Forum" from 9/26 through 9/30 with the following results:

18 wins, 26 expected wins, in 146 plays, returning a flat-bet ROI of 0.51 and a book-percentage-bet ROI of 0.69.
The final plays differ a bit from the posted plays due to scratches.
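The two ROI figures above can be sketched as follows (Python for illustration; the payoffs and odds are hypothetical, and the book-percentage stake scaling is one common reading of "book percentage bet"):

```python
def flat_roi(payoffs, bet=2.0):
    """ROI betting a flat $2 on every play; payoffs are the $2 returns,
    with 0.0 for losing plays."""
    return sum(payoffs) / (bet * len(payoffs))

def book_pct_roi(payoffs, odds, base=2.0):
    """ROI when each play's stake is scaled by its booking percentage,
    1/(odds + 1), so short-priced horses carry more of the bankroll."""
    stakes = [base / (o + 1.0) for o in odds]
    returns = [p * s / base for p, s in zip(payoffs, stakes)]
    return sum(returns) / sum(stakes)

# Hypothetical: a 2-1 winner paying $6.00, plus losers at 4-1 and 9-1.
payoffs = [6.0, 0.0, 0.0]
odds = [2.0, 4.0, 9.0]
print(round(flat_roi(payoffs), 2), round(book_pct_roi(payoffs, odds), 2))  # 1.0 1.58
```

When the winners skew toward shorter prices, the book-percentage ROI runs above the flat-bet ROI, as in the posted 0.69 vs. 0.51.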

ryesteve
10-01-2006, 11:39 AM
Hopefully, the large quantity of plays has overcome any "back fitting" bias!!
If you're reporting on the races on which you developed the model (which, given the volume of results you're reporting, appears to be the case) there's always going to be a back fitting bias.

PlanB
10-01-2006, 11:57 AM
I don't agree with the Always a Back-Fitting Bias idea. There's no math to back up
its INEVITABILITY. Yes, it often happens, mostly with small samples taken
non-randomly, but it needn't happen, and besides, there are always correction
estimates, just as there are with IVs' non-independence, except FEW seem
to know how to do it or care to do it.

garyoz
10-01-2006, 12:19 PM
Obviously need to split the sample in half, and use half to build the model and the other half to test. Then let's see the stats. Also should be done by a third or disinterested party.
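The split-sample validation suggested here can be sketched as (Python for illustration; the race records are stand-ins):

```python
import random

def split_sample(races, seed=42):
    """Randomly split races into a development half (for building the
    model) and a holdout half (for testing it)."""
    shuffled = list(races)
    random.Random(seed).shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

races = list(range(4264))          # stand-ins for the 4264 race records
dev, holdout = split_sample(races)
assert set(dev).isdisjoint(holdout)  # no race appears in both halves
print(len(dev), len(holdout))        # 2132 2132
```

The key point is that the holdout half never influences any modeling decision; otherwise it stops being an independent test.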

formula_2002
10-01-2006, 12:20 PM
I don't agree with the Always a Back-Fitting Bias idea. There's no math to back up
its INEVITABILITY.


What is inevitable? Is it inevitable that the 140+ plays returned an ROI roughly 50% less than the sample's?

Or is it inevitable that over 1,000,000,000 plays, the ROI will tend to equal 1 minus the track takeout?

formula_2002
10-01-2006, 12:26 PM
If you're reporting on the races on which you developed the model (which, given the volume of results you're reporting, appears to be the case) there's always going to be a back fitting bias.

I think the 7000+ plays only represented about 3% of all the horses. Perhaps one could better predict the winners if the plays represented a larger portion of the horses. Question: what would that percentage be?

formula_2002
10-01-2006, 12:35 PM
Obviously need to split the sample in half, and use half to build the model and the other half to test. Then let's see the stats. Also should be done by a third or disinterested party.

Well, I did even more splitting than that.
I would say about 4 or 5 splits.
But the problem seems to be that you cannot keep working with the same database. As I said, it's an old and tired database. After a while you inherently figure out what works..
And even after you play around with fresh data, you tend to tweak the old "formula" to fit the new data.
And the process has to begin all over again.
I added another 50,000 horses to the 175,000 study and still get good results. But I had access to those 50,000 horses previously. I'm sure they have influenced my formula..

PlanB
10-01-2006, 12:37 PM
Maybe you know where I stand on this issue: I think taking 7K races from
your large DB (~3% as you said) is a VERY WISE DECISION. But, I would
like you to take 100 such 7K samples and plot the distribution of such samples.
7K is a lot of races to really play, but more likely than 250K races. LOL, if
your puter can just check that out.

formula_2002
10-01-2006, 12:44 PM
But, I would
like you to take 100 such 7K samples and plot the distribution of such samples.
7K is a lot of races to really play, but more likely than 250K races. LOL, if
your puter can just check that out.
I would need 17,500,000 horses to find 100 sets of 7K plays.
Better yet, suppose I find the probability of obtaining a 65% ROI over 140 plays within my 7000 plays.
Perhaps it's normal ;)

PlanB
10-01-2006, 12:53 PM
LOL, now that's a lotta horses. Just pick a race & toss it back into the pool;
yeah, of course then you might get the same race more than once, but what
of it. Make it truly "random" picks & you'll always know the Pr of selecting
any 1 race. Last Minute Reflection: cut it back to 2K samples.

traynor
10-01-2006, 02:50 PM
Maybe you know where I stand on this issue: I think taking 7K races from
your large DB (~3% as you said) is a VERY WISE DECISION. But, I would
like you to take 100 such 7K samples and plot the distribution of such samples.
7K is a lot of races to really play, but more likely than 250K races. LOL, if
your puter can just check that out.


Run a bootstrap on it, extracting 100 random samples without replacement. Print a chart of the results. If the resulting data points are all roughly the same height, you can be fairly assured that data is representative, no matter what kind of slice you take out of it. In Excel or SPSS, it takes VERY little time. It also eliminates a lot of conjecture.

Bootstrapping is really useful for testing large datasets. Unfortunately, a lot of researchers try to extrapolate from testing an inappropriately small sample; it works most effectively in analyzing larger samples for consistency.
Good Luck :)
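The procedure traynor describes can be sketched as follows (Python for illustration; the payoff data are hypothetical). Note that a classical bootstrap resamples *with* replacement; this follows the without-replacement variant described above, i.e. repeated random subsampling:

```python
import random
import statistics

def subsample_rois(returns, n_samples=100, sample_size=140, seed=1):
    """Draw repeated random subsamples (each without replacement, per the
    suggestion above) and return the ROI of each subsample.
    `returns` holds the $2 payoff of each play, 0.0 for losers."""
    rng = random.Random(seed)
    rois = []
    for _ in range(n_samples):
        picks = rng.sample(returns, sample_size)
        rois.append(sum(picks) / (2.0 * sample_size))
    return rois

# Hypothetical 7000-play file: 18% winners, each paying $14.00 on $2.
plays = [14.0] * 1260 + [0.0] * 5740
rois = subsample_rois(plays)
print(round(statistics.mean(rois), 2), round(statistics.stdev(rois), 2))
```

If the plotted subsample ROIs cluster tightly around the full-sample ROI, the data are internally consistent; wide scatter means any single slice is unreliable.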

formula_2002
10-01-2006, 04:35 PM
Traynor, I just ran an analysis of 14 sets of 140 plays each from the 7000+ plays.
I'd run more, but I just watched the Jets-Colts game and I'm trying to come down from the excitement.. wow, what a game!! Great football!

Is that similar to "bootstrapping"?

the ROIs for each of the sets look like the following:
0.91
0.69
0.90
0.80
0.87
1.06
0.82
0.91
0.72
1.11
0.97
0.79
0.86
That looks interesting... from that I can come up with the standard deviation and the confidence level... cool ;)
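That last step can be sketched directly from the ROI figures listed in the post (Python for illustration; the normal-approximation interval is one simple choice, not the only one):

```python
import statistics

# Per-set ROIs as listed above (13 values shown in the post).
rois = [0.91, 0.69, 0.90, 0.80, 0.87, 1.06, 0.82,
        0.91, 0.72, 1.11, 0.97, 0.79, 0.86]

mean = statistics.mean(rois)
sd = statistics.stdev(rois)            # sample standard deviation
# Rough 95% range for a single 140-play set, assuming normality:
low, high = mean - 1.96 * sd, mean + 1.96 * sd
print(round(mean, 3), round(sd, 3))    # 0.878 0.121
print(round(low, 2), round(high, 2))   # 0.64 1.11
```

So on these numbers a single 140-play set can plausibly land anywhere from roughly 0.64 to 1.11 ROI, which puts the posted 0.51 flat-bet result near, but below, the low end of that range.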

K9Pup
10-01-2006, 06:39 PM
Run a bootstrap on it, extracting 100 random samples without replacement. Print a chart of the results. If the resulting data points are all roughly the same height, you can be fairly assured that data is representative, no matter what kind of slice you take out of it. In Excel or SPSS, it takes VERY little time. It also eliminates a lot of conjecture.

Bootstrapping is really useful for testing large datasets. Unfortunately, a lot of researchers try to extrapolate from testing an inappropriately small sample; it works most effectively in analyzing larger samples for consistency.
Good Luck :)
I'm also interested in HOW to do these bootstraps. I downloaded an Excel add-in that does them. It looks to me like the input to the bootstrap is one variable and two statistics about that variable (e.g., its average and standard deviation). Then you tell the sheet to generate X number of samples. How would this be used on horse handicapping data? What would the variable be? And how does this differ from a Monte Carlo simulation? Thanks!!

formula_2002
10-01-2006, 08:49 PM
I used my dbase programming to determine the following:
33 of 49 sets of 140 plays (67%) returned a profit.

Never did a set return an ROI below 0.70.

ryesteve
10-01-2006, 09:12 PM
I think the 7000+ plays only represented about 3% of all the horses. Perhaps one could better predict the winners if the plays represented a larger portion of the horses.
That doesn't address the issue I was talking about... if these "plays" were used to develop your model, you need to validate it on independent data... like what you're doing in the Selections section.

formula_2002
10-02-2006, 03:32 AM
if these "plays" were used to develop your model, you need to validate it on independent data... like what you're doing in the Selections section.

I agree