Thread: Neural Networks
01-17-2009, 02:21 PM   #2
Dave Schwartz
Join Date: Mar 2001
Location: Reno, NV
Posts: 16,878
As someone who has written many neural nets from scratch over the past 20 years, I can tell you that there are inherent problems with neural nets.

There are three problems with NNs.

The first problem is that a NN, by its nature, wants to be perfect, and that makes it very sensitive to "bad" data.

What is "bad" data? Well, an obvious example of bad data would be a race where (say) the #1 horse turned right coming out of the gate and wiped out 3 other horses allowing a $168 winner to romp home.

When the NN trains, it assumes, logically, that in every race you gave it, the winner should come out as the best horse. In the case outlined above, the brain will do anything it can to "justify" that $168 winner.

It will even say that the reason this horse won the race is that he was ranked last in the field for (say) FT in the last race. That is not a reason for winning, but by rearranging its entire weighting system the net can put this horse on top (which is exactly what it will do).
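To make that concrete, here is a toy sketch of the effect. None of this is real handicapping code - the single "form" factor, the data, and every number are invented purely for illustration. One fluke result pulls the trained weight away from where the clean data put it:

[code]
# Toy sketch: one fluke winner drags the trained weight.
# Single made-up factor ("form"); label is 1 if the horse won.
import numpy as np

rng = np.random.default_rng(0)

# 200 ordinary horses: better form -> more likely to win.
form = rng.normal(0.0, 1.0, 200)
won = (form + rng.normal(0.0, 0.5, 200) > 1.0).astype(float)

def train(x, y, epochs=2000, lr=0.5):
    """One-unit 'net' (logistic regression) trained by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # sigmoid output
        w -= lr * np.mean((p - y) * x)          # log-loss gradient wrt w
        b -= lr * np.mean(p - y)                # log-loss gradient wrt b
    return w, b

print("weight on clean data: %.2f" % train(form, won)[0])

# Add one $168-style fluke: the worst-form horse in the sample "wins".
form_fluke = np.append(form, -4.0)
won_fluke = np.append(won, 1.0)
print("weight with fluke:    %.2f" % train(form_fluke, won_fluke)[0])
[/code]

The deeper the net, the worse this gets: with more weights to rearrange, it can fit the fluke exactly instead of just leaning toward it.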

Now, one can put in the time to remove the more obvious races... those with DQs. One should not just remove the big-price races - that would be folly. The real problem is that there are simply "atypical" races where the absolute, dead-bang, dead-nuts, can't-lose horse loses. Why did he lose? Not because of anything that was predictable by looking at the PPs.

One such horse that comes to mind was Bayakoa at Del Mar many years ago. As I recall, it was a 6-horse field and Bayakoa, a 1/5 or 2/5 favorite (who looked worth that price), finished out of the money. The horse was simply first for everything. How does the brain reconcile that sometimes the 1/20 horse loses?

The answer is that it can't - which brings us to the second problem.

With a NN, there is a threshold score (between 0.00 and 1.00) for an event happening. Take a basketball game. This is simplistic, but the way it typically works is that if the score is above 0.90 you bet the home team and below 0.10 you bet the visitor (or favorite/dog, whatever).

In other words, there is a 10% "margin for error." Now recall that the system is trying to get every "fact" right. Essentially, its goal is to get 100% of the games/races 90% right.

What we need is a system that tries to get 90% of the races 100% right. In other words, it needs to be able to say, "Hey, there are just some races that I can't get."
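In code terms, that decision rule looks something like this (the thresholds and wording are just the illustrative 0.90/0.10 from above):

[code]
# Bet only when the net is nearly sure; explicitly pass on the rest.
def decide(score, hi=0.90, lo=0.10):
    if score >= hi:
        return "bet home team (or favorite)"
    if score <= lo:
        return "bet visitor (or dog)"
    return "pass - this is one I can't get"

for s in (0.97, 0.55, 0.04):
    print("score %.2f -> %s" % (s, decide(s)))
[/code]

The point is the middle branch: standard NN training has no notion of "pass," so nothing pushes the scores of unpredictable races into that safe zone.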

Finally, there is a third, though less important, drawback. The system becomes good at predicting whatever it sees the most of. If you look at 100 races, about 800 horses, you will have 100 winners and 700 losers. A NN is much better at predicting losers than it is winners because that is where its experience lies.
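The arithmetic of that imbalance, plus one common workaround (class weighting - my suggestion here, not something a plain NN does on its own):

[code]
# The imbalance by the numbers: 100 races, about 800 horses.
winners, losers = 100, 700
total = winners + losers

# A net that calls every horse a loser is already this "accurate":
print("always-predict-loser accuracy: %.1f%%" % (100.0 * losers / total))

# One fix: weight the rare class so each class contributes equally
# to the training loss.
w_winner = total / (2.0 * winners)   # 4.00 per winner
w_loser = total / (2.0 * losers)     # about 0.57 per loser
print("loss weight per winner: %.2f, per loser: %.2f" % (w_winner, w_loser))
[/code]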


Genetic algorithms (GAs) are a better deal.
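Here is a bare-bones sketch of why, as I see it. Everything in it is invented for illustration - three made-up factors per horse, 8-horse fields, a simple hit-count fitness - where a real GA would use real factors and something like ROI. The key is that the fitness only rewards correct picks, so no chromosome is ever punished into "explaining" the $168 fluke:

[code]
# Bare-bones genetic algorithm over a set of factor weights.
import random

random.seed(1)

def make_race(n_horses=8, n_factors=3):
    field = [[random.gauss(0, 1) for _ in range(n_factors)]
             for _ in range(n_horses)]
    # True winner: best factor sum plus a big dose of racing luck.
    winner = max(range(n_horses),
                 key=lambda i: sum(field[i]) + random.gauss(0, 2))
    return field, winner

races = [make_race() for _ in range(200)]

def fitness(w):
    # Count races where the chromosome's top-rated horse actually won.
    hits = 0
    for field, winner in races:
        pick = max(range(len(field)),
                   key=lambda i: sum(wj * fj for wj, fj in zip(w, field[i])))
        hits += (pick == winner)
    return hits

# Evolve: keep the fittest third, refill with mutated copies.
pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [[g + random.gauss(0, 0.1)
                      for g in random.choice(parents)]
                     for _ in range(20)]

print("best chromosome hits %d of %d races" % (fitness(pop[0]), len(races)))
[/code]

Because the fitness never demands an explanation for every race, the GA can do exactly what the NN can't: get 90% of the races 100% right and shrug at the rest.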

Regards,
Dave Schwartz