Maybe I'm misunderstanding, but to find out whether the point setup is "optimal", wouldn't you have to run every possible combination of point setups against the actual results? That's essentially backfitting, but the program could be designed so the user could split the data -- maybe 75/25, or whatever is statistically best -- run all of the combos on the 75%, and then see whether the settings "go forward" by testing them on the remaining 25%. That way the point model isn't biased by the 25%, since that data wasn't included in the fit.
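The split-then-validate idea above could be sketched roughly like this. Everything here is hypothetical: the "point setup" is reduced to a single threshold parameter, and the data is synthetic; the point is just to show fitting on the 75% and checking on the held-out 25%.

```python
import random

random.seed(0)

# Synthetic data: (feature, outcome) pairs where the outcome loosely
# tracks the feature, with noise. Stand-in for real historical results.
xs = [random.uniform(0, 10) for _ in range(400)]
data = [(x, 1 if x + random.gauss(0, 2) > 5 else 0) for x in xs]

# 75/25 split, as suggested: fit on the first part, validate on the rest.
split = int(len(data) * 0.75)
train, holdout = data[:split], data[split:]

def accuracy(threshold, rows):
    # Fraction of rows where "x above threshold" matches the outcome.
    return sum((x > threshold) == bool(y) for x, y in rows) / len(rows)

# Exhaustively try every candidate "point setup" on the 75% only.
candidates = [t / 2 for t in range(21)]  # thresholds 0.0 .. 10.0
best = max(candidates, key=lambda t: accuracy(t, train))

print(f"best threshold on train: {best}")
print(f"train accuracy:   {accuracy(best, train):.3f}")
# The holdout score is the unbiased "does it go forward?" check,
# since these rows played no part in picking the threshold.
print(f"holdout accuracy: {accuracy(best, holdout):.3f}")
```

If the train accuracy is high but the holdout accuracy collapses, the chosen setup was fit to noise in the 75% rather than to anything that generalizes.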