Quote:
Originally Posted by thehef
- If you are using real transactions (thus, players playing in the same ballparks as in real life) and real stats, then the effect will be..
Since you're still lacking an answer despite multiple requests, let me attempt one based on my understanding of what has been posted, observations of ratings, and testing.
Assuming one real recalc and no TCR of development, a player's rating is adjusted using the park factor information in the game's era_ballparks file. However, this adjustment is largely irrelevant when players are playing in their historical ballparks.
For example (and ignoring the possible effects of the "in a modern neutral environment" statement in the editor concerning raw ratings), if a player hit 40 HRs but his home ballpark was hitter-friendly to the point that it helped him by 4 HRs, then his rating is 36. However, since in the game his home park is his historic one, his performance will be boosted, so his effective rating is 40 HRs.
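To make that arithmetic concrete, here is a minimal sketch of park neutralization and re-application. The function names and the exact formula are my assumptions for illustration, not OOTP's actual internals:

```python
# Hypothetical sketch: strip a home-park effect to get a "neutral" rating,
# then re-apply the factor for the park the player actually plays in.
# Names and formula are assumptions, not the game's real code.

def neutralized_hr(actual_hr: float, park_hr_factor: float) -> float:
    """Remove the home-park effect; factor > 1.0 means hitter-friendly."""
    return actual_hr / park_hr_factor

def expected_hr_in_park(neutral_hr: float, park_hr_factor: float) -> float:
    """Project a neutral rating back into a specific park."""
    return neutral_hr * park_hr_factor

# A 40 HR season in a park that helped by ~4 HR (factor 40/36 ~ 1.11):
rating = neutralized_hr(40, 40 / 36)          # ~36.0 neutral rating
print(expected_hr_in_park(rating, 40 / 36))   # ~40.0 back in his historic park
```

So when the historic park is the one being simulated, the adjustment and the boost cancel out, which is why the net effect is largely irrelevant in that case.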
However, if his 40 HR year was an outlier for him - say he never hit more than 22 in any other year - his rating is adjusted further down, making it extremely likely he won't exceed 40 and extremely likely he won't even get close to 40. Another player who hit 35 and has multiple mid-30s HR years in his career is likely to hit more HRs than the 40 HR player, since the 35 HR player will not have his rating reduced as much, or probably at all.
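One plausible way to model that outlier damping is a pull toward the player's career norm. This is purely my guess at the mechanism, and the 0.5 weight is invented for illustration:

```python
# Hypothetical outlier damping: blend a single-season rating with the
# player's career average. The weight is an assumption, not OOTP's value.

def damped_rating(season_hr: float, other_year_hrs: list[float],
                  weight: float = 0.5) -> float:
    career_avg = sum(other_year_hrs) / len(other_year_hrs)
    return weight * season_hr + (1 - weight) * career_avg

# Outlier: a 40 HR season from a player who never topped 22 otherwise.
print(damped_rating(40, [18, 22, 20, 19]))   # ~29.9 -- well below 40

# Consistent: a 35 HR season amid several mid-30s years.
print(damped_rating(35, [34, 36, 33, 35]))   # ~34.8 -- barely changed
```

Under any scheme of this shape, the spike year gets pulled down hard while the consistent performer is nearly untouched, which matches the observed behavior.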
Middle-range performers, perhaps even up to the level of the consistent 30-ish home run hitter, get additional performance, as there is a transfer of output to them from low-AB players. High-performance low-AB players have their performance reduced because the small sample size makes their raw ratings unreliable. Although one could argue that the performance of low-performance low-AB players is equally unreliable due to small sample size, and thus output should be transferred to them as well, it is not. Instead it is sent to players who are not low-AB players, even though their sample sizes are large enough that their performance is already reliable.
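The transfer described above might look something like this. The AB threshold, the shrink factor, and the even redistribution are all assumptions made for the sake of a runnable sketch:

```python
# Hypothetical small-sample shrinkage: low-AB ratings are discounted and
# the shaved-off output is credited to players above the AB threshold.
# Threshold, shrink factor, and even split are illustrative assumptions.

LOW_AB_THRESHOLD = 150
SHRINK = 0.7  # keep 70% of a low-AB player's raw rating

def redistribute(players: dict[str, tuple[int, float]]) -> dict[str, float]:
    """players maps name -> (at_bats, raw_hr_rating)."""
    adjusted = {}
    surplus = 0.0
    for name, (ab, rating) in players.items():
        if ab < LOW_AB_THRESHOLD:
            adjusted[name] = rating * SHRINK
            surplus += rating * (1 - SHRINK)
        else:
            adjusted[name] = rating
    # Spread the removed output evenly among the full-time players.
    regulars = [n for n, (ab, _) in players.items() if ab >= LOW_AB_THRESHOLD]
    for name in regulars:
        adjusted[name] += surplus / len(regulars)
    return adjusted

print(redistribute({"bench_bat": (90, 10.0),
                    "regular_a": (550, 30.0),
                    "regular_b": (600, 25.0)}))
# bench_bat drops to 7.0; the regulars each pick up 1.5
```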
If three- or five-year recalc is used, the 40 HR player's rating for that year will be even lower than the downward-adjusted single-year rating, while the consistent mid-30s HR player will have ratings similar to his single-year ratings.
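If the multi-year recalc is simply a window average over the surrounding seasons (my assumption about how it works, not confirmed), the effect on the two players falls out directly:

```python
# Hypothetical three-year recalc as a simple trailing window average.
# Equal weighting of the years is an assumption.

def window_rating(hr_by_year: list[int], year_idx: int, window: int = 3) -> float:
    lo = max(0, year_idx - window + 1)
    years = hr_by_year[lo:year_idx + 1]
    return sum(years) / len(years)

outlier    = [20, 22, 40]   # the 40 HR year is a spike
consistent = [34, 36, 35]   # steady mid-30s power

print(window_rating(outlier, 2))     # ~27.3, far below even the damped 40
print(window_rating(consistent, 2))  # 35.0, essentially the single-year rating
```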
With players playing in their historical parks and with historical lineups, the main effect of adjusting ratings with the park factors file is to change the distribution of players' performance among the various parks. Overall performance is not changed by the park factors, and any change in per-park performance is so small as to be indiscernible amid the normal randomness.
In summary, peak HR performance is reduced in players' best years, and the performance of low-AB players is reduced. In both cases the reductions are transferred to players with AB totals above the low-AB threshold. As a group, players above the low-AB threshold will perform better (except those having an extraordinarily good HR year compared with their other years).
Except for HR performances like the peak years of Roger Maris and Davey Johnson, casual examination is unlikely to reveal the deviations from historical performance. And people who play "historical what if", perhaps the most popular form of historical play, will find the performance differences lost in the noise caused by players playing for different teams, time missed due to non-historic injuries or time not missed due to historic injuries, development settings, TCR, and the other things used to make the game more interesting to a human GM. But it's still there.
OK, I know there are four questions, but as can be seen from the amount of material needed to explain just this one, the others are going to have to wait until my typing fingers get a rest!
In the meantime, amplifications and corrections to the above are welcome.