Quote:
Originally Posted by cwhitman
In league, if a league of Babe Ruths batted against a league of 40 overall pitchers, the stats of the babes would be the same if they batted against a league of cy youngs.
In tournament, the average batting line would be like .400/.650/1.000
Yes, the league pitchers would be normalized to produce the expected overall outcomes, regardless of how good or bad they are. What Sinnerman is saying is that they shouldn't be normalized in this manner. Worse pitchers overall should produce worse overall results.
My take is that trying to balance everything without some sort of normalization structure would be a nightmare, and that it would produce wildly fluctuating results from week to week, depending on which particular hitters and pitchers are available and which are used. Moreover, we aren't talking about little league hitters or pitchers. We're talking about quality players of varying degrees, and normalizing to the league average is a reasonable way to handle it.
My only quibble with OOTP's normalization is that it seems to overcompensate for good or bad starts by pushing in the opposite direction, in effect not just regressing to the mean but also adjusting the mean in an attempt to match the expected outcome. If a .250 hitter only hits .200 in the first half, we should expect him to hit .250 in the second half and finish at .225. Instead, what I frequently see is the player exceeding his mean, hitting .300 in the second half to produce an end result of .250.

This is, of course, subject to noise in a simulation with so many variables, but it happens often enough to be noticeable and too consistently to be random variation, especially since the "league average" player tends to improve marginally throughout the week as newer, better players are added to rosters.
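To make the distinction concrete, here's a toy calculation using the numbers from my example (assuming equal at-bats in each half; this is just an illustration of the two behaviors, not anything pulled from OOTP's actual engine):

```python
# A "true talent" .250 hitter who slumps to .200 over the first half.
true_talent = 0.250
first_half = 0.200

# Proper regression to the mean: the second half simply reverts to true talent.
regressed_second_half = true_talent
season_regressed = (first_half + regressed_second_half) / 2

# The overcorrection I'm describing: the engine pushes past true talent
# so that the full-season line lands exactly on the expected mean.
overcorrected_second_half = 2 * true_talent - first_half
season_overcorrected = (first_half + overcorrected_second_half) / 2

print(f"regressed second half:     {regressed_second_half:.3f}")     # 0.250
print(f"season with regression:    {season_regressed:.3f}")          # 0.225
print(f"overcorrected second half: {overcorrected_second_half:.3f}") # 0.300
print(f"season with overcorrection: {season_overcorrected:.3f}")     # 0.250
```

The tell is that middle number: under honest regression the season total ends up between the slump and the mean, while under overcorrection the season total always snaps back to the mean, which requires a second half better than the player's actual talent.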