Quote:
Originally Posted by Scruff
I'm curious, does anyone know if the error rating in OOTP14 is based on errors per inning/game, or errors per chance?
This would make a big difference for some custom ratings I'm doing. I've seen different games calculate this differently, using both of the above methods, so it would be great to know how OOTP does it.
Thanks for any help.
I'll take a shot at this, probably with disastrous results, but what the heck.
Might you be mixing cause and effect in your thinking? Perhaps I am not understanding your question correctly, but I am wondering if you know that a player's error ratings are based on simple numeric values in his profile, as seen below. These numbers are randomly generated upon player creation and (after a bit of improvement that comes from playing experience) produce the infield/outfield error ratings that you see in a player's profile.
See the two screen prints below. This player is a catcher, so his infield/outfield error ratings are going to be low. On a scale of 1-250, 44 and 30 are pretty low. Hence the red bar error ratings in his profile.
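If it helps, here's a toy sketch in Python of that creation-then-experience idea. To be clear, this is not OOTP's actual code; the random distributions and the experience bump are numbers I made up for illustration, and only the 1-250 scale comes from the profile itself:

Code:
# Hypothetical sketch only -- OOTP's real generation code isn't public.
# Assumed: internal error values live on the 1-250 scale, get rolled
# randomly at player creation, and drift up a little with experience.
import random

RATING_MIN, RATING_MAX = 1, 250

def roll_error_values() -> dict:
    """Roll a new player's internal infield/outfield error values at creation."""
    return {
        "infield_error": random.randint(RATING_MIN, RATING_MAX),
        "outfield_error": random.randint(RATING_MIN, RATING_MAX),
    }

def apply_experience(values: dict, seasons: int) -> dict:
    """Nudge the raw values up a bit for playing time (bump sizes are made up)."""
    bump = min(2 * seasons, 20)  # arbitrary cap, purely illustrative
    return {k: min(v + bump, RATING_MAX) for k, v in values.items()}

player = apply_experience(roll_error_values(), seasons=3)
print(player)  # e.g. {'infield_error': 50, 'outfield_error': 36}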
Now, when I say "mixing cause and effect," I mean that errors per inning/game and errors per chance are by-products of the error ratings themselves over the course of game simulation. The ratings do not come from errors per inning/game or errors per chance.
This may be a fine point, but I think it's an important one. It's like this: if I put that catcher at SS for a while, his errors per inning/game and errors per chance will be very high, but I will already know that's going to happen, because the error ratings are predictors of those outcomes, not the results of those outcomes.
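To show what I mean by cause and effect, here's a quick toy simulation. The formula turning a 1-250 rating into a per-chance error probability is something I invented for illustration (the real engine math isn't public), but it shows which way the arrow points: the rating goes in, and errors per chance comes out.

Code:
# Minimal simulation sketch of the cause-and-effect direction.
# The rating-to-probability mapping below is an assumption for
# illustration, not OOTP's actual engine math.
import random

def error_prob_per_chance(rating: int) -> float:
    """Map a 1-250 error rating to a per-chance error probability (assumed linear)."""
    # Higher rating = surer hands = fewer errors. Scale chosen arbitrarily.
    return 0.10 * (1 - (rating - 1) / 249)

def simulate_season(rating: int, chances: int) -> float:
    """Return observed errors per chance over a season's worth of chances."""
    p = error_prob_per_chance(rating)
    errors = sum(1 for _ in range(chances) if random.random() < p)
    return errors / chances

# The catcher from the screen prints, pressed into duty at SS: his low
# infield rating (44) drives a high observed error rate...
print(f"Rating  44 at SS: {simulate_season(44, 500):.3f} errors/chance")
# ...while a sure-handed shortstop commits far fewer.
print(f"Rating 220 at SS: {simulate_season(220, 500):.3f} errors/chance")

Run it a few times: the low-rated catcher's errors per chance will bounce around a much higher level than the sure-handed shortstop's, which is exactly the "predictor, not result" relationship I'm describing.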
How did I do?