At this point I wouldn't agree with anything.
This thread has sparked my curiosity enough that the first thing I'll do with my test league is run some seasons and get some breakdowns with true, complete data over a few years' time.
Then I have to pull comparable data from reality to compare it to. A cursory look into the relief vs. starter question prompted me to want more information, which will take a lot of time that I haven't had. I pulled 15 pitchers from 2005 who made 15+ starts with a roughly equal number of relief appearances... only 1 of them fit every assumption we had (relievers give up fewer hits and fewer HRs and post a lower ERA, while issuing more walks and getting more Ks). Only 6 of them fit 3 of the 5, and only 2 fit 2 of the 5.
(Those numbers aren't exact, I'm pulling them from memory - they may be way off, but it was not a pretty picture.)
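To show the kind of comparison I mean, here's a quick Python sketch with hypothetical field names (not the game's actual export columns): for each pitcher with both splits, it just counts how many of the five assumptions his relief numbers satisfy.

```python
# Sketch only: count how many of the five assumptions a pitcher's
# relief split satisfies versus his starter split.
# Field names (h9, hr9, era, bb9, k9) are made up for illustration.

ASSUMPTIONS = [
    ("fewer hits", lambda s, r: r["h9"]  < s["h9"]),
    ("fewer HRs",  lambda s, r: r["hr9"] < s["hr9"]),
    ("lower ERA",  lambda s, r: r["era"] < s["era"]),
    ("more walks", lambda s, r: r["bb9"] > s["bb9"]),
    ("more Ks",    lambda s, r: r["k9"]  > s["k9"]),
]

def assumptions_met(starter_split, reliever_split):
    """Return the names of the assumptions the relief split satisfies."""
    return [name for name, test in ASSUMPTIONS
            if test(starter_split, reliever_split)]

# One made-up pitcher's splits (per-9 rates plus ERA):
starter  = {"h9": 9.1, "hr9": 1.2, "era": 4.50, "bb9": 2.8, "k9": 6.0}
reliever = {"h9": 8.4, "hr9": 1.3, "era": 3.90, "bb9": 3.5, "k9": 7.2}

met = assumptions_met(starter, reliever)
print(f"{len(met)} of 5 assumptions met: {met}")  # 4 of 5 here (HR rate went up)
```

Run that over every pitcher in the sample and you get the "X of 5" tallies I was rattling off above.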
At the same time, only a few of them were traditional relievers, a slightly higher number were traditional starters, but most were rookies. That's far too small and skewed a sample to draw any conclusion from. But it is an important data set to analyze for the question of whether any pitcher working in relief should get a ratings boost.
Basically, for this question to get a resolution, it's going to take a lot of work with the in-game data to discover where the problem is, or whether there is a problem at all. Cursory examinations and small samples lead nowhere.
That being said, I can give you a TRU DAT that if a pitcher is used as a starter (given more than just a few starts), it is rare for him to pitch only 4 innings in the majority of those starts.
I have a hard time believing (hopefully) that the reports put up are from a real league running full tilt with the game we'll be getting on the 31st. At least one of those teams had only one pitcher shown as being in a starter role.