Long time no see from an old school OOTPer, which is why it pains me to be the person who has to respond to this with...
Not only no but heeeeeeeeeell no. For most of the things that would "work", LLMs really aren't "AI" in the sense of there being some actual, like, "soul" that's taking in data and making writing decisions. It's much more like a giant game of Mad Libs in which the AI scours its database for data it more or less stole from people who actually did creative work and then randomly pops it in. I'm sorry if that makes me sound old or something, but this is pretty much all LLMs do.
I think that if anything LLMs will get worse at this in the future, not better, because many places are already beginning to limit what data they have access to, so they're left mostly pulling stuff from the public domain or else whatever proprietary data they can get their hands on. That's not my primary issue with using LLMs, of course (see above), but I think when you're faced with something that's at least morally grey and stands a real chance of not being as good in 5 years as it is now (and let's be honest, AI right now reads like freshman-in-HS writing assignments; I wouldn't even put them at "stringer" level), it should give you pause.
I'm a little bit less against using neural networks to develop the AI, but if I'm being honest I don't think that's going to do what you think it's going to do. That kind of AI is more or less best at taking some situation with a relatively small number of inputs, trying what amounts to random crap along the dimensions it's allowed to influence, and then, after being told which approaches were successful and which were not, coming up with "better" solutions until it reaches something that is, according to whatever metrics you're using to define success, "optimal".
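To make that loop concrete, here's the whole idea in toy form (this is all made up by me and has nothing to do with OOTP's actual code; the score() function is just a stand-in for whatever success metric you pick):

Code:
import random

# Toy stand-in for "the metric you're using to define success".
# In a real trainer this would be simulated runs, wins, whatever.
def score(lineup):
    # pretend earlier lineup slots matter more, so better hitters should bat earlier
    return sum(skill / (slot + 1) for slot, skill in enumerate(lineup))

def optimize(lineup, iterations=10000):
    best = list(lineup)
    best_score = score(best)
    for _ in range(iterations):
        candidate = list(best)
        i, j = random.sample(range(len(candidate)), 2)   # the "random crap" step
        candidate[i], candidate[j] = candidate[j], candidate[i]
        s = score(candidate)
        if s > best_score:                               # keep whatever "worked"
            best, best_score = candidate, s
    return best

# nine made-up hitter ratings; the 25 is the pitcher
print(optimize([62, 25, 88, 71, 54, 93, 47, 66, 80]))

Leave it running long enough and it bubbles the best ratings to the top of the order, which is exactly as smart or as dumb as the metric you handed it.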
That's just not at all how managers and GMs think, for better or for worse. To be honest, this is for the better in many ways: no real-life manager would ever hit their pitcher leadoff, but an AI is liable to try it unless you tell it not to. On the other hand, managers and GMs primarily make moves with an eye toward not getting fired, and an AI has no such instinct, so if you train it on, say, a 5 year rolling average of wins and losses, it might come up with some really, really non-traditional baseball moves. This might seem like fun in the abstract, but when you're trying to play a game that you want to feel like "real" baseball and the White Sox are putting their pitcher in the lineup and hitting him cleanup because that helps them tank better in Year One, you're not going to walk away from that with a good feeling.
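With made-up numbers (purely illustrative, not real data or anything OOTP computes), the problem looks like this:

Code:
# Made-up win totals; a "5 year rolling average of wins" can't tell
# good baseball from "tank now, win later".
tank_then_contend = [50, 75, 92, 98, 98]       # pitcher hitting cleanup in Year One
respectable_every_year = [81, 81, 81, 81, 81]

def reward(wins):
    return sum(wins) / len(wins)

print(reward(tank_then_contend))        # 82.6 -- the metric likes this plan better
print(reward(respectable_every_year))   # 81.0

The metric happily prefers the tanking plan, and the AI has no way of knowing that Year One didn't feel like baseball to the person playing it.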
And that's an example that at least makes sense if you think about it. What if the AI "figures out" something that actually works in baseball but is just plain never used IRL? You can say "hey, the AI figured out that a 6 man rotation where the starters go 3 innings is the most optimal, and also that LOOGYs are the most valuable commodity in the game, so it signs them to $30M a year contracts" all you want, but that doesn't feel like *baseball*. Worse, what if those "baseball" exploits the AI uncovers are actually just OOTP exploits? I guess on one level it gives the developers potentially valuable information in terms of loopholes to close. On the other, you aren't going to release that AI to the public, at least not once you've closed that particular loophole. And what if the AI finds a thing that's like 0.04% more efficient than what's done IRL - or even only just as efficient, but since that particular behavior isn't directly harmful it doesn't get "success rate"-ed out and just isn't done, for good or for bad reasons?
Again I think using neural networks to figure out AI is probably the most morally acceptable use. That doesn't actually make it a good idea.