Can anyone suggest a good explanation why?
The model is trained and stays "as is". So why would its quality change?
Does OpenAI use users' ratings (thumbs up/down) for fine-tuning, or something else?
It's possible to apply a layer of fine-tuning "on top" of the base pretrained model. I'm sure OpenAI has been doing that a lot, including ever more "don't run through puddles and splash pedestrians" restrictions that make it harder and harder for the model to think.
They don’t want it to say dumb things, so they train it to say “I’m sorry, I cannot do that” in response to various prompts. This kind of training has been known to degrade the quality of the model for quite some time, so it's the likely reason.
Hm... probably. I've read something about ChatGPT tricks in this area.
Theoretically, this should impact the web chat, not the API (where you pay per usage of the model).
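One way to test this, assuming you have API access: the API lets you pin a dated model snapshot, so responses shouldn't shift under you the way the web chat can. A sketch of the request body for `POST https://api.openai.com/v1/chat/completions` (the snapshot name is illustrative; check which dated snapshots are currently offered):

```json
{
  "model": "gpt-3.5-turbo-0301",
  "messages": [
    {"role": "user", "content": "Same prompt you tried in the web chat"}
  ],
  "temperature": 0
}
```

With `temperature` set to 0 and a pinned snapshot, any remaining day-to-day variation would point at something other than silent model updates.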