Over just a few months, ChatGPT went from accurately answering a simple math problem 98% of the time to just 2%, study finds
The chatbot gave wildly different answers to the same math problem, with one version of ChatGPT even refusing to show how it came to its conclusion.
Can we discuss how it's possible that the paid model (GPT-4) got worse while the free one (GPT-3.5) got better? Is it because the free one is being trained on a larger pool of users, or what?