
ChatGPT gets code questions wrong 52% of the time

Machine Learning - Learning/Language Models @lemmy.intai.tech

10 comments
  • Anyone who actually codes/scripts knows ChatGPT spits out hot garbage when asked to produce anything beyond maybe a single short one- or two-line code snippet or bash/PowerShell command. Like the article said, the AI lacks context for what you're trying to do. It will confidently spit out code that is either completely wrong or made up, with commands that don't even exist.

    Also, this will go really fucking well. Don't give them any ideas.

    Kabir said, "From our findings and observation from this research, we would suggest that Stack Overflow may want to incorporate effective methods to detect toxicity and negative sentiments in comments and answers in order to improve sentiment and politeness."
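
    As a rough illustration of what that suggestion could look like in practice, here is a minimal sketch using the Hugging Face transformers library and its default sentiment pipeline; the model, the 0.9 threshold, and the example comments are all illustrative assumptions, not anything Stack Overflow or the paper actually uses:

        # Minimal comment-screening sketch (illustrative only).
        # Assumes the Hugging Face `transformers` package; the default
        # sentiment model and the 0.9 threshold are arbitrary choices.
        from transformers import pipeline

        classifier = pipeline("sentiment-analysis")

        comments = [
            "Did you even bother reading the documentation?",
            "Thanks, this fixed it for me.",
        ]

        for comment in comments:
            result = classifier(comment)[0]
            # Flag strongly negative comments for human moderator review.
            if result["label"] == "NEGATIVE" and result["score"] > 0.9:
                print(f"flag for review: {comment!r} (score {result['score']:.2f})")
            else:
                print(f"looks fine: {comment!r}")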

  • It's interesting that the sharp fall in traffic mimics the fall of Twitter and Reddit.

    Anecdotally, I would find code answers on Reddit or Twitter that linked to Stack Overflow for the full answer, or for a more complete explanation of why X should be done that way.

    Considering the (relatively) small decline, I'm surprised Stack Overflow thinks the answer is ChatGPT (or similar), and not the loss of the semantic detail added by a Reddit/Twitter thread.
