
ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

Researchers at Brigham and Women's Hospital found that cancer treatment plans generated by OpenAI's revolutionary chatbot were full of errors.

134 comments
  • Okay, and? GPT lies; how is this news every other day? Lazy-ass journalists.

  • Look, I am all for weighing the pros and cons. A.I. has massive benefits for humanity, and it has its issues, but this article is just silly.

    Why in the fuck are you using ChatGPT to put together a cancer treatment plan? When did ChatGPT ever claim to be a medical doctor?

    Just go see a damn doctor.

    • As a healthcare practitioner (pharmacist), I have been getting surveys asking my opinion on AI. I feel like they are testing the waters.

      AI is really dangerous for healthcare right now. I'm sure people are using it to ask regular questions they normally Google. I'm sure administrators are trying to see how they can use it to "take the pressure off" their employees (then fire some employees to "tighten the belt").

      If they can figure out how to fact-check the AI's results, maybe my opinion will change, but as long as AI can convincingly lie without even knowing it's lying, it's a super dangerous tool.

      • For me, the issue isn't the tool. It's people. The tool is just that: a tool.

        I always like to compare these things to physical tools. If you take a Phillips screwdriver to a flathead screw, you don't blame the tool; you blame yourself for bringing the wrong one, because as a human you can make mistakes. As a human, you should have figured out beforehand, "Do I need a flathead or a Phillips?" There are tools capable of doing the job and doing it properly.

        The same goes if you are operating a piece of machinery. If you take a forklift to destroy a house, you probably aren't going to get very far.

        All of these tools were designed to make life easier and add something positive, but it is how you use the tool that matters.

        The same with a gun. I am not a gun-ownership kind of guy because of all the shit human beings who just can't use one properly, or only claim to. Guns get more complicated and so do their use cases, but the truth is a gun was designed to kill or to defend against being killed (this is not a debate about gun rights; I'm just using it as an example). However, in the hands of the wrong person, a gun can kill unintentionally. That isn't the gun's fault; after all, its design was to kill.

        ChatGPT wasn't designed to kill, inherently. It wasn't designed to do anything other than take databases of information and provide what it thinks is correct. If you as a person don't know how to use it or what to do with it properly, and you aren't seeking actual medical attention or advice from a professional, then I think that is the person's fault.

        ChatGPT can't carry a disclaimer for every little thing. A car with a recall issue, on the other hand, can. If you want to compare it to a faulty part in a car, then sure: modify ChatGPT so it just doesn't provide medical advice.

        See, tools can be changed midway through. The tool isn't the problem; how the person uses the tool is the issue. Access to that tool and what that tool has access to can be an issue, but the great thing about tools is that laws can change and tools can change.

        It isn't the A.I.'s fault if your legislature doesn't care to enforce that change or law, the same legislature that half of Lemmy is opposed to literally all the time. Tools are only as good as the ways they can be used.

        So let's say, for argument's sake, the tool is dangerous, and in your defense, it absolutely can be used dangerously. Do you call upon the government to shut it down, just like you would call upon the government to regulate or change gun laws?

        Do you also ignore the positive impacts ChatGPT can have because it does something else terribly? Imagine that medical professionals create their own system, a modified version that does provide good medical advice: accurate and professional. What then? Is ChatGPT still bad? It's not out of the realm of possibility. A.I. isn't the enemy because someone in leadership decided to fire you; leadership is the enemy. Tools are only as bad as the people using them.

        Or, in the case of a recalled car that can kill, they are only as bad as the manufacturer building them. I don't deny you can get a bad car or a bad screwdriver. My point is that if you let the bad outweigh the good, you are missing the point. The bad should be handled by people who understand it better and can design laws and tools that enforce better usage and make something less bad. So again, don't blame the tool; blame the people who aren't protecting you with said tool.

    • The issue is hospital administrators thinking that AI is the answer to boosting profits.

  • This is the best summary I could come up with:


    According to the study, which was published in the journal JAMA Oncology and initially reported by Bloomberg, one-third of the large language model's responses contained incorrect information when it was asked to generate treatment plans for a variety of cancer cases.

    The chatbot sparked a rush to invest in AI companies and an intense debate over the long-term impact of artificial intelligence; Goldman Sachs research found it could affect 300 million jobs globally.

    Famously, Google's ChatGPT rival Bard wiped $120 billion off the company's stock value when it gave an inaccurate answer to a question about the James Webb Space Telescope.

    Earlier this month, a major study found that using AI to screen for breast cancer was safe, and suggested it could almost halve the workload of radiologists.

    A computer scientist at Harvard recently found that GPT-4, the latest version of the model, could pass the US medical licensing exam with flying colors – and suggested it had better clinical judgment than some doctors.

    The JAMA study found that 12.5% of ChatGPT's responses were "hallucinated," and that the chatbot was most likely to present incorrect information when asked about localized treatment for advanced diseases or immunotherapy.


    The original article contains 523 words, the summary contains 195 words. Saved 63%. I'm a bot and I'm open source!

  • It speeds things up for people who know what they're talking about. The doctor asking for the plan could probably push back on a few of the errors, GPT will say "oh, you're right, I'll change that to something better," and then it's good to go.

    Yes, you can't just rely on it to be right all the time, but you can often use it to find the right answer with a short conversation, which is quicker than working it all out alone.

    I recently won a client with GPT's help in my industry.

    I personally think I'm very knowledgeable in what I do, but to save time I asked what I should be looking out for, and it gave me a long list of areas to consider in a proposal. That list alone was a great starting block. Some of the list wasn't relevant to me or the client, so it had to be ignored, but the majority of it was solid and started me out an hour ahead, essentially tackling the planning stage for me.

    If someone outside of my industry used that list verbatim, they would have brought up a lot of irrelevant information and covered topics that make no sense.

    I feel it's a tool or partner rather than a replacement for experts. It helps me get to where I need to go quicker, and it's fantastic at brainstorming ideas or potential issues in plans. It takes some of the pressure off as I get things done.

  • I thought it released in 2021. Maybe it was on the cusp. I was basically using it to find what I couldn't seem to find in the docs. It's definitely replaced my rubber ducky, but I still have to double-check it after my Unity experience.
