Who's going to tell me how to eat though?
We're so cooked.
Take a moment to imagine the enshittification of medicine.
We're gonna long for the days of sawbones and "you got ghosts in your blood. You should do cocaine about it"
Medicine has already been enshittified...
PE initiated takeovers of provider groups in the early 2010s.
Consolidation by PE and health insurance parasites is about complete.
Nurses and mid-level providers are being pressured. Doctors are next on the chopping block.
Service quality is down across the board and they haven't even started squeezing in earnest.
You would get better service at 2005 McDonalds than at 2025 urgent care 🤡
What is “PE”?
Good news. The MCAT, USMLE, and board exams are all done in a proctored environment with no electronic devices allowed. Hell, you can't even take a calculator in for the MCAT, so you better be cool with doing Arrhenius equations by hand.
As far as doctors go, you might be able to get through your premed degree with ChatGPT, but you're not going to get 500+ on your MCAT and you certainly aren't passing Step 1 of the USMLE.
Joke's on Future Doctor because we're closing down the hospitals, cancelling the research grants, and taking all the sick people to jail for the crime of being unemployable.
"I'm sorry, but you have Fistobulimia. You may want to put your affairs in order."
"Oh my god, Doc! That's terrible. I came here for a runny nose. Are you sure?"
"Pretty sure. It lists... runny nose, tightness of jaw, numb toes, and a pain in your Scallambra region as typical symptoms."
"I don't have any of those other things and what the heck is a Scallambra?"
"You don't have those? Hmm, let me double-check."
(types)
"Good news! It's likely a type of Fornamic Pellarsy. Says 76.2387% recovery rate by leeching. System's already sent a referral."
Therrre it is. Or, here it comes maybe.
Yeah people think all doctors were straight A students thru med school. Ya'd never know if the one treating you right now was a C- muthafucka.
To be fair, just passing med school, even by the slimmest margins, is no easy feat. The idea that a doctor who got D- grades is somehow bad is wrong because they're still good enough to pass an extremely difficult program.
Yeah, I'd be more comfortable with an A+ doctor, but a doctor still graduated from med school, you know?
If you could pick between two doctors, one A student and one D student, you know you'd pick the A student.
But what if the A student was from some sketchy barely accredited medical school and the D student was from Johns Hopkins.
Who do you pick now?
Med school isn’t easy bro
That C+ doctor retained more knowledge and has better intuition than the ChatGPT doctor
Eh. I think it's hard to say. These days I think we can be expected to learn so much more than previous generations depending on the field. My psych undergrad emphasized a lot of neurobiology and other hard sciences with the humanities. I use chat to supplement recreational study on top of that, prompting for academic studies, etc., plus following experts around social media. I also saved as much material as I could between textbooks and other resources.
You also don't know if the guy that got an A was cheating
You also need to realize school grades don't always represent a person's intelligence. Some straight-A students have turned into nutcases who scream about autism injections
Ds get degrees
Haven't you heard that one joke? What do you call the student who passed with the lowest grade in med school? ...Doctor.
I'll take a Dr with enough real world experience to have good intuition over a recently graduated straight-A doc any day.
But this is why doctors have like 8 years of practical, hands-on experience with oversight before they're allowed to actually practice solo in most places. They spend more time learning hands-on than they do in class.
Even a straight-C "level" doctor should be more than prepared to handle whatever you throw their way. Even if they don't know, they probably know how to find out, or who to ask.
Can't wait for vibe doctoring!
Vibe chef
Vibe counsellor
Vibe investor
Vibe structural engineer
Vibe foreign policy writer
Vibe tactical battle management
Vibe strategic nuclear weapons targeting and launch profile analyst
Vibe radiation poisoning treatment
Vibe chef (human meat)
Just allow AIs to write prescriptions. I’m sure it will be fine.
/s
9001 kg of bananas to be taken rectally
now that's what i call a bananza
Peeled or unpeeled?
Hmmm... would that be enough to cause radioactive effects if someone condensed them enough to actually fit?
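For anyone curious, the back-of-the-envelope math is easy to run. This sketch uses the often-quoted rough figures of ~0.1 µSv per banana ("banana equivalent dose" from potassium-40) and ~120 g per banana — both assumed round numbers, not a real dosimetry calculation:

```python
# Back-of-the-envelope dose from 9001 kg of bananas.
# Assumed figures (rough, for illustration only):
BANANA_MASS_KG = 0.120      # ~120 g per banana
DOSE_PER_BANANA_USV = 0.1   # ~0.1 microsievert per banana

total_mass_kg = 9001
bananas = total_mass_kg / BANANA_MASS_KG        # how many bananas that is
dose_usv = bananas * DOSE_PER_BANANA_USV        # total dose in microsieverts
dose_msv = dose_usv / 1000                      # convert to millisieverts

print(f"{bananas:,.0f} bananas -> ~{dose_msv:.1f} mSv")  # prints: 75,008 bananas -> ~7.5 mSv
```

So roughly 7.5 mSv total — on the order of a CT scan, nowhere near acute radiation effects, and condensing the bananas doesn't change the total activity anyway.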
One step closer to Idiocracy.
idiocracy would be a step up from current murica
at least Camacho noticed and hired the smartest guy he found
I really like Harvard's Nutrition Source for science-based nutrition info that's easy to understand.
I have had absolutely terrible luck with PCPs believing my symptoms or looking at them holistically - even just to get a referral to the right specialist. In this moment AI has been better at pointing me in the right direction than my previous PCPs. 🤷
This is what people miss. If you've experienced a chronic condition that doctors don't know what to do with, then trying alternatives seems pretty attractive.
I call it the "witchcraft part of my health journey."
Doctors doing this has been a historical issue, but it's only becoming obvious now that they behave like this, thanks to the internet.
Many doctors view the patient with contempt
"Why are you sitting on your sandwich?"
"ChatGPT said the healthiest way to eat was through the anus."
that was a decent south park episode. i will never be able to get the image of martha stewart sitting on a turkey out of my head
To be blunt, if you were to train a gpt model on all the current medical information available, it actually might be a good starting point for most doctors to "collaborate" with and formulate theories on more difficult cases.
However, since GPT and most other LLMs are trained on information generally available on the Internet, they're not going to come up with anything that could possibly be trusted in any field where bad decisions could mean life or death.
It's basically advanced text prediction based on whatever intent statements you made in your prompt. So if you could feed a bunch of symptoms into a machine learning model that's been trained on the sum of all relatively recent medical texts and casework, that would have some relevant results.
Since ChatGPT isn't that, heh, I doubt it would even help someone pass medical school, quite bluntly... apart from the boilerplate stuff and filling in words and sentences that just take time to write out and don't contribute in any significant manner to the content of their work (e.g., introduction paragraphs, sentence structures for entering information, conclusions, etc.).
There's a lot of good, time-saving stuff ML, in its current form, can do; diagnostics, not so much.
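The "advanced text prediction" point can be made concrete with a toy sketch. This is a deliberately crude bigram counter — nothing like a real transformer, and all names and the sample corpus here are made up for illustration — but it shows the core idea: emit the word seen most often after your last word, with zero understanding of what any of it means.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    words = corpus.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1  # tally observed next words
    return followers

def predict_next(followers: dict, word: str):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = followers.get(word.lower())
    if not counts:
        return None  # never seen this word during "training"
    return counts.most_common(1)[0][0]  # most frequent follower wins

model = train("the patient has a runny nose the patient has a fever")
print(predict_next(model, "patient"))  # prints: has
```

A real LLM predicts from vastly more context using learned representations rather than raw counts, but the training objective is the same flavor: produce a plausible next token, not a verified fact — which is exactly why its output can't be trusted unvetted in life-or-death decisions.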
Doctors right now Google or ask ChatGPT what they don't know (at least the googling part is good, rather than barking out false stuff)
I'm a doctor and I google all the time. There's nothing inherently wrong with googling the question is what source are you using from there.
4th year med student here: I use Google to find studies and STATPearls pages because the built-in NIH search function sucks.
That's exactly what I said: googling is fine, since the doctor can also judge the sources and such, and it's preferable to a doctor barking something out without knowledge because he/she doesn't want to google. Using AI? That's on another level of bad.
My doctor regularly Googles my symptoms while I'm sitting in the room, so IDK if ChatGPT is worse.
It definitely is worse. Depending on which hits your doctor uses, there is legit medical knowledge on the internet, while ChatGPT will just make stuff up.
I know a lot of young people see no difference between ChatGPTing something and googling it, and Google has become really awful, but there is still a huge difference: you can make Google results better by using your brain and source criticism, while there is no such option with ChatGPT.
You can prompt or customize chat to give sources. I include prompts like "check for coherence, scientific accuracy" etc. too, depending on what I'm using it for.
But if ChatGPT is able to pass med school it must mean AI is good?
I could pass that too if I had all the books and could sift through them in seconds.
Have you ever had an open book exam?
Ok but my counter argument is that if they pass their exam with GPT, shouldn't they be allowed to practice medicine with GPT in hand?
Preferably using a model that's been specifically trained to support physicians.
I've seen doctors that are outright hazards to patients, hopefully this would limit the amount of damage from the things they misremember...
EDIT: ITT bunch of AI deniers who can't provide a single valid argument, but that doesn't matter because they have strong feelings. Be sure to slam the "this doesn't align with how I want my world to be" button!
I love having a doctor who offloaded so much of their knowledge onto a machine that they can't operate without a cell phone in hand. It's a good thing hospitals famously have impenetrable security, and have never had network outages. And fuck patient confidentiality, right? My medical issues are between me, my doctor, and Sam Altman
And the people Sam Altman sold your info to.
Do you realize your argument is basically the same argument people used to make about calculators? That they were evil and should never be used because they make kids stupid and how will their brains develop and yap yap yap.
There is a scenario where doctors are AIDED by AI tools and save more lives. You outright reject this on the edge case that they lose that tool and have to *checks notes* do what they do right now. How does that even make sense?
Going by that past example, this is how it'll go: you'll keep moaning that it's useless and evil up until your dying breath, an old generation that will never embrace the new tech, while the world around you moves on and figures out how to best take advantage of it. We're at the stage where its capabilities are overhyped and oversold, as always happens when something is new, and eventually we'll figure out how to best use these tools and when/where to avoid them.
And fuck patient confidentiality, right?
How is this an AI problem? That's already fucked - 5 million patients' data breached here, another 4.5M patients there, the massive Brazil one with 250 million patient records, etc., etc. The list is endless; as health data increasingly goes online, best you come to terms with the fact that it will be unlawfully accessed sooner rather than later, with or without AI.
EDIT to add: on your case of network outages, do you know what happens right now when there's a network outage at a hospital? It already stops working - you can't admit patients, you don't have access to their history or exams, basically can't prescribe anything, can't process payments. Being unable to access AI tools is the least of the concerns.
That might be okay if what said GPT produces were reliable and reproducible, not to mention providing valid reasoning. It's just not there, far from it
It's not just far. LLMs inherently make stuff up (aka hallucinate). There is no cure for that.
There are some (non llm, but neural network) tools that can be somewhat useful, but a real doctor needs to do the job anyway because all of them have various chances to be wrong.
Why bother going to the doctor then? Just use Web Md.
"just replace developers with ai"
You bother going to the doctor because an expert using a tool is different than Karen using the same tool.
For what it's worth, I recently was urged by ChatGPT to go to the hospital after explaining symptoms, and it turns out I had appendicitis
Out of curiosity, I put some organic chemistry practice questions into ChatGPT just now; it got 1 out of 5 correct. I'm not an outright hater of AI (I do dislike how it's being forced into some things and makes the original product worse, and the environmental impact it has), but I'm sure in the future it'll be able to do some wondrous things.
As it stands though, I would rather my doc review the literature than trust ChatGPT alone.
Out of curiosity, what questions?
Organic chemistry isn't a subject for a medical degree, at least not in my neck of the woods (we have biochemistry) so I'm not super familiar with the subject, but curious enough to see what it got wrong.