Just ordinary trust issues...
At my work, it's become common for people to say "AI level" when giving a confidence score. Without any further explanation, everyone seems to understand exactly what's meant, even when hearing it for the first time.
Keep in mind, we have our own in-house models that are bloody fantastic, used for different sciences and research. We'd never talk ill of those, but they're not the first thing that comes to mind when people hear "AI" these days.
"Keep in mind, we have our own in-house models that are bloody fantastic, used for different sciences and research."
I'm a scientist who has become super interested in this stuff in recent years, and I've adopted the habit of calling the legit stuff "machine learning" and reserving "AI" for the hype-machine bullshit.
This hits hard. I was in college when I first learned a machine had solved the double pendulum problem. The problem is, we have no idea how the equation it found works. I remember thinking about all the stuff machine learning could solve. Then they overhyped these LLMs that are good at ::checks my notes:: chatting with you...
Lmao, I'm doing the exact same thing. I'm a ChemE and I've been doing a lot of work on AI-based process controls, and I've coached members of my team to use "ML" and "machine learning" to refer to these systems, because things like ChatGPT that most people see as toys are all that people think about when they hear "AI."

The other day someone asked me, "So have you gotten ChatGPT running the plant yet?" I laughed, said no, and explained the difference between what we're doing and the AI you see in the news. I've even had to include slides in just about every presentation I've given on this saying "no, we are not just asking ChatGPT how to run the process," because that's the first thing that comes to mind, and it scares them, since ChatGPT is famously prone to making shit up.
It's not wrong though... There's one r and one rr in strawberry.
Wrong! There's no r in strawberry, only an str and an rr.
```python
str(awberry)
```
Found the Spanish speaker (they count rr as a separate letter)
It didn't say one and only one eh! One r, then one r again!
I asked this question to a variety of LLMs and never had it go wrong once. Is this very old?
They fixed it in the meantime:
```py
if "strawberry" in token_list: return {"r": 3}
```
Try "Jerry strawberry". ChatGPT couldn't give me the right number of r's a month ago. I think "strawberry" by itself was either manually fixed or trained in from feedback.
You're right, ChatGPT got it wrong; Claude got it right.
Works for me
5 — “jerry” has 2 r’s, “strawberry” has 3.
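For anyone keeping score at home, plain old Python agrees with that answer (trivial sanity check, nothing fancy):

```python
# counting letters is trivial when you can actually see them
for word in ("jerry", "strawberry"):
    print(word, word.count("r"))
# jerry 2
# strawberry 3
```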
Smaller models still struggle with it, and the large models did too until about a year ago.
It has to do with the fact that the model doesn't "read" individual letters but groups of letters (tokens), so counting letters is less straightforward for it.
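If you want to see this yourself, here's a quick sketch using OpenAI's tiktoken library. The exact split depends on which tokenizer you pick, so treat the commented output as illustrative, not guaranteed:

```python
# pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by GPT-4-era models
enc = tiktoken.get_encoding("cl100k_base")

token_ids = enc.encode("strawberry")
print([enc.decode([t]) for t in token_ids])
# likely something like ['str', 'awberry'] -- the model operates on
# token IDs, not characters, so "how many r's?" isn't a lookup it can do
```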
Seeing how it starts with an apology, it must've been told it was wrong about the count. Basically bullied into saying this.