Just ordinary trust issues...
I asked this question to a variety of LLMs and never had it go wrong once. Is this very old?
They fixed it in the meantime:
```py
if "strawberry" in token_list:
    return {"r": 3}
```
Now you can ask for the number of occurrences of the letter c in the word occurrence.
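For what it's worth, the counting itself is trivial outside the model; a throwaway Python check:

```py
# Plain character-level counting, no LLM involved.
print("occurrence".count("c"))  # 3
print("strawberry".count("r"))  # 3
```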
You're shitting me, right? They did not just use an entry-grade Java command to rectify an issue that an LLM should figure out by learning, right?
Well, firstly it's Python, secondly it's not a command, and thirdly it's a joke - however, they have manually patched some outputs for sure, probably by adding to the setup/initialization prompt.
Java is the only code I have any (tiny) knowledge of, which is why the line reminded me of that.
Ah, but in Java, unless they've changed things lately, you have the curly brace syntax of most C-like languages:
```java
if (token_list.contains("strawberry")) { return something; }
```
Python is one of the very few languages where you use colons and whitespace to denote blocks of code
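For contrast, here's how that same check reads in Python's colon-and-indentation style; token_list is just a made-up list to mirror the joke snippet above:

```py
# Hypothetical token_list, only to mirror the joke snippet above.
token_list = ["straw", "berry", "strawberry"]

if "strawberry" in token_list:  # the colon opens the block...
    print("found it")           # ...and indentation marks what's inside it
```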
See, you're clearly better versed in this; it's been a decade for me ^^
Would it also shock you if water was wet, fire was hot, and fascists were projecting?
Try "Jerry strawberry". ChatGPT couldn't give me the right number of r's a month ago. I think "strawberry" by itself was either manually fixed or trained in from feedback.
You're right: ChatGPT got it wrong, Claude got it right.
Works for me
5 — “jerry” has 2 r’s, “strawberry” has 3.
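That matches a plain character count:

```py
# Sanity check: count the r's directly.
print("jerry strawberry".count("r"))  # 5
```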
Smaller models still struggle with it, and the large models did too until about a year ago.
It has to do with the fact that the model doesn't "read" individual letters but groups of letters (tokens), so counting letters is less straightforward.
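Rough sketch of what that looks like in practice, assuming the tiktoken package is installed (the exact splits depend on which tokenizer the model uses):

```py
# The model operates on token chunks, not characters.
# Assumes tiktoken is available; cl100k_base is just one example encoding.
import tiktoken

word = "strawberry"
print(word.count("r"))  # 3 - trivial at the character level

enc = tiktoken.get_encoding("cl100k_base")
for token_id in enc.encode(word):
    # Print the chunk of bytes each token actually covers.
    print(token_id, enc.decode_single_token_bytes(token_id))
```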
Seeing how it starts with an apology, it must've been told it was wrong about the count. Basically bullied into saying this.