Apple study exposes deep cracks in LLMs’ “reasoning” capabilities
Irrelevant red herrings lead to “catastrophic” failure of logical inference.
statistical engine suggesting words that sound like they'd probably be correct is bad at reasoning
How can this be??
I would say that if anything, LLMs are showing cracks in our way of reasoning.
Or the problem with tech billionaires selling "magic solutions" to problems that don't actually exist. Or how people on the modern internet are too gullible to recognize when they're being sold snake oil in the form of "technological advancement" that's actually just repackaged, plagiarized material.
But what if they're wearing an expensive leather jacket?
Totally unexpectable!!!
antianticipatable!
astonisurprising!