Apple study exposes deep cracks in LLMs’ “reasoning” capabilities
Irrelevant red herrings lead to “catastrophic” failure of logical inference.
The tested LLMs fared much worse, though, when the Apple researchers modified the GSM-Symbolic benchmark by adding "seemingly relevant but ultimately inconsequential statements" to the questions.
Good thing they're being trained on random posts and comments on the internet, which are known for being succinct and accurate.
Yeah, especially given that so many popular vegetables are members of the Brassica genus.
Absolutely. It would be a shame if AI didn't know that the common maple tree is actually placed in the family Cannabaceae.
I think modern AI would know that though, since it follows almost immediately from Fermat's Little Theorem.
Definitely true! And ordering pizza without rocks as a topping should be outlawed; it literally has no texture without them, and any human would know that very obvious fact.