They are simple, but they are not easy. Sorting M&Ms according to colour is also a simple task for any human with normal colour vision, but doing it with an Olympic-sized swimming pool full of M&Ms is not easy.
Computers are very good at examining data for patterns, and doing so in exhaustive detail. LLMs can detect kinds of patterns that were invisible to earlier algorithms (and sometimes screw up royally, detecting patterns that aren't there, or patterns we want to get rid of even if they exist). That doesn't make LLMs intelligent; it just makes them good tools for certain purposes. Nearly all of your examples amount to applying a pattern that the algorithm has discerned: in bank records, in natural language, in sound samples, or whatever.
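To make the "applying a discerned pattern" point concrete: here is a toy sketch (invented data and threshold, not anyone's actual fraud system) of the bank-records case, where the "pattern" is just how far a transaction sits from the account's usual amounts. No intelligence involved, only arithmetic applied exhaustively.

```python
import statistics

def flag_outliers(amounts, threshold=2.0):
    """Flag amounts far from the account's usual pattern.

    The 'pattern' is nothing more than the mean and spread of past
    transactions; the threshold of 2 standard deviations is arbitrary,
    chosen here purely for illustration.
    """
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) > threshold * stdev]

# Invented transaction history: seven ordinary purchases and one oddity.
history = [42.0, 38.5, 51.0, 40.25, 47.0, 39.0, 44.5, 4999.0]
```

Calling `flag_outliers(history)` picks out the 4999.0 transaction. A real system layers far more statistics on top, but the principle is the same: a pattern is extracted from data and then mechanically applied.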
As for people being fooled by chatbots, that's been happening for more than fifty years. The 'bot can be exceedingly primitive, and some people will still believe it's a person, because they want to believe. The fewer obvious mistakes the 'bot makes, the more willing lonely and vulnerable people will be to suspend their disbelief.
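"Exceedingly primitive" is not an exaggeration. The classic example is Weizenbaum's ELIZA from the 1960s, which was little more than keyword rules plus pronoun swapping. The rules and wording below are my own invented miniature in that style, not the historical script:

```python
import re

# Swap first- and second-person words so the echo reads like a reply.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

# A handful of pattern/response rules; this is the entire "mind" of the 'bot.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*\bmother\b.*", re.I), "Tell me more about your family."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(*[reflect(g) for g in m.groups()])
    # Content-free fallback when nothing matches.
    return "Please go on."
```

So `respond("I need a friend")` yields "Why do you need a friend?", and anything unrecognised gets "Please go on." There is no understanding anywhere in this loop, yet people attributed understanding to the original, which is precisely the point.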