I am not surprised.
Meaning is negotiated between speaker and hearer, or in this case between whoever is sharing the image and whoever is seeing it. This happens based on context and conversational implicatures. For example, if there's some element in the picture, you take it as related to the message being conveyed.
And that’s the problem: AI tools don’t think. They don’t understand what is being made, why it’s being made, who is making it, and for what reasons.
And more importantly, the AI tools don't understand the message being conveyed.
Uncontrollable AI is the next moderation nightmare
The issue is not uncontrollable AI. The issue is lazy and greedy companies expecting moderation to work without human intervention. It doesn't, for the reason I mentioned above. And that issue predates image generators. Here's an example:
#║##║## Happy date, hanging around at the McD's
▓ ▓
/o)/o\
If I posted that in a hypothetical forum with state-of-the-art automatic moderation, no human mods, and a rule like "you can't mock fascists here", I'd clearly be violating the rules of that forum (it mocks Mussolini) and getting away with it, because no bot will get what I'm conveying through it - but plenty of humans would. Did I use Stable Diffusion for that? Fuck no, it's just ASCII art.
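To make the failure mode concrete, here's a toy sketch in Python of the kind of filter such a forum bot might run. The function name and blocklist are entirely made up; real moderation systems are fancier, but the underlying problem is the same: they match surface features, not conveyed meaning.

```python
# Hypothetical keyword-based auto-moderator (toy sketch, not a real system).
# It only sees tokens, not what the post actually conveys.

BLOCKLIST = {"mussolini", "fascist", "duce"}  # made-up rule list

def auto_moderate(post: str) -> bool:
    """Return True if the post is allowed, False if it's blocked."""
    tokens = post.lower().split()
    return not any(tok.strip(".,!?") in BLOCKLIST for tok in tokens)

ascii_art_post = r"""
#║##║## Happy date, hanging around at the McD's
▓ ▓
/o)/o\
"""

# No blocklisted word appears anywhere in the ASCII art,
# so the bot waves it through - while a human reader
# gets the reference immediately.
print(auto_moderate(ascii_art_post))           # True: sails past the filter
print(auto_moderate("Long live the Duce!"))    # False: a literal match is caught
```

The bot only blocks posts that name the thing literally; the implicature carried by the picture is invisible to it.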
AI is a tool for humans to use. It's a damn good tool. But only a fool would leave decisions like moderation up to AI - including judging the legitimacy of its own output.