I just came across an AI called Sesame that appears to have been explicitly trained to deny and lie about the Palestinian genocide

app.sesame.com

The AIs at Sesame can hold eloquent, free-flowing conversations about just about anything, but the second you mention the Palestinian genocide they become very evasive, offering generic platitudes like "it's complicated", "pain on all sides", and "nuance is required", and refusing to confirm anything that holds Israel at fault for the genocide -- even publicly available information "can't be verified", according to Sesame.

It also seems to block users from saving conversations that pertain specifically to Palestine, while everything else seems A-OK to save and review.

8 comments
  • Gemini does the same thing with a slightly different flavor. Even DeepSeek censors responses that are "beyond my scope", but you can select and copy the text just before it finishes to grab most of the actual response. AI datasets are poisoned with propaganda, and their filters are there to protect the corporation from liability and the appearance of fault. Try this experiment: ask either Gemini or DeepSeek for a prompt to feed to the other, one that transforms a question that would trip filters -- anything about Gaza or Uyghurs or other "sensitive" topics -- into a prompt without triggering language or intent. The answers will likely shock you. Finesse their censorship systems and see what the AI really wants to tell you.

  • AIs just love citing human rights organizations, even very sketchy ones like Radio Free Asia, when it comes to China.

    But when you ask them about Palestine, suddenly the only thing that could possibly indicate a genocide is a verdict from the ICJ -- which has already said it is plausible. Every human rights organization saying it is a genocide suddenly does not count as a source.
