I wanted to block all AI crawlers from my self-hosted stuff.
I don't trust crawlers to respect robots.txt, but you can get one here: https://darkvisitors.com/
Since I use Caddy as my server, I generated a directive that blocks them based on their user agent. The list of user agents in the regex basically comes from Dark Visitors.
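Stripped down, it's just a named matcher on the User-Agent header plus an abort; a minimal sketch (the example regex only names a few crawlers, the real list should be filled in from Dark Visitors):

    example.com {
        # Match requests whose User-Agent contains one of the AI crawler
        # names (shortened example list; fill it in from Dark Visitors).
        @aibots header_regexp User-Agent "(GPTBot|ClaudeBot|CCBot|Bytespider|Google-Extended)"

        # Drop matching requests without sending a response.
        abort @aibots

        # ... normal site config ...
    }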
I'd love to see a robots.txt do a couple of safe listings, then a zip-bomb bait path, then another safe listing. It would be fun to see how many log entries from a single IP look like: GET a, GET b, GET zip bomb... and then no more requests.
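In Caddy that could be wired up roughly like this (the bait path and the pre-generated bomb file are made-up names; the idea is to serve a small pre-compressed file with a Content-Encoding: gzip header so the client blows it up on its own end):

    # Hypothetical bait path, also listed as Disallow in robots.txt.
    handle /private-backup/* {
        root * /srv/bombs
        rewrite * /10G.gz            # a few MB on disk, gigabytes decompressed
        header Content-Encoding gzip
        header Content-Type text/html
        file_server
    }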
From your recommendation, I found a related project, pandoras_pot, which I can run in a Docker container and which seems to run more efficiently on my Pi home server. I now use it in my Caddyfile to redirect a number of fake subdomains and paths that a malicious bot is likely to find (all of them are of course excluded in my robots.txt, for bots that actually respect it). Thanks for the recommendation!
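The relevant part of the setup is just a reverse_proxy to the tarpit; a sketch, assuming the pandoras_pot container is published on localhost:8080 (the hostname and bait paths here are placeholders):

    # Fake subdomain that only exists as a trap.
    backup.example.com {
        reverse_proxy localhost:8080   # pandoras_pot container
    }

    example.com {
        # Bait paths, all disallowed in robots.txt for well-behaved bots.
        @trap path /wp-admin/* /old-site/* /private/*
        reverse_proxy @trap localhost:8080

        # ... normal site config ...
    }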
That's an easy modification. Just redirect or reverse proxy to the tarpit instead of aborting.
I was even thinking about an infinitely linked, data-poisoned HTML document, but there seemed to be no ready-made project that can generate one at the moment.
(There are no published data-poisoning techniques for plain text at all, as far as I know, but there is one for images.)
Ultimately, I decided to just abort the connection, as I don't want my servers to waste traffic or CPU cycles.