
Got a warning for my blog going over 100GB in bandwidth this month... which sounded incredibly unusual. My blog is text and a couple images and I haven't posted anything to it in ages... like how would that even be possible?

Turns out it's possible when you have crawlers going apeshit on your server. Am I even reading this right? 12,181 with 181 zeros at the end for 'Unknown robot'? This is actually bonkers.

Edit: As Thunraz points out below, there's a footnote that reads "Numbers after + are successful hits on 'robots.txt' files" and not scientific notation.

Edit 2: After doing more digging, the culprit is a post where I shared a few wallpapers for download. The bots have been downloading these wallpapers over and over, using 100GB of bandwidth in the first 12 days of November. That's when my account was suspended for exceeding bandwidth (it's an artificial limit I put on there a while back and forgot about...), which is also why the 'last visit' for all the bots is November 12th.

91 comments
  • I run an ecommerce site and lately they've latched onto one very specific product, hammering its page and any pages branching from it, for no readily identifiable reason, at a rate of several hundred requests per second. I found out pretty quickly, because suddenly our view stats for that page in particular rocketed into the millions.

    I had to insert a little script to IP ban these fuckers, which kicks in if I see a malformed user agent string or if you try to hit this page specifically more than 100 times (roughly the kind of check sketched below). Through this I discovered that the requests are coming from hundreds of thousands of individual random IP addresses, many of which are located in Singapore, Brazil, and India, and mostly resolve to ranges owned by local ISPs and cell phone carriers.

    Of course they ignore your robots.txt as well. This smells like some kind of botnet thing to me.
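
    (Not the actual script, just a minimal sketch of that kind of check as a Flask before_request hook; the in-memory ban list, the hot page path, and the user-agent sanity pattern are illustrative assumptions.)

```python
import re
import time
from collections import defaultdict

from flask import Flask, abort, request

app = Flask(__name__)

WINDOW_SECONDS = 60        # rolling window for counting hits per IP
MAX_HITS = 100             # ban an IP after this many hits on the hot page
UA_OK = re.compile(r"^[\w\s./;:()+,-]{10,300}$")   # very rough "looks sane" UA check

hits = defaultdict(list)   # ip -> timestamps of recent hits on the hot page
banned = set()             # in-memory ban list; a real setup would persist this

@app.before_request
def block_abusers():
    ip = request.remote_addr
    if ip in banned:
        abort(403)

    ua = request.headers.get("User-Agent", "")
    if not UA_OK.match(ua):                     # malformed or missing UA string
        banned.add(ip)
        abort(403)

    if request.path == "/products/hot-item":    # hypothetical hammered page
        now = time.time()
        recent = [t for t in hits[ip] if now - t < WINDOW_SECONDS]
        recent.append(now)
        hits[ip] = recent
        if len(recent) > MAX_HITS:
            banned.add(ip)
            abort(403)
```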

    • I don’t really get those bots.

      Like, there are bots that are trying to scrape product info, or prices, or scan for quantity fields. But why the hell do some of these bots behave the way they do?

      Do you use Shopify by chance? With Shopify the bots could be scraping the product.json endpoint unless it's disabled in your theme. Shopify seems to expose the updated_at timestamp from the database in both its headers and its product data, so inventory quantity changes result in a timestamp change that can be used to estimate your sales (see the sketch at the end of this comment).

      There are companies that do that and sell sales numbers to competitors.

      No idea why they have inventory info on their products table; it's probably a performance optimization.

      I haven’t really done much scraping work in a while, not since before these new stupid scrapers started proliferating.
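
      A hedged sketch of the polling approach described above, assuming a store that still exposes the public /products.json listing (the per-product .json endpoint works similarly); the store URL and polling interval are made-up examples:

```python
import time

import requests

STORE = "https://example-store.myshopify.com"   # hypothetical store
INTERVAL = 15 * 60                               # poll every 15 minutes (illustrative)

def snapshot():
    """Return {product_id: updated_at} for the first page of products."""
    resp = requests.get(f"{STORE}/products.json", params={"limit": 250}, timeout=30)
    resp.raise_for_status()
    return {p["id"]: p["updated_at"] for p in resp.json()["products"]}

previous = snapshot()
while True:
    time.sleep(INTERVAL)
    current = snapshot()
    for pid, updated_at in current.items():
        if pid in previous and previous[pid] != updated_at:
            # An updated_at bump with no visible content change is the signal
            # described above: most likely an inventory (i.e. sales) change.
            print(f"product {pid} changed at {updated_at}")
    previous = current
```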

      • Negative. Our solution is completely home grown. All artisanal-like, from scratch. I can't imagine I reveal anything anyone would care about much except product specs, and our inventory and pricing really don't change very frequently.

        Even so, you'd think someone bothering to run a botnet to hound our site would distribute page loads across all of our products, right? Not just one. It's nonsensical.

      • Have you ever tried writing a scraper? I have, for offline reference material. You'll make a mistake like that a few times and notice, but there are sure to be other times you don't. I usually only want a relatively small site (say a Khan Academy lesson, which doesn't save text offline, just videos) and put in a large delay between requests, but I'll still come back after thinking I have it down and find it's thrashed something.
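
        A sketch of that kind of go-slow scraping for offline reference, not anyone's actual tool; the URLs and the 30-second delay are illustrative:

```python
import pathlib
import time

import requests

PAGES = [
    "https://example.org/lesson/1",   # placeholder URLs
    "https://example.org/lesson/2",
]
DELAY_SECONDS = 30   # stay far slower than any human clicking around

out = pathlib.Path("mirror")
out.mkdir(exist_ok=True)

for i, url in enumerate(PAGES):
    resp = requests.get(
        url,
        headers={"User-Agent": "offline-mirror (personal use)"},
        timeout=30,
    )
    resp.raise_for_status()
    (out / f"page_{i}.html").write_text(resp.text, encoding="utf-8")
    time.sleep(DELAY_SECONDS)   # the easy mistake is dropping or shrinking this
```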

    • I see the same thing, but hitting my Lemmy instance. Not much you can do other than start IP banning or geoIP banning.

  • Check out Anubis. If you have a reverse proxy it is very easy to add, and the bots stopped spamming after I added it to mine.

    • I also recommend it.

    • I recently added Anubis and its validation rate is under 40%. In other words, more than 60% of the incoming requests are likely bots and are now getting blocked. Definitely recommend.

      • I run a single server with only me and 2 others or so on it, and then saw that I had thousands of requests per minute at times! Absolutely nuts! My cloud bill was way higher. After adding Anubis it dropped down to just our requests, and the bill dropped too. Very, very strong proponent now.

    • It's interesting that anubis has worked so well for you in practice.

      What do you think of this guy's take?

      https://lock.cmpxchg8b.com/anubis.html

      • "This dance to get access is just a minor annoyance for me, but I question how it proves I'm not a bot. These steps can be trivially and cheaply automated."

        I don't think the author understands the point of Anubis. The point isn't to block bots from your site completely; bots can still get in. The point is to put up a problem at the door to the site. This problem, as the author states, is relatively trivial for the average device to solve; it's meant to be solved by a phone or any other consumer device.

        The actual protection mechanism is scale: solving the challenge is cheap once, but costly in aggregate. Bot farms aren't one single host or machine; they're thousands, tens of thousands of VMs running in clusters, constantly trying to scrape sites. To them, calculating something that trivial is simple once and very costly at scale. Say calculating the hash once takes about 5 seconds. Easy for a phone. Now say that's 1,000 scrapes of your site: that's 5,000 seconds of compute, roughly an hour and a half (see the toy sketch at the end of this comment). Now we're talking about real dollars and cents lost. Scraping does have a cost, and having worked at a company that professionally scrapes content, I can say they know this. Most companies will back off from a page that takes too long or is too intensive to load, and that is why we see the dropoff in bot attacks: it's simply not worth it for them to scrape the site anymore.

        So with Anubis they're "judging your value" by asking "are you willing to put your money where your mouth is to access this site?" For a consumer it's a fraction of a fraction of a penny in electricity for that one page load, barely noticeable. For large bot farms it's real dollars wasted on my little Lemmy instance/blog, and thankfully they've stopped caring.
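
        A toy illustration of that math, loosely modeled on the SHA-256 proof-of-work challenge Anubis issues; the difficulty setting and challenge string here are illustrative assumptions, not Anubis's actual parameters:

```python
import hashlib
import time

def solve(challenge: str, difficulty: int = 5) -> int:
    """Find a nonce so sha256(challenge + nonce) starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

start = time.time()
nonce = solve("per-visitor-challenge-string")   # placeholder challenge
elapsed = time.time() - start

# One solve is a non-event for a phone or laptop; multiply the per-solve cost by
# millions of page fetches and the scraper is paying real compute for your site.
print(f"solved with nonce {nonce} in {elapsed:.1f}s")
print(f"1,000 solves at that rate: ~{elapsed * 1000 / 60:.0f} minutes of CPU time")
```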

  • does your blog have a blackhole in it somewhere you forgot about 😄

  • I just geo-restrict my server to my country; for certain services I'll run an IP blacklist and only whitelist the few known networks (a sketch of the application-level version is below).

    Works okay I suppose, kills the need for a WAF, haven’t had any issues with it.
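
    A hedged sketch of doing the geo-restriction at the application layer (a firewall or reverse proxy is another common place for it), assuming a local GeoLite2 country database; the database path, allowed country, and whitelisted network are all placeholder assumptions:

```python
import ipaddress

import geoip2.database   # pip install geoip2
import geoip2.errors
from flask import Flask, abort, request

app = Flask(__name__)

# Database path, allowed country, and whitelisted network are placeholders.
reader = geoip2.database.Reader("/var/lib/GeoIP/GeoLite2-Country.mmdb")
ALLOWED_COUNTRY = "US"
WHITELIST = [ipaddress.ip_network("203.0.113.0/24")]   # known-good networks (example range)

@app.before_request
def geo_restrict():
    ip = ipaddress.ip_address(request.remote_addr)
    if any(ip in net for net in WHITELIST):
        return                                # explicitly whitelisted network: allow
    try:
        country = reader.country(str(ip)).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        abort(403)                            # address not in the database: reject
    if country != ALLOWED_COUNTRY:
        abort(403)
```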

  • That's insane... Can't a website owner require bots (at least those identifying themselves as such) to prove they're affiliated with a certain domain?

  • You have to grow spikes and make it painful for bots to crawl your site. It sucks, and it costs a lot of extra bandwidth for a few months, but eventually they all blacklist your site and leave you alone.
