Messing with scraper bots

(herman.bearblog.dev)

222 points | by HermanMartinus a day ago ago

77 comments

  • simondotau 17 hours ago ago

    The more things change, the more they stay the same.

    About 10-15 years ago, the scourge I was fighting was social media monitoring services, companies paid by big brands to watch sentiment across forums and other online communities. I was running a very popular and completely free (and ad-free) discussion forum in my spare time, and their scraping was irritating for two reasons. First, they were monetising my community when I wasn’t. Second, their crawlers would hit the servers as hard as they could, creating real load issues. I kept having to beg our hosting sponsor for more capacity.

    Once I figured out what was happening, I blocked their user agent. Within a week they were scraping with a generic one. I blocked their IP range; a week later they were back on a different range. So I built a filter that would pseudo-randomly[0] inject company names[1] into forum posts. Then any time I re-identified[2] their bot, I enabled that filter for their requests.

    The scraping stopped within two days and never came back.

    --

    [0] Random but deterministic based on post ID, so the injected text stayed consistent.

    [1] I collated a list of around 100 major consumer brands, plus every company name the monitoring services proudly listed as clients on their own websites.

    [2] This was back around 2009 or so, so things weren't nearly as sophisticated as they are today, both in terms of bots and anti-bot strategies. One of the most effective tools I remember deploying back then was analysis of all HTTP headers. Bots would spoof a browser UA, but almost none would get the full header set right: things like Accept-Encoding or Accept-Language were either absent, or were static strings that didn't exactly match what a real browser would send.
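
    For the curious, footnotes [0] and [1] roughly amount to something like this (a from-memory Python sketch, not the original code; the brand list and the insertion style are placeholders):

        import hashlib
        import random

        BRANDS = ["Adidas", "Coca-Cola", "Samsung", "Toyota"]  # illustrative only

        def inject_brands(post_id: int, text: str) -> str:
            # Seed a local RNG from the post ID so repeat scrapes of the
            # same post always see the same injected text.
            seed = int(hashlib.sha256(str(post_id).encode()).hexdigest(), 16)
            rng = random.Random(seed)
            words = text.split()
            for _ in range(2):  # drop in a couple of brand mentions
                words.insert(rng.randrange(len(words) + 1), rng.choice(BRANDS))
            return " ".join(words)

        # Enabled only for requests re-identified as the monitoring bot.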

    • DamnInteresting 6 hours ago ago

      I did something similar with someone who was using my site’s donation form to test huge batches of credit card numbers. I would see hundreds of attempted (and mostly declined) $1 donations start pouring in, and I’d block the IP. A little while later it would restart from another IP. When it became clear they were not giving up easily, I changed tack: instead of blocking them, I would return random success/failure messages at the same rate they were seeing success on previous attempts. I didn’t really try to charge those cards, of course.

      I like how this kind of response is very difficult for them to detect when I turn it on, and as a bonus, it pollutes their data. They stopped trying a few days after that.
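
      (A tiny Python sketch of that idea, purely illustrative: once an IP is flagged, answer with plausible fake results at roughly the success rate the card tester was already seeing, instead of blocking. The names and the 5% rate here are made up.)

          import random

          FLAGGED_IPS = {"203.0.113.7"}    # filled by whatever detection you use
          OBSERVED_SUCCESS_RATE = 0.05     # roughly what they saw on real attempts

          def donation_response(ip: str, amount: float) -> dict:
              if ip in FLAGGED_IPS:
                  # Never touch the payment processor; just emit plausible noise.
                  ok = random.random() < OBSERVED_SUCCESS_RATE
                  return {"status": "approved" if ok else "declined", "charged": False}
              return real_charge(amount)   # normal path

          def real_charge(amount: float) -> dict:
              # Placeholder for the real payment-processor call.
              return {"status": "approved", "charged": True}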

      • simondotau 5 hours ago ago

        Was it always $1? If I were the attacker, I’d surely pick a random amount. My guess is that $1 donations would be an outlier in the distribution and therefore easy to spot.

        It’s also interesting that merchants (presumably) don’t have a mechanism to flag transactions as having a >0% chance of being suspect, or to waive any dispute rights on them.

        As a merchant, it would be nice if you could demand the bank verify certain transactions with their customer. If I was a customer, I would want to know that someone tried to use my card numbers to donate to some death metal training school in the Netherlands.

    • grishka 10 hours ago ago

      Thank you very much for the observation about headers. I just looked closer at the bot traffic I'm currently receiving on my small fediverse server and noticed that it uses user agents of old Chrome versions, but also that the Accept-Language header is never set, which is indeed something no real Chromium browser would do. So I added a rule to my nginx config to return a 403 to these requests. The number of these requests per second seems to have started declining.

      • simondotau 5 hours ago ago

        The important thing is to be aware of your adversary. If it’s a big network which doesn’t care about you specifically, block away. But if it’s a motivated group interested in your site specifically, then you have to be very careful. The extreme example of the latter is yt-dlp, which continues to work despite YouTube’s best efforts.

        For those adversaries, you need to work out a careful balance between deterrence, solving problems (e.g. resource abuse), and your desire to “win”. In extreme cases your best strategy is for your filter to “work” but be broken in hard to detect ways. For example, showing all but the most valuable content. Or spiking the data with just enough rubbish to diminish its value. Or having the content indexes return delayed/stale/incomplete data.

        And whatever you do, don’t use incrementing integers. Ask me how I know.

        • grishka 5 hours ago ago

          In my particular case, I don't mind the crawling. It's a fediverse server. There is nothing secret there. All content is available via ActivityPub anyway for anyone to grab. However, these bots specifically violated both robots.txt and rel="nofollow" while hitting endpoints like "log in to like this post" pages tens of times per second. They were just wasting my server's resources for nothing.

          • simondotau 3 hours ago ago

            My base advice is to make sure you have a very efficient code path for login pages. 10 pages per second is nothing if you don’t have to perform any database queries (because you don’t have any authentication token to validate).

            Beyond that, look for how the bots are finding new URLs to probe, and don’t give them access to those lists/indexes. In particular, don’t forget about site maps. I use cloudflare rules to restrict my site map to known bots only.

            • grishka 20 minutes ago ago

              Of course. My server wasn't struggling with that. I haven't benchmarked that server, but on an M1 Max, the app can easily serve hundreds of requests per second for profile pages, which is the heaviest thing an unauthenticated user can access (I cache a lot in memory, but posts, photos, and friend lists aren't among that). It was just a mild annoyance.

              They discovered those URLs simply by parsing pages that contain like buttons. Those do have rel="nofollow" on them, and the URL pattern is disallowed in robots.txt, but I'd be surprised if that stopped someone who uses thousands of IPs to proxy their requests. I don't have a site map.

      • grishka 7 hours ago ago

        It's been a few hours. These particular bots have completely stopped. There are still some bot-looking requests in the log, with a newer-version Chrome UA on both Mac and Windows, but there aren't nearly as many of them.

        Config snippet for anyone interested:

            # Requests claiming to be Chrome...
            if ($http_user_agent ~* "Chrome/\d{2,3}\.\d+\.\d{2,}\.\d{2,}") {
              set $block 1;
            }
            # ...but missing Accept-Language, which real Chromium always sends.
            if ($http_accept_language = "") {
              set $block "${block}1";
            }
            # Reject only when both conditions match.
            if ($block = "11") {
              return 403;
            }

      • AJMaxwell 9 hours ago ago

        That's a simple and effective way to block a lot of bots, gonna implement that on my sites. Thanks!

    • tesin 12 hours ago ago

      The vast majority of bots are still failing the header test; we organically arrived at the exact same filtering in 2025. The bots followed the exact same progression too: one IP, lie about the user agent, one ASN, multiple ASNs, then lie about everything and use residential IPs, but still botch the headers.

    • thephyber 7 hours ago ago

      In the movie The Imitation Game, the Alan Turing character recognizes that acting 100% of the time gives away to the opposition that you identified them and sets off the next iteration of “cat and mouse”. He comes up with a specific percentage of the time that the Allies should sit on the intelligence and not warn their own people.

      If, instead, you only act on a percentage of requests, you can add noise in an insidious way without signaling that you caught them. It will make their job troubleshooting and crafting the next iteration much harder. Also, making the response less predictable is a good idea - throw different HTTP error codes, respond with somewhat inaccurate content, etc
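
      A rough Python sketch of that shape (the 30% rate and the specific responses are arbitrary, just to show the idea):

          import random

          ACT_RATE = 0.3   # interfere with only ~30% of detected bot requests

          def handle_bot_request(serve_real, serve_garbage):
              # serve_real / serve_garbage are callables supplied by the app.
              if random.random() > ACT_RATE:
                  return serve_real()                  # most requests look normal
              roll = random.random()
              if roll < 0.4:
                  return serve_garbage()               # subtly wrong content
              elif roll < 0.7:
                  return (503, "Service Unavailable")  # transient-looking error
              else:
                  return (429, "Too Many Requests")    # plausible rate limit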

    • wvbdmp 12 hours ago ago

      Why do the company names chase away bots? Is it just that you’re destroying their signal because they’re looking for mentions of those brands?

      • simondotau 8 hours ago ago

        It’s both a destruction of signal and an injection of noise. Imagine you worked for Adidas and you started getting a stream of notifications about your brand, and they were all nonsense. This would be an annoyance and harm the reputation of that monitoring service.

        They would have received multiple complaints about it from customers, performed an investigation, and ultimately performed a manual excision of the junk data from their system: both the raw scrapes and anywhere it had been ingested and processed. This was probably a simple operation, but it might not have been if their architecture didn’t account for this vulnerability.

      • akoboldfrying 11 hours ago ago

        I also didn't follow that part. Their step 2 seems to be a general-purpose bot detection strategy that works independently of their step 1 ("randomly mention companies").

        • SAI_Peregrinus 10 hours ago ago

          It spams the bot with false-positives. Encourages the bot admins to denylist the site to protect the bot's signal:noise ratio.

          • akoboldfrying 10 hours ago ago

            That was my first thought too -- but then why would the bot company care about a few false positives?

            I suppose it could have an impact if 30% of all, say, Coca Cola mentions on the web came from that site, but then it would have to be a very big site. I don't think the bot company would notice, let alone care, if it was 0.01% of the mentions.

            • simondotau 8 hours ago ago

              Everyone’s definition of “big” is different, but back then it was big enough to get its own little island in a far corner of XKCD 802.

              https://xkcd.com/802/

  • VladVladikoff 16 hours ago ago

    This is a fundamental misunderstanding of what those bots are requesting. They aren’t parsing those PHP files, they are using their existence for fingerprinting — they are trying to determine the existence of known vulnerabilities. They probably stop reading immediately after receiving the HTTP response code and discard the rest of the response.

    • amypetrik8 3 hours ago ago

      > They aren’t parsing those PHP files, they are using their existence for fingerprinting — they are trying to determine the existence of known vulnerabilities.

      So would the natural strategy then be to flag some vulnerability of interest? Either one typically requiring more manual effort (to waste their time), or one that is easily automated, so as to trap the bot in a honeypot, i.e. "you got in, what do you do next? Oh, upload all your kit and show how you work? Sure." See: The Cuckoo's Egg.

    • holysoles 13 hours ago ago

      You're right, something like fail2ban or crowdsec would probably be more effective here. Crowdsec has made it apparent to me how much vulnerability probing is done; it's a bit shocking for a low-traffic host.

      • ajsnigrutin 13 hours ago ago

        And you'd ban the IP, their one-day lease on the VM+IP would expire, and someone else would get the same IP on a new VM and be blocked everywhere.

        It would be more workable to ban the IP for just a few hours, to let the bot cool down for a bit and move on to the next domain.

        • holysoles 12 hours ago ago

          I was referring to the rules/patterns provided by crowdsec rather than the distribution of known "bad" IPs through their Central API.

          The default ban for traffic detected by your crowdsec instance is 4 hours, so that concern isn't very relevant in that case.

          The decisions from the Central API from other users can be quite a bit longer (I see some at ~6 days), but you also don't have to use those if you're worried about that scenario.

    • mattgreenrocks 14 hours ago ago

      It would be such a terrible thing if some LLM scrapers were using those responses to learn more about PHP, especially because of that recent paper pointing out it doesn't take that many data points to poison LLMs.

  • Kiro 17 hours ago ago

    I remember when you used to get scolded on HN for preventing scrapers or bots. "How I access your site is irrelevant".

    • grishka 11 hours ago ago

      It's different. I'm fine with someone scraping my website as a good citizen, by identifying themselves in their user-agent string and preferably respecting robots.txt. I'm not, however, fine with what I'm receiving right now: tens of requests per second to every possible URL, from random IPs, all pretending to be different old versions of Chrome.

    • hollow-moe 14 hours ago ago

      There's this and that. "How I [i.e. an individual human looking for myself] access your site is irrelevant." and "How I [i.e. an AI company DDOSing (which is illegal in some places btw) trying to maximize profit and offloading cost to you] access your site is irrelevant."

      When you get paid big bucks to make the world worse for everyone, it's really easy to forget the "little details".

    • elashri 13 hours ago ago

      As an academic, I have a side project that scrapes a couple of academic job sites in my field and then serves the results as a static HTML page. It runs via a GitHub Action and makes exactly one request every 24 hours. It is useful for me and a couple of people in my circle. I would consider this to be fine and within reasonable expectations. Many projects rely on such scenarios and people share them all the time.

      It is completely different if I am hitting it looking for WordPress vulnerabilities or scraping content every minute for LLM training material.

    • Analemma_ 13 hours ago ago

      To me that's one of the most depressing developments around AI (which is chock-full of depressing developments): its mere existence is eroding long-held ethics, not even necessarily out of a lack of commitment but out of practical necessity.

      The tech people are all turning against scraping, independent artists are now clamoring for brutal IP crackdowns and Disney-style copyright maximalism (which I never would've predicted just 5 years ago, that crowd used to be staunchly against such things), people everywhere want more attestation and elimination of anonymity now that it's effectively free to make a swarm of convincingly-human misinformation agents, etc.

      It's making people worse.

  • iam-TJ 19 hours ago ago

    This reminds me of a recent discussion about using a tarpit for A.I. and other scrapers. I've kept a tab alive with a reference to a neat tool and approach called Nepenthes that VERY SLOWLY drip feeds endless generated data into the connection. I've not had an opportunity to experiment with it as yet:

    https://zadzmo.org/code/nepenthes/

  • jcynix 20 hours ago ago

    If you control your own Apache server and just want to shortcut to "go away" instead of feeding scrapers, the RewriteEngine is your friend, for example:

          RewriteEngine On
    
          # Block requests that reference .php anywhere (path, query, or encoded)
          RewriteCond %{REQUEST_URI} (\.php|%2ephp|%2e%70%68%70) [NC,OR]
          RewriteCond %{QUERY_STRING} \.php [NC,OR]
          RewriteCond %{THE_REQUEST} \.php [NC]
          RewriteRule .* - [F,L]
    
    Notes: there's no PHP on my servers, so if someone asks for it, they are one of the "bad boys" IMHO. Your mileage may vary.

    • palsecam 16 hours ago ago

      I do something quite similar with nginx:

        # Nothing to hack around here, I’m just a teapot:
        location ~* \.(?:php|aspx?|jsp|dll|sql|bak)$ { 
            return 418; 
        }
        error_page 418 /418.html;
      
      No hard block; instead, reply to bots with the funny HTTP 418 code (https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...). That makes filtering logs easier.

      Live example: https://FreeSolitaire.win/wp-login.php (NB: /wp-login.php is WordPress login URL, and it’s commonly blindly requested by bots searching for weak WordPress installs.)

      • jcynix 15 hours ago ago

        418? Nice, I'll think about it ;-) I would, in addition, prefer that "402 Payment Required" were put to use for scrapers ...

        https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...

      • kijin 15 hours ago ago

        nginx also has "return 444", a special code that makes it drop the connection altogether. This is quite useful if you don't even want to waste any bandwidth serving an error page. You have an image on your error page, which some crappy bots will download over and over again.

        • quesera 5 hours ago ago

          Beware of nginx 444 if your webserver is behind a load balancer.

          The LB will see the unresponded requests and think your webserver is failing.

          Ideally you'd respond at the webserver and let the LB drop the response.

        • palsecam 14 hours ago ago

          Yes @ 444 (https://http.cat/status/444). That’s indeed the lightest-weight option.

          > You have an image on your error page, which some crappy bots will download over and over again.

          Most bots won’t download subresources (almost none of them do, actually). The HTML page itself is lean (475 bytes); the image is an Easter egg for humans ;-) Moreover, I use a caching CDN (Cloudflare).

        • MadnessASAP 8 hours ago ago

          Does it also tell the kernel to drop the socket? Or is a TCP FIN packet still sent?

          It'd be better if the scraper were left waiting for a packet that'll never arrive (till it times out, obviously).

  • ArcHound 20 hours ago ago

    Neat! Most of the offensive scrapers I've encountered try to exploit WordPress sites (hence the focus on PHP). They don't want to see the PHP files themselves, but their output.

    What you have here is quite close to a honeypot; sadly, I don't see an easy way to counter-abuse such bots. If the target doesn't follow their script, they move on.

    • jojobas 17 hours ago ago

      Yeah, I bet they run a regex on the output, and if there's no admin login thingie where they can run exploits or stuff credentials, they'll just skip it.

      So as for the battle of efficiency, generating 4 kB of bullshit PHP is harder than running a regex.

  • firefoxd 12 hours ago ago

    I had to revisit my strategy after posting about my zipbombs on HN [0]. My server's traffic went from tens of thousands of requests to ~100k daily, hosted on a $6 VPS. It was not sustainable.

    Now I target only the most aggressive bots with zipbombs and the rest get a 403. My new spam strategy seems to work, but I don't know if I should post it on HN again...

    [0]: https://news.ycombinator.com/item?id=43826798

  • BigBalli 13 hours ago ago

    I always had fail2ban but a while back I wanted to set up something juicier...

    .htaccess diverts suspicious paths (e.g., /.git, /wp-login) to decoy.php and forces decoy.zip downloads (10GB), so scanners hitting common “secret” files never touch real content and get stuck downloading a huge dummy archive.

    decoy.php mimics whatever sensitive file was requested by endlessly streaming fake config/log/SQL data, keeping bots busy while revealing nothing.
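
    Not the actual decoy.php, but the same trick sketched in Python: an endless generator of plausible-looking "sensitive" lines that can be trickled out as a streaming response (the templates and one-second delay are placeholders).

        import random
        import time

        FAKE_LINES = [
            "DB_PASSWORD={0}",
            "AWS_SECRET_ACCESS_KEY={0}",
            "INSERT INTO users VALUES (1, 'admin', '{0}');",
        ]

        def fake_secret_stream():
            # Yields one bogus line per second, forever; hand this to your
            # framework's streaming/chunked response to keep a scanner busy.
            while True:
                token = "%032x" % random.getrandbits(128)
                yield random.choice(FAKE_LINES).format(token) + "\n"
                time.sleep(1)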

  • vachina 16 hours ago ago

    They’re not scraping for php files, they’re probing for known vulns in popular frameworks, and then using them as entry points for pwning.

    This is done very efficiently. If you return anything unexpected, they’ll just drop you and move on.

  • s0meON3 20 hours ago ago

    • lavela 20 hours ago ago

      "Gzip only provides a compression ratio of a little over 1000: If I want a file that expands to 100 GB, I’ve got to serve a 100 MB asset. Worse, when I tried it, the bots just shrugged it off, with some even coming back for more."

      https://maurycyz.com/misc/the_cost_of_trash/#:~:text=throw%2...

      • LunaSea 18 hours ago ago

        You could try different compression methods supported by browsers like brotli.

        Otherwise you can also chain compression methods, like "Content-Encoding: gzip, gzip".
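
        Building such a nested-gzip payload is a few lines of Python, for what it's worth (sizes are arbitrary, and whether a given scraper actually honours stacked Content-Encoding values is another question):

            import gzip

            plain = b"\0" * 10_000_000      # ~10 MB of zeros once fully decoded
            inner = gzip.compress(plain)    # first gzip layer
            payload = gzip.compress(inner)  # second layer: "Content-Encoding: gzip, gzip"
            print(len(plain), len(inner), len(payload))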

      • kalkin 5 hours ago ago

        Ah cool that site's robots.txt is still broken, just like it was when it first came up on HN...

    • renegat0x0 19 hours ago ago

      Even I, who does not know much, implemented a workaround.

      I have a web crawler with both a scraping byte limit and a timeout, so zip bombs don't bother me much.

      https://github.com/rumca-js/crawler-buddy

      I think garbage blabber would be more effective.

  • aduwah 18 hours ago ago

    I wonder if the abuse bots could somehow be made to mine some crypto to offset the bills they cause.

    • boxedemp 17 hours ago ago

      You could try to get them to run JavaScript, but I'm sure many of them have countermeasures.

  • Surac 17 hours ago ago

    I have just cut out IP ranges so they cannot connect. I am blocking the USA, Asia, and the Middle East to prevent most malicious accesses.

    • breppp 17 hours ago ago

      Blocking most of the world's population is one way of reducing malicious traffic

      • gessha 17 hours ago ago

        If nobody can connect to your site, it’s perfectly secure.

      • warkdarrior 16 hours ago ago

        Make sure to block your own IP address to minimize the chance of a social engineering attack.

        • bot403 15 hours ago ago

          Include 127.0.0.1 as well just in case they get into the server.

          • testing22321 4 hours ago ago

            The server I have not built yet is my most secure one yet!

  • holysoles 13 hours ago ago

    I wrote a Traefik plugin [1] that controls traffic based on known bad-bot user agents; you can just block them, or even send them to a Markov babbler if you've set one up. I've been using nepenthes [2].

    [1] https://github.com/holysoles/bot-wrangler-traefik-plugin

    [2] https://zadzmo.org/code/nepenthes/

  • localhostinger 20 hours ago ago

    Interesting! It's nice to see people experimenting with these, and I wonder if this kind of junk-data generator will become its own product. Or maybe at least a feature/integration in existing software. I could see it going there.

    • arbol 11 hours ago ago

      They could be used by AI companies to sabotage each other's models.

  • ronsor 12 hours ago ago

    These aren't scraper bots; they're vulnerability scanners. They don't expect PHP source code and probably don't even read the response body at all.

    I don't know why people would assume these are AI/LLM scrapers seeking PHP source code on random servers(!) short of it being related to this brainless "AI is stealing all the data" nonsense that has infected the minds of many people here.

  • NoiseBert69 20 hours ago ago

    Hm... why not use small, dumbed-down, self-hosted LLMs to feed the big scrapers with bullshit?

    I'd sacrifice two CPU cores for this just to make their life awful.

    • Findecanor 18 hours ago ago

      You don't need an LLM for that. There is a link in the article to an approach using Markov chains built from real-world books, but then you'd let the scrapers' LLMs reinforce their training on those books rather than on random garbage.

      I would make a list of words from each word class, and a list of sentence structures where each item is a word class. Pick a pseudo-random sentence; for each word class in the sentence, pick a pseudo-random word; output; repeat. That should be pretty simple and fast.
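
      Something like this, say (a quick Python sketch; the word lists and sentence templates are toy-sized placeholders):

          import random

          WORDS = {
              "NOUN": ["server", "teapot", "walrus", "ledger", "cloud"],
              "VERB": ["compiles", "devours", "indexes", "refactors", "hums"],
              "ADJ":  ["purple", "recursive", "damp", "obsolete", "shiny"],
          }
          TEMPLATES = [
              ("ADJ", "NOUN", "VERB", "ADJ", "NOUN"),
              ("NOUN", "VERB", "ADJ", "NOUN"),
          ]

          def babble(seed: int, sentences: int = 5) -> str:
              rng = random.Random(seed)  # deterministic per page/seed if desired
              out = []
              for _ in range(sentences):
                  tpl = rng.choice(TEMPLATES)
                  out.append(" ".join(rng.choice(WORDS[c]) for c in tpl).capitalize() + ".")
              return " ".join(out)

          print(babble(42))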

      I'd think the most important thing though is to add delays to serving the requests. The purpose is to slow the scrapers down, not to induce demand on your garbage well.

    • mnau 5 hours ago ago

      He addresses that. Basically, there are gatekeepers and if you get on the wrong side of them, only manual intervention can save you. And we all know how Google loves providing a human to resolve problems.

      > I came to the conclusion that running this can be risky for your website. The main risk is that despite correctly using robots.txt, nofollow, and noindex rules, there's still a chance that Googlebot or other search engines scrapers will scrape the wrong endpoint and determine you're spamming.

    • qezz 18 hours ago ago

      That's very expensive.

  • re-lre-l 19 hours ago ago

    Don’t get me wrong, but what’s the problem with scrapers? People invest in SEO to become more visible, yet at the same time they fight against “scraper bots.” I’ve always thought the whole point of publicly available information is to be visible. If you want to make money, just put it behind a paywall. Isn’t that the idea?

    • georgefrowny 19 hours ago ago

      There's a difference between putting information online for your customers or even for people in general (e.g. as a hobby) and working in concert with scraping for greater visibility via search, versus giving that work away (or at a cost) to companies who at best don't care and at worst are competition, see themselves as replacing you, or are otherwise adversarial.

      The line is between "I am technically able to do this" and "I am engaging with a system in good faith".

      Public parks are just there, and I can technically drive up and dump rubbish in them; if they didn't want me to, they should have installed a gate and sold tickets.

      Many scrapers these days are sort of equivalent in that analogy to people starting entire fleets of waste disposal vehicles that all drive to parks to unload, putting strain on park operations and making the parks a less tenable service in general.

      • akoboldfrying 11 hours ago ago

        > The line is "I technically and able to do this" and "I am engaging with a system in good faith".

        This is where the line should be, always. But in practice this criterion is applied very selectively here on HN and elsewhere.

        After all: What is ad blocking, other than direct subversion of the site owner's clear intention to make money from the viewer's attention?

        Applying your criterion here gives a very simple conclusion: If you don't want to watch the ads, don't visit the site.

        Right?

        • akoboldfrying 6 hours ago ago

          I see downvotes, but no counterarguments.

          Does anyone have a counterargument?

          • ryantgtg 22 minutes ago ago

            I think the counterargument is that a while ago ads became super annoying. They move, they grow in size, they feature nsfw things, they have weird js that annoys you when you try to leave. Perhaps some of this has toned down in recent years, but the damage is done. The ads are not good actors. It’s not as black and white as subverting or not subverting the will of the site owner.

    • nrhrjrjrjtntbt 19 hours ago ago

      The old scrapers indexed your site so you may get traffic. This benefits you.

      AI scrapers will plagiarise your work and bring you zero traffic.

      • ProofHouse 18 hours ago ago

        Ya make sure you hold dear that grain of sand on a beach of pre-training data that is used to slightly adjust some embedding weights

        • jcynix 17 hours ago ago

          Sand is the world's second most-used natural resource, and sand usable for concrete is even being illegally removed all over the world these days.

          So to continue your analogy, I made my part of the beach accessible for visitors to enjoy, but certain people think they can carry it away for their own purpose ...

        • boxedemp 17 hours ago ago

          One Reddit post can get an LLM to recommend putting glue on your pizza. But the takeaway here is to cheese the bots.

        • throwawa14223 15 hours ago ago

          I have no reason to help the richest companies on earth adjust weights at a cost to myself.

        • exe34 17 hours ago ago

          that grain of sand used to bring traffic, now it doesn't. it's pretty much an economic catastrophe for those who relied on it. and it's not free to provide the data to those who will replace you - they abuse your servers while doing it.

    • saltysalt 17 hours ago ago

      You are correct, and the hard reality is that content producers don't get to pick and choose who gets to index their public content because the bad bots don't play by the rules of robots.txt or user-agent strings. In my experience, bad bots do everything they can to identify as regular users: fake IPs, fake agent strings...so it's hard to sort them from regular traffic.

    • Dilettante_ 18 hours ago ago

      Did you read TFA?

      These scrapers drown people's servers in requests, taking up literally all the resources and driving up cost.