Browser extensions turn nearly 1M browsers into website-scraping bots

(arstechnica.com)

25 points | by chha 12 hours ago

16 comments

  • paulryanrogers 11 hours ago

    Extensions and VPNs have been doing this for years; it's not a secret. Where I worked, we paid a proxy/scraping company that also offered "stealth" scraping using residential IPs. They sourced those IPs through techniques like these extensions.

    The Chrome Web Store changed its policy years ago to prohibit these, with the rationale that an extension should have a single purpose. Apparently its scanning tools aren't enforcing the policy strictly enough.
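
    Mechanically it's simple: the extension's background script keeps a connection open to the vendor's command server and fetches whatever URLs it's told to, from the user's own IP. A rough sketch, with the endpoint and message shapes made up for illustration:

        // Hypothetical command server and message format; the point is that
        // the fetch originates from the user's residential IP and a real
        // browser, so the target site sees ordinary-looking traffic.
        type FetchJob = { id: string; url: string };

        const ws = new WebSocket("wss://proxy-vendor.example/jobs");

        ws.onmessage = async (event) => {
          const job: FetchJob = JSON.parse(event.data);
          try {
            const resp = await fetch(job.url, { credentials: "omit" });
            ws.send(JSON.stringify({ id: job.id, status: resp.status, body: await resp.text() }));
          } catch (err) {
            ws.send(JSON.stringify({ id: job.id, error: String(err) }));
          }
        };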

    • mmsc 10 hours ago

      Indeed, it's not a secret, and it's not just extensions and VPNs but everything you could imagine. Plenty of applications openly do this while advertising themselves as ways to "make money from your unused internet bandwidth."

      This type of software is bundled into system executables as well. Just like the "free antivirus and browser toolbars" of yesteryear, this is the new bundled software.

      If a company has an "internal network" (lol) whose security can be described as Swiss cheese, then this stuff is a massive gap in it.

    • josephg 10 hours ago

      > Extensions and VPNs have been doing this for years; it's not a secret.

      It's not a secret in the industry, but I'd bet money that most of your users have no idea this is happening. They almost certainly wouldn't install those web extensions if this information were widely known.

      As a rule of thumb, if you need to do something in secret to get away with it, it's probably not ethical.

      • paulryanrogers 7 hours ago

        It's supposed to be in the terms of service. Otherwise it is indeed fraud/abuse. Though I'd agree that most users don't read the fine print.

  • riedel 9 hours ago

    I wonder why nothing like F-Droid ever took off for browser extensions. Even though tons of extensions are open source, the standard distribution format is a zip file with unknown contents. And browser vendors never lived up to their promise to check even the most basic things. The whole manifest mess, too, is more a means of securing ad revenue than of protecting users.
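
    At minimum you'd want tooling that actually opens the package and shows what's inside. A rough sketch of that audit step, using the adm-zip npm package (a .xpi is a plain zip; a CRX3 .crx prepends a "Cr24" magic, a version, and a length-prefixed header before the zip):

        import AdmZip from "adm-zip";
        import { readFileSync } from "node:fs";

        // Open a .crx/.xpi and list its contents plus requested permissions.
        function extensionZip(path: string): AdmZip {
          const buf = readFileSync(path);
          if (buf.subarray(0, 4).toString("ascii") === "Cr24") {
            // CRX3: magic, uint32 version, uint32 header length, header, zip.
            const headerLen = buf.readUInt32LE(8);
            return new AdmZip(buf.subarray(12 + headerLen));
          }
          return new AdmZip(buf); // .xpi: just a zip
        }

        const zip = extensionZip(process.argv[2]);
        const manifest = JSON.parse(zip.readAsText("manifest.json"));
        console.log("permissions:", manifest.permissions);
        for (const e of zip.getEntries()) console.log(e.entryName);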

    • mdaniel 7 hours ago

      I can think of 2 pragmatic reasons:

      1. If one wished to use .xpi/.crx directly (akin to F-Droid's install pathway), the user would have to teach the browser to trust their signatures. F-Droid doesn't suffer from this because each .apk is self-trusting: it is signed, and that signature conveys lineage (v1.0 is owned by the same publisher as v1.1, so it's safe to upgrade), but the operating system doesn't have to be told about any chain of custody for the .apk's cert.

      2. I am not aware of any self-hostable extension registry, even from Mozilla, and extra lol for Chromium. If such a thing existed, the browser would have to allow the user to add "trusted extension registries" (along with their trusted CA chains). It would actually be snazzy if they went the Helm/Homebrew route and just leveraged OCI distribution (aka a Docker registry) for this, since it would open up almost unlimited self-hosting options, including publishing straight from GitHub Actions to ghcr.io; a sketch of the pull side follows below.
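
      Something like this, using only the standard OCI registry HTTP API against ghcr.io; the repository name and the convention that the first layer is the extension file are invented for illustration:

          const repo = "someuser/my-extension"; // hypothetical
          const ref = "1.0.0";

          // ghcr.io issues anonymous pull tokens per repository.
          const { token } = await (await fetch(
            `https://ghcr.io/token?service=ghcr.io&scope=repository:${repo}:pull`,
          )).json();

          const manifest = await (await fetch(
            `https://ghcr.io/v2/${repo}/manifests/${ref}`,
            { headers: {
                Authorization: `Bearer ${token}`,
                Accept: "application/vnd.oci.image.manifest.v1+json",
              } },
          )).json();

          // Download the first layer blob: the signed .xpi/.crx itself,
          // by this sketch's (assumed) publishing convention.
          const layer = manifest.layers[0];
          await fetch(`https://ghcr.io/v2/${repo}/blobs/${layer.digest}`, {
            headers: { Authorization: `Bearer ${token}` },
          });
          console.log(`pulled ${layer.size} bytes of ${layer.mediaType}`);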

      • riedel 6 hours ago

        IMHO it would be rather easy to overcome this by forking. I've been using forks like LibreWolf, Betterbird, and recently Zen for Mozilla stuff anyway, because of all the telemetry (I guess you needn't worry about malware if the browser already ships with so many trackers).

  • mdaniel 7 hours ago

    I'm shocked that command-f "honey" didn't return any hits

  • xnx 10 hours ago

    I'd be OK with an open reciprocal crawling network for non-personal/private pages as it would be a distributed force against walled gardens.

    I'm very much against this being done surreptitiously/deceptively, and on private content (emails, chats, etc.).

    • mdaniel 7 hours ago

      I ran an extension that automatically submitted pages to the Internet Archive as I browsed, but managing the allowlist/denylist turned into a major hassle. I eventually just installed the extension into a "public browsing" profile, but, as is often the case, that turned into "I don't feel like switching to that profile" and it fell by the wayside.
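
      The core of such an extension is tiny; a rough sketch using the WebExtension tabs API and the Wayback Machine's public Save Page Now endpoint (the denylist here is purely illustrative):

          // Submit every completed page load to the Wayback Machine,
          // gated by a denylist (the hard part, as noted above).
          const denylist = [/mail\./, /bank/, /localhost/]; // illustrative

          browser.tabs.onUpdated.addListener((_tabId, changeInfo, tab) => {
            if (changeInfo.status !== "complete") return;
            if (!tab.url?.startsWith("http")) return;
            if (denylist.some((re) => re.test(tab.url!))) return;
            // Fire-and-forget: the archive fetches the page itself.
            fetch(`https://web.archive.org/save/${tab.url}`)
              .catch(() => { /* archiving is best-effort */ });
          });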

      But, in the same vein as your comment, I have long wished for Common Crawl to really lean into their mission: not just publish monthly snapshots of whatever their bots can see, but do what you said and accept .har or .warc files from anyone, then serve the ... hourly? ... .warc via BitTorrent.
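
      The contribution format itself is simple; a rough sketch of emitting one WARC 1.0 response record (CRLF-delimited headers, a blank line, the captured HTTP bytes, then two trailing CRLFs):

          import { randomUUID } from "node:crypto";

          // Wrap a captured HTTP response as a single WARC record.
          function warcResponse(targetUri: string, httpPayload: Buffer): Buffer {
            const headers = [
              "WARC/1.0",
              "WARC-Type: response",
              `WARC-Record-ID: <urn:uuid:${randomUUID()}>`,
              `WARC-Date: ${new Date().toISOString().replace(/\.\d+Z$/, "Z")}`,
              `WARC-Target-URI: ${targetUri}`,
              "Content-Type: application/http;msgtype=response",
              `Content-Length: ${httpPayload.length}`,
            ].join("\r\n");
            return Buffer.concat([
              Buffer.from(headers + "\r\n\r\n"),
              httpPayload,
              Buffer.from("\r\n\r\n"),
            ]);
          }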

  • nerdjon 11 hours ago

    I have to wonder: how long until browsers just do this natively?

    It gets around the AI blockers that Cloudflare is pushing, with the added benefit of seeing information that a crawler would never see.

    Just hide it behind an "AI browser" that sends everything your browser sees to the cloud for processing anyway...

    Throw in some vague "privacy" promise for good measure.

    (I realize this is sneakier, doing stuff in the background, but my question remains.)

    • Cthulhu_ 10 hours ago

      This may already be happening to a point. I forget what it's called, but in Chrome you can opt in to sharing analytics data, which Google's PageSpeed Insights tooling and/or Lighthouse uses to measure your site's performance across a wide range of devices and internet connections.
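
      I believe this feeds the Chrome User Experience Report (CrUX), whose aggregates anyone can query; a rough sketch, assuming an API key in CRUX_API_KEY:

          // Ask CrUX for the p75 Largest Contentful Paint of an origin.
          const res = await fetch(
            "https://chromeuxreport.googleapis.com/v1/records:queryRecord" +
              `?key=${process.env.CRUX_API_KEY}`,
            {
              method: "POST",
              headers: { "Content-Type": "application/json" },
              body: JSON.stringify({
                origin: "https://example.com",
                metrics: ["largest_contentful_paint"],
              }),
            },
          );
          const { record } = await res.json();
          console.log("p75 LCP (ms):",
            record.metrics.largest_contentful_paint.percentiles.p75);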

  • consumer451 10 hours ago

    [flagged]
