• Lv_InSaNe_vL@lemmy.world · 6 hours ago

    I honestly don’t really see the problem here. This seems to mostly be targeting scrapers.

    For unauthenticated users you are limited to public data only and 60 requests per hour, or 30k if you’re using Git LFS. And for authenticated users it’s 60k/hr.

    What could you possibly be doing besides scraping that would hit those limits?
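
    If you want to see where you actually stand against those limits, GitHub exposes a /rate_limit endpoint you can query. A minimal stdlib-Python sketch (the GITHUB_TOKEN env var name is just my placeholder; pass nothing to see the unauthenticated quota):

    ```python
    import json
    import os
    import urllib.request

    def check_rate_limit(token=None):
        """Query GitHub's /rate_limit endpoint (this call reportedly doesn't count against the quota)."""
        req = urllib.request.Request("https://api.github.com/rate_limit")
        req.add_header("Accept", "application/vnd.github+json")
        if token:
            req.add_header("Authorization", f"Bearer {token}")
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        limits = check_rate_limit(os.environ.get("GITHUB_TOKEN"))  # token is optional
        core = limits["resources"]["core"]
        print(f"core API: {core['remaining']} of {core['limit']} requests left")
    ```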

    • chaospatterns@lemmy.world (OP) · 5 hours ago

      You might be behind a shared IP with NAT or CG-NAT that shares the limit with others. Or you might be fetching files from raw.githubusercontent.com as part of an update system that doesn’t have access to browser credentials, or Git cloning over https:// to avoid having to unlock your SSH key every time, or cloning a repo whose submodules each issue their own requests. An hour is a long time. Imagine you let uBlock Origin update its filter lists, then you git clone something with a few submodules, and so does your coworker, and now you’re blocked for an entire hour.
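
      For the update-system case, about the best a client on a shared IP can do is detect the limit and back off rather than fail. A rough sketch, assuming the limit surfaces as an HTTP 403/429 with a Retry-After or X-RateLimit-Reset header (I haven’t verified exactly how raw.githubusercontent.com responds), with a made-up example URL:

      ```python
      import time
      import urllib.error
      import urllib.request

      # Example URL only; substitute whatever file your updater actually pulls.
      RAW_URL = "https://raw.githubusercontent.com/owner/repo/main/filterlist.txt"

      def fetch_with_backoff(url, attempts=3):
          """Fetch a raw file, sleeping through suspected rate-limit responses."""
          for attempt in range(attempts):
              try:
                  with urllib.request.urlopen(url) as resp:
                      return resp.read()
              except urllib.error.HTTPError as err:
                  if err.code not in (403, 429):
                      raise  # a real error, not a rate limit
                  # Prefer Retry-After (seconds); fall back to X-RateLimit-Reset (epoch time).
                  retry_after = err.headers.get("Retry-After")
                  reset = err.headers.get("X-RateLimit-Reset")
                  if retry_after:
                      wait = int(retry_after)
                  elif reset:
                      wait = max(0, int(reset) - int(time.time()))
                  else:
                      wait = 60 * (attempt + 1)
                  time.sleep(wait)
          raise RuntimeError("still rate-limited after retries")

      if __name__ == "__main__":
          print(f"fetched {len(fetch_with_backoff(RAW_URL))} bytes")
      ```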

    • Disregard3145@lemmy.world · 3 hours ago

      I hit those limits many times while signed out, just scrolling through the code. The front end must be sending off tonnes of background requests.