• vegetaaaaaaa@lemmy.world (OP) · +61 · 2 days ago

    I have copied the latest git revision c67b943aa894b90103c4752ac430958886b996b2 from https://gitlab.tt-rss.org/tt-rss/tt-rss to my Gitea instance, which is mirrored to https://gitlab.com/nodiscc/tt-rss and https://github.com/nodiscc/tt-rss (roughly the kind of mirroring sketched below).

    I don’t intend to make changes or bugfixes (it’s working fine), but I will try to keep it compatible with the PHP version in Debian stable, since I’ve been using it for years and would really like to keep doing so.
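
    For anyone wanting to do the same, here is a minimal sketch of that kind of mirroring, shelling out to git from Python. The Gitea remote URL is a hypothetical placeholder (not the OP's actual setup), and Gitea's built-in repository mirroring would work just as well:

    ```python
    # Sketch: take a complete bare copy of the upstream repo and push all refs
    # to one or more mirrors. Assumes git is installed and the remotes exist.
    import subprocess

    SOURCE = "https://gitlab.tt-rss.org/tt-rss/tt-rss"    # upstream
    MIRRORS = [
        "git@gitea.example.org:me/tt-rss.git",            # hypothetical Gitea remote
        "git@gitlab.com:nodiscc/tt-rss.git",
        "git@github.com:nodiscc/tt-rss.git",
    ]

    def git(*args, cwd=None):
        """Run a git command and raise if it fails."""
        subprocess.run(["git", *args], cwd=cwd, check=True)

    # 1. Bare mirror clone: fetches every branch and tag.
    git("clone", "--mirror", SOURCE, "tt-rss.git")

    # 2. Push the full set of refs to each mirror.
    for url in MIRRORS:
        git("push", "--mirror", url, cwd="tt-rss.git")
    ```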

  • clb92@feddit.dk · +20 · 2 days ago

    The loss of Google Reader is basically what taught me not to get too attached to services I can’t host myself. I’m hosting an older version of TT-RSS (due to migration issues to newer versions), and will continue with that until it no longer works for me, and then I will probably move on to CommaFeed. I’ve already tested all the commonly self hosted RSS readers out there, and that’s the one that fits my needs best, other than TT-RSS.

  • Swedneck@discuss.tchncs.de · +25/−11 · 2 days ago

    I don’t get why people use web services for RSS; it can be done completely client-side. That’s… kind of the whole point of RSS…
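
    For illustration, a bare-bones client-side pull really is just this (Python standard library only; the feed URL is an arbitrary placeholder):

    ```python
    # Minimal client-side RSS fetch: no server component, just pull and print.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.org/feed.xml"   # placeholder RSS 2.0 feed

    with urllib.request.urlopen(FEED_URL, timeout=10) as resp:
        tree = ET.parse(resp)

    # RSS 2.0 puts entries under channel/item; Atom feeds use different tags.
    for item in tree.getroot().iterfind("./channel/item"):
        print(item.findtext("title", default="(no title)"))
        print("  " + item.findtext("link", default=""))
    ```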

    • daniskarma@lemmy.dbzer0.com · +6 · 1 day ago

      You could want to have multiple clients in sync.

      Also, a web service can fetch feeds 24/7 and run classification algorithms before serving the results to a client that only connects a few times a day.
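
      A toy sketch of that idea (not how TT-RSS or FreshRSS actually work internally): a background process polls feeds around the clock, tags new entries with a naive keyword classifier, and keeps everything, including read/unread state, in one SQLite database, so every client that connects later sees the same pre-classified backlog. The feed URL and keywords are placeholders.

      ```python
      # Toy server-side aggregator: poll feeds 24/7, classify, keep shared state.
      # Placeholder feeds/keywords; parsing is simplified to RSS 2.0 only.
      import sqlite3
      import time
      import urllib.request
      import xml.etree.ElementTree as ET

      FEEDS = ["https://example.org/feed.xml"]                 # placeholder feed list
      KEYWORDS = {"selfhosting": ["rss", "server"], "ai": ["llm", "scraper"]}
      POLL_SECONDS = 15 * 60

      db = sqlite3.connect("reader.db")
      db.execute("""CREATE TABLE IF NOT EXISTS entries (
          link TEXT PRIMARY KEY, title TEXT, category TEXT, unread INTEGER DEFAULT 1)""")

      def classify(title: str) -> str:
          """Naive keyword classifier; a real service could do something smarter."""
          lowered = title.lower()
          for category, words in KEYWORDS.items():
              if any(w in lowered for w in words):
                  return category
          return "uncategorized"

      def poll_once() -> None:
          for url in FEEDS:
              with urllib.request.urlopen(url, timeout=10) as resp:
                  tree = ET.parse(resp)
              for item in tree.getroot().iterfind("./channel/item"):
                  title = item.findtext("title", default="")
                  link = item.findtext("link", default="")
                  # INSERT OR IGNORE keeps already-seen entries (and their read state).
                  db.execute(
                      "INSERT OR IGNORE INTO entries (link, title, category) VALUES (?, ?, ?)",
                      (link, title, classify(title)))
          db.commit()

      if __name__ == "__main__":
          # Clients only query `entries` (and flip `unread`) whenever they connect;
          # the fetching and classification has already happened here.
          while True:
              poll_once()
              time.sleep(POLL_SECONDS)
      ```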

    • fodor@lemmy.zip · +18/−1 · 1 day ago

      No, it isn’t the whole point. The point is to curate our own news. And a separate question is how to browse the results. If you use two devices, you might want a server side solution. Maybe. There are many reasonable setups.

    • John Colagioia@lemmy.sdf.org · +4 · 2 days ago

      In my case (not necessarily your case, of course), the cheapest selling point has become that I already have a browser open for almost everything else, so that’s one less thing to install and check in on. But it also makes it easier to keep my reading up to date when individual computers have problems, and it usually has a nicer API for scripting, if you need that sort of thing.
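
      On the scripting point, server-side readers usually expose a JSON API you can poke from a small script. Here is a rough sketch against TT-RSS; the /api/ endpoint, the operation names, and the special feed_id -4 ("all articles") are recalled from its API docs, so double-check them, and the URL and credentials are placeholders:

      ```python
      # Rough sketch of scripting against the TT-RSS JSON API (verify the
      # endpoint and operation names against your install's API documentation).
      import json
      import urllib.request

      API_URL = "https://rss.example.org/tt-rss/api/"   # hypothetical instance
      USER, PASSWORD = "me", "secret"                   # placeholders

      def call(payload: dict) -> dict:
          req = urllib.request.Request(
              API_URL,
              data=json.dumps(payload).encode(),
              headers={"Content-Type": "application/json"})
          with urllib.request.urlopen(req, timeout=10) as resp:
              return json.load(resp)["content"]

      # Log in once, then reuse the returned session id for every other operation.
      sid = call({"op": "login", "user": USER, "password": PASSWORD})["session_id"]

      # List unread headlines; feed_id -4 should mean "all articles" in TT-RSS.
      for headline in call({"op": "getHeadlines", "sid": sid,
                            "feed_id": -4, "view_mode": "unread", "limit": 20}):
          print(headline["title"])
      ```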

    • Jason2357@lemmy.ca · +2 · 24 hours ago

      Yeah, it’s great, fast, works with lots of local clients, and has lots of plug-ins for whatever esoteric need you might have. I can fly through the day’s articles very quickly with a handful of key presses.

    • cyberwolfie@lemmy.ml · +5 · 2 days ago

      I like FreshRSS - I also have some readers that connect to my instance, like FluentReader, which provides a better full-article view, but I mostly use FreshRSS directly these days.

      • tehWrapper@lemmy.world · +2 · 2 days ago

        Looks like it supports a wide range of readers via two different APIs.

        FreshRSS supports access from mobile / native apps for Linux, Android, iOS, Windows and macOS, via two distinct APIs: Google Reader API (best), and Fever API (limited features, less efficient, less safe).
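
        As a rough illustration of the simpler of the two, the Fever-style API boils down to a single authenticated POST endpoint. The /api/fever.php path, the md5 api_key scheme, and the field names below are recalled from the Fever spec that FreshRSS implements, so verify them against the FreshRSS docs; the host and credentials are placeholders:

        ```python
        # Sketch of a Fever-style API call as exposed by FreshRSS (check the
        # FreshRSS documentation before relying on the exact paths and fields).
        import hashlib
        import json
        import urllib.parse
        import urllib.request

        BASE = "https://rss.example.org/api/fever.php"    # hypothetical FreshRSS install
        USER, API_PASSWORD = "me", "secret"               # the per-user API password

        # Fever authenticates with a single api_key = md5("user:password") POST field.
        api_key = hashlib.md5(f"{USER}:{API_PASSWORD}".encode()).hexdigest()

        def fever(query: str) -> dict:
            data = urllib.parse.urlencode({"api_key": api_key}).encode()
            with urllib.request.urlopen(f"{BASE}?api&{query}", data=data, timeout=10) as resp:
                return json.load(resp)

        payload = fever("items")                          # fetch a batch of items
        for item in payload.get("items", []):
            print(item["title"], item["url"])
        ```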

    • refract@lemmy.zip · +3 · 2 days ago

      If you already have a Nextcloud instance, I can recommend the “app” called News. There is an official Android app that works well.

    • clb92@feddit.dk · +2 · 2 days ago

      I didn’t like it, as it didn’t have the exact full article view mode I desired, but lots of people like it.

    • tehWrapper@lemmy.world · +7 · 2 days ago

      Kinda hope someone else picks up the work, ’cause I have been using TT-RSS for well over 15 years.

  • Jerkface (any/all)@lemmy.ca · +8/−4 · 2 days ago

    “I no longer find it fun to maintain public-facing anything”

    I think the kids would say: “Mood.”

  • poVoq@slrpnk.net · +2/−7 · 2 days ago

    The post is a bit low on details, but I strongly suspect this is a victim of AI scraping.

    • daniskarma@lemmy.dbzer0.com · +1 · 1 day ago

      It really doesn’t seem like that’s the case. It doesn’t even make much sense. What do you think was being AI-scraped? The source code?

      • poVoq@slrpnk.net · +1/−2 · 1 day ago

        It makes a lot of sense. Both the git repos they hosted and things like an RSS feed reader are prime targets for AI scrapers, and at the same time they are quite database-query-heavy on the backend, so the scraping has a real impact on the cost of running these services.

        And yes, source code is among the most heavily targeted data for AI scrapers to ingest, mainly to train coding assistants, but apparently it also helps LLMs understand logic better.

        • daniskarma@lemmy.dbzer0.com · +1 · 1 day ago

          First, the source code is on GitHub.

          Second, RSS aggregators are self-hostable, not a service provided by the dev. The dev would have no issues if a public instance of TT-RSS hosted by someone else got scraped.

          Third, RSS aggregators don’t really tend to be public-facing. Due to their personal nature they aren’t usually open; they are account-based.

          Sorry, I really don’t see the case here.

          • poVoq@slrpnk.net · +1/−2 · 1 day ago

            What? They explicitly talk about shutting down their self-hosted infrastructure, which includes two git services and other targets of AI scraping. Did you even read the post?

            • daniskarma@lemmy.dbzer0.com · +1 · 1 day ago

              They are closing the whole project.

              Specifically, they say that they are tired of pushing fixes and that they no longer find excitement in maintaining the project, with zero mention at all of being scraped or of any kind of AI-related issue.

              I don’t know if you knew the project before seeing this post. I did; I was deciding between this and FreshRSS and chose FreshRSS specifically because I knew the end of TT-RSS was close (this was like 2 years ago). There were plenty of signs that development was winding down and the project was en route to being abandoned.

              • poVoq@slrpnk.net · +1/−2 · 1 day ago

                No, they are shutting down their publicly hosted infrastructure and say that the project is “finished” anyway, so it doesn’t matter that much as a justification. But the main point of the post is the public-facing infrastructure and how they lost the motivation to run it.

    • fodor@lemmy.zip · +3 · 1 day ago

      Well, they’ve also been maintaining the software since 2005. They said why they’re closing shop, so why not take their words at face value? They have no obvious reason to lie.

      Many of us have started and maintained projects and then moved on when our lives changed. That is just normal.

      • poVoq@slrpnk.net · +1/−3 · 1 day ago

        Yes, and the reason they state sounds a lot like AI scraping made hosting public services such a PITA that they lost the motivation to continue doing it. Lots of long-running projects that used to require very little maintenance are now effectively DDoSed by these scrapers.