• Our brains are made to see patterns. Trump is unpopular. I’ve certainly heard ill wishes upon him before. Victimhood is a useful currency in our society. We demonize each other. Tribalism mixed with a poor understanding of what’s going on.

  • Author missed one aspect. Even if AI is one day reliable, it’ll most likely be owned by a few companies. What if those companies decide to cut you off?

      • The useful ones are still provided by big companies because the rest of us can’t afford the hardware to train them.

        AI won’t be “democratized” anytime soon the way the rest of the software world has been.

        • The useful ones are still provided by big companies because the rest of us can’t afford the hardware to train them.

          We have computing power in our pockets a million times more powerful than what we used to send men to the moon; why do you think we’ll never have enough power?

          I have already pointed out https://eurollm.io/

          The EuroLLM project includes Instituto Superior Técnico, the University of Edinburgh, Instituto de Telecomunicações, Université Paris-Saclay, Unbabel, Sorbonne University, Naver Labs, and the University of Amsterdam. Together they created EuroLLM-22B, a multilingual AI model supporting all 24 official EU languages. Developed with support from Horizon Europe, the European Research Council, and EuroHPC, this open-source LLM aims to enhance Europe’s digital sovereignty and foster AI innovation. Trained on the MareNostrum 5 supercomputer, EuroLLM outperforms similar-sized models. It is fully open source and available via Hugging Face.

          So long as there are people who don’t want to rely on big tech, there will be people pushing for independence, just like Linux users such as myself.

          • We have computing power in our pockets a million times more powerful than what we used to send men to the moon; why do you think we’ll never have enough power?

            Not the person you replied to, but I have thoughts on this point in particular:

            1. Performance improvements in consumer devices have started to slow because we’re bumping up against the limits of physics.
            2. People/corporations with way more money than the average consumer will always be able to run something orders of magnitude more powerful. Any advances that improve things for the average consumer will improve things for rich people/corporations even more.
            3. Training an LLM isn’t really even about compute speed, it’s about access to good training material. The average consumer can’t afford to buy (or pirate) every book in existence like a rich person/corporation can. An average person doesn’t have the ability or time to curate their own training data, but rich people/corporations do.
      • So how would I create such an “Open Source” model? They don’t share the data used to create them, do they? Let’s not even get started on how much computing power I would need to train one of those things. These self-hosted models solve nothing except some data privacy issues. Sure, you no longer send all your code to a shady AI company, but you are still 100% dependent on them sharing their models.

        • So how would I create such an “Open Source” model? They don’t share the data used to create them, do they?

          No, and going by the OSI definition of “open source AI” they don’t have to; the definition acknowledges that the training material is often copyrighted and can’t be shared.

          It’s a strange definition of “open source”, one where you’re not actually allowed to see the source.

        • Do you build your own Linux from scratch? If not, why would you assume you can build an LLM from scratch?

          • It’s mad easy to build your own Linux from scratch in comparison to building an LLM. You can have your own distro running in like an hour. With Buildroot you can have it in even less than that.

            • Because the average person is not building Linux from scratch, nor would they know how to.

      • For which you still need massive amounts of memory and compute to run reliably. That, and the fact that chatbots and agents nowadays rely on all sorts of proprietary customizations, outside of the realm of LLMs, to perform certain tasks.

        The gap will take decades to close, if it ever does.

        • For which you still need massive amounts of memory and compute to run reliably

          2026’s average gaming PC is “massive amounts of memory and compute”, apparently.

          The gap will take decades to close, if it ever does.

          lol there are plenty of open source models in the top 100 with multiple SOTA models released in the last few months alone

          There are also smaller LLMs being made, like https://eurollm.io/, which excel in their own ways.

          That, and the fact that chatbots and agents nowadays rely on all sorts of proprietary customizations

          Funny, this just came up: https://discourse.ubuntu.com/t/the-future-of-ai-in-ubuntu/81130?=0

          Previously, to benefit from the full power of LLMs, you had to skew to higher-parameter models. Recent developments in models like Gemma 4 and Qwen-3.6-35B-A3B demonstrate advanced capabilities such as tool-calling, which enables LLMs to search the web, interact with external APIs and file systems, troubleshoot live systems and fundamentally reason about topics that lie outside of their initial training data.
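
          For what it’s worth, tool-calling is less magic than it sounds: the model emits a structured function call, your client executes it, and the result is fed back. Here’s a minimal sketch in Python, assuming a local OpenAI-compatible server like the one LM Studio exposes (the endpoint, model name and “read_file” tool are all assumptions for illustration):

          ```python
          # Minimal tool-calling loop against a local OpenAI-compatible server.
          # Assumed: an LM Studio-style endpoint on localhost:1234 and a loaded
          # model that supports tool calls; the read_file tool is hypothetical.
          import json
          import requests

          URL = "http://localhost:1234/v1/chat/completions"  # assumed endpoint

          tools = [{
              "type": "function",
              "function": {
                  "name": "read_file",
                  "description": "Read a text file from the local file system.",
                  "parameters": {
                      "type": "object",
                      "properties": {"path": {"type": "string"}},
                      "required": ["path"],
                  },
              },
          }]

          messages = [{"role": "user", "content": "What does /etc/hostname say?"}]
          body = {"model": "local-model", "messages": messages, "tools": tools}
          msg = requests.post(URL, json=body).json()["choices"][0]["message"]

          # If the model chose to call the tool, run it and send the result back.
          for call in msg.get("tool_calls") or []:
              args = json.loads(call["function"]["arguments"])
              messages += [msg, {"role": "tool", "tool_call_id": call["id"],
                                 "content": open(args["path"]).read()}]
              body["messages"] = messages
              final = requests.post(URL, json=body).json()
              print(final["choices"][0]["message"]["content"])
          ```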

          The gap will take decades to close, if it ever does.

          😁

          • 2026’s average gaming PC is “massive amounts of memory and compute”, apparently.

            Any model that can run on 16GB or less is not going to come close, in real-world tasks, to any cloud-based model. It just cannot be. There are people out there running Qwen on a Mac Studio with 96GB, and it falls short of cloud-based models in both performance and speed.

            lol there are plenty of open source models in the top 100 with multiple SOTA models released in the last few months alone

            The top 100 of what, exactly? Many blended benchmark results are notoriously biased, and LLMs “cheat” on benchmarks at every single opportunity, so it is still hard to tell, outside of real-world tasks and speed, which models are actually better than others.

            But regardless, the main point of the gap is resources. Even if the average gaming computer were really enough to run meaningful models, the vast majority of the world wouldn’t have access to it, even more so in this day and age, when a single RAM stick costs more than a whole monthly salary in most parts of the world.

            • But regardless, the main point of the gap is resources

              What makes you think we won’t have the resources in the future?

              Any model that can run on 16GB or less is not going to come close, in real-world tasks, to any cloud-based model. It just cannot be.

              Well, you can compare Gemma 4 running in LM Studio on an average gaming PC to ChatGPT 3.5 and tell me yourself. Or is your benchmark purely a snapshot of this very moment, open source models today vs cloud models today?

              For reference, Gemma 4 is 26 billion parameters; GPT-3 was 175 billion and of course had no optimisations like MoE. As a dense model it activated every parameter on every single question, so it was rather slow as well.
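
              Some back-of-envelope arithmetic (mine, not from any benchmark) shows why parameter count and quantisation dominate the memory question; this is just the standard params-times-bytes estimate and ignores KV cache and runtime overhead:

              ```python
              # Rough weight-memory estimate: parameters x bits-per-parameter / 8.
              # Illustrative only; real usage adds KV cache, activations, etc.
              def weights_gb(params_billions: float, bits_per_param: float) -> float:
                  return params_billions * bits_per_param / 8

              print(weights_gb(175, 16))  # 175B dense at fp16  -> ~350 GB
              print(weights_gb(26, 4))    # 26B at 4-bit quant  -> ~13 GB
              ```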

              We also know there’s no slowdown in the push for optimisations; DeepSeek’s initial release was the first big driver of the idea that you don’t have to scale up using hardware alone.

              https://research.google/blog/turboquant-redefining-ai-efficiency-with-extreme-compression/
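
              I can’t speak to TurboQuant’s actual algorithm, but the basic idea behind low-bit weight compression is easy to sketch: store each block of weights as small integers plus one scale factor. A toy symmetric 4-bit quantiser (illustrative only, not the paper’s method):

              ```python
              # Toy blockwise symmetric 4-bit quantisation (NOT TurboQuant; just
              # the generic idea: int4 codes plus a per-block float scale).
              import numpy as np

              def quantize_block(w: np.ndarray):
                  scale = max(float(np.abs(w).max()), 1e-8) / 7  # int4 range [-8, 7]
                  q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
                  return q, scale

              def dequantize_block(q: np.ndarray, scale: float) -> np.ndarray:
                  return q.astype(np.float32) * scale

              w = np.random.randn(64).astype(np.float32)       # one 64-weight block
              q, s = quantize_block(w)
              print(np.abs(w - dequantize_block(q, s)).max())  # small reconstruction error
              ```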

              They’re also pushing Chinese native chips from Huawei, trying to diversify away from Nvidia holding the crown.

              The problem I’ve got is that you all have a god of the gaps: the conversation I was having 3 years ago was different to the one 2 years ago, which was different to the one 1 year ago. I was told AI could never make songs good enough, then suddenly people were worried they couldn’t tell the difference. Then they said it could never do movies; now apparently not only is it good enough, it’s hilarious:

              https://www.youtube.com/watch?v=fgHn7PI55J4

              The open source LLMs we have today are incredible, and in the last few months alone we’ve had releases from Qwen, GLM, Nemotron/Nvidia, Mistral, Google and heaps of others. It feels like you’re just looking for a reason to be dour and pessimistic, but that’s just me.

              Anyway, I’m off to sleep, have a good one :)

              • The problem I’ve got is that you all have a god of the gaps: the conversation I was having 3 years ago was different to the one 2 years ago, which was different to the one 1 year ago

                And I guess the problem I have with you is that you seem to think you can get results on 16GB competitive with models that run on a Blackwell 6000 with 96GB, while ignoring the fact that the vast majority of people in the world are running GPUs with 4 to 8 GB of VRAM, if they have access to GPUs at all.

                That’s the gap. Most people don’t have the kind of money you think they do, and even those who do will never achieve the same results as with cloud models, because if a state-of-the-art optimization makes models 10 times smaller, cloud models will just grow 10 times bigger with that same advantage. It’s pretty simple.

      • Look at the state of software today. Every corporation and government is blindly sticking with Microsoft, Google or similar. Even though there are some ideas about moving away and embracing OSS, I doubt it will happen with governments, even less with corps. I foresee something similar in the future with AI.

  • It’s a really sad state programmers (especially juniors) are in right now, and I guess it will get worse over time. I had a meeting with recruiters at my university; many of them just told me to send an email and other useless stuff that doesn’t go anywhere, but a couple of them said they don’t even hire developers anymore and make AI do the entire job (I went on one of their websites and it didn’t work :)). Also, hackathons are in a really bad state; most of them advertise AI and vibe coding. I don’t know how anyone can learn from hackathons in the state they’re in.

    • In a few years, corpos will be desperate for programmers. Their codebases will be in shambles, and the frontier models (which can barely make anything out of that mess) will not be so heavily subsidized anymore, or will be permanently offline.

      • I think it will happen eventually, but I doubt it will be in just a few years; hope to be proven wrong tho.

    • Software engineering is comparable to architecture: if you give a rookie professional tooling, they can maybe build a safe shack or tree house, but you wouldn’t want to visit a skyscraper they’ve built.
      Except that architecture has safety codes written in blood, and AI is only good at building lots of walls.

    • I note that even job offers are written by AI. Every advertisement for, say, embedded developers seems to use the same generic keywords and interfaces, sprinkled with words that sound good (like “platform thinking”) but just don’t make sense.

      • That’s been job descriptions for software development for decades now, long before LLMs.

  • Interesting analogy. The future is hard to predict. Hopefully things turn out better than this prediction.