- 15 minutes
Our brains are made to see patterns. Trump is unpopular. I've certainly heard ill wishes upon him before. Victimhood is a useful currency in our society. We demonize each other. Tribalism mixed with a poor understanding of what's going on.
- 8 hours
Author missed one aspect. Even if AI is one day reliable, it’ll most likely be owned by a few companies. What if those companies decide to cut you off?
- 2 hours
The useful ones are still provided by big companies because the rest of us can’t afford the hardware to train them.
AI won’t be “democratized” anytime soon the way the rest of the computer software world has been.
- 2 hours
The useful ones are still provided by big companies because the rest of us can’t afford the hardware to train them.
We have computing power in our pockets a million times more powerful than what we used to send men to the moon, so why do you think we’ll never have enough power?
I have already pointed out https://eurollm.io/
The EuroLLM project includes Instituto Superior Técnico, the University of Edinburgh, Instituto de Telecomunicações, Université Paris-Saclay, Unbabel, Sorbonne University, Naver Labs, and the University of Amsterdam. Together they created EuroLLM-22B, a multilingual AI model supporting all 24 official EU languages. Developed with support from Horizon Europe, the European Research Council, and EuroHPC, this open-source LLM aims to enhance Europe’s digital sovereignty and foster AI innovation. Trained on the MareNostrum 5 supercomputer, EuroLLM outperforms similar-sized models. It is fully open source and available via Hugging Face.
So long as some people don’t want to rely on big tech, there will be people pushing for independence, just like Linux users such as myself.
- 33 minutes
We have computing power in our pockets a million times more powerful than what we used to send men to the moon, so why do you think we’ll never have enough power?
Not the person you replied to, but I have thoughts on this point in particular:
- Consumer devices have started to slow down their performance improvements because we’re bumping up against the limits of physics
- People/corporations with way more money than the average consumer will always be able to run something orders of magnitude more powerful. Any advances that improve things for the average consumer will improve things for rich people/corporations even more.
- Training an LLM isn’t really even about compute speed, it’s about access to good training material. The average consumer can’t afford to buy (or pirate) every book in existence like a rich person/corporation can. An average person doesn’t have the ability or time to curate their own training data, but rich people/corporations do.
- 3 hours
So how would I create such an “Open Source” model? They don’t share the data used to create them, do they? Let’s not even get started on how much computing power I would need to train one of those things. These self-hosted models solve nothing except some data-privacy issues. Sure, you no longer send all your code to a shady AI company, but you are still 100% dependent on them sharing their models.
- The_Decryptor@aussie.zone English, 2 hours
So how would I create such an “Open Source” model? They don’t share the data used to create them, do they?
No, and going by the OSI definition of “open source AI” they don’t have to, acknowledging that the training material is often copyrighted and can’t be shared.
It’s a strange definition of “open source”, one where you’re not actually allowed to see the source.
- 36 minutes
The model is named Apertus – Latin for “open” – highlighting its distinctive feature: the entire development process, including its architecture, model weights, training data and methods, is openly accessible and fully documented.
There is also a move toward synthetic and human-curated training data, so we will have to see where training data goes, copyright-wise, in the future.
- 3 hours
Do you build your own Linux from scratch? If not, why would you assume you could build an LLM from scratch?
- 1 hour
It’s mad easy to build your own Linux from scratch in comparison to building an LLM. You can have your own distro running in like an hour. With buildroot you can have it in even less than that.
- 1 hour
Because the average person is not building Linux from scratch, nor would they know how to.
- 4 hours
For which you still need massive amounts of memory and compute to run reliably. That, and the fact that chatbots and agents nowadays rely on all sorts of proprietary customizations, outside of the realm of LLMs, to perform certain tasks.
The gap will take decades to close, if it ever does.
- 2 hours
For which you still need massive amounts of memory and compute to run reliably
2026’s average gaming PC is massive amounts of memory and compute apparently
The gap will take decades to close, if it ever does.
lol there are plenty of open source models in the top 100 with multiple SOTA models released in the last few months alone
There are also smaller LLMs being made, like https://eurollm.io/, which excel in their own ways.
That, and the fact that chatbots and agents nowadays rely on all sorts of proprietary customizations
Funny that just came up: https://discourse.ubuntu.com/t/the-future-of-ai-in-ubuntu/81130?=0
Previously, to benefit from the full power of LLMs, you had to skew to higher parameter models. Recent developments in models like Gemma 4 and Qwen-3.6-35B-A3B demonstrate advanced capabilities such as tool-calling which enable LLMs to search the web, interact with external APIs and file systems, troubleshoot live systems and fundamentally reason about topics that lie outside of their initial training data.
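To make “tool-calling” concrete: the pattern is just a loop in which the model emits a structured tool request, the runtime executes it, and the result is fed back until the model produces an answer. A toy sketch with a stubbed-out model — every name here is made up for illustration and is not any real runtime’s API:

```python
# Toy tool-calling loop. `fake_model` stands in for an LLM; real runtimes
# (Ollama, LM Studio, etc.) return similar structured calls in varying formats.
TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",      # stand-in tools
    "web_search": lambda query: f"<results for {query!r}>",
}

def fake_model(prompt):
    """Stub LLM: requests a tool first, answers once it has seen tool output."""
    if "Tool result:" in prompt:
        return {"answer": "Summarised: " + prompt.split("Tool result: ")[-1]}
    return {"tool": "read_file", "args": {"path": "/var/log/syslog"}}

def agent_loop(user_prompt, max_rounds=5):
    prompt = user_prompt
    for _ in range(max_rounds):                        # cap the tool-call rounds
        reply = fake_model(prompt)
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])    # run the requested tool
        prompt = f"{user_prompt}\nTool result: {result}"  # feed the result back
    return "gave up"

print(agent_loop("why is my service failing?"))
# → Summarised: <contents of /var/log/syslog>
```

The capability the quote describes comes from this loop, not from the model itself; the model only has to learn to emit well-formed tool requests.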
The gap will take decades to close, if it ever does.
😁
- 2 hours
2026’s average gaming PC is massive amounts of memory and compute apparently
Any model that can run on 16GB or less is not going to come anywhere close, in real-world tasks, to any cloud-based model. It just cannot be. There are people out there running Qwen on a Mac Studio with 96GB, and it falls short of cloud-based models in both performance and speed.
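For context on the 16GB figure, here is a hedged back-of-envelope calculation; the 1.2× overhead factor for KV cache and activations is my own rough assumption, and the 26B parameter count is the Gemma figure cited elsewhere in this thread:

```python
def model_memory_gb(params_billion: float, bits_per_param: float,
                    overhead: float = 1.2) -> float:
    """Rough inference footprint: weights plus ~20% for KV cache/activations."""
    return params_billion * 1e9 * (bits_per_param / 8) * overhead / 1e9

# A 26B-parameter model at 4-bit quantisation squeezes into ~16 GB;
# the same model at fp16 needs workstation-class memory:
print(round(model_memory_gb(26, 4), 1))   # ~15.6 GB
print(round(model_memory_gb(26, 16), 1))  # ~62.4 GB
```

Which side of the 16GB line a model lands on is therefore mostly a question of quantisation, not just parameter count.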
lol there are plenty of open source models in the top 100 with multiple SOTA models released in the last few months alone
The top 100 of what, exactly? Many blended benchmark results are notoriously biased, and LLMs “cheat” on benchmarks at every opportunity, so it is still hard to tell, outside of real-world tasks and speed, which models are actually better than others.
But regardless, the main point of the gap is resources. Even if the average gaming computer were really enough to run meaningful models, the vast majority of the world wouldn’t have access to one, even more so in this day and age, when a whole monthly salary couldn’t buy a single RAM stick in most parts of the world.
- 1 hour
But regardless, the main point of the gap is resources
What makes you think we won’t have the resources in the future?
Any model that can run on 16GB or less is not going to come anywhere close, in real-world tasks, to any cloud-based model. It just cannot be.
Well, you can compare Gemma 4 running in LM Studio on an average gaming PC to ChatGPT 3.5, and you tell me? Or is your benchmark purely today’s open source models vs today’s cloud models, right at this very moment?
For reference, Gemma 4 is 26 billion parameters; GPT-3 is thought to be over 175 billion and of course had no optimisations like MoE, so it used all of its parameters on every single question and was rather slow as well.
We also know there is no slowdown in the push for optimisations; DeepSeek’s initial release was the first big driver of the idea that you don’t have to just scale up with hardware alone.
https://research.google/blog/turboquant-redefining-ai-efficiency-with-extreme-compression/
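For the curious, the core idea behind compression schemes like the one linked above is storing each weight in fewer bits. TurboQuant itself is far more sophisticated; this is only the textbook baseline of symmetric int8 quantisation, sketched for illustration:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8: map floats onto the integer range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard against all-zero tensors
    return [round(w / scale) for w in weights], scale

def dequantize(quants, scale):
    return [q * scale for q in quants]

weights = [0.8, -1.27, 0.05, 0.0]
quants, scale = quantize_int8(weights)   # small ints: 1 byte each instead of 4
restored = dequantize(quants, scale)
# Every weight is recovered to within half a quantisation step:
assert all(abs(w - r) <= scale / 2 for w, r in zip(weights, restored))
```

The memory saving is what lets larger models fit on consumer hardware; the research linked above is about pushing the bit count even lower without losing accuracy.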
They’re also pushing with Chinese native chips from Huawei trying to diversify away from nvidia holding the crown
The problem I’ve got is that you all have a god of the gaps. The conversation I was having 3 years ago was different from 2 years ago, which was different from 1 year ago. I was told AI could never make songs good enough, then suddenly people were worried they couldn’t tell the difference. Then they said it could never do movies; now apparently not only is it good enough, it’s hilarious:

https://www.youtube.com/watch?v=fgHn7PI55J4
The open source LLMs we have today are incredible, and in the last few months we’ve had releases from Qwen, GLM, Nemotron/Nvidia, Mistral, Google and heaps of others. It feels like you’re just looking for a reason to be dour and pessimistic, but that’s just me.
Anyway, I’m off to sleep, have a good one :)
- Mihies@programming.dev English, 8 hours
Look at the state of software today: every corporation and government is blindly sticking with Microsoft, Google or similar. Even though there are some ideas about moving away and embracing OSS, I doubt it will happen with governments, even less with corps. I foresee something similar in the future with AI.
- 3 hours
Every corporation and government is blindly sticking with Microsoft
Are you sure?
France to remove Windows from government computers in sovereignty push
https://tuta.com/blog/countries-ditching-microsoft-choosing-linux-digital-sovereignty
I doubt it will happen with governments
It does not take much for things to change, you might like this:

We’ve Hit A Wall With Transport. Here’s Why | Black Swans 3 | If You’re Listening
- 28 minutes
Great, all we need is a few decades and a world superpower becoming world-threateningly corrupt
Sure, but it’s mostly been that way for a while. The players on the board shift, but it’s almost always Java, or Microsoft’s flavor of the decade, or classic C, or Objective-C, or Swift, or whatever. Are you arguing that big tech will lock down their API documentation and proprietary languages behind their own AIs, so that developers are forced to “vibe code” with them through AI interaction only, and open source models will be unable to train on them?
- 8 hours
It’s a really sad state programmers (especially juniors) are in right now, and I guess it will get worse over time. I had a meeting with recruiters at my university; many of them just told me to send an email and other useless stuff that doesn’t go anywhere, but a couple of them said they don’t even hire developers anymore and make AI do the entire job (I went on one of their websites and it didn’t work :)). Hackathons are in a really bad state too; most of them advertise AI and vibe coding. I don’t know how anyone can learn from hackathons in the state they’re in.
- kibiz0r@midwest.social English, 3 hours
In a few years, corpos will be desperate for programmers. Their codebases will be in shambles and the frontier models (that can barely make anything out of that mess) will not be so heavily subsidized anymore. (Or permanently offline.)
- 3 hours
I think it will happen eventually but I doubt it will be in just a few years, hope to be proven wrong tho
- 5 hours
Software engineering is comparable to architecture; if you give a rookie professional tooling, they can maybe build a safe shack or tree house. But you wouldn’t want to visit a skyscraper they’ve built.
Except that architecture has safety codes written in blood. And AI is only good at building lots of walls.
- 8 hours
I note that even job offers are written by AI. Every advertisement for, say, embedded developers, seems to use the same generic keywords and interfaces, sprinkled in with words that sound good (like “platform thinking”) but just don’t make sense.
- 8 hours
Interesting analogy. The future is hard to predict. Hopefully things turn out better than this prediction.