Note: Article’s actual headline, by the way. It is The Register.
Before ChatGPT kicked off the AI boom in late 2022, you may recall Zuckerberg was convinced virtual reality would take over the world. As of Q1, the company’s Reality Labs team has burned some $60 billion trying to make the Metaverse a thing.
Absolutely hilarious.
You think any of this ends with superintelligence for us all? Is that why you’re building an underground doomsday bunker and tunnel in Hawaii?
The Register is a great, reliable, in-depth IT/tech news publication that I value for the quality of its information. The headlines and general editorial tone, drenched in a refreshing ice-cold sarcasm toward Silicon Valley, are a definite bonus to the experience, though.
The Register is a top-notch technology news source. I don’t work in enterprise IT, and I still find their enterprise coverage very insightful, both from a business and a tech perspective.
The irreverent and playful attitude is the cherry on top. :)
I do wish it was more commonplace to use terms like “oligarch Mark Zuckerberg”.
They love AI because it’s a data vacuum. They suck up everything anyone asks.
AI hit a wall years ago. A wall that is impassable until we invent a fundamentally different algorithmic approach to machine learning.
For the last three years, AI has made no meaningful progress and has been nothing but marketing hype.
I really wish this guy could be kicked out.
I personally think the whole concept of AGI is a mirage. In reality, a truly generally intelligent system would almost immediately be superhuman in its capabilities. Even if it were no “smarter” than a human, it could still process information at a vastly higher speed and solve in minutes what would take a team of scientists years or even decades.
And the moment it hits “human level” in coding ability, it starts improving itself - building a slightly better version, which builds an even better version, and so on. I just don’t see any plausible scenario where we create an AI that stays at human-level intelligence. It either stalls far short of that, or it blows right past it.
The whole exponential-improvement hypothesis assumes that the marginal cost of each improvement stays the same, which is a huge assumption.
Maybe so, but we already have an example of a generally intelligent system that outperforms our current AI models in its cognitive capabilities while using orders of magnitude less power and memory: the human brain. That alone suggests our current brute‑force approach probably won’t be the path a true AGI takes. It’s entirely conceivable that such a system improves through optimization - getting better while using less power, at least in the beginning.
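The marginal-cost point a few comments up can be made concrete with a toy model. This is a sketch under made-up assumptions (a fixed 10% capability gain per step, arbitrary cost functions and budget), not a forecast of anything: if each improvement step costs the same, capability compounds exponentially; if the cost of the next step rises with current capability, growth stalls long before the budget runs out.

```python
# Toy model (illustrative only, all numbers are made-up assumptions):
# compare capability growth when each self-improvement step has a constant
# marginal cost vs. a cost that rises with current capability.

def capability_after(budget, step_cost):
    """Spend `budget` units of compute; each improvement step costs
    step_cost(current_capability) and yields a fixed 10% capability gain.
    Returns the final capability level when the next step is unaffordable."""
    capability = 1.0
    while True:
        cost = step_cost(capability)
        if cost > budget:
            return capability
        budget -= cost
        capability *= 1.1  # assumed fixed relative gain per step

# Constant marginal cost: ~1000 steps fit in the budget, so capability
# compounds exponentially.
constant = capability_after(1000, lambda c: 1.0)

# Cost growing with the square of capability: only a few dozen steps fit,
# so growth stalls early (diminishing returns).
rising = capability_after(1000, lambda c: c ** 2)

print(f"constant marginal cost: {constant:.3g}")
print(f"rising marginal cost:   {rising:.3g}")
```

Under these assumptions the constant-cost run ends up astronomically higher than the rising-cost run, which is exactly the gap between "it blows right past us" and "it stalls": the takeoff story quietly depends on which cost curve reality hands us.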