The same way the Hiroshima and Nagasaki nuclear bombs are small and cute compared to a modern hydrogen bomb…
If we don’t solve the AI problems we already have, there is no point speculating about AGI because our lives will be unbearable long before it arrives.
Compared to AGI it is. We don’t know how far away we are from creating it. We can only speculate.
AGI talk seems, for now, to be merely hype to attract investors.
LLMs seem likely to be a dead end for any logical thought: https://www.forbes.com/sites/corneliawalther/2025/06/09/intelligence-illusion-what-apples-ai-study-reveals-about-reasoning/ This means that, at the end of the day, you just get a sloppy illusion with no useful coherence as soon as the task exceeds the complexity of a literal copy-and-paste job: https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
There is currently no technological innovation on the horizon to fix this. Instead, AI progress seems to be stalling: https://futurism.com/artificial-intelligence/experts-concerned-ai-progress-wall