Yeah this is where I’m at. Actual movie level AI would be neat, but what we have right now is closer to a McDonald’s toy pretending to be AI than the real deal.
I’d be overjoyed if we had decently functional AI that could be trusted to do the kinds of jobs humans don’t want to do, but instead we have hyped-up autocomplete that’s too stupid to be trusted to run anything reliably (see the shitshow of openclaw when they do).
There are places where machine learning has pushed and will continue to push real progress, but this whole “AI is on the road to AGI and then we’ll never work again” bullshit is so destructive.
What we have now is “neat.” It’s freaking amazing it can do what it does. However it is not the AI from science fiction.
I think this is what causes the divide between the AI lovers and haters. What we have now is genuinely impressive even if largely nonfunctional. It’s a confusing juxtaposition.
Folks don’t seem to realize what LLMs are; if they did, they wouldn’t be wasting trillions trying to stuff them into everything.
Like, yes, it is a minor technological miracle that we can build these massively multidimensional maps of human language use and use them to chart human-like vectors through language space that stay coherent for tens of thousands of tokens. But there’s no way chaining these stochastic parrots together gets around the fact that a computer cannot be held responsible, that algorithms have no agency no matter how much you call them “agents”, and that the people who let chatbots make decisions must ultimately be culpable for them.
It’s not “AI”, it’s an n-dimensional globe and the ruler we use to draw lines on that globe. Like all globes, it is at best a useful fiction representing a limited perspective on a much wider world.
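To make that “globe” metaphor a bit more concrete, here’s a minimal, purely illustrative sketch: toy three-dimensional “embeddings” standing in for the thousands of dimensions a real model uses, with closeness on the map measured by cosine similarity. Every name and number below is invented for the example; it’s the geometry idea, not any real model’s internals.

```python
# Illustrative sketch only: toy 3-D "embeddings" standing in for the
# thousands of dimensions a real language model would use.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Angle-based closeness between two points on the 'globe'."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical coordinates: related words sit close together on the map.
embeddings = {
    "king":  np.array([0.8, 0.6, 0.1]),
    "queen": np.array([0.7, 0.7, 0.2]),
    "toast": np.array([0.1, 0.2, 0.9]),
}

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: nearby on the map
print(cosine_similarity(embeddings["king"], embeddings["toast"]))  # low: far apart
```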