I always thought cybernetics would be cool. I forgot they’d come from companies like HP that have a subscription service for them and if you don’t pay it they take it back.
Having grown up on sci-fi is what lets me see that LLMs are not “AI”, so it’s no surprise I’m against “imitation AI”.
Right?
Nah, it’s not intelligent.
Nothing we have today would be considered AI in science fiction.
Yeah this is where I’m at. Actual movie level AI would be neat, but what we have right now is closer to a McDonald’s toy pretending to be AI than the real deal.
I’d be overjoyed if we had decently functional AI that could be trusted to do the kinds of jobs humans don’t want to do, but instead we have hyped-up autocomplete that’s too stupid to be trusted to run anything (see the shitshow of openclaw when they do).
There are places where machine learning has and will continue to push real progress but this whole “AI is on the road to AGI and then we’ll never work again” bullshit is so destructive.
What we have now is “neat.” It’s freaking amazing it can do what it does. However it is not the AI from science fiction.
I think this is what causes the divide between the AI lovers and haters. What we have now is genuinely impressive even if largely nonfunctional. It’s a confusing juxtaposition.
Absolutely. Today’s “AI” is as close to real AI as the shitty “hoverboard” we got a few years back is to the one from BttF. It’s marketing bullshit. But that’s not what bothers me.
What bothers me is that if we ever do develop machine persons, I have every reason to believe they will be treated as disposable property, abused, and misused, and all before they reach the public. If we’re destroyed by a machine uprising, I have no doubt we will have earned it many times over.
See: Battlestar Galactica
Also see: Quarians vs Geth from Mass Effect
Worse, no uprising happens and a few hundred humans just scale all the enterprises with proprietary AI, disposing of anyone who stands in the way.
… That’s what they say in sci-fi movies.
usually the AI and robots depicted in science fiction aren’t, um… paralyzingly stupid. they might be evil, but they wouldn’t tell someone to walk to the car wash to wash their car
edit: somewhat related, i highly recommend Of Monsters and Mainframes by Barbara Truelove. one main character is an AI that i was full-on rooting for more and more throughout the book
Robots and AI are advancing. It’s a slow grind. Say we do make some more breakthroughs: judging by how people are tending to react, it’s obvious to me people will only be more upset when they do advance further.
If they were owned collectively so everyone could benefit it would be a lot easier to swallow. If it meant people could retire in comfort and not be destitute without a job that would help, too.
But a wrong answer machine that enriches assholes and convinces them they don’t need humans is not cool.
It’s like expanding consciousness for a select few, and the implications of that have been disastrous.
I always felt the term “humanist” was woefully inadequate and discounts other sentient life, be it organic or otherwise, on earth or otherwise.
Even I hate “AI” (or more specifically: the bullshitting around the current and near-future tech currently being called that)
It’s like legalizing weed but making sure only wealthy cartels get to own them.
Which is PRECISELY what I expect to happen if Trump actually reschedules it. This is why marijuana needs to be descheduled and regulated like alcohol/tobacco.
Well, it’s not AI. It’s theft of your digital data and unblinking surveillance. No reason not to be against that
Ok, I agree, but if “training” AI is how we build these machines, would it ever be anything different?
Sure, just take the profit and control motivation out of the hands of those creating it
Can lab meat be vegan if the starter culture needs to come from a real animal? After what time does it become vegan? 1 year? 3 years? 50? I think even AI that uses stolen art can become ethical again, but not with big corpos behind it.
Same, I guess. But then I also didn’t really expect the “AI” to be a bunch of overhyped nonsense snake oil bullshit, with tremendous practical and ethical problems… so I’ve got to say I feel pretty comfy with the stance.
Is it the AI itself? Or is it that it’s being forced down your throat despite being an alpha-level (in programming, alpha means very early release with lots of bugs and missing features, and nowhere near production quality) software?
Personally, I like the idea of AI. And occasionally I will find a use for it; e.g. summarizing long texts, or giving me pointers on a complex software problem where I need an example to visualize a potential solution. I do also use an LLM for code completion, and it is actually useful more often than not.
But this idea that it should write my code for me, or be integrated so deeply into an operating system and other integral software (browsers, email, and search engines), is certainly a line too far, and it’s dangerously ignorant of companies to be doing it right now.
You shouldn’t just reject things on a visceral level. Thankfully, with AI you don’t have to, as there are so many actual reasons why it’s a bad idea.
Unfortunately, I don’t encounter many people making logical arguments against AI. Maybe their reaction is based in logic, but they often get hostile over surface-level stuff.
Maybe if/when they have actual sentience, but right now I have no problem calling the poor excuse we have currently clankers, as they rightly deserve
I wouldn’t be against it if, one, it wasn’t garbage set up to make the rich richer through a scam, and two, there was a system in place for universal basic income.