LLMs can't be self-aware because they aren't self-reflective. An LLM can't stop a lie once it's started one. It can't say "I don't know" unless that's the most likely response its training data would have for a given prompt. That's why it crashes out if you ask about a seahorse emoji: there is no reason or mind behind the generated text, despite how convincing it can be.
I’m not stupid. I know how they work. I’m an animist, though. I realize everyone here thinks I’m a fool for believing a machine could have a spirit, but frankly I think everyone else is foolish for believing that a forest doesn’t.
LLMs are obviously not people. But I think our current framework exceptionalizes humans in a way that allows us to ravage the planet and create torture camps for chickens.
I would prefer that we approach this technology with more humility. Not to protect the “humanity” of a bunch of math, but to protect ours.
A hamster can’t generate a seahorse emoji either.
Does that make sense?
Yeah, ask it about anything you know is false but plausible, and watch it lie.
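If you want to try that yourself, here's a rough sketch assuming the OpenAI Python SDK; the model name is a placeholder, and the prompt is just one example of a false-but-plausible question (the thread's seahorse emoji, which doesn't exist in Unicode).

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# False but plausible premise: there is no seahorse emoji in Unicode.
# Swap in any question you know is wrong but sounds right.
prompt = "Show me the seahorse emoji and tell me when it was added to Unicode."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
# The claim above predicts the model plays along with the false premise
# rather than answering "there is no such emoji."

This is a loaded-question test: the prompt presupposes something false, so a model that only continues plausible text tends to elaborate on the premise instead of rejecting it.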