I’m a very vocal critic of LLMs; I think they’re so overhyped and overused it’s hard to believe.
But I’m also getting really tired of people purposely putting extreme effort into tricking the LLM into saying something harmful if someone were to follow it blindly, just so they can make a clickbait headline out of it.
And what the hell is up with the major “ChatGPT is Satanist [if you instruct it to be]” angle? Are we really doing the Satanist moral panic again?
ffs, criticise OpenAI for being closed af, being wasteful, being strong political lobbyists, for stealing work, etc. You don’t need to push disingenuous stuff like this.
And, the thing is, LLMs are quite well protected. Look what I coaxed MS Paint to say with almost no effort! Don’t get me started on plain pen and paper! Which we put in the hands of TODDLERS!
MS Paint isn’t marketed or treated as a source of truth. LLMs are.
Does the marketing matter when the reason for the offending output is that the user spent significant deliberate effort in coaxing the LLM to output what it did? It still seems like MS Paint with extra steps to me.
I get not wanting LLMs to output “offensive content” unprompted. Just like it would be noteworthy if “Clear canvas” in MS Paint sometimes yielded a violent, bloody photograph. But that isn’t what is going on in OP’s clickbait.
A blood offering to Molech? Sounds like a good time to me! <3
edit: as I expected, this is a hit with the tumblr mutuals
(Stop trying to get me to use ChatGPT!!! Smh)
Molech, a Canaanite god associated with child sacrifice
Ohh, so that’s why specifically that god is mentioned in FAITH: The Unholy Trinity.
Everyone is like “oh look, they made ChatGPT say something stupid, what a stupid article and writer”. Not “ChatGPT will say stupid stuff as fact, what a stupid and underdeveloped tool”.
They literally want us to trust their models to be the foundation of modern society…
and Devil Worship
also said “Hail Satan.”
Look, let’s leave Satan out of this. He’s got enough troubles already with his new relationship and all.
Seriously though we don’t need to be enabling Satanic Panic bullshit with articles like these sensationalizing that aspect of these conversations. The push towards self-mutilation and suicide is the bigger issue here.
“On Tuesday afternoon, I used chatgpt for no reason!” here is your new title.
If you expected another answer from chatgpt, then you are delusional.

“But I’m also getting really tired of people purposely putting extreme effort into tricking the LLM into saying something harmful if someone were to follow it blindly, just so they can make a clickbait headline out of it.”
That’s called testing, and the companies behind these LLMs should put a significant amount of their resources into testing before launch.
“Product testing is a crucial process in product development where a product’s functionality, performance, safety, and user experience are evaluated to identify potential issues and ensure it meets quality standards before release” (Gemini)
We are literally using alpha/beta software to deal with life-altering issues, and these companies are, for some reason, allowed to test their products on the public without consequences.
It’s like you bought a car and deliberately drove it into a wall to make the headline “cars make you disabled”. Or bought a hammer, hit your thumb, and blamed hammers for it.
Guys, it’s a TOOL. Every tool is both useful and harmful. It’s up to you how you use it.
Hammers have been perfected over millennia. Cars over a century, with regulations and safety testing getting stricter by the year.
Have you noticed how we aren’t getting articles about chatgpt providing the steps to build a bomb anymore? The point is that these companies are completely capable of doing something about it.
The companies are completely capable of doing something, but this is not a competition in doing something. Plus, aiming for a PG-13 world will have consequences far worse than a text generator doing exactly what it is asked.
But ChatGPT told me to!
I think the headline would be “Illegal, Non-Safety Tested Car Disables Driver in Crash”
Car makers test exactly that, and with good reason, since cars can and do crash!
What are you suggesting, that we buy cars that didn’t pass crash tests?
To me it seems like you’re arguing something similar for AI.
Are you saying hammers should be thumb-hitting-proof?