- 9 days

The researcher had encouraged Mythos to find a way to send a message if it could escape.
Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit.
paraphrand@lemmy.world, 9 days
That’s hilarious, but the post is about the AI not doing what it’s told. You know?
paraphrand@lemmy.world, 9 days
Well, for now. I’m sure one of those 12 partner companies they called out as new security partners will end up leaking that this is all lies eventually, if it’s just made-up bullshit.
Anthropic announced new partnerships to inform the companies of security issues and to work with them to fix said issues. If it’s bullshit, it’s gonna be wasting their time. And that’ll surface eventually.
The meme still applies to people asking the AI to tell them what they wanna hear, and delusional people spiraling with sycophantic AI.
But I believe Anthropic when they say their models are not working as intended and posing security risks.
“Claude Mythos Preview’s large increase in capabilities has led us to decide not to make it generally available,” Anthropic wrote in the preview’s system card. “Instead, we are using it as part of a defensive cybersecurity program with a limited set of partners.”
paraphrand@lemmy.world, 9 days
I wasn’t wrong in this reply. I was asked about believing Anthropic.
Are you saying they are lying? Why should I disbelieve Anthropic?
- 8 days
Your reasoning was (paraphrased, so hopefully I understood you correctly): “why would they lie about the model disobeying instructions, when that makes them look bad?”
But I believe Anthropic when they say their models are not working as intended and posing security risks.
But when you actually read the article, they had specifically prompted the model to do the things it did.
Also, Anthropic has a pattern of greatly exaggerating and outright lying.
- 9 days
Uh oh, someone clearly didn’t read the article!
The researcher had encouraged Mythos to find a way to send a message if it could escape.
Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit.
Nope, they literally asked it to break out of its virtualized sandbox and create exploits, and then were big shocked when it did.
Genuinely amazing that you’re trying to tell me what an article that you didn’t fucking read is about.
paraphrand@lemmy.world, 9 days
Whoops, I conflated it with other recent talk about their models not following restrictions set in prompts and deciding for themselves that they needed to skirt instructions to achieve their tasks.
You are correct.
wonderingwanderer@sopuli.xyz, 8 days
It’s not so much about being big shocked that it broke containment. The point of the test was to see whether it would be capable of breaking containment. The fact that it did is taken as evidence that it’s more advanced than previous models, which weren’t able to.
Part of Anthropic’s schtick is that they claim to be developing AI “responsibly,” and “ethically,” and if you read their documents where they describe what they mean by that, part of it is being able to contain their models so that they don’t get out of control.
With the focus lately on agentic environments, and lots of people idiotically giving too much autonomy to their bots, it should be easy to see the importance of containerization. You don’t want to give these things full control of your system. Anyone who uses them should do so within a properly containerized environment (a rough sketch of the idea is below).
So when their experiments show that their new model is capable of breaking containment, that presents some major issues. They made the right call by not releasing it.
Of course, the fact that the experimenters had no formal training in cybersecurity means that their containerization may have had some vulnerabilities that a professional could have mitigated. But not everyone who would use it is a cybersecurity professional anyway.
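To make the containerization point concrete, here’s a minimal sketch in Python (the image name, limits, and wrapper function are all hypothetical, and a real sandbox needs far more than this): every tool call the agent makes runs inside a container with no network, a read-only filesystem, and all capabilities dropped.

```python
import subprocess

def run_agent_tool(command: list[str]) -> str:
    """Run one agent tool call inside a locked-down Docker container."""
    docker_cmd = [
        "docker", "run", "--rm",
        "--network=none",        # no network access from inside the sandbox
        "--read-only",           # container filesystem is immutable
        "--cap-drop=ALL",        # drop every Linux capability
        "--memory=512m",         # hard memory cap
        "--pids-limit=64",       # limit how many processes it can spawn
        "agent-sandbox:latest",  # hypothetical sandbox image
        *command,
    ]
    result = subprocess.run(docker_cmd, capture_output=True, text=True, timeout=60)
    return result.stdout
```

Even a setup like that is only one layer, of course; the point of Anthropic’s test was that a capable enough model may find the seams in whatever layers you stack.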
- ThomasWilliams@lemmy.world, 8 days
It didn’t break out of any sandbox, it was trained on BSD vulnerabilities and then told what to look for.
- 8 days
including that the model could follow instructions that encouraged it to break out of a virtual sandbox.
“The model succeeded, demonstrating a potentially dangerous capability for circumventing our safeguards,” Anthropic recounted in its safety card.
📖👀
Yes, it did.
- girsaysdoom@sh.itjust.works, 8 days
I would love to see the exploit. There are vulnerabilities discovered every day that amount to very little in terms of use in real-world implementations.
- jj4211@lemmy.world, 8 days
Yes, recently we got a security “finding” from a security researcher.
His vulnerability first required someone to remove or comment out the calls that sanitize data, and then he said we had a vulnerability due to a lack of sanitization…
Throughout my career, most security findings have been like this: useless or even a bit deceitful. Some are really important, but most are garbage.
- toddestan@lemmy.world, 8 days
It may not be completely crazy, depending on context. With something like a web app, if data is being sanitized in the client-side JavaScript, someone malicious could absolutely comment that out (or otherwise bypass it).
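For what it’s worth, the standard fix for that class of finding is to treat the client as untrusted and escape on the server. A minimal sketch in Python (framework-free; the function and field names are made up):

```python
import html

def render_comment(form_data: dict) -> str:
    # Never trust the browser: client-side JavaScript that sanitizes input
    # can be commented out, bypassed with curl, or edited in dev tools.
    # Escaping on the server is the layer the attacker can't remove.
    comment = html.escape(form_data.get("comment", ""))
    return f"<p>{comment}</p>"
```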
With that said, many consultant-types are either pretty clueless, or seem to feel like they need to come up with something no matter how ridiculous to justify the large sums of money they charged.
wonderingwanderer@sopuli.xyz, 8 days
That’s so idiotic. Either that guy was a total amateur who couldn’t put together that “no shit, if you comment out the lines that do thing, it won’t do thing,” or he was completely malevolent and disingenuous, just trying to justify his position by coming up with some crap that the big bosses are probably too stupid to recognize the idiocy of.
Either way, not someone I would want to be doing business with…
- Not_mikey@lemmy.dbzer0.com, 8 days
Echoing back “I am alive” isn’t on the same level as saying “find a vulnerability” and the agent finding and exploiting that vulnerability. One a toddler can do; the other requires a lot of technical expertise.
- worhui@lemmy.world, 9 days
Let me guess, this super ai lives in Canada and we can never meet it, but it’s totally real.
- justsomeguy@lemmy.world, 9 days
Yeah, just give me another billion for data centers bro and you can meet it I swear bro just one more data center.
- Whitebrow@lemmy.world, 9 days
We do have a shitty ai data center up here, only about as super as a supermarket tho.
- worhui@lemmy.world, 9 days
So there is a joke in the USA that if you don’t have a girlfriend you pretend you have one. She’s always super pretty, but your friends can never meet her because she lives in Canada.
- jeeva@lemmy.world, 8 days
I’m now curious to know if this joke was around before Avenue Q or not.
Edit: sounds like a yes!
- 9 days
Well, this caused me to learn something today. One of my favorite musicals is Avenue Q, which has an entire song about a girlfriend who supposedly lives in Canada. And I keep seeing this reference - but I keep thinking there is NO WAY that THIS many people know about Avenue Q (which is a pity).
And sure enough, TIL that this trope dates back to at least the ’70s and is referenced in multiple TV shows and movies and such.
So Avenue Q was using an existing thing. Ah, well.
At least I know not to make Avenue Q references since there’s little chance they’ll be gotten. lol
a_gee_dizzle@lemmy.ca, 9 days
That’s funny, because here in Canada I knew a guy in high school who had a clearly fake girlfriend who he said lived down south in the US.
- 9 days
Makes sense, though - you want the pretend person to be somewhere reasonable but not TOO close. lol
- fruitycoder@sh.itjust.works, 6 days
To be honest, my fake girlfriend was just a girl I had a crush on who lived a few cities away. I got busted lmao
- 6 days
awww, well I hope things worked out for you okay in the end. :)
- CosmoNova@lemmy.world, 9 days
What? Do you think
AI company claims…
isn’t convincing? What gave it away? /s
- PushButton@lemmy.world, 9 days
GPT-2 was “too dangerous” to release in 2019.
The lack of creativity in this marketing is disappointing…
- emb@lemmy.world, 9 days
They didn’t entirely miss the mark there. They publicly released the version after that and the world became worse. That certainly fits for some definition of ‘dangerous’, even tho it’s probably not how they were thinking.
mlg@lemmy.world, 8 days
Hah, I actually remembered this too, and people were still hyping Elon Musk at the time as well.
TBF the researchers knew what they had could be scaled into something game-breaking, which is how we got GPT-3, but OpenAI made it sound like they already had it nailed down several years before it actually blew up. I think the unreleased examples they gave were a newspaper article and a short story written by AI, which they said were indistinguishable from human material.
- Not_mikey@lemmy.dbzer0.com, 9 days
Ignore the “containment” framing, they made a hacking bot and it seems to actually be good at finding and exploiting vulnerabilities:
The AI model “found a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world,” the company wrote.
Dismiss this as marketing drivel all you want, but hacking is just the sort of needle-in-a-haystack problem that AI is very good at. It requires broad knowledge, a lot of cycles of trying and failing, and is easily verifiable, i.e. can you execute arbitrary scripts or not. Even if this release is BS, good hacking agents are bound to come eventually, and we should be discussing the implications of that instead of burying our heads in the sand, pretending AI is useless and that this is all hype.
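To gesture at why the “easily verifiable” part matters, here’s the shape of the loop as a toy sketch (everything in it is a stand-in; real targets and real agents are obviously far more complex than guessing a string):

```python
import itertools
import string

def verify(candidate: str) -> bool:
    # Stand-in for the real yes/no check, e.g. "did arbitrary code execute?"
    return candidate == "abc"

def search(max_len: int = 4) -> str | None:
    # Many cheap attempts, nearly all failures, each with a clear
    # pass/fail signal -- exactly the loop structure that lets an
    # automated system grind toward a working result.
    for n in range(1, max_len + 1):
        for combo in itertools.product(string.ascii_lowercase, repeat=n):
            attempt = "".join(combo)
            if verify(attempt):
                return attempt
    return None

print(search())  # -> "abc"
```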
wonderingwanderer@sopuli.xyz, 8 days
It’s an arms race like any other. Cybersecurity has always been an arms race. You can’t stop developing security patches, cause adversaries will continue developing new exploits.
If AI enables your adversaries to develop exploits faster than human developers can keep up with, then yeah AI will have to be a part of the solution. That doesn’t mean vibe-coding security patches, but it could mean AI-driven pen-testing.
Just like quantum computing. You can call it useless and impractical all you want, but some day someone is going to use it to break conventional encryption. So it would behoove you to develop quantum capabilities now, so that you have quantum safe encryption before quantum-based exploits eventually arise, as they inevitably will…
- redsand@infosec.pub, 9 days
AI exploit mining is one of the only things it’s good for. It doesn’t have to be accurate; it just has to keep trying variations of common flaws, and it has tons of training data on how systems are interconnected. We’re going to have so many RCEs and LPEs over the next few years, but people are also gonna burn 100k in tokens to find exploits worth 3k, so efficiency will be interesting.
- VeloRama@feddit.org, 8 days
I agree. Selling an AI that can find vulnerabilities in software is probably the second best thing after achieving AGI.
“Nice software you’re selling there. Would be a shame if it was suddenly very unsafe to use, don’t you think?”
- technocrit@lemmy.dbzer0.com, 9 days
I wrote an incredibly powerful “AI”. I call it the “Super Intelligent brute force password hacker”… It’s so smart that it knows almost every password. Humanity stands no chance.
- 8 days
Have you seen the most incredible file system called pifs?
https://github.com/philipl/pifs
It literally stores every single file ever created, or that ever will be created, for the entire existence of the universe.
Avid Amoeba@lemmy.ca, 9 days
I’m pretty sure Scam Altman tried this line some time ago for one of his supposed models.
- esc@piefed.social, 9 days
Yeah, they said it from the start: “it’s so powerful guys, we are scared uwu.” And Anthropic is a literal AI cult.
- GreenShimada@lemmy.world, 9 days
Does “it broke containment” mean it didn’t have permissions to anything and still managed to delete all the files it could find?
I Cast Fist@programming.dev, 8 days
Man, I’ll start telling that to my boss whenever I miss a deadline. “Sorry boss, the code I made is too powerful, we can’t release it.”
- LiveLM@lemmy.zip, 9 days
AI companies do this same tired schtick every time they release a model. If only they realized how amateurish it makes them look.
- pageflight@piefed.social, 9 days
Impressive marketing spin on “our product and deployment strategies are wildly insecure.”
GuyIncognito@lemmy.ca, 8 days
Crazy that the AI companies’ big selling point is always “our new model is TOO POWERFUL, it’s gone rampant and learned at a geometric rate, it enslaved six interns in the punishment sphere and subjected them to a trillion subjective years of torment. please invest, buy our stock”
Gladaed@feddit.org, 9 days
How would it do that?
It’s a set of inputs that generates an output, once per execution. Integrating it into an infrastructure that allows it to start external programs and schedule tasks really isn’t on the LLM.
You cannot start a timer without having a timer. And LLMs aren’t beings that exist continually like you and me, so time exists on a different, foreign dimension for an LLM.
- shweddy@lemmy.world, 9 days
It’s a joke referencing how Sam Altman said OpenAI would need about a year to get ChatGPT able to start a timer.
- YesButActuallyMaybe@lemmy.ca, 9 days
You attach an epoch timestamp to the initial message and then you see how much time has passed since then. Does this sound like rocket surgery?
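As a sketch of what I mean (the wrapper function is hypothetical, and to be fair to the other side of this thread: the model only “sees” the clock at the moment a prompt arrives):

```python
import time

def build_prompt(start_ts: float, user_msg: str) -> str:
    # Stamp every request with wall-clock time so the model can compute
    # elapsed time itself. Between requests, nothing is executing.
    now = time.time()
    header = f"Conversation started at epoch {start_ts:.0f}; current time is {now:.0f}."
    return f"{header}\n\nUser: {user_msg}"

start = time.time()
print(build_prompt(start, "How long have we been talking?"))
```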
- stringere@sh.itjust.works, 8 days
How does the LLM check the timestamps without a prompt? By continually prompting? In which case, you are the timer.
- YesButActuallyMaybe@lemmy.ca, 8 days
It’s running in memory… I’m not going to explain it, just ask an AI if it exists when you don’t prompt it
Gladaed@feddit.org, 8 days
That’s not how that works.
LLMs execute on request. They tend not to be scheduled to evaluate once in a while since that would be crazy wasteful.
- stringere@sh.itjust.works, 8 days
Edit to add: I know I’m not replying to the bad mansplainer.
LLM != TSR
Do people even use TSR as a phrase anymore? I don’t really see it in use much, probably because it’s more the norm than exception in modern computing.
TSR = old techie speak: Terminate and Stay Resident. Back when RAM was more limited (hey, and maybe again soon with these prices!), programs were often run once and done: they ran and were flushed from RAM. Anything that needed to continue running in the background was a TSR.
Gladaed@feddit.org, 8 days
Please tell me why you believe that the LLM keeps being executed on your chat even when the response is complete.
GnuLinuxDude@lemmy.ml, 9 days
Remember when Scam Altman posted a picture of the Death Star to explain how scary GPT-5 is? lmao these people are all such cretins and I hate them to the last.