cross-posted from: https://feddit.org/post/28915273

[…]

That marketing may have outstripped reality. Early reports from Mythos preview users, including AWS and Mozilla, indicate that while the model is very good and very fast at finding vulnerabilities, and requires less hands-on guidance from security engineers - making it a welcome time-saver for the human teams - it has yet to eclipse human security researchers.

“So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t,” Mozilla CTO Bobby Holley said, after revealing that Mythos found 271 vulnerabilities in Firefox 150. Then he added: “We also haven’t seen any bugs that couldn’t have been found by an elite human researcher.” In other words, it’s like adding an automated security researcher to your team. Not a zero-day machine that’s too dangerous for the world.

  • 7 hours

    I don’t understand what they’re saying. Mythos can’t find vulnerabilities that humans can’t, but it also supposedly found 271 vulns so…the humans were just ignoring them?

    • 2 hours

      Speaking generally…

      One is that it was pitched as a superhuman AI that could think in ways humans couldn’t possibly imagine, escaping any security measure we might think to bind it with. That was the calibrated expectation.

      Instead it’s fine at security “findings” that a human could have noticed if they had actually looked. For a lot of AI this is the key value proposition: looking when a human can’t be bothered to look, and seeing less than a human would - but the human never would have looked at all. For example, a human can more reliably distinguish a needle from a straw of hay, but the relentless attention of an AI system is the more practical approach to finding needles in haystacks. It will miss some needles and flag some hay, so a dedicated human effort would have been better, but the AI is better than nothing, especially with a human to discard the accidental hay.
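The needle-in-haystack trade-off can be put in toy numbers (all of them hypothetical, just to make the point concrete):

```python
# Toy arithmetic, purely illustrative: a scanner that sees less than a
# careful human, pointed at code no human was going to review anyway.
haystack = 1_000_000          # lines of code (the hay)
needles = 50                  # real vulnerabilities hidden in it
recall = 0.80                 # the scanner catches 80% of real needles
false_positive_rate = 0.0001  # and flags 0.01% of the hay as needles

true_positives = needles * recall                 # 40 real finds
false_positives = haystack * false_positive_rate  # ~100 bits of hay to discard
precision = true_positives / (true_positives + false_positives)

print(f"{true_positives:.0f} real vulns found, {false_positives:.0f} false alarms, "
      f"precision {precision:.0%} - versus zero found if nobody looks")
```

Even at under 30% precision, the scanner surfaces 40 bugs that would otherwise never have been looked for - provided a human is willing to shovel the hundred-odd bits of hay.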

      Another thing is that the nuance of the “vulnerabilities” may be very underwhelming. Anyone who has been in the security world knows that the vast majority of reported “vulnerabilities” are nothingburgers in practice. curl has a “security” issue where a malicious command line could make it lock up instead of timing out if the peer stops responding. I’ve seen script engines that were explicitly designed to allow command execution get CVEs because a 4 GB malicious script could invoke commands without including the exec directive - and that engine is only ever used by people with unfettered shell access anyway. Another “critical” vulnerability required an authorized user to remove and even rewrite the code that listens to the network, letting in the unsanitized data that is normally caught by the bits they disabled. curl had another where an attacker could make it output vulnerable C code - then the attacker would “just” have to find a way to compile the command’s output and they’d have a vulnerable C executable…

    • 3 hours

      “Finding” bugs by throwing shit at the wall and assuming people will sort it out provides negative value. You are technically finding bugs, but you could do the same by just assuming every line of your code contains five bugs. The question is “and then what”, and the answer is “someone needs to sort them out and deal with it” - and if you have people who can fix the bugs, they’re perfectly capable of finding them themselves. The bugs still exist because there are not enough people to fix them, and slop generation doesn’t help with that either.

        • 34 minutes

          The document from Anthropic purporting to be security research work largely leaves things vague (marketing-material vague) and declines to use any recognized reporting standard that would let a reader judge the claims at all. They describe a pretty normal security reality (‘thousands of vulnerabilities’ - but anyone who lives in the CVE world knows that was already the case, so nothing to distinguish it from the status quo).

          Then, in their nuanced case study, they had to rip a specific piece out of Firefox to torture, removing all the security protections that would already have contained these ‘problems’. Even then it underperformed existing fuzzers, and nearly all of its successes were based on previously known vulnerabilities that had already been fixed - they were running the unpatched version to prove its ability.

          Ultimately, the one concrete thing they did was prove that if you fed Mythos two already-known vulnerabilities, it could figure out how to exploit them better than other models. It was worse at finding vulnerabilities, but it could build a demonstrator. A human could have done that, and it’s not the tedious part of security research - the finding is the tedious part. Again, in the real world these exploits never would have worked, because they had to disable a bunch of protections that had already neutered these “issues” before they were ever known.

        • TLDR: Mythos is strictly worse at finding vulnerabilities than Opus 4.6, and about on par with one specific cheap open-source 2B-parameter (i.e. tiny and super cheap) model.

          It’s all marketing and no substance.

  • Another researcher, Davi Ottenheimer, pointed out that the security section (Section 3, pages 47-53) of Anthropic’s 244-page documentation “contains no count of zero-days at all. With no CVE list, no CVSS distribution, no severity bucket, no disclosure timeline, no vendor-confirmed-novel table, no false-positive rate.”

    Excerpts from the summary of the post linked in “Devanash ultimately concluded”, a lot of which the Register repeats (which I think is a good thing, since the copyediting makes the language a lot more accessible and wide-reaching - and of course it was credited):

    The bugs are real. 17-year-old FreeBSD RCE, 23-year-old Linux kernel heap overflow, 27-year-old OpenBSD TCP flaw. LLMs catch these because they can reason about the gap between what code does and what the developer intended. Fuzzers and static analysis literally cannot do this.

    The coverage is wrong on almost every detail. The “181 Firefox exploits” ran with the browser sandbox (yes, the thing that stops browser exploits) turned off. The FreeBSD exploit transcript shows substantial human guidance, not autonomy. The “thousands of severe vulnerabilities” extrapolates from 198 manually reviewed reports. The Linux kernel bug was found by Opus 4.6, the public model, not Mythos.

    The moat is thinner than anyone reported. AISLE tested eight models including a 3.6B model at $0.11/M tokens. All eight found the FreeBSD bug. Mythos’s actual lead is in multi-step exploit development, not detection. That’s a narrower and more replicable advantage than what’s being sold.
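The “gap between what code does and what the developer intended” point can be illustrated with a toy example (hypothetical code, not from any of the reports):

```python
# Hypothetical "code vs. intent" gap: nothing here crashes or corrupts
# memory, so a fuzzer gets no signal; spotting the bug requires reasoning
# about what the developer meant.

def is_authorized(role: str) -> bool:
    """Intent: allow only the exact role 'admin' through."""
    # Bug: a prefix check instead of an equality check means the empty
    # string matches everything ("admin".startswith("") is True).
    return "admin".startswith(role)

print(is_authorized("admin"))  # True, as intended
print(is_authorized(""))       # True - silent auth bypass, no crash
print(is_authorized("user"))   # False
```

A fuzzer throwing random roles at this sees identical, perfectly healthy behavior either way; only something that compares the code against its stated intent will flag the empty-string bypass.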

  • 14 hours

    You mean the CEO of an AI focused tech startup blatantly lied? No way! This is impossible.

  • Immediately after the big announcements about Mythos there were followups by other teams that were able to find most of the same vulnerabilities with other existing models. I think the main takeaway there was that it’s just a matter of actually looking. Anthropic’s advantage may have been in the framework that let them do so in industrial-scale quantity rather than the cleverness of the particular model they used.

    This sort of security scan is still new and important to pay attention to, but it’s not something that’s unique to Anthropic or that can be kept “contained.” Shades of how GPT-2 was considered “too dangerous to release” back when it first appeared. Comical in hindsight, and impossible to prevent anyway.

    • followups by other teams that were able to find most of the same vulnerabilities with other existing models

      The one I saw was marketing hype by a company claiming to be able to do the same thing but cheaper. But when you read the fine print, you could tell that it was all just fudged.

      It’s comical how people who need to believe that it’s all just marketing hype bought that marketing hype hook, line, and sinker. The implication that this would mean that LLMs are far, far more capable than anyone gives them credit for, completely slipped past them. Stochastic parrots with no understanding.

      • 3 hours

        The implication that this would mean that LLMs are far, far more capable than anyone gives them credit for, completely slipped past them.

        That’s because those implications are blatant, open, clear lies. Your slop generator provides negative value to everyone except those who own it.

  • And if it’s like a lot of security scans, most of the results are technically correct, but, within the context of the project, not something anyone’s going to take the time to fix.

    • 31 minutes

      Note that in this case, very specifically, they had to yank Firefox’s JavaScript engine out of Firefox “but without the browser’s process sandbox and other defense-in-depth mitigations.” They had to remove the very mechanisms designed to quash vulnerabilities.

      And they had to test explicitly against the Firefox 147 vintage, because Firefox 148 had already fixed the two issues that Mythos exploited to get its impressive number. Before Mythos even ran, the key problems had been found and patched…

    • 18 hours

      most of the results are technically correct, but, within the context of the project, not something anyone’s going to take the time to fix.

      I don’t mind leaving “technically correct” vulnerabilities in place while there’s no known way to create an exploit. If you’ve got a vuln with a known exploit and are relying on “but nobody is ever going to actually try that on us” - then you’re part of the problem, a big part.

      • It might be a config thing, but pretty often these scans will find issues that are only relevant on, e.g., Windows when you’re building a Linux container. Or the issue is in some XML parsing library in the base OS, but the service never receives XML and isn’t public-facing anyway. Context matters.
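A sketch of what that kind of context filter looks like (the findings format and field names here are invented for illustration):

```python
# Hypothetical scanner output: each finding tagged with the platform it
# affects and the component it lives in (all field names made up).
findings = [
    {"id": "FIND-1", "platform": "windows", "component": "win32-crypto"},
    {"id": "FIND-2", "platform": "linux",   "component": "libxml2"},
    {"id": "FIND-3", "platform": "linux",   "component": "openssl"},
]

# Deployment context: a Linux container whose service never parses XML.
context = {"platform": "linux", "unreachable_components": {"libxml2"}}

relevant = [
    f["id"] for f in findings
    if f["platform"] == context["platform"]
    and f["component"] not in context["unreachable_components"]
]
print(relevant)  # only FIND-3 survives: right OS, actually reachable code
```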

      • 11 hours

        This is why CVSS scoring is used for severity. A vuln that doesn’t really give you anything, that you can only exploit locally, when already having elevated privileges? That’s going to be low priority for a fix.
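That intuition is exactly what the CVSS v3.1 base-score formula encodes: a “Local” attack vector and “High” privileges required drag the score down hard. A minimal sketch of the base-score arithmetic, covering only the scope-unchanged case, using the weights published in the v3.1 specification:

```python
# CVSS v3.1 base score for scope-unchanged vulns (metric weights per spec).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # C/I/A impact

def roundup(x):  # the spec's "round up to one decimal" helper
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# Remote, unauthenticated, full compromise: Critical.
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8
# Local only, needs elevated privileges, minor info leak: Low.
print(base_score("L", "L", "H", "N", "L", "N", "N"))  # 2.3
```

Same formula, and the local/elevated-privileges vuln lands at 2.3 (Low) while the remote one hits 9.8 (Critical) - which is why the former sits at the bottom of everyone’s fix queue.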

        • 10 hours

          A vuln that doesn’t really give you anything, that you can only exploit locally, when already having elevated privileges? That’s going to be low priority for a fix.

          And yet here I am - rebuilding a new interim image for our security team to scan, so they can generate a spreadsheet with hundreds of lines of “items of concern” that sit above our “threshold of concern”. Most of them get dismissed for exactly the justifications you just gave - local exploit only, etc. - but I have to read every one, tease out the “local exploit only” language, and quote it as the justification, over and over and over, every few months.

          Corporate anxiety is limitless.

          • 10 hours

            You’re allowed to do that? Must be nice. We recently got told that you get one six-month justification; after that, it must be remediated.

            • 6 minutes

              These are vulnerabilities for local access on a console which is operated in kiosk mode - users never have command line access, and the consoles themselves are rarely if ever network connected.

  • 18 hours

    In other words, it’s like adding an automated security researcher to your team. Not a zero-day machine that’s too dangerous for the world.

    Missing the point? Hiring an elite human researcher isn’t easy, or cheap - it’s beyond the means of the vast majority of people out there. A $20/month Claude Pro subscription? Not so much.

    The question for me: how much better is Mythos than Opus 4.6 or 4.7, or Sonnet for that matter? Those models, and similar ones from other companies, are already being effectively leveraged by threat actors. If Mythos reduces the time-and-money cost of finding a new zero-day by a factor of 10 vs Opus 4.7, that’s concerning. If it’s a factor of 1.1 - meh. The world is going to have to learn how to deal with these things sooner rather than later, and that means the “white hats” are going to need superior funding to the “black hats”, along with cooperation to close the gaps they find, or the “black hats” are going to get a lot more annoying than they already are.

    • 3 hours

      People for some reason assume that you can pay $20 for a bot and it will do something. You need a person with a lot of experience to get anything useful out of this bot, and every time we actually measure, the result is that your experienced person would have been quicker and better off not using it at all and doing the same work themselves.
      The corporate solution is to hire an inexperienced person to wrangle the bots, but that’s a sure way to introduce bugs, not fix them.

      • 3 minutes

        no CVE list, no CVSS distribution, no severity bucket, no disclosure timeline, no vendor-confirmed-novel table, no false-positive rate

        Yeah, that absence smells like cooked data - it would be trivially easy to ask the LLM to give you the CVE list, the CVSS distribution / severity buckets, the timelines, everything you might want.

        I have LLMs doing pull request reviews, and by default they just take potshots, but if you prompt them they will point directly to the files and line numbers where the problems they flag reside…

    • 16 hours

      How much better is Mythos than Opus 4.6 or 4.7, or Sonnet for that matter?

      Opus 4.6 resulted in 22 fixes in Firefox 148, compared to 271 fixes with Mythos in Firefox 150.

      source

        • 11 hours

          Firefox is a massive program, so yeah it’s gonna have a lot of bugs. Even a simple HTML rendering browser is a complex program.

          • 11 hours

            Does this mean browsers are going to crash less in the near future?

            • 3 hours

              What do you do with your browsers to make them crash? Mine hasn’t crashed in at least a decade.