• 3 hours

    Great news overall, but I feel a bit alarmed about the wording, and about whether the post itself was written by a human. This doesn’t look quite like the normal Mozilla writing style. Might just be a false trigger, though.

    • Sure. Thanks.

      I guess time will tell. The author is very focused on the system card. Maybe we’ll get a CVE list in a few weeks. Who knows…

      The Firefox test is not Firefox. It’s a SpiderMonkey JavaScript engine shell in a container, with “a testing harness mimicking a Firefox 147 content process, but without the browser’s process sandbox and other defense-in-depth mitigations.”

      I’m for sure not an expert in this field, but I recently saw videos from LiveOverflow about Firefox bug bounties, and I think it was the same setup. Finding a bug in one of the components was enough for them to take it very seriously.

      • Finding a bug in one of the components was enough for them to take it very seriously.

        I think the point is that Claude Opus and other open source models found the same 2 major bugs. Mythos isn’t (demonstrably) better than existing models.

    • 1 day

      Comment score: 10. Impact: An unhandled double meaning can lead to remote laugh initiation.

  • I’m very torn on Mozilla collaborating with not only slop conductors, but crypto bros as well.

    • 7 hours

      I like to think of it as exploit mining, or smarter fuzzing and auto-chaining. Unlike most of the bullshit uses for AI, a high false-positive rate really doesn’t matter. A shell is a shell, and sorting through a haystack is easier than baling it and then sorting through it.
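      The reasoning above can be sketched in code: when each candidate finding can be mechanically re-run against the target, false positives cost almost nothing, because only reproducible crashes survive triage. This is a minimal illustrative sketch, not any real tool's pipeline; the shell path and function names are assumptions.

      ```python
      # Hypothetical triage sketch: AI- or fuzzer-suggested test cases are
      # worthless unless they reproduce, so we just re-run each one against
      # the target shell and keep only the confirmed crashers.
      import subprocess


      def reproduces_crash(shell_path: str, testcase: str) -> bool:
          """Run the shell on a candidate test case; a non-zero exit status
          (crash or abort) counts as a reproducible finding."""
          result = subprocess.run(
              [shell_path, testcase],
              capture_output=True,
              timeout=30,
          )
          return result.returncode != 0


      def triage(shell_path: str, candidates: list[str]) -> list[str]:
          """Filter a haystack of candidates down to demonstrable crashes."""
          confirmed = []
          for case in candidates:
              try:
                  if reproduces_crash(shell_path, case):
                      confirmed.append(case)
              except subprocess.TimeoutExpired:
                  pass  # hangs would be triaged separately
          return confirmed
      ```

      Even if 95% of suggestions are noise, the loop above discards them automatically; the expensive human review only starts once a crash is confirmed.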

    • I get the issues with image generation, and with using text generation in scams etc., but as a professional coding tool (not just vibe-coding slop) AI can be extremely helpful for certain tasks, and this use case, where organizations just don’t have the resources to have a security expert pore through millions of lines of code for bugs, is a net positive.

      I think this is a case of “don’t throw the baby out with the bathwater”. We can absolutely still criticize the industry and specific companies for IP, societal, and environmental concerns, but let’s not turn away a win just because they’re causing harm elsewhere.

        • 15 hours

          So you code strictly in assembly? If you do, good for you. If you don’t, then I can’t believe you would intentionally deskill yourself.

        • There are more skills to learn in the world than I possibly have time for in my lifetime. If AI or some other tool means I don’t have to learn one skill, great, I can learn some other skill.

    • The AI will exist either way, and people who use that AI will discover these exploits with it. I’d rather it be Mozilla.

        • 15 hours

          No? Things simply continue to exist once they are discovered. Once nuclear warheads were discovered, there was no going back. Once the internet was established, it was going to be around. The same goes for a million other things, and now it applies to AI as well. Even if every big-tech fascist stopped making AI, it would still be around, and it would be used maliciously. Our best bet is to use it defensively before it can be used against us.

      • 18 hours

        Facedeer, can we at least agree it’s a bad look for Mozilla to promote a company that helped kill Iranian children and desperately wants to build weapons to kill more?

        That’s without even touching on whether your “inevitability” claim is total BS or not.

        • 15 hours

          As part of our continued collaboration with Anthropic

          Anthropic is literally the one that refused to let them make autonomous weapons with its AI. There is a whole Wikipedia page about it. They explicitly don’t want their AI used for weapons. Of course, that wouldn’t stop governments/militaries from doing so anyway. It would be different if Mozilla were working with OpenAI, but of the two, Anthropic is currently the better one.

          And yes, the AI is out of the box. Just like once nuclear warheads were created, there is no going back.

          • 9 hours

            They explicitly don’t want their AI used for weapons.

            This is a blatant lie, unsupported by your source. Because they explicitly do. In Dario’s own bloodthirsty words:

            Our strong preference is to continue to serve the Department and our warfighters.

            Don’t believe and regurgitate these lies about “red lines” when they are worse than meaningless.

            Dario practically salivates with the desire to build weapons with their AI. They provided the AI for bombing Venezuelan boats, they provided the AI for killing Iranian children. Your own article says he works with Palantir. He is a child murderer and you don’t need to whitewash him.

    • 2 days

      That’s why Servo and Ladybird need to be vastly built up

        • Why is it slop? “This was human-directed, not autonomous code generation.” Can’t you read the entire post before calling this instance of AI-assisted code slop? Every programmer and their mother has used code-assisting tools since their very first iterations; AI is just another tool for us. If we implement it responsibly and deliberately, and don’t just “vibe” code, then it’s a perfectly fair use of AI without having to attach the term slop to it.

          • 1 day

            “Human-directed” is a euphemism for someone pushing a button to generate a result. Huh, sounds like people vibe coding.

            “If” is doing a lot of heavy lifting in your statement. What makes you think vibe coders will use their new drug responsibly?

            • I guarantee every new tool will be used irresponsibly. However, that doesn’t mean the tool itself is bad. You can use the tool responsibly, and if you do, you can get great benefits. I regularly use AI for my coding, but I have to review everything closely, and I regularly find things where there’s no way the code could be correct, no way a human would write that, or the code is otherwise unacceptable. Still, I can easily fix those problems, and they’re often much easier to fix than writing everything by hand.

    • 2 days

      Hopefully not, but it makes you wonder how many vulnerabilities they might be introducing by fixing others…

        • 16 hours

          Huh, I hadn’t heard about that! Honestly seems useful, and if it’s only the engine, I don’t see how crypto bros are relevant.

          If there’s some “pay to unblock” scheme, that’s a different story.