- Allero@lemmy.todayEnglish6 hours
Great news overall, but I feel a bit alarmed about the wording, and about whether the post itself was made by a human. This doesn’t look quite like the normal Mozilla writing style. Might just be a false trigger, though.
- Kissaki@feddit.orgEnglish19 hours
If you’re interested in a counter-argument/expert analysis of previous posts, The Boy That Cried Mythos: Verification is Collapsing Trust in Anthropic is very critical of the whole press and marketing around Anthropic’s Mythos. (lemmy post link)
- 10 hours
Sure. Thanks.
I guess time will tell. The author is very focused on the system card. Maybe we’ll get a CVE list in a few weeks. Who knows…
The Firefox test is not Firefox. It’s a SpiderMonkey JavaScript engine shell in a container, with “a testing harness mimicking a Firefox 147 content process, but without the browser’s process sandbox and other defense-in-depth mitigations.”
I’m for sure not an expert in this field, but I recently saw videos from LiveOverflow about Firefox bug bounties, and I think it was the same setup. Finding a bug in one of the components was enough for them to take it very seriously.
- 8 hours
Finding a bug in one of the components was enough for them to take it very seriously.
I think the point is that Claude Opus and other open source models found the same 2 major bugs. Mythos isn’t (demonstrably) better than existing models.
- gnufuu@infosec.pubEnglish1 day
Comment score: 10. Impact: An unhandled double meaning can lead to remote laugh initiation.
- Anarki_@lemmy.blahaj.zoneEnglish2 days
I’m very torn on Mozilla collaborating with not only slop conductors, but crypto bros as well.
- redsand@infosec.pubEnglish10 hours
I like to think of it as exploit mining or smarter fuzzing and auto chaining. Unlike most of the bullshit uses for AI a high false positive rate really doesn’t matter. A shell is a shell and sorting through a haystack is easier than baling it then sorting through it.
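The "high false positive rate doesn't matter" point can be sketched in code. This is a toy illustration of that verification idea, not Mozilla's or Anthropic's actual tooling: `target` and its magic crash prefix are invented stand-ins for a program under test, and a reported crash only counts if it reproduces, so sorting the haystack is cheap.

```python
import random


def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip a few random bytes of the seed to produce a new test case."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)


def target(data: bytes) -> None:
    """Stand-in for the program under test: crashes on a magic prefix."""
    if data[:2] == b"\xde\xad":
        raise RuntimeError("simulated crash")


def fuzz(seed: bytes, iterations: int = 10_000) -> list[bytes]:
    """Mutate the seed repeatedly; keep only crashes that reproduce."""
    rng = random.Random(0)  # fixed seed so runs are repeatable
    confirmed = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except RuntimeError:
            # Cheap verification: re-run the candidate. A false positive
            # costs one extra execution, so a noisy generator is fine.
            try:
                target(case)
            except RuntimeError:
                confirmed.append(case)
    return confirmed
```

"A shell is a shell": every input in `confirmed` demonstrably crashes the target, no matter how noisy the generation step was.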
- nandeEbisu@lemmy.worldEnglish2 days
I get the issues with image generation and using text generation in scams etc., but as a professional coding tool (not just vibe coding slop) AI can be extremely helpful for certain tasks, and this use case, where organizations just don’t have the resources to have a security expert pore through millions of lines of code for bugs, is a net positive.
I think this is a case of “don’t throw the baby out with the bathwater.” We can absolutely still criticize the industry and specific companies for IP, societal, and environmental concerns, but let’s not turn away a win just because they’re causing harm elsewhere.
- 1 day
Admitting to intentionally deskilling yourself has to be humiliating. Ouch.
- Bazoogle@lemmy.worldEnglish18 hours
So you code strictly in assembly? If you do, good for you. If you don’t, then I can’t believe you would intentionally deskill yourself.
- 1 day
There are more skills to learn in the world than I possibly have time for in my lifetime. If AI or some other tool means I don’t have to learn one skill, great, I can learn some other skill.
- 2 days
The AI will exist either way, and people who use that AI will discover these exploits with it. I’d rather it be Mozilla.
- 1 day
AI bros love to normalize their fascist technology by saying that it’s inevitable.
- Bazoogle@lemmy.worldEnglish18 hours
No? Things just will exist once they are discovered. Once nuclear warheads were invented, there was no going back. Once the internet was established, it was going to be around. Same goes for a million other things. And now it applies to AI as well. Even if every big tech fascist stopped making AI, it would still be around, and it would be used maliciously. Our best bet is to use it defensively before it can be used against us.
- XLE@piefed.socialEnglish1 hour
Thing is inevitable because… Different thing? Brilliant rhetoric!
Same goes for cryptocurrency. Same goes for NFTs. Same goes for Metaverse. Hell, why not say same goes for fascism? We must embrace them all, because other thing!
- XLE@piefed.socialEnglish21 hours
Facedeer, can we at least agree it’s a bad look for Mozilla to promote a company that helped kill Iranian children and desperately wants to build weapons to kill more?
That’s without even touching on whether your “inevitability” claim is total BS or not.
- Bazoogle@lemmy.worldEnglish18 hours
As part of our continued collaboration with Anthropic
Anthropic is literally the one that refused to let them make autonomous weapons with their AI. There is a whole Wikipedia page about it. They explicitly don’t want their AI used for weapons. Of course, that wouldn’t stop governments/militaries from doing so anyway. It would be different if Mozilla were working with OpenAI, but of the two, Anthropic is currently the better one.
And yes, the AI is out of the box. Just like once nuclear warheads were created, there is no going back.
- XLE@piefed.socialEnglish12 hours
They explicitly don’t want their AI used for weapons.
This is a blatant lie, unsupported by your source. Because they explicitly do. In Dario’s own bloodthirsty words:
Our strong preference is to continue to serve the Department and our warfighters.
Don’t believe and regurgitate these lies about “red lines” when they are worse than meaningless.
Dario practically salivates with the desire to build weapons with their AI. They provided the AI for bombing Venezuelan boats, they provided the AI for killing Iranian children. Your own article says he works with Palantir. He is a child murderer and you don’t need to whitewash him.
- XLE@piefed.socialEnglish1 hour
@[email protected] pretty horrifying, isn’t it? Seeing people whitewash AI CEOs who help kill children with glee?
- Chloé 🥕@lemmy.blahaj.zoneEnglish2 days
bad news, ladybird is all in on slop too
but servo should be fine, in fact right now they have an explicit anti-ai policy!
- Solaris@lemmy.worldEnglish1 day
Why is it slop? “This was human-directed, not autonomous code generation.” Can’t you read the entire post before calling this instance of AI-assisted code slop? Every programmer and their mother has used code-assisting tools since their very first iterations; AI is just another tool for us. If we implement it responsibly and deliberately and don’t just “vibe” code it, then it’s a perfectly fair use of AI without having to add the term slop to it.
- XLE@piefed.socialEnglish1 day
“Human directed” is a euphemism for someone pushing a button to generate a result. Huh, sounds like people vibe coding.
“If” is doing a lot of heavy lifting in your statement. What makes you think vibe coders will use their new drug responsibly?
- XLE@piefed.socialEnglish3 hours
@[email protected] would you like to respond here instead of downvoting me in threads you didn’t participate in?
- 1 day
I guarantee every new tool will be used irresponsibly. However, that doesn’t mean the tool itself is bad. You can use the tool responsibly, and if you do, you can get great benefits. I regularly use AI for my coding, but I have to review everything closely, and I regularly find things where there is no way the code could be correct, no way a human would write it, or it is otherwise unacceptable. Still, I can easily fix those problems, and they are often much easier to fix than trying to write everything by hand.
- kungen@feddit.nuEnglish2 days
Hopefully not, but it makes you wonder how many vulnerabilities they might be introducing by fixing others…
- 1 day
Probably a reference to Mozilla adding Brave’s adblocking system to Firefox.
- cadekat@pawb.socialEnglish19 hours
Huh, I hadn’t heard about that! Honestly seems useful, and if it’s only the engine, I don’t see how crypto bros are relevant.
If there’s some “pay to unblock” scheme, that’s a different story.