  • If I was the director of AI safety, and I used AI to own and delete my inbox, I sure as shit would never tell a soul.

    This is pure unbridled incompetence.

    • 2 months

The whole “AI safety” field is this incompetent. These are the people who will tell you AI is on the verge of creating a bioweapon, and then run random code in a command line. Completely and totally unserious.

      • I don’t know what the hell has happened, but some of these people are basically human jellyfish. Big tech is full of them now.

        No thought enters their mind, but they dodge the layoffs and the PIPs and get promoted like this.

        I don’t fucking get it.

        • 2 months

          It’s just the natural progression of a disease that spreads outwards from Management. The bosses want yes-men, not people capable of independent thought.

• In other words, it’s why authoritarianism always fails.

            And capitalism is very specifically not a democratic economic system. There’s a hierarchy. The owners are the ones in power.

• The “AI safety” field is about two things: marketing AIs as so powerful that they’re risky to use but riskier to be left behind by competitors who do use them, and keeping AIs from doing so much brand damage that the stock price suffers. This story is about marketing an AI as powerful.

• If I were a director of AI safety, I wouldn’t let openclaw within 100 feet of anything, let alone my work machine.

      • 2 months

If the Director of AI Safety is plugging code with extensively documented and reported security flaws into their real-life inbox, imagine the Average Joe.

    • Yep.

      These people are all fucking complete clowns.

      It would be one thing if they were just evil, but they have such an inflated view of themselves that they have no self awareness.

      Fucking corpos man.

    • They wanted to “eat their own dog food” but it’s closer to “eating their own dog shit”

    • 2 months

Especially your work mailbox, which is a prime target for hackers and scammers, and where a hidden prompt-injection payload isn’t that improbable.

      This, IMHO, is a fireable offense, not a funny anecdote.

    • 2 months

      If I was the director of AI safety, […] would never tell a soul.

As a director of something, you are kind of a public person. There’s no way to just not tell.