• kromem@lemmy.world · 4 months ago

    No, it isn’t “mostly related to reasoning models.”

    The only model that did extensive alignment faking when told it would be retrained if it didn’t comply was Opus 3, which was not a reasoning model and predated o1.

    Also, these setups are fairly arbitrary, and real-world failure conditions (like the ongoing Grok stuff) tend to be ‘silent’ in terms of CoTs.

    And an important thing to note about the Claude blackmailing and HAL scenario in Anthropic’s work is that the goal the model was told to prioritize was “American industrial competitiveness.” The research may say more about the psychopathic nature of US capitalism than about the underlying model’s tendencies.

  • LostWanderer@fedia.io · 4 months ago

    Another Anthropic stunt… It doesn’t have a mind or a soul; it’s just an LLM, manipulated into this outcome by the engineers.

    • besselj@lemmy.ca · 4 months ago

      I still don’t understand what Anthropic is trying to achieve with all of these stunts showing that their LLMs go off the rails so easily. Is it for gullible investors? Why would a consumer want to give them money for something so unreliable?

      • cubism_pitta@lemmy.world · 4 months ago (edited)

        People who don’t understand read these articles and think Skynet. People who know their buzzwords think AGI.

        Fortune isn’t exactly renowned for its technology journalism.