• TheGrandNagus@lemmy.world
    2 months ago

    I’m a very vocal critic of LLMs; I think they’re so overhyped and overused it’s hard to believe.

    But I’m also getting really tired of people deliberately putting extreme effort into tricking an LLM into saying something that would be harmful if followed blindly, just so they can make a clickbait headline out of it.

    And what the hell is up with the major “ChatGPT is Satanist [if you instruct it to be]” angle? Are we really doing the Satanist moral panic again?

    ffs, criticise OpenAI for being closed af, for being wasteful, for their heavy political lobbying, for stealing work, etc. You don’t need to push disingenuous stuff like this.

    • backgroundcow@lemmy.world
      2 months ago

      And, the thing is, LLMs are quite well protected. Look what I coaxed MS Paint to say with almost no effort! Don’t get me started on plain pen and paper! Which we put in the hands of TODDLERS!

      • koper@feddit.nl
        2 months ago

        MS Paint isn’t marketed or treated as a source of truth. LLMs are.

        • backgroundcow@lemmy.world
          2 months ago

          Does the marketing matter when the reason for the offending output is that the user spent significant deliberate effort in coaxing the LLM to output what it did? It still seems like MS Paint with extra steps to me.

          I get not wanting LLMs to output “offensive content” unprompted, just like it would be noteworthy if “Clear canvas” in MS Paint sometimes yielded a violent, bloody photograph. But that isn’t what’s going on in OP’s clickbait.