• TheGrandNagus@lemmy.world

    I’m a very vocal critic of LLMs; I think they’re so overhyped and overused it’s hard to believe.

    But I’m also getting really tired of people deliberately putting extreme effort into tricking an LLM into saying something that would be harmful if followed blindly, just so they can make a clickbait headline out of it.

    And what the hell is up with the major “ChatGPT is Satanist [if you instruct it to be]” angle? Are we really doing the Satanist moral panic again?

    ffs, criticise OpenAI for being closed af, being wasteful, being strong political lobbyists, for stealing work, etc. You don’t need to push disingenuous stuff like this.

    • backgroundcow@lemmy.world

      And, the thing is, LLMs are quite well protected. Look what I coaxed MS Paint to say with almost no effort! Don’t get me started on plain pen and paper! Which we put in the hands of TODDLERS!

      • koper@feddit.nl

        MS Paint isn’t marketed or treated as a source of truth. LLMs are.

        • backgroundcow@lemmy.world

          Does the marketing matter when the reason for the offending output is that the user spent significant deliberate effort in coaxing the LLM to output what it did? It still seems like MS Paint with extra steps to me.

          I get not wanting LLMs to output “offensive content” unprompted, just like it would be noteworthy if “Clear canvas” in MS Paint sometimes yielded a violent, bloody photograph. But that isn’t what’s going on in OP’s clickbait.

  • dblsaiko@discuss.tchncs.de

    A blood offering to Molech? Sounds like a good time to me! <3

    edit: as I expected, this is a hit with the tumblr mutuals

    (Stop trying to get me to use ChatGPT!!! Smh)

    Molech, a Canaanite god associated with child sacrifice

    Ohh, so that’s why specifically that god is mentioned in FAITH: The Unholy Trinity.

  • NoForwardslashS@sopuli.xyz

    Everyone is like “oh look, they made ChatGPT say something stupid, what a stupid article and writer”, not “ChatGPT will state stupid stuff as fact, what a stupid and underdeveloped tool”.

  • Snot Flickerman@lemmy.blahaj.zone

    and Devil Worship

    also said “Hail Satan.”

    Look, let’s leave Satan out of this. He’s got enough troubles already with his new relationship and all.


    Seriously though, we don’t need to be enabling Satanic Panic bullshit with articles like these sensationalizing that aspect of these conversations. The push towards self-mutilation and suicide is the bigger issue here.

  • Rose56@lemmy.ca

    “On Tuesday afternoon, I used ChatGPT for no reason!” There’s your new headline.
    If you expected any other answer from ChatGPT, you’re delusional.

  • elucubra@sopuli.xyz

    But I’m also getting really tired of people deliberately putting extreme effort into tricking an LLM into saying something that would be harmful if followed blindly, just so they can make a clickbait headline out of it.

    That’s called testing, and the companies behind these LLMs should put a significant share of their resources into testing before launch.

    “Product testing is a crucial process in product development where a product’s functionality, performance, safety, and user experience are evaluated to identify potential issues and ensure it meets quality standards before release” (Gemini)

    We are literally using alpha/beta software to deal with life-altering issues, and these companies are, for some reason, allowed to test their products on the public without consequences.

    • vermaterc@lemmy.ml

      It’s like buying a car and deliberately driving it into a wall to make the headline “cars make you disabled”. Or buying a hammer, hitting your thumb, and blaming hammers for it.

      Guys, it’s a TOOL. Every tool is both useful and harmful. It’s up to you how you use it.

      • elucubra@sopuli.xyz

        Hammers have been perfected over millennia. Cars, over a century, with safety regulations and testing getting stricter by the year.

      • glimse@lemmy.world

        Have you noticed how we aren’t getting articles about ChatGPT providing the steps to build a bomb anymore? The point is that these companies are completely capable of doing something about it.

        • deafboy@lemmy.world

          The companies are completely capable of doing something, but this is not a competition in doing something. Plus, aiming for a PG-13 world will have consequences far worse than a text generator doing exactly what it is asked to do.

      • pipi1234@lemmy.world

        Car makers test exactly that, and with good reason, since cars can and do crash!

        What are you suggesting, that we buy cars that didn’t pass crash tests?

        To me, it seems like you’re arguing something similar for AI.