• scratchee@feddit.uk

    Neither can humans, ergo nobody should ever be held liable for anything.

    Civilisation is a sham, QED.

    • Electricd@lemmybefree.net

      Glad to hear you are an LLM

The more safeguards are added to LLMs, the dumber they get, and the more resource-intensive they become to offset this. If you get convinced to kill yourself by an AI, I’m pretty sure your decision was already made, or you’re a statistical blip.

      • scratchee@feddit.uk

“Safeguards and regulations make business less efficient” has always been true. They still prevent death and suffering.

In this case, if they can’t figure out how to control LLMs without crippling them, that’s pretty conclusive proof that LLMs shouldn’t be used. What good is a tool you can’t control?

        “I cannot regulate this nuclear plant without the power dropping, so I’ll just run it unregulated”.

        • Electricd@lemmybefree.net

Some food additives are responsible for cancers yet are still allowed, because their benefits generally outweigh their harms. Where you draw the line is up to you, but even if you’re strict, you should still let people choose for themselves.

LLMs are incredibly useful for some things and really bad at others. Why can’t people use the tool as intended, rather than stretching it to unapproved uses and putting themselves at risk?