A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • TrackinDaKraken@lemmy.world · 10 hours ago

    I think it’s worse when they get it right only some of the time. It’s not a matter of opinion, it should not change its “mind”.

    The fucking things are useless for that reason, they’re all just guessing, literally.

    • Iconoclast@feddit.uk · 7 hours ago

      Is cruise control useless because it doesn’t drive you to the grocery store? No. It’s not supposed to. It’s designed to maintain a steady speed - not to steer.

      Large Language Models, as the name suggests, are designed to generate natural-sounding language - not to reason. They’re not useless - we’re just using them off-label and then complaining when they fail at something they were never built to do.

      • Urist@leminal.space · 6 hours ago

        Language without meaning is garbage. Like, literal garbage, useful for nothing. Language is a tool used to express ideas, if there are no ideas being expressed then it’s just a combination of letters.

        Which is exactly why LLMs are useless.

        • Iconoclast@feddit.uk · 6 hours ago

          > Which is exactly why LLMs are useless.

          800 million weekly ChatGPT users disagree with that.

          • RichardDegenne@lemmy.zip · 5 hours ago

            And there are 1.3 billion smokers in the world according to the WHO.

            Does that make cigarettes useful?

            • Iconoclast@feddit.uk · 5 hours ago

              Something being useful doesn’t imply it’s good or beneficial. Those terms are not synonymous. Usefulness describes whether a thing achieves a particular goal or serves a specific purpose effectively.

              A torture device is useful for extracting information. A landmine is useful for denying an area to enemy troops.

              • Urist@leminal.space · 4 hours ago

                > A torture device is useful for extracting information.

                No it fucking isn’t! This is a great analogy, actually, thank you for bringing it up. A person being tortured will tell you literally anything that they believe will stop you from torturing them. They will confess to crimes that never happened, tell you about all their accomplices who don’t exist, and all their daily schedules that were made up on the spot. Torture is useless but morons think it is useful. Just like AI.

          • Urist@leminal.space · 5 hours ago

            Those users are being harmed by it, not benefited. That isn’t useful, it’s a social disease.

      • tigeruppercut@lemmy.zip · 6 hours ago

        But natural language in service of what? If they can’t produce answers that are correct, what’s the point of using them? I can get wrong answers anywhere.

        • iopq@lemmy.world · 4 hours ago

          Some of them can produce the correct answer. If we do the test next year and they do better than humans, isn’t that progress?

        • Iconoclast@feddit.uk · 6 hours ago

          I’m not here defending the practical value of these models. I’m just explaining what they are and what they’re not.

        • Threeme2189@sh.itjust.works · 6 hours ago

          As OP said, LLMs are really good at generating text that is fluid and looks natural to us. So if you want that kind of output, LLMs are the way to go.
          Not all LLM prompts ask factual questions and not all of the generated answers need to be correct.
          Are poems, songs, stories or movie scripts ‘correct’?

          I’m totally against shoving LLMs everywhere, but they do have their uses. They are really good at this one thing.

          • tigeruppercut@lemmy.zip · 6 hours ago

            > Are poems, songs, stories or movie scripts ‘correct’?

            It’s a valid point that they can produce natural language. The Turing Test has been a thing for a while, after all. But while the language sounds natural, can they create anything meaningful? Are the poems or stories they make worth anything? It’s not like humans don’t create shitty art, so I guess generating random soulless crap is similar to that.

            The value of language produced by something that can’t understand the reason for language is an interesting question I suppose.

            • iopq@lemmy.world · 4 hours ago

              There are people out there whose job is to format promotional emails for companies. AIs can replace this kind of soulless work completely. We should applaud that.

    • Tetragrade@leminal.space · 9 hours ago

      Same takeaway as the article (everyone read the article, right?).

      Applying it to yourself, can you recall instances when you were asked the same question at different points in time? How did you respond?

    • HugeNerd@lemmy.ca · 9 hours ago

      > they’re all just guessing, literally

      They’re literally not.

      • m0darn@lemmy.ca · 9 hours ago

        Isn’t it a probabilistic extrapolation? Isn’t that what a guess is?

        • vii@lemmy.ml · 6 hours ago

          This gets very murky very fast when you start to think about how humans learn and process; we’re just meaty pattern-matching machines.

        • Iconoclast@feddit.uk · 7 hours ago

          It’s a Large Language Model. It doesn’t “know” anything, doesn’t think, and has zero metacognition. It generates language based on patterns and probabilities. Its only goal is to produce linguistically coherent output - not a factually correct one.

          It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.

          So no, it doesn’t “guess.” It doesn’t even know it’s answering a question. It just talks.
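          To make that concrete, next-token generation can be sketched as weighted sampling over a probability distribution. This is a toy illustration with made-up numbers, not any real model’s code:

          ```python
          import random

          # Made-up next-token probabilities after the prompt "The sky is".
          # A real model scores tens of thousands of tokens; the idea is the same.
          next_token_probs = {
              "blue": 0.70,
              "clear": 0.15,
              "falling": 0.10,
              "grammar": 0.05,
          }

          def sample_token(probs):
              """Pick one token at random, weighted by its probability."""
              tokens = list(probs)
              weights = [probs[t] for t in tokens]
              return random.choices(tokens, weights=weights, k=1)[0]

          # Usually "blue", occasionally something else - which is why
          # the same question can get different answers on different runs.
          print([sample_token(next_token_probs) for _ in range(5)])
          ```

          Nothing in that loop checks whether the output is true; it only checks what’s statistically likely to come next.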

          • vii@lemmy.ml · 6 hours ago

            > It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.

            I know some humans that applies to

          • KeenFlame@feddit.nu · 6 hours ago

            Yes, it guesstimates. What is wrong with you, arguing about semantics like that?