• jsomae@lemmy.ml · 5 months ago

    I’m not claiming that the use of AI is ethical. If you want to fight back you have to take it seriously though.

    • outhouseperilous@lemmy.dbzer0.com · 5 months ago

      It can't do 30% of tasks correctly. It can do tasks correctly as much as 30% of the time, and since it's LLM shit you know those numbers have been more massaged than any human in history has ever been.

        • jsomae@lemmy.ml · 5 months ago

        I meant the latter, not “it can do 30% of tasks correctly 100% of the time.”

            • jsomae@lemmy.ml · 5 months ago

            Yes, that's generally useless. It should not be shoved down people's throats. But 30% accuracy still has its uses, especially if the result can be programmatically verified.
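The "useful if programmatically verifiable" point boils down to a generate-and-verify retry loop. A minimal sketch, where `generate` and `verify` are hypothetical stand-ins (an LLM call and a checker), with success hard-coded at 30% to match the figure being discussed:

```python
import random

def generate(task):
    # Hypothetical stand-in for an LLM call: returns the right answer
    # about 30% of the time, a wrong one otherwise.
    return task * 2 if random.random() < 0.3 else task + 1

def verify(task, answer):
    # Programmatic check -- here we happen to know the right answer is task * 2.
    return answer == task * 2

def solve_with_retries(task, max_attempts=10):
    """Keep sampling until the checker accepts an answer, or give up."""
    for _ in range(max_attempts):
        answer = generate(task)
        if verify(task, answer):
            return answer
    return None

random.seed(0)
results = [solve_with_retries(t) for t in range(100)]
success_rate = sum(r is not None for r in results) / len(results)
print(f"{success_rate:.0%} of tasks solved within 10 tries")
```

Because every returned answer passed `verify`, the 30% raw accuracy only costs retries, not correctness, as long as a reliable checker exists.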

                • Knock_Knock_Lemmy_In@lemmy.world · 5 months ago

              Run something with a 70% failure rate 10x and you get to a cumulative ~97% pass rate. LLMs don't get tired and they can be run in parallel.
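The arithmetic behind that claim, assuming each attempt is an independent coin flip: the chance that all n tries fail is 0.7ⁿ, so the cumulative pass rate is 1 − 0.7ⁿ.

```python
# Chance of at least one success in n independent attempts,
# each with a 70% failure rate.
fail = 0.7
for n in (1, 5, 10):
    cumulative = 1 - fail ** n
    print(f"{n:2d} tries -> {cumulative:.1%} chance of at least one success")
```

At n = 10 this comes out to about 97.2%, which is where the "run it 10x" figure comes from.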

                    • jsomae@lemmy.ml · 5 months ago

                The problem is they are not i.i.d. (independent and identically distributed), so this doesn't really work. It works a bit, which is in my opinion why chain-of-thought is effective (it gives the LLM a chance to posit a couple of answers first). However, we're already looking at "agents," so they're probably already doing chain-of-thought.
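The i.i.d. objection can be illustrated with a toy simulation (hypothetical numbers, chosen only to match the 30% figure above). Take the extreme case of correlation: the model either can do a given task or it can't, with 30% of tasks doable. Both models below have the same 30% single-try accuracy, but retries only help the independent one:

```python
import random

random.seed(1)

def solved_within(tries, per_try_p):
    """At least one success in `tries` attempts, each succeeding with per_try_p."""
    return any(random.random() < per_try_p for _ in range(tries))

def simulate(correlated, trials=20000, tries=10):
    wins = 0
    for _ in range(trials):
        if correlated:
            # Extreme correlation: the model either can do this task (p=1)
            # or it can't (p=0); 30% of tasks are doable. Retries add nothing.
            per_try_p = 1.0 if random.random() < 0.3 else 0.0
        else:
            # i.i.d. attempts: every retry is a fresh 30% coin flip.
            per_try_p = 0.3
        wins += solved_within(tries, per_try_p)
    return wins / trials

print(f"independent retries: {simulate(False):.1%}")  # close to 97%
print(f"correlated retries:  {simulate(True):.1%}")   # stuck near 30%
```

Real models sit somewhere between these two extremes, which is why retrying helps "a bit" rather than delivering the full 1 − 0.7ⁿ improvement.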

                        • Knock_Knock_Lemmy_In@lemmy.world · 5 months ago

                    Very fair comment. In my experience, even when increasing the temperature you get stuck in local minima.

                  I was just trying to illustrate how 70% failure rates can still be useful.

                    • MangoCats@feddit.it · 5 months ago

                I have actually been doing this lately: iteratively prompting AI to write software and fix its own errors until something useful comes out. It's a lot like machine translation. I speak fluent C++ but not Rust, yet I can hammer away at the AI (with English-language prompts) until it produces passable Rust for something I could have written myself in C++ in half the time and effort.

                I also don’t speak Finnish, but Google Translate can take what I say in English and put it into at least somewhat comprehensible Finnish without egregious translation errors most of the time.

                Is this useful? When C++ is getting banned for “security concerns” and Rust is the required language, it’s at least a little helpful.

                        • jsomae@lemmy.ml · 5 months ago

                  I’m impressed you can make strides with Rust with AI. I am in a similar boat, except I’ve found LLMs are terrible with Rust.