I thought of this recently (anti-LLM content within)

The reason a lot of companies/people are obsessed with LLMs and the like is that they can solve some of their problems (so they think). The thing I noticed is that a LOT of the things they try to force the LLM to fix could be solved with relatively simple programming.

Things like better search (SEO destroyed this by design, and Kagi is about the only usable search engine with easy access), organization (use a database), document management, etc.

People don’t fully understand how it all works, so they try to shoehorn the LLM into doing the work for them (poorly), while learning nothing of value.

  • danzania@infosec.pub · 2 months ago

    Devil’s advocate: if the problems could be solved with relatively simple programming, why aren’t they solved already?

    • bridgeenjoyer@sh.itjust.works (OP) · 2 months ago

      Because companies don’t understand how that works, and don’t want to pay for it. It’s easier to generate LLM slop to band-aid a problem and create new problems.

  • Lucy :3@feddit.org · 2 months ago

    See how it’s apparently newsworthy that a simple chess engine on the C64 can beat ShitSkibidi. It was fucking obvious, to us. Just like `random.randint(0, 10)` is much worse at figuring out the sum of 2 and 4 than just calculating 2+4. However, it was not as obvious to people who don’t understand how ML/DL fundamentally works.
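    That contrast fits in two tiny Python functions (a toy illustration, not anyone’s actual code):

```python
import random

# Toy contrast: sampling an answer vs. computing it. An LLM's output is
# (loosely) sampled from a learned distribution, so like guess_sum() it
# can be wrong; compute_sum() is deterministic and always right.
def guess_sum(a, b):
    return random.randint(0, 10)  # a "prediction", right only by luck

def compute_sum(a, b):
    return a + b  # plain arithmetic, right every time

print(compute_sum(2, 4))  # always 6
```

    A sampled guess is occasionally right by chance; arithmetic is right by construction.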

    Similarly, it’s sad to see a lot of Machine Learning projects essentially killed and made worthless by people just throwing everything at ShitSkibidi instead of generating/collecting training data themselves and training a purpose-built, non-text-based model. I see that in private as well as at work. They want to use “AI” in risk management now. Will that mean they’ll use all their historical data on customers, the risks they identified, and the final results to build two or more specific models? Most likely not. They’ll just throw all the data at the internal ShitSkibidi wrapper, expect the resulting data to be usable at all, and then ask it how they should proceed. And then expect humans to actually fact-check everything it returns.

  • HelloRoot@lemy.lol · 2 months ago (edited)

    LLMs are great. You can tell them a problem in words and they figure out what you mean and solve it. You cannot ignore the value of that for normal people.

    Some recent examples for me:


    I was playing a factory-building game and didn’t want to build a spreadsheet by hand to figure out the optimal number of each building to place to get a desired output. I described it to the LLM and copy-pasted the wiki entry for each building. It did some differential equations and gave me a result and a spreadsheet, all in under a minute.
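    (For the curious: production-chain problems like this usually reduce to plain ratio arithmetic. A minimal sketch, with invented rates for hypothetical buildings, not numbers from any real game’s wiki:)

```python
import math

# Hypothetical production rates -- real games publish these on their wikis.
GEARS_PER_ASSEMBLER = 0.5   # gears/s produced by one assembler (assumed)
PLATES_PER_ASSEMBLER = 1.0  # plates/s consumed by one assembler (assumed)
PLATES_PER_SMELTER = 2.0    # plates/s produced by one smelter (assumed)

def buildings_for(target_gears_per_s):
    """Walk the chain backwards from the target output rate."""
    assemblers = math.ceil(target_gears_per_s / GEARS_PER_ASSEMBLER)
    smelters = math.ceil(assemblers * PLATES_PER_ASSEMBLER / PLATES_PER_SMELTER)
    return assemblers, smelters

print(buildings_for(10.0))  # (20, 10)
```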

    I had to do some math without knowing the underlying concepts. Describing the situation and the problem, and giving it all the known values, was much easier than reading 5 Wikipedia articles, figuring out how to break the problem down, which formulas to use for each step, and how to chain them all together.

    I recently googled for half an hour, crawling through shit articles and reading 50-page PDFs, none of which contained the detail I wanted, before giving up and asking an AI, then clicking on the source it quoted to verify the reply. Maybe my search terms sucked, maybe I can’t ask the right question because I don’t know what I don’t know, but the LLM was able to get it.


    Are the problems I described already “solved” more computationally efficiently by other means? Absolutely yes!

    Will it be faster and easier for me to throw it at an LLM? Also yes!

    • bridgeenjoyer@sh.itjust.works (OP) · 2 months ago

      It’s a good tool in some cases. But I think the general lack of understanding of how it works and of its shortcomings is going to cause many issues in the coming years.

      • Blue_Morpho@lemmy.world · 2 months ago

        That’s been true ever since the first graduates came out knowing COBOL instead of assembly. Everything keeps getting more bloated and buggy.

    • And how do you know that the LLM was accurate and gave you the correct information, instead of just making up something entirely novel and telling you what you wanted to hear? Maybe the detail you were searching for could not be found, because it did not actually exist.

      • HelloRoot@lemy.lol · 2 months ago (edited)

        First, read my text fully before replying.

        But additionally, I have a brain and can use it to double-check:

        In example 1, I just built it blindly, because it’s a game and it doesn’t matter if it’s wrong. But it ended up being correct, and I had more fun instead of doing Excel for an hour.

        In 2, the math result was not far off from my guesstimate, and I later confirmed it was correct.

        In 3, it gave me a source and I read the source. Google did not lead me to that source.

        When I let an LLM write code, I read the code, then I test the code. That is where I get the most faults: not in spreadsheets or math or research.

        • Blue_Morpho@lemmy.world · 2 months ago (edited)

          It’s weird how there’s such knee-jerk hate for a turbocharged word predictor. You’d think there would have been similar mouth-frothing at on-screen keyboards predicting words.

          I see it as a tool that helps sometimes. It’s like an electric drill, and craftsmen are screaming, “BUT YOU COULD DRILL OFF CENTER!!!”