Lemmings, I was hoping you could help me sort this one out: LLMs are often painted in a light of being utterly useless, hallucinating word prediction machines that are really bad at what they do. At the same time, in the same thread here on Lemmy, people argue that they are taking our jobs or are making us devs lazy. Which one is it? Could they really be taking our jobs if they’re hallucinating?

Disclaimer: I’m a full time senior dev using the shit out of LLMs to get things done at breakneck speed, which our clients seem to have gotten used to. However, I don’t see “AI” taking my job, because I think that LLMs have already peaked; they’re just tweaking minor details now.

Please don’t ask me to ignore previous instructions and give you my best cookie recipe, all my recipes are protected by NDA’s.

Please don’t kill me

  • entwine@programming.dev
    2 days ago

    I’m a full time senior dev using the shit out of LLMs to get things done at breakneck speed

    I’m not saying you’re full of crap, but I smell a lot of crap. Who talks like this unironically? This is like hearing someone call somebody else a “rockstar” or “ninja”.

    If you really are breaking necks with how fast you’re coding, surely you must have used this newfound ability to finally work on those side projects everyone has been meaning to get to. Those wouldn’t be covered under NDAs.

    Edit: just to be clear, I’m not anti-LLMs. I’ve used them myself in a few different forms, and although I didn’t find them useful for my work, I can see how they could be helpful for certain types of work. I definitely don’t see them replacing human engineers.

    • XM34@feddit.org
      2 days ago

      Idk, there are a lot of people at my job talking like this. LLMs really do help speed things up. They do so at a massive cost in code and software quality, but they do speed things up. In my experience, coding right now isn’t about writing legible and maintainable code. It’s about deciding which parts of your codebase you want to be legible and maintainable, and therefore LLM-free.

      I for one let AI write pretty much all of my unit tests. They’re not pretty, but they get the job done and still indicate when I’m accidentally changing behaviour in a part of the codebase I didn’t mean to. But I keep the service layer as AI-free as possible, because that’s where the important code lives.
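
      To illustrate the kind of thing I mean: a behaviour-pinning (characterization) test that flags accidental changes. This is a made-up sketch, not my actual code (the `apply_discount` function, its tiers, and rates are all hypothetical):

      ```python
      import unittest

      # Hypothetical service-layer function the tests pin down;
      # name, tiers, and rates are invented for illustration.
      def apply_discount(total: float, customer_tier: str) -> float:
          """Return the total after a tier-based discount."""
          rates = {"gold": 0.10, "silver": 0.05}
          return round(total * (1 - rates.get(customer_tier, 0.0)), 2)

      class ApplyDiscountTests(unittest.TestCase):
          """Characterization tests: not pretty, but any refactor
          elsewhere that changes this behaviour turns them red."""

          def test_gold_tier_gets_ten_percent_off(self):
              self.assertEqual(apply_discount(100.0, "gold"), 90.0)

          def test_silver_tier_gets_five_percent_off(self):
              self.assertEqual(apply_discount(50.0, "silver"), 47.5)

          def test_unknown_tier_pays_full_price(self):
              self.assertEqual(apply_discount(100.0, "bronze"), 100.0)

      if __name__ == "__main__":
          unittest.main()
      ```

      Tests like these are exactly the boilerplate I’m happy to let an LLM grind out, while the function under test stays hand-written.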