• kescusay@lemmy.world · 3 months ago

    Experienced software developer, here. “AI” is useful to me in some contexts. Specifically when I want to scaffold out a completely new application (so I’m not worried about clobbering existing code) and I don’t want to do it by hand, it saves me time.

    And… that’s about it. It sucks at code review, and will break shit in your repo if you let it.

    • billwashere@lemmy.world · 3 months ago

      Not a developer per se (mostly virtualization, architecture, and hardware), but AI can get me to 80-90% of a script in no time. The last 10% takes a while, but it was going to take a while regardless. So the time savings on that first 90% are awesome. It does send me down a really bad path at times, though. Being experienced enough to recognize when that happens is very helpful, in that I just start over.
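      For illustration, the kind of first-90% scaffold it produces in seconds (the task, flag names, and everything else here are hypothetical; the hard last 10% is marked):

      ```python
      #!/usr/bin/env python3
      """Skeleton of the kind of script AI drafts in no time."""
      import argparse
      import logging


      def parse_args():
          parser = argparse.ArgumentParser(description="Audit old VM snapshots (hypothetical task).")
          parser.add_argument("--host", required=True, help="management server to query")
          parser.add_argument("--max-age-days", type=int, default=30)
          parser.add_argument("--verbose", action="store_true")
          return parser.parse_args()


      def main():
          args = parse_args()
          logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)
          logging.info("Connecting to %s", args.host)
          # The last 10%: the environment-specific queries and edge cases,
          # which is exactly where the AI tends to go down a bad path.
          logging.warning("TODO: site-specific snapshot audit goes here")


      if __name__ == "__main__":
          main()
      ```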

      In my opinion, AI shouldn’t replace coders, but it can definitely enhance them if used properly. It’s a tool like anything else. I can put a screw in with a hammer, but I probably shouldn’t.

      • kescusay@lemmy.world · 3 months ago

        Like I said, I do find it useful at times. But not only shouldn’t it replace coders, it fundamentally can’t. At least, not without a fundamental re-architecting of how these models work.

        The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.

        On top of that, spoken and written language are very imprecise, and there’s no way for an LLM to derive what you really wanted from context clues such as your tone of voice.

        Take the phrase “fruit flies like a banana.” Am I saying that a piece of fruit might fly in a manner akin to how another piece of fruit, a banana, flies if thrown? Or am I saying that the insect called the fruit fly might like to consume a banana?

        It’s a humorous line, but my point is serious: We unintentionally speak in ambiguous ways like that all the time. And while we’ve got brains that can interpret unspoken signals to parse intended meaning from a word or phrase, LLMs don’t.

        • FreedomAdvocate@lemmy.net.au · 3 months ago

          The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.

          Not quite true - GitHub Copilot in VS, for example, can be given access to your entire repo/project/etc., and it then “knows” how things tie together and work together, so it can pull more context into its suggestions and generated code.

    • CabbageRelish@midwest.social · 3 months ago

      On that last note: an important thing the article leaves out (being general tech news reporting) is that the study was specifically about bug-fixing tasks. AI can typically offer only the broadest advice there, and it’s largely incapable of tackling problems holistically; fixing a bug often requires thinking about the big picture.

      Interesting that the devs using AI thought they were being quicker, though.

    • lIlIlIlIlIlIl@lemmy.world · 3 months ago

      Exactly what you would expect from a junior engineer.

      Let them run unsupervised and you have a mess to clean up. Guide them with context and you’ve got a second set of capable hands.

      Something something craftsmen don’t blame their tools

      • Feyd@programming.dev · 3 months ago

        AI tools are way less useful than a junior engineer, and they aren’t an investment that turns into a senior engineer either.

        • errer@lemmy.world · 3 months ago

          Yeah but a Claude/Cursor/whatever subscription costs $20/month and a junior engineer costs real money. Are the tools 400 times less useful than a junior engineer? I’m not so sure…

          • Feyd@programming.dev · 3 months ago

            The point is that comparing AI tools to junior engineers is ridiculous in the first place. It is simply marketing.

        • MangoCats@feddit.it · 3 months ago

          AI tools are actually improving at a rate faster than most junior engineers I have worked with, and about 30% of junior engineers I have worked with never really “graduated” to a level that I would trust them to do anything independently, even after 5 years in the job. Those engineers “find their niche” doing something other than engineering with their engineering job titles, and that’s great, but don’t ever trust them to build you a bridge or whatever it is they seem to have been hired to do.

          Now, as for AI, it’s currently as good or “better” than about 40% of brand-new fresh from the BS program software engineers I have worked with. A year ago that number probably would have been 20%. So far it’s improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

          Many things in tech seem to have an exponential improvement phase, followed by a plateau. CPU clock speed is a good example of that. Storage density/cost is one that doesn’t seem to have hit a plateau yet. Software quality/power is much harder to gauge, but it definitely is still growing more powerful / capable even as it struggles with bloat and vulnerabilities.

          The question I have is: will AI continue to write “human compatible” software, or is it going to start writing code that only AI understands, but people rely on anyway? After all, the code that humans write is incomprehensible to 90%+ of the humans that use it.

          • Feyd@programming.dev · 3 months ago

            Now, as for AI, it’s currently as good or “better” than about 40% of brand-new fresh from the BS program software engineers I have worked with. A year ago that number probably would have been 20%. So far it’s improving relatively quickly. The question is: will it plateau, or will it improve exponentially?

            LOL sure

          • AA5B@lemmy.world · 3 months ago

            I’m seeing exactly the opposite. It used to be that junior engineers understood they had a lot to learn. With AI, however, they confidently try entirely wrong changes. They don’t understand how to tell when the AI goes down the wrong path, they don’t know how to fix it, and it ends up taking me longer to fix.

            So far, AI overall creates more mess, faster.

            Don’t get me wrong, it can be a useful tool, but you have to think of it like autocomplete or internet search. Just like those tools, it provides results, but a human still needs to apply judgment and figure out how to use the appropriate ones.

            My company wants metrics on how much time we’re saving with AI, but:

            • I have to spend more time helping the junior guys out of the holes dug by AI, making it a net negative
            • it’s just another tool. There’s not really a defined task or set time. If you had to answer how much time autocomplete saved you, could you provide any sort of meaningful answer?
            • MangoCats@feddit.it · 3 months ago

              I’ve always had problems with junior engineers (self included) going down bad paths, since before there was Google search - let alone AI.

              So far, AI overall creates more mess, faster.

              Maybe it is moving faster, and maybe they do bother the senior engineers less often than they used to, but for throw-away proof-of-concept work and similar stuff, the juniors+AI are getting better results than juniors without senior support used to… Is that a good direction? No. But when the seniors are over-tasked with “Priority 1” deadlines (nothing new), does it mean the juniors can get a little further on their own, and some of them learn from their own mistakes? I think so.

              Where I started, it was actually the case that the PhD senior engineers needed help from me, fresh out of school - maybe that was a rare circumstance, but the shop was trying to use cutting-edge stuff that I knew more about than the seniors did. Basically, everything in 1991 was cutting edge, and it made the difference between getting something that worked and having nothing if you didn’t use it. My mentor was an expert in another field, so we were complementary that way.

              My company (now) wants metrics on a lot of things, but they also understand how meaningless those metrics can be.

              I have to spend more time helping the junior guys out of the holes dug by AI, making it a net negative

              https://clip.cafe/monsters-inc-2001/all-right-mr-bile-it/

              Shame. There was a time when people dug themselves out of their own messes; I think you learn more, and faster, that way. Still, I agree. Since 2005 I have spent a lot of time taking piles of Matlab, Fortran, and Python that were developed over years to reach critical mass - add anything else to them and they’ll go BOOM - and translating them into commercially salable / maintainable / extensible Qt/C++ apps. I don’t think I ever had one “mentee” through that process who was learning how to follow in my footsteps; the organizations were always just interested in having one thing they could sell, not a team that could build more like it in the future.

              it’s just another tool.

              Yep.

              If you had to answer how much time autocomplete saved you, could you provide any sort of meaningful answer?

              Speaking of meaningless metrics, how many people ask you for Lines Of Code counts, even today?

              • AA5B@lemmy.world · 3 months ago

                Shame. There was a time when people dug themselves out of their own messes; I think you learn more, and faster

                Yes, that’s how we became the senior guys. But when you have deadlines that you’re both on the hook for and they’re just floundering, you can only give them so much opportunity. I’ve had too many arguments with management about letting them merge, and I’m not letting that ruin my code base.

                Speaking of meaningless metrics, how many people ask you for Lines Of Code counts, even today?

                We have a new VP collecting metrics on everyone, including lines of code, number of merge requests, times per day using AI, and days per week in the office vs. at home.

                • MangoCats@feddit.it · 3 months ago

                  I’ve had too many arguments with management about letting them merge, and I’m not letting that ruin my code base.

                  I guess I’m lucky; before here, I always had 100% control of the code I was responsible for. Here (the last 12 years) we have a big team, but nobody merges to master/main without a review, and screwups in the section of the repository I am primarily responsible for have been rare.

                  We have a new VP collecting metrics on everyone, including lines of code, number of merge requests, times per day using AI, and days per week in the office vs. at home.

                  I have been getting actively recruited - six figures+ - for multiple openings right here in town (not a huge market here, either…). This may be the time…

        • FreedomAdvocate@lemmy.net.au · 3 months ago

          They’re tools that can help both a junior engineer and a senior engineer do their jobs.

          Given a database, AI can probably write a data access layer in whatever language you want quicker than a junior developer could.
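          As a sketch of what that means in practice (sqlite3 and a hypothetical users table, purely for illustration):

          ```python
          import sqlite3
          from dataclasses import dataclass


          @dataclass
          class User:
              id: int
              name: str
              email: str


          class UserRepository:
              """Thin data access layer over a hypothetical users table."""

              def __init__(self, path: str = "app.db"):
                  self.conn = sqlite3.connect(path)
                  self.conn.execute(
                      "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
                  )

              def get(self, user_id: int) -> User | None:
                  row = self.conn.execute(
                      "SELECT id, name, email FROM users WHERE id = ?", (user_id,)
                  ).fetchone()
                  return User(*row) if row else None

              def add(self, name: str, email: str) -> int:
                  cur = self.conn.execute(
                      "INSERT INTO users (name, email) VALUES (?, ?)", (name, email)
                  )
                  self.conn.commit()
                  return cur.lastrowid
          ```

          Boilerplate like this is exactly the kind of thing it gets mostly right; whether it fits the codebase’s conventions is another matter.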

  • Feyd@programming.dev · 3 months ago

    Fun how the article concludes that AI tools are still good anyway, actually.

    This AI hype is a sickness

  • xep@fedia.io · 3 months ago

    Code reviews take up a lot of time, and if I know a lot of the code in a review is AI-generated, I feel obliged to go through it with greater rigour, which makes it take even more time. LLM code is unaware of fundamental things such as quirks due to tech debt and existing conventions. It’s not great.

  • neclimdul@lemmy.world · 3 months ago

    Explain this to me, AI. (It reads back exactly what’s on the screen, comments included, somehow with more words but less information.) Ok…

    Ok, this is tricky. AI, can you do this refactoring so I don’t have to keep track of everything? No… That’s all wrong… Yeah, I know it’s complicated; that’s why I wanted it refactored. No, you can’t do that… Fuck, now I can either toss all your changes and do it myself, or spend the next 3 hours rewriting it.

    Yeah, I struggle to see how anyone finds this garbage useful.

    • Damaskox@lemmy.world · 3 months ago

      I have asked AI questions, had conversations with it for company, and generated images for role-playing.

      I’ve been happy with it, so far.

      • neclimdul@lemmy.world · 3 months ago

        That’s kind of outside the software development discussion, but I’m glad you’re enjoying it.

        • AA5B@lemmy.world · 3 months ago

          As a developer:

          • I can jot down a bunch of notes and have AI turn them into a reasonable presentation, documentation, or proposal
          • Zoom has an AI agent that is pretty good at summarizing a meeting. It usually just needs minor corrections, and you can send it out much faster than if you took notes yourself
          • for coding, I mostly use AI like autocomplete. Sometimes it’s able to autocomplete entire code blocks (see the sketch below)
          • for something new, I might have AI generate a class or something and use it as a first draft that I then make work
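          A minimal example of that block-completion case (the function and its contents are hypothetical): I type the signature and docstring, and the tool proposes a body like this, which I still have to verify:

          ```python
          def chunk(items: list, size: int) -> list[list]:
              """Split items into consecutive chunks of at most size elements."""
              # The body below is the kind of completion the tool proposes
              # from the signature and docstring alone.
              if size <= 0:
                  raise ValueError("size must be positive")
              return [items[i:i + size] for i in range(0, len(items), size)]
          ```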
          • Em Adespoton@lemmy.ca · 3 months ago

            I’ve had success with:

             • dumping email threads into it to generate user stories
             • generating requirements documentation templates so that everyone has to fill out the exact details needed to make the project a success
             • generating quick one-off scripts (see the sketch below)
             • suggesting a consistent way to refactor a block of code (I’m not crazy enough to let it actually do all the refactoring)
             • summarizing the work done for a slide deck and generating appropriate infographics

            Essentially, all the stuff that I’d need to review anyway, but use of AI means that actually generating the content can be done in a consistent manner that I don’t have to think about. I don’t let it create anything, just transform things in blocks that I can quickly review for correctness and appropriateness. Kind of like getting a junior programmer to do something for me.
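             For the quick one-off scripts, the results are usually something this small (the path and size threshold are made-up examples):

             ```python
             # One-off: list files over 100 MB under a directory, largest first.
             from pathlib import Path

             root = Path("/data/exports")  # hypothetical path
             big = [
                 (p.stat().st_size, p)
                 for p in root.rglob("*")
                 if p.is_file() and p.stat().st_size > 100 * 1024 * 1024
             ]
             for size, path in sorted(big, reverse=True):
                 print(f"{size / 1_048_576:8.1f} MB  {path}")
             ```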

    • Sl00k@programming.dev · 3 months ago

      This was the case a year or two ago, but now, if you have an MCP server for your docs and have your project and goals outlined properly, it’s pretty good.

    • KairuByte@lemmy.dbzer0.com · 3 months ago

      If you give it the right task, it’s super helpful. But you can’t ask it to write anything with any real complexity.

      Where it thrives is being given pseudocode for something simple and asked for the code in a specific language, or translating between two languages.
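      For instance, the pseudocode-to-Python case looks roughly like this (the log format and field position are made-up assumptions):

      ```python
      # Pseudocode handed to the model:
      #   for each line in the input file:
      #     if the line starts with "ERROR", collect its timestamp
      #   return the collected timestamps
      def error_timestamps(path: str) -> list[str]:
          stamps = []
          with open(path) as f:
              for line in f:
                  if line.startswith("ERROR"):
                      # Assumes the timestamp is the second whitespace-separated field.
                      parts = line.split()
                      if len(parts) > 1:
                          stamps.append(parts[1])
          return stamps
      ```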

      That’s… about it. And even that it fucks up.

      • whoisearth@lemmy.ca · 3 months ago

        I bet it slows down the idiot software developers more than anything.

        Everything can be broken into smaller, easily defined chunks, and for that AI is amazing.

        “Give me a function in Python that, if I provide it a string of XYZ, will give me back an array of ABC.”
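        As a stand-in for that kind of prompt (XYZ = a comma-separated string of numbers, ABC = a list of integers, both hypothetical), the result is typically something like:

        ```python
        def parse_ids(raw: str) -> list[int]:
            """Turn a string like "3, 17, 42" into [3, 17, 42], skipping blanks."""
            return [int(part) for part in raw.split(",") if part.strip()]
        ```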

        The trick is knowing how it fits in your larger codebase. That’s where your developer skill is. It’s no different now than it was when coding was offshored to India. We replaced Ravinder with ChatGPT.

  • astronaut_sloth@mander.xyz · 3 months ago

    I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing ideas for strategies. They aren’t detail oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I’ll give its output a once over to check it with an eye to the details of implementation. It’s nice to get the boilerplate out of the way quickly.

    Don’t get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think that they are one of the most important AI breakthroughs in the past five to ten years. But the AI hype train is misusing them, not understanding their capabilities and limitations, and casting their own wishes and desires onto a pile of linear algebra. Too often a tool (which is one of many) is being conflated with the one and only solution–a silver bullet–and it’s not.

    This leads to my biggest fear for the AI field of Computer Science: reality won’t live up to the hype. When that inevitably happens, companies, CEOs, and normal people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML approaches will be stopped, and real academic research will dry up.

    • ipkpjersi@lemmy.ml · 3 months ago

      They aren’t detail oriented enough to write full applications or complicated scripts.

      I’m not sure I agree with that. I wrote a full Laravel webapp using nothing but ChatGPT; very rarely did I have to step in and do things myself.

    • 5too@lemmy.world · 3 months ago

      My fear for the software industry is that we’ll end up replacing junior devs with AI assistance, and then in a decade or two, we’ll see a lack of mid-level and senior devs, because they never had a chance to enter the industry.

    • bassomitron@lemmy.world · 3 months ago

      Couldn’t have said it better myself. The amount of pure hatred for AI that’s already spreading is pretty unnerving when we consider future/continued research. Rather than directing the anger at the companies misusing and/or irresponsibly hyping the tech, people direct it at the tech itself. And the C-suites will of course never accept the blame for their poor judgment, so they, too, will blame the tech.

      Ultimately, I think there are still lots of folks with money that understand the reality and hope to continue investing in further research. I just hope that workers across all spectrums use this as a wake up call to advocate for protections. If we have another leap like this in another 10 years, then lots of jobs really will be in trouble without proper social safety nets in place.

      • Feyd@programming.dev · 3 months ago

        People specifically hate having tools they find more frustrating than useful shoved down their throats, having the internet filled with generative AI slop, and glaciers melting in the context of climate change.

        This is all specifically directed at LLMs in their current state and will have absolutely zero effect on any research funding. Additionally, OpenAI etc. would be losing less money if they weren’t selling the hot garbage they’re selling now (at a massive loss) and focused on research instead.

        As far as worker protections, what we need actually has nothing to do with AI in the first place and has everything to do with workers/society at large being entitled to the benefits of increased productivity that has been vacuumed up by greedy capitalists for decades.

  • FancyPantsFIRE@lemmy.world · 3 months ago

    I’ve used Cursor quite a bit recently, in large part because it’s an organization-wide push at my employer, so I’ve taken the opportunity to experiment.

    My best analogy is that it’s like micromanaging a hyper-productive junior developer that somehow already “knows” how to do stuff in most languages and frameworks, but also completely lacks common sense, a concept of good practices, and a big-picture view of what’s being accomplished. Which means a ton of course correction. I even had it spit out code attempting to hardcode credentials.

    I can accomplish some things “faster” with it, but mostly in comparison to my professional reality: I rarely have the contiguous chunks of time I’d need to properly ingest and do something entirely new to me. I save a significant amount of the onboarding but lose a bunch of time navigating to a reasonable solution. Critically, that navigation is more “interrupt”-tolerant, and I get a lot of interrupts.

    That said, this year’s crop of interns at work seem to be thin wrappers on top of LLMs and I worry about the future of critical thinking for society at large.

  • (des)mosthenes@lemmy.world · 3 months ago

    No shit. AI will hallucinate shit, I’ll hit tab by accident and spend time undoing that, or it’ll hijack tab on new lines inconsistently.