• astronaut_sloth@mander.xyz · 3 months ago

    I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing around ideas for strategies. They aren’t detail-oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer: I give it small, atomized tasks, then give its output a once-over with an eye to the implementation details. It’s nice to get the boilerplate out of the way quickly.
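
    To make the idea of small, atomized tasks concrete, here's the sort of boilerplate I mean (a minimal Python sketch; the config-loading task is hypothetical, just for illustration):

        import json
        from pathlib import Path

        # Sensible defaults the LLM was told to fall back on
        DEFAULTS = {"host": "localhost", "port": 8080, "debug": False}

        def load_config(path):
            """Load a JSON config file, falling back to DEFAULTS for missing keys."""
            config = dict(DEFAULTS)
            config_path = Path(path)
            if config_path.exists():
                config.update(json.loads(config_path.read_text()))
            return config

    An LLM will usually get something like this right on the first try; my job is the once-over, e.g. checking that a missing file or a missing key is actually handled the way the app expects.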

    Don’t get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think they are one of the most important AI breakthroughs of the past five to ten years. But the AI hype train misuses them, misunderstands their capabilities and limitations, and casts its own wishes and desires onto a pile of linear algebra. Too often a tool, one of many, gets conflated with the one and only solution, a silver bullet, and it’s not.

    This leads to my biggest fear for AI as a field of computer science: reality won’t live up to the hype. When that inevitably happens, companies, CEOs, and ordinary people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML techniques will be halted, and real academic research will dry up.

    • ipkpjersi@lemmy.ml · 3 months ago

      They aren’t detail-oriented enough to write full applications or complicated scripts.

      I’m not sure I agree with that. I wrote a full Laravel webapp using nothing but ChatGPT; only very rarely did I have to step in and do things myself.

    • 5too@lemmy.world · 3 months ago

      My fear for the software industry is that we’ll end up replacing junior devs with AI assistance, and then in a decade or two we’ll face a shortage of mid-level and senior devs, because nobody got the chance to enter the industry and grow into those roles.

    • bassomitron@lemmy.world · 3 months ago

      Couldn’t have said it better myself. The amount of pure hatred for AI that’s already spreading is pretty unnerving when you consider future/continued research. Rather than directing the anger at the companies misusing and/or irresponsibly hyping the tech, people direct it at the tech itself. And the C-suites will of course never accept the blame for their poor judgment, so they, too, will blame the tech.

      Ultimately, I think there are still lots of folks with money who understand the reality and hope to keep investing in further research. I just hope workers across the spectrum use this as a wake-up call to advocate for protections. If we get another leap like this in another ten years, lots of jobs really will be in trouble without proper social safety nets in place.

      • Feyd@programming.dev · 3 months ago

        People specifically hate having tools they find more frustrating than useful shoved down their throats, having the internet filled with generative AI slop, and watching it all melt glaciers in the middle of a climate crisis.

        This is all directed specifically at LLMs in their current state, and it will have absolutely zero effect on research funding. Additionally, OpenAI et al. would be losing less money if they focused on research instead of selling, at a massive loss, the hot garbage they’re selling now.

        As for worker protections, what we need actually has nothing to do with AI in the first place and everything to do with workers and society at large being entitled to the gains from increased productivity, gains that greedy capitalists have vacuumed up for decades.