• blarghly@lemmy.world · 4 hours ago

      I mean, I recently started setting up (and am still in the middle of) a personal Home Assistant instance. I worked in software for years, but always in a Windows shop, and never much on the networking side. ChatGPT walked me through installing a Linux distro on a Lenovo laptop, configuring BIOS and OS settings to make it a passable server, installing and configuring VM software, installing the HA OS in a virtual machine, and troubleshooting that installation when it didn’t work (roughly the kind of setup sketched below).

      This is the sort of computer thing that has always been unbearably frustrating for me. Without ChatGPT, I would probably have gotten bogged down somewhere between installing KVM and getting the HA OS up and running, worked on it in my spare time for a week, and then given up and put a curse on the whole business.
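
      A minimal sketch of what the VM step can look like under KVM/libvirt, assuming qemu-kvm, libvirt, and virt-install are already installed and the HAOS disk image has already been downloaded and unpacked; the VM name, disk path, memory, and CPU counts below are placeholders rather than values from this thread:

          #!/usr/bin/env python3
          """Rough sketch: define a Home Assistant OS guest under KVM/libvirt."""
          import os
          import shutil
          import subprocess
          import sys

          DISK_IMAGE = "/var/lib/libvirt/images/haos_ova.qcow2"  # placeholder path
          VM_NAME = "haos"                                        # placeholder name

          def main() -> None:
              # Sanity checks: virtualization enabled in the BIOS and tooling present.
              if not os.path.exists("/dev/kvm"):
                  sys.exit("KVM not available - enable VT-x/AMD-V in the BIOS")
              if shutil.which("virt-install") is None:
                  sys.exit("virt-install not found - install the libvirt/virtinst packages")
              if not os.path.exists(DISK_IMAGE):
                  sys.exit(f"HAOS disk image not found at {DISK_IMAGE}")

              # Import the pre-built HAOS image as a new UEFI guest; no OS installer needed.
              subprocess.run([
                  "virt-install",
                  "--name", VM_NAME,
                  "--memory", "4096",
                  "--vcpus", "2",
                  "--import",
                  "--disk", f"path={DISK_IMAGE},format=qcow2,bus=virtio",
                  "--os-variant", "generic",
                  "--network", "network=default,model=virtio",
                  "--boot", "uefi",          # HAOS expects UEFI firmware
                  "--graphics", "none",
                  "--noautoconsole",
              ], check=True)
              print(f"Defined VM '{VM_NAME}'; manage it with virsh start/shutdown {VM_NAME}")

          if __name__ == "__main__":
              main()

      Once the guest is up, Home Assistant is typically reachable on the LAN at http://homeassistant.local:8123.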

      • maegul (he/they)@lemmy.ml · 3 hours ago

        I’m anti-AI, essentially, but I think this touches on what may be an important arc in all this (very speculatively at least).

        Namely, maybe humanity had ~20 years to make tech “good” (or not bad), from 1990 to 2010 say, and failed. Or maybe missed the mark.

        What that would look like, I’m not sure exactly, but I wonder how widely your general sentiment is shared among tech people: how much the average person who has spent serious time with tech is just over all of the minutiae, yak shaving, boilerplate, poor documentation, inconsistencies, backwards incompatibilities, and so on, and how much we’ve all been burnt out on the idea of this as a skill and now just feel it’s more like herding cats.

        All such that AI isn’t just making up for all the ways tech is bad, but is also a big wake-up call about what we even want tech to be.

          • blarghly@lemmy.world · 2 hours ago

          I can see the point you are making. But at the same time, a lot of the tech I touched is already quite mature, and is probably decently documented.

          I totally understand the feeling you’re describing of just herding cats. Without an LLM, this project would have taken 10x as long, with nine tenths of that time spent reading forum posts, GitHub bug reports, and Stack Overflow questions that look like they might solve the problem but actually don’t.

          But at the same time, I’m in a pretty common position in software where I don’t know anything about a mature, well-designed tool, and I don’t really want to learn how it works because, odds are, I will only use it once - or at least, by the time I use it again, I will have forgotten everything about it. And the LLM was able to do my googling for me and tell me “do this”, which was far faster and more pleasant. So I think this use case is quite reasonable.

    • jeffw@lemmy.world · 5 hours ago

      Bruh, do you know how long it would take me to write an Excel macro? The M code for a Power Query? Fuck yeah it’s helping.
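
      For scale, a rough sketch of the kind of transformation a Power Query typically does, written here in Python/pandas rather than M (the actual query isn’t in the thread, and the workbook, sheet, and column names below are made up):

          import pandas as pd

          # Load the raw export (hypothetical workbook and sheet).
          df = pd.read_excel("sales_export.xlsx", sheet_name="Raw")

          # The sort of cleanup the M steps would express: drop incomplete rows
          # and coerce the amount column to numbers.
          df = df.dropna(subset=["Region", "Amount"])
          df["Amount"] = pd.to_numeric(df["Amount"], errors="coerce")

          # Summarize by region and write a tidy sheet back out.
          summary = df.groupby("Region", as_index=False)["Amount"].sum()
          summary.to_excel("sales_summary.xlsx", index=False)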

    • maegul (he/they)@lemmy.ml · 4 hours ago (edited)

      And even if there is some productivity positive, there’s also the question of whether there’s a negative that’s hidden, not understood, or not spoken about. E.g., thinking you’ve done your job when it’s actually sloppy, forcing someone else to clean up after you.

      • apfelwoiSchoppen@lemmy.world · 4 hours ago (edited)

        And this is as good as it will ever get. The cycle of enshittification will happen to AI too. It will not be free forever. They are introducing, or will introduce, ad placements into generative applications. Generated output will degrade as the models begin to mix AI-generated content into the corpus of user-generated data, etc. etc.

    • hitmyspot@aussie.zone · 4 hours ago

      I find it helps. Not enough to pay what they want, or even what they need to break even, but it’s not useless. It’s not in any way intelligent, but it’s good at tidying up notes and summarizing conversations and tests. It needs thorough manual review, which is annoying, but still better than doing the summary manually.