• blarghly@lemmy.world · 4 hours ago

    I mean, I'm in the middle of setting up a personal Home Assistant instance right now. I worked in software for years, but always in a Windows shop, and never much on the networking side. ChatGPT walked me through installing a Linux distro on a Lenovo laptop, configuring BIOS and OS settings to make it a passable server, installing and configuring VM software, installing the HA OS in a virtual machine, and troubleshooting that installation when it didn't work.

    This is the sort of computer thing that has always been unbearably frustrating for me, and without ChatGPT, I would probably have gotten bogged down somewhere between installing KVM and getting the HA OS up and running, worked on it in my spare time for a week, and then given up and put a curse on the whole business.
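    For anyone curious, the KVM portion boils down to something like the sketch below. This isn't exactly what I ran - the package names (Debian/Ubuntu shown), the HAOS image version, and the network setup are all placeholders that will differ per machine:

    ```
    # Install KVM/libvirt tooling (Debian/Ubuntu package names; adjust for your distro)
    sudo apt install qemu-kvm libvirt-daemon-system virtinst

    # Fetch and unpack the Home Assistant OS disk image
    # (version number is illustrative -- grab the current release)
    xz -d haos_ova-12.4.qcow2.xz

    # Import the image as a VM; HAOS expects UEFI boot
    sudo virt-install \
      --name haos \
      --memory 4096 \
      --vcpus 2 \
      --disk haos_ova-12.4.qcow2,bus=virtio \
      --import \
      --os-variant generic \
      --network default \
      --boot uefi \
      --noautoconsole
    ```

    Once the VM boots, the HA frontend should show up on port 8123.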

    • maegul (he/they)@lemmy.ml · 3 hours ago

      I’m anti-AI, essentially, but I think this touches on what may be an important arc in all this (very speculatively at least).

      Namely, maybe humanity had ~20 years to make tech “good” (or not bad), from 1990 to 2010 say, and failed. Or maybe missed the mark.

      What that would look like, I'm not sure exactly, but I wonder how widely your general sentiment is shared amongst tech people: how much the average person who's substantially touched tech is just over all of the minutiae, yak shaving, boilerplate, poor documentation, inconsistencies, backwards incompatibilities, etc. Just how much we've all been burnt out on the idea of this as a skill and now feel it's more like herding cats.

      All such that AI isn't just making up for all the ways tech is bad, but is also a big wake-up call about what we even want it to be.

      • blarghly@lemmy.world · 2 hours ago

        I can see the point you are making. But at the same time, a lot of the tech I was touching here is quite mature and probably decently documented.

        I totally understand the feeling you're describing of just herding cats. Without an LLM, this project would have taken 10x as long, with 9/10ths of that time spent reading forum posts and GitHub bug reports and Stack Overflow questions that look like they might solve the problem but actually don't.

        But at the same time, I'm in a pretty common position in software where I don't know anything about a mature, well-designed tool, but I don't really want to learn how it works because, odds are, I will only use it once - or at least, by the time I use it again, I will have forgotten everything about it. And the LLM was able to do my googling for me and tell me "do this", which was far faster and more pleasant. So I think this use case is quite reasonable.