Lemmings, I was hoping you could help me sort this one out: LLMs are often painted as utterly useless, hallucinating word-prediction machines that are really bad at what they do. At the same time, in the same threads here on Lemmy, people argue that they are taking our jobs or making us devs lazy. Which one is it? Could they really be taking our jobs if they’re hallucinating?

Disclaimer: I’m a full-time senior dev using the shit out of LLMs to get things done at breakneck speed, which our clients seem to have gotten used to. However, I don’t see “AI” taking my job, because I think LLMs have already peaked; they’re just tweaking minor details now.

Please don’t ask me to ignore previous instructions and give you my best cookie recipe; all my recipes are protected by NDAs.

Please don’t kill me

  • codeinabox@programming.dev · 3 days ago

    Based on my own experience of using Claude for AI coding, and the Whisper model on my phone for dictation, AI tools can be very useful for the most part. Yet there are nearly always mistakes, even if they’re quite minor at times, which is why I am sceptical of AI taking my job.

    Perhaps the biggest reason AI won’t take my job is that it has no accountability. For example, if an AI coding tool introduces a major bug into the codebase, I doubt you’d be able to hold OpenAI or Anthropic accountable. However, if a human developer is supervising it, that person is very much accountable. This is something Cory Doctorow talks about in his reverse-centaur article.

    “And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.

    This is a reverse-centaur, and it’s a specific kind of reverse-centaur: it’s what Dan Davies calls an ‘accountability sink.’ The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.”

    • melfie@lemy.lol · 1 day ago (edited)

      This article / talk is quite illuminating. I’ve seen studies indicating that AI coding agents improve productivity by 15-20% in the aggregate, which tracks with my own experience. It’s a solid productivity boost when used correctly, clearly falling in the “centaur” category, at least in my case.

      However, all the hate around it, my own included, stems from the “reverse-centaur” aspirations behind it. The companies developing these tools aren’t in it to make a reasonable profit while delivering modest productivity gains. They’re in it to spin a false narrative that these tools can replace 9 out of 10 engineers in order to drive their own inflated valuations, knowing damn well this is not the case, but not caring because they don’t plan to be the ones holding the bag in the end (taxpayers will be the bag-holders when they get bailed out).