• Ulrich@feddit.org
      2 months ago

      There’s literally nothing wrong with the technology. The problem is the application.

      • Trouble@lemmy.blahaj.zone
        2 months ago

        The technology is NOT DOING WHAT IT'S MEANT TO DO - it is IDENTIFYING DAMAGE WHERE THERE IS NONE - the TECHNOLOGY is NOT working as it should.

        • papertowels@mander.xyz
          2 months ago

          Do you hold everything to such a standard?

          Stop lights are meant to direct traffic. If someone runs a red light, is the technology not working as it should?

          The technology here, using computer vision to automatically flag potential damage, needed to be implemented alongside human supervision - an employee should be able to walk by the car, see that the flagged damage doesn’t actually exist, and override the algorithm.

          The technology itself isn’t bad; it’s how Hertz is using it that is.

          I believe the unfortunate miscommunication here is that when @[email protected] said the solution was brilliant, they were referring to the technology as the “solution”, while others are referring to the implementation as a whole as the “solution”.

          • Clent@lemmy.dbzer0.com
            2 months ago

            The stop light analogy would require the stop light itself to be doing something wrong, not the human element.

            There is no human element to this implementation; it is the technology itself malfunctioning. There was no damage, but the system thinks there is.

            • papertowels@mander.xyz
              2 months ago

              There is no human element to this implementation; it is the technology itself malfunctioning. There was no damage, but the system thinks there is.

              Let’s make sure we’re building up from the same foundation. My assumptions are:

              1. Algorithms will make mistakes.
              2. There’s an acceptable level of error for all algorithms.
              3. If an algorithm is making too many mistakes, that can be mitigated with human supervision and overrides.

              In this case, the lack of human override discussed in point 3 is, itself, a human-made decision that I am claiming is an error in implementing this technology. That is the human element.
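              The override in point 3 can be sketched in a few lines. This is a hypothetical illustration; `Flag` and `review` are made-up names, not Hertz's actual system:

```python
# Hypothetical sketch of a human-in-the-loop override for an automated
# damage flag. The model proposes; the employee's verdict is final.
from dataclasses import dataclass

@dataclass
class Flag:
    car_id: str
    confidence: float  # model's confidence that damage exists

def review(flag: Flag, employee_confirms: bool) -> bool:
    """Final damage decision: the human verdict always overrides the model."""
    return employee_confirms

# The model flags a scratch; an employee walks by and sees none,
# so no charge is issued despite high model confidence:
flag = Flag(car_id="ABC123", confidence=0.91)
charge_customer = review(flag, employee_confirms=False)
print(charge_customer)  # False
```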

              I work with machine learning algorithms. You will never find a practical machine learning algorithm that gets things right 100% of the time. But we don’t say “the technology is malfunctioning” every time it gets something wrong; otherwise, a ton of invisible technology that we all rely on in our day-to-day lives would be “malfunctioning”.
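              To put a number on “acceptable level of error”: even a very accurate model produces a steady stream of false flags at fleet scale, which is exactly why the human override matters. The figures below are made up purely for illustration:

```python
# Made-up numbers illustrating base rates at scale, not Hertz's real data.
scans_per_day = 10_000       # hypothetical rental returns scanned daily
false_positive_rate = 0.01   # model wrongly flags 1% of undamaged cars

false_flags_per_day = scans_per_day * false_positive_rate
print(false_flags_per_day)   # 100.0 wrongly flagged customers every day
```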