• ranzispa@mander.xyz · 35 points · 14 hours ago

    Who the fuck wrote such a terrible article? What is described is not a problem with AI per se, but rather with automation and poor security. AI may be part of that automation system, but this is a trend that started with the dot-com bubble, not something new. Besides, the models they reference for checking plant diseases and so on are most definitely not the LLMs that have now become synonymous with AI.

    Sure, a cyberattack can lock down your production; but it is mostly not AI that created this problem. It may intensify the problem, but as of now we don’t have many examples of that happening.

    • humanspiral@lemmy.ca · +2/-1 · 2 hours ago

      Computers have been used for logistics since the 80s. Ransomware shutting down a system is a computer-system vulnerability, but by this logic the right headline is “computer systems fail, we need to go back to the abacus”.

    • Leon@pawb.social · 6 points · 5 hours ago (edited)

      It’s clickbait. Since AI is tangentially related they can drive engagement by headlining it, even if the actual crux of the situation isn’t about that.

    • IratePirate@feddit.org · 8 points · 8 hours ago

      This should have many more upvotes. The security incidents quoted at the start of this article have no relation to its actual topic, i.e. the hypothesis that there may be increased fragility of supply chains as a result of AI adoption. While it’s plausible this may happen, the article makes it sound like this has happened when it clearly hasn’t. In other words: it’s little more than “hurr, durr, AI dangerous”.

  • Goatboy@lemmy.today · 130 points · 23 hours ago

    What worries me is that companies are using “the AI fucked up” as an excuse and just… not fixing the problem. They’re using it as an accountability shield.

    • Bakkoda@lemmy.world · 12 points · 8 hours ago

      IMO this was always the reason for it. It’s the ultimate scapegoat and from the second you saw a headline that said “AI is responsible for…” or “AI did…” and not “Humans used AI to…” it was all over.

      Humans are using AI to justify wage suppression, mass layoffs, janky everything and we just gonna blame software and data centers. It’s humans, it always was and at least for the foreseeable future it’s always gonna be.

      • Gormadt@lemmy.blahaj.zone · 1 point · 3 hours ago

        It’s like all those articles that read “The vehicle struck…” instead of “The driver struck…”, “A shooting then took place…” instead of “The officer then shot…”, etc, etc.

        It’s a deflection of blame and whenever I see it it makes my blood boil.

    • Aceticon@lemmy.dbzer0.com · 5 points · 11 hours ago

      “Computer says” is a pretty standard excuse for doing fucked up shit as it adds a complex form of indirection and obfuscation between the will of a human and the actual actions that result from that will.

      It doesn’t work as an excuse with the people who actually make the software that makes the computer “say” things (for them the complexity of what’s used is far lower, so they know what’s behind it and that the software is just an agent of somebody’s will), but it seems to work on even non-expert techies (technology fans), and more so on non-techies.

      With AI, the people using the computer as an excuse have doubled down on this, because in this case the software wasn’t even explicitly crafted to do what it does: it was trained (though in practice you can sort of steer it in one direction or another by choosing what you train it with). That further obscures the link between the output of the computer system and the will of the human who decided what it does (or, at least, decided which of the things it ended up doing after training are acceptable and which require changes to the training).

      Considering that just about the entirety of the justice, legislative and regulatory systems is technically ignorant, using “computer says” as an excuse often results in profit-enhancing outcomes, incentivising “greed above all” people to use it to confuse, block or manipulate those systems.

      • WhatAmLemmy@lemmy.world · 33 points · 20 hours ago

        The very purpose of creating most companies is to limit the liability of shareholders and staff.

        It’s significantly easier to commit crimes with the knowledge that the system can’t come after your liberty or wealth for those crimes.

  • XLE@piefed.social · +181/-3 · 1 day ago

    In many cases, Alzuhair writes, human supply chain managers are no longer being asked to override automatic shipments or intervene when discrepancies occur under their jurisdiction.

    Don’t worry guys, AI will revolutionize everything. You won’t have to think at all!

    Except AI is trash at doing what it’s advertised to do, it makes everybody dumber, and its shills will blame you once it inevitably mucks everything up.

    • IratePirate@feddit.org · 2 points · 9 hours ago

      Except AI is trash at doing what it’s advertised to do, it makes everybody dumber, and its shills will blame you once it inevitably mucks everything up.

      We don’t even have “AI”. We have LLMs, aka chatbots, aka glorified digital parrots that management with little to no technical expertise feels can replace large parts of the workforce, just because they’re eloquent and sound competent.

      If we just called them “cyberparrots” instead of “AI”, maybe more people would see their limited utility and the utter folly of having them take over ever larger portions of business procedures.

      • Catoblepas@piefed.blahaj.zone · 94 points · 24 hours ago (edited)

        Isn’t it incredible that “AI” is sold as a product that is ‘PhD level smart’ (lol), but if it doesn’t do the straightforward thing you asked of it then it’s your fault.

        They don’t provide instructions for it because they can’t provide instructions; what works on one version might not work next week. But it’s still your fault if it doesn’t do what it’s supposed to.

        Are you excited yet??

        • anomnom@sh.itjust.works · 4 points · 8 hours ago

          Maybe they mean a PhD in gaslighting. That’s all I ever get from AI search results, and worse AI SEO spam that has ruined DDG results.

          Try finding out when the Walmart car seat recycling program is this year. A dozen spam blogs will tell you it’s this April, late May, was last October 2026, or has ended already. Some say it’s officially announced (Walmart has no info about this year’s event) but never provide a link to the announcement. It’s all just hallucinated bullshit that exists because there’s an info vacuum on the terms you searched.

          It’s killing what was left of the moderately useful internet.

        • UnderpantsWeevil@lemmy.world · +25/-3 · 23 hours ago (edited)

          Isn’t it incredible that “AI” is sold as a product that is ‘PhD level smart’ (lol), but if it doesn’t do the straightforward thing you asked of it then it’s your fault.

          Have you ever tried to get a PhD to do anything?

          • cynar@lemmy.world · 3 points · 5 hours ago

            PhD level and up are notorious for over specialisation.

            My university had a personal assistant, dedicated to 2 professors. Half their job was to make sure they made it to lectures on time. They still managed to be late sometimes.

          • RebekahWSD@lemmy.world · 3 points · 19 hours ago

            I have tried; it’s only possible if you butter him up with cookies first, and even that only has a fifty percent success rate!

        • Cethin@lemmy.zip · 2 points · 14 hours ago

          Even an identical prompt to an identical model can return both good and bad results, just depending on RNG.
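          A minimal sketch of where that randomness comes from: LLMs typically pick each next token by sampling from a temperature-scaled softmax over the model’s output scores, so different RNG seeds can select different tokens for an identical prompt. The toy logits below are made up for illustration, not taken from any real model:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Sample one token index from a temperature-scaled softmax.

    Higher temperature flattens the distribution; a temperature
    near 0 approaches greedy (argmax) decoding.
    """
    rng = rng or random
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return i
    return len(exps) - 1

# Toy next-token scores for one identical "prompt".
logits = [2.0, 1.5, 0.5, 0.1]

# Identical "model" and prompt, different RNG seeds: the sampled
# token is allowed to differ between runs.
a = sample_token(logits, temperature=1.0, rng=random.Random(1))
b = sample_token(logits, temperature=1.0, rng=random.Random(2))

# Near-zero temperature is effectively greedy and always picks the
# highest-scoring token (index 0 here), regardless of seed.
g1 = sample_token(logits, temperature=1e-6, rng=random.Random(1))
g2 = sample_token(logits, temperature=1e-6, rng=random.Random(2))
```

          This is also why “set temperature to 0” is the usual advice for reproducible output, although real deployments can still vary due to floating-point and batching effects.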

          • OwOarchist@pawb.social · 21 points · 22 hours ago (edited)

            As smart as the average PhD … when you ask the PhD something completely outside of their area of expertise and pressure them to make up an answer that sounds plausible, even if they don’t know the actual answer.

          • wax@feddit.nu · 1 point · 17 hours ago

            It’s about as smart sounding as the average PhD in my experience.

    • Lost_My_Mind@lemmy.world · +70/-1 · 24 hours ago

      Last year McDonald’s ran a test replacing human drive-thru workers with an AI on the speaker box. It was shut down after only 3 weeks.

      My favorite bit was a guy trying to order a large Big Mac meal with a Coke.

      What the AI heard was 81,000 bottles of Dasani water. It then asked “Is this correct?”, to which the guy responded “81,000 bottles of fucking water???”

      The AI then added a medium Big Mac meal with a water and asked if his updated order was correct. He just drove off.

      • ClownStatue@piefed.social · 23 points · 21 hours ago

        I was at a Bojangles earlier this year and they had an AI doing their drive-thru. I was trying to order a meal but didn’t want a drink. That confused the heck out of the AI; it kept trying to force a drink on me. I gave up and walked into the store. The guy behind the counter was smiling and said something like, “We can hear what you’re saying to it. Next time just pull around. We got you.”

      • Eheran@lemmy.world · +5/-25 · 23 hours ago

        How do we know that actually happened? Is there a video? Who recorded it?

        • Linktank@lemmy.today · +25/-4 · 23 hours ago

          Why is this even in question? That’s exactly the type of shit that AI does.

        • Lost_My_Mind@lemmy.world · +8/-3 · 21 hours ago

          Oh, ya got me! Clearly an AI never makes mistakes, and everyone who tells you otherwise, including me, is clearly lying!

          So you can’t trust what people say ever. You need to always see video.

          Wait, but now video can be easily manipulated by AI. I can make evidence that never happened.

          So you can’t trust people. You can’t trust video. And if someone says something happened, you can’t trust the proof now either. Guess nothing ever happens.

    • SeeMarkFly@lemmy.ml · 20 points · 24 hours ago

      If AI is “responsible” for the well-being of humans…DEAD humans can’t get sick. DEAD humans don’t have to pay rent. DEAD humans stay dead.

      The logic is solid.

        • 🌞 Alexander Daychilde 🌞@lemmy.world · 7 points · 19 hours ago

          Well. It would be the zeroth law, first of all, but the three laws would most definitely not allow humans to die.

          The whole point of I, Robot was cases where the three laws were circumvented in various ways.

          • postscarce@lemmy.dbzer0.com · 4 points · 16 hours ago

            Nobody is programming those laws because it’s not possible with the way that LLMs are currently built and trained. Instead of The Three Laws, which are inviolable but in certain edge cases insufficient, we have Anthropic’s Constitution, which is 23,000 words worth of good intentions which Claude should keep in the back of its mind while it does whatever it wants to do.

  • ignirtoq@feddit.online · 40 points · 22 hours ago

    The result of all this may be catastrophic. Should a worst-case scenario ever occur — a cyberattack, a natural disaster, an internet outage — there may be no human workers left with the skills that once kept food on the shelves.

    Very nerdy of me, but this reminds me of the Stargate SG-1 episode “The Sentinel.” The team travels to a planet whose civilization relies on fully automated technology. The people don’t have to operate or maintain it (normally), so their society has completely forgotten how. In the episode, one set of antagonists comes in and sabotages their defense system, and another set sees the opportunity and invades. The protagonists then have to figure out the defense system and fix it.

    We don’t live in a TV series. There aren’t benevolent outsiders who will swoop down and save our systems in the nick of time when they break down. We’re headed in a bad direction.

    • NoWay@lemmy.world · +5/-1 · 21 hours ago

      I prefer the one where Teal’c drinks a fresh pot of hot coffee straight from the pot.

      That one also had a civilization that needed robots to maintain everything.

    • BranBucket@lemmy.world · 2 points · 20 hours ago (edited)

      When smart home thermostats and light switches were still a new thing, I used to talk about “Jurassic Park tech”: so preoccupied with whether or not they could that nobody stopped to think if they should… and that’s even more the case with AI.

      At some point I think this gets to be like S. M. Stirling’s Emberverse, where modern tech stops working and people who know how to make traditional wooden bows become an extremely valuable resource. Except it’ll be having some old-timer on hand who can handle logistics with just a spreadsheet, a Rolodex, and a calendar that’s going to make or break companies.

    • treadful@lemmy.zip · +3/-1 · 22 hours ago

      I prefer the Star Trek TNG episode where they kidnap a dozen children from the Enterprise.

  • cogman@lemmy.world · +46/-1 · 23 hours ago

    Hey, can we stop calling everything with a computer “AI”? Order management systems were around long before LLMs were invented (I’ve worked on one); they were perhaps one of the first applications of computing. Humans hand-writing order forms in a major grocery store hasn’t been a thing since like the 80s.

    Also, I’m like 80% sure this article was barfed out by an LLM. The em-dashes be everywhere.

    • 🌞 Alexander Daychilde 🌞@lemmy.world · 6 points · 19 hours ago

      I’m also suspicious that the ransomware attack had anything to do with AI, but I didn’t want to say so because going against the common consensus in threads like this gets me downvoted, and I’d rather not say it if people aren’t going to consider it (and then agree or disagree). heh

      Then again, as a user of em-dashes[1], I suppose I’m under suspicion of being an LLM as well. ;-)

      Would you like me to compose responses to any other comments in this thread?[2]


      1. Thanks to wincompose software since I’m a Windows dude, but on Linux I’d use the compose key ↩︎

      2. /s not needed I hope hehe ↩︎

    • Trilogy3452@lemmy.world · 1 point · 17 hours ago

      The argument it’s making is about not relying on technology (in this case some AI) because it can be disrupted. I don’t think having a single point of failure is unique to technology, though.

  • Silver Needle@lemmy.ca · 17 points · 23 hours ago

    😹 Why are we worried about statistical systems being vulnerable (which is shitty, sure) when they don’t even deliver productivity increases, i.e. they can’t even do the jobs they’re made for? Get real. What a clownshow.

    • Grandwolf319@sh.itjust.works · 12 points · 22 hours ago

      Yeah this is what bugs me.

      There are no trade-offs, there are only disadvantages.

      It’s like a drug that’s not only bad for you, it’s also not fun to do.

      • Silver Needle@lemmy.ca · +7/-1 · 22 hours ago (edited)

        “AIs” can’t even operate vending machines, let alone recognize handwriting reliably or translate text. I know a few people who work in archives with (pre-)medieval manuscripts, and I myself have broken my teeth on Google Translate™ and DeepL™. That’s how I know. There was also a study done on that vending machine thing. Come to think of it, you could make a simple vending machine that collects usage statistics and sends reports via radio using just a few scripts, and it would actually work. Emphasis on “work”.
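        A rough sketch of that “few scripts” vending machine, with hypothetical item names, prices, and report format, and the radio link stood in by a plain-text report:

```python
from collections import Counter

class SimpleVendingMachine:
    """Minimal deterministic vending logic: no ML, just a price table
    and a usage counter that can be serialized into a periodic report."""

    def __init__(self, prices):
        self.prices = dict(prices)   # item -> price in cents
        self.sales = Counter()       # item -> units sold

    def vend(self, item, cents_inserted):
        """Dispense the item if it exists and payment covers the price.

        Returns the change in cents; raises on unknown items or
        insufficient payment.
        """
        if item not in self.prices:
            raise KeyError(f"unknown item: {item}")
        price = self.prices[item]
        if cents_inserted < price:
            raise ValueError("insufficient payment")
        self.sales[item] += 1
        return cents_inserted - price

    def usage_report(self):
        """Format the usage stats that would go out over the radio link."""
        return "\n".join(f"{item},{count}"
                         for item, count in sorted(self.sales.items()))

machine = SimpleVendingMachine({"water": 150, "cola": 200})
change = machine.vend("water", 200)   # -> 50 cents change
machine.vend("cola", 200)
report = machine.usage_report()       # "cola,1\nwater,1"
```

        No statistics, no training, and every failure mode is enumerable, which was the point.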

        My my my

        • Eheran@lemmy.world · 1 point · 4 hours ago

          What does that have to do with anything? DeepL is fucking amazing, and so is OCR. Because there are areas where it doesn’t work or hasn’t been optimized, you think there’s no productivity increase at all?

          • Silver Needle@lemmy.ca · 1 point · 2 hours ago

            Google Translate feels more natural even if it’s not as “precise” as DeepL. I wouldn’t rely on it for communication, or on any machine translation for that matter.

            As someone who speaks more than two languages, I am often dumbfounded by the sheer acceptance of these (I don’t even want to call them this) tools.

            Use of this stuff always leads to misunderstandings and inefficiencies down the line, because you actually need to comprehend the meaning of a sequence of words in order to translate it. But ANNs for translation do not understand anything. They map a source to a target purely by way of statistics. That is basically rolling dice with weights and patterns of distribution, where how you shake the dice is your input/source and the pips that land on top are the output.

            Now for a short lesson in biology. While it is true that most ANNs approximate synapses, and badly, that is the only thing ANNs really derive from biology with interesting reproducible properties that can be marketed to people who need to offload responsibilities. There is a complete disregard for the internal dynamics of cells and for dynamics that happen at a scale larger than the synaptic makeup of an organism. We do not really have the means to treat the interactions between organism and environment as objects that shape perception. We still don’t know how a thought forms or how meaning is generated, from any perspective that is not purely philosophical; that is, we definitely do not know how this happens at a biological level. Anyone who tells you otherwise is either lying or misinformed. As long as the biological bases aren’t crystal clear, machines will never translate effectively.

            A great man of history once said that all science would be superfluous if the outward appearance and the essence of things directly coincided. Of the tens of millions of strings of words I’ve heard in my lifetime, this easily ranks as one of the most elegant. Let’s apply it to neuro-“science” in its computerized application. We know very little about the brain. Do you think that whatever devices we make with our current state of knowledge can even come close to what we do as aware beings?

            Again, translation is an involved process that uses every single function of the nervous system. Using statistical methods to very badly approximate our process of reading > contextualizing > imagining > [whatever other steps are necessary] > output, where in practice reading is followed by vibes and then nothing before outputting, will inevitably degrade information. A short paragraph can be handled when you’re aware that Google Translate etc. was used, but a book, something that lives in a very specific and exact environment like a README file or a manual, or, god forbid, political philosophy, leads to consequences that can’t be foreseen when put through DeepL. I think of all the times I had difficulty reading item descriptions on AliExpress due to their use of a translator. This is not a productivity gain; this is a degradation of quality that will have to be fixed one day, eating up precious time.