The number of paying Copilot subscribers has leaked, and it is a disaster. It is even reshaping Satya Nadella’s CEO role toward technical leadership rather than delivering commercial results.

  • Kissaki@programming.dev
    +5 / -1 · edited · 10 hours ago

    I don’t think 2% of M365 is necessarily a bad number. Office is everywhere, used for all kinds of work, even the simplest office tasks. Not everyone needs AI, or has the technical expertise or awareness to understand what this offer even means. Some people may not have launched Office in a year or two, but still have a paid license.

    There’s also a free Copilot for GitHub users, which may be necessary as a teaser, for testing, and for driving adoption. That may also understate “adoption” when it’s measured by commercial licenses rather than active users.

    I didn’t like the article’s initial focus on the number of sold licenses. Of course, they expand on it and draw a broader picture afterwards.

  • BenVimes@lemmy.ca
    +39 · 20 hours ago

    This article only talks about the number of Copilot 365 licences that are active. It doesn’t even consider situations like my workplace, where everyone was given a licence but hardly anyone uses it.

    I wouldn’t be surprised if the actual usage rate for these licences is also very low, meaning the situation could be even more dire than the article makes out.

    • Mikina@programming.dev
      +4 · 10 hours ago

      If I got a Copilot license, I’d definitely make sure to exhaust my quota every single day and use it as much as possible. Especially if I were “highly encouraged” to use it.

      Just run a Markov chain and let it talk to the thing. The queries are expensive for MS to serve.
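
      For what it’s worth, a word-level Markov chain is only a dozen lines of Python. A minimal sketch, assuming you have some text corpus lying around (“corpus.txt” is a placeholder, and actually feeding the output into Copilot is left out):

      ```python
      # Minimal word-level Markov chain sketch. "corpus.txt" is a placeholder
      # for any text file; pasting the output into Copilot is not shown here.
      import random
      from collections import defaultdict

      def build_chain(text):
          """Map each word to the list of words that follow it in the corpus."""
          chain = defaultdict(list)
          words = text.split()
          for current, following in zip(words, words[1:]):
              chain[current].append(following)
          return chain

      def babble(chain, length=40):
          """Random-walk the chain to produce plausible-looking nonsense queries."""
          word = random.choice(list(chain))
          out = [word]
          for _ in range(length - 1):
              # Restart from a random word when we hit a dead end.
              word = random.choice(chain.get(word) or list(chain))
              out.append(word)
          return " ".join(out)

      if __name__ == "__main__":
          chain = build_chain(open("corpus.txt", encoding="utf-8").read())
          print(babble(chain))
      ```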

    • Pyr@lemmy.ca
      +5 · 13 hours ago

      Well, Microsoft doesn’t care if you use it, only if the business buys the license.

  • bizarroland@lemmy.world
    +37 · 20 hours ago

    I can’t believe that the company famous for not listening to its users and forcing things on them that they did not ask for can’t quite understand why its users don’t want to use the thing they didn’t ask for and had forced on them.

  • mindbleach@sh.itjust.works
    +20 · 20 hours ago

    Neural networks will inevitably be a big deal for a wide variety of industries.

    LLMs are the wrong approach to basically all of them.

    There are five decades of what-ifs waiting to be defictionalized, now that we can actually do neural networks. Training them became practical, and ‘just train more’ was proven effective. Immense scale is useful but not necessary.

    But all hype has been forced into spicy autocomplete and a denoiser, and only the denoiser is doing the witchcraft people want.

    • Aceticon@lemmy.dbzer0.com
      +2 · edited · 2 hours ago

      Just to add that neural networks have already been in use for ages.

      For example, early automated mail sorting systems in the 90s used them to recognize postal codes.

      For literally decades they’ve been slowly and steadily finding more niches where they add value. Then somebody comes up with NN-style models for natural-language text generation and “good enough to deceive non-experts” image generation - in other words, with interfaces accessible to MBAs - and suddenly all the Venture Capitalist and Oversized Tech Company CEO types latch on to the thing and pump up what looks like the biggest Tech bubble ever.

      I expect that after the bubble bursts, and the massive pain of unwinding the gigantic resource misallocation it caused is over, NNs will be back on track, slowly and steadily finding more niches where they add value.

      • mindbleach@sh.itjust.works
        +1 · 10 minutes ago

        Right, I should say deep neural networks. Perceptrons hit a brick wall because there are some problems they simply cannot handle. Multi-layer networks stalled because nobody went ‘what if we just pretend there’s a gradient?’ until twenty-goddamn-twelve.

        Broad applications will emerge and succeed. LLMs kinda-sorta-almost work for nearly anything. What current grifters have proven is that billions of dollars won’t overcome fundamental problems in network design. “What’s the next word?” is simply the wrong question, for a combination chatbot / editor / search engine / code generator / puzzle solver / chess engine / air fryer. But it’s obviously possible for one program to do all those things. (Assuming you place your frozen shrimp directly atop the video card.) Developing that program will closely resemble efforts to uplift LLMs. We’re just never gonna get there from LLMs specifically.

  • hactar42@lemmy.world
    +26 · 21 hours ago

    I have 20 years experience in IT process automation, with the last 15 spent in consulting. The number one thing I’ve learned is businesses don’t care about the technology. I could write the coolest automation that covers 99% of potential issues, but if it costs more to run than having a person in India push a button, they won’t buy it.

    • Ephera@lemmy.ml
      +14 · 18 hours ago

      In terms of long-term costs, yeah, probably. But I work in software development, so it all comes out of investment budgets, and we definitely have the problem that investment in anything tangibly related to AI is encouraged.

      We’ve genuinely been told by customers that they’d rather have the more expensive, worse solution that uses AI, because they will not get investment money if it does not use AI. They want to be scammed, because their bosses have targets saying x% of all investment needs to go towards AI. And those targets come straight from the investors.

  • MehBlah@lemmy.world
    +63 / -1 · 1 day ago

    Malware is what it is. I have a hard time getting rid of it on my machines.

  • TomMasz@piefed.social
    +30 · 1 day ago

    Microsoft is learning that most people don’t want AI; only tech companies do. If people have a choice, they’re not going to use it, let alone pay for it.

  • BlameThePeacock@lemmy.ca
    +29 / -4 · 1 day ago

    Companies and workers are both scared of these systems, trying to figure them out, and yet completely uneducated on how to use them.

    If you want to sell it at $30 a seat, you need to teach every single seat how to make $30 or more in gains a month by using it.

    And a one-hour lunch-and-learn isn’t going to fix that.

    These systems shouldn’t be priced per seat, and regular users shouldn’t be doing almost anything with them until they get trained.

    • Windex007@lemmy.world
      +18 · 24 hours ago

      If ANYONE had reproducible guidance on how to get positive value out of these systems… they’d be booming like NVIDIA. It’s another “during the gold rush, sell shovels” model.

      Raises an eyebrow that we’re not seeing it.

      I think these companies are sitting, waiting, and praying for an emergent use case to reveal itself. They’re spending money to be prepared to corner a market that doesn’t yet exist.

      • wizardbeard@lemmy.dbzer0.com
        +7 · edited · 22 hours ago

        My work has indicated that they will start expecting people to make use of Copilot. There have been small errors in every answer Copilot has given me, but it has surfaced information and accurately answered a few questions that would otherwise have taken me hours of digging through Microsoft docs (I always confirm the data).

        I can see the value in a natural language search engine. In being able to ask questions about documentation and software/system capabilities in natural language and get natural language answers.

        But it makes too many errors to be reliable, because it tries to be a generalist instead of organizing concepts and tokens properly for the specific domain. It costs way too damn much for the not-super-impressive thing it actually does, and it only does that at a barely passable level.

        I hate that having to use it for work for the sake of appearances only serves to normalize it to me and others, while adding to the inflated user count.

        • BCsven@lemmy.ca
          +4 · 20 hours ago

          I found it often gives garbage results, so you have to know the subject well enough to weed out the nonsense. It can be helpful if you already know what you’re trying to do and just need a bump.

          • shalafi@lemmy.world
            +2 · 15 hours ago

            Exactly what I’ve been saying. LLMs are solid if you know the subject matter well enough to discern valid results from bullshit.

      • BlameThePeacock@lemmy.ca
        +4 / -2 · 21 hours ago

        I mean, the same could have been said about computers when they first came out. Most people had no idea how to improve their workflow with one, and only as training and new software developed did they deliver reproducible results across the population.

        The AI companies are definitely a bit ahead of where they should be right now; these last couple of years have happened too quickly for people to adapt their thinking.

        There are specialists (myself included) who are implementing some absolutely transformational automations using these things. That being said, my job for the last 15 years has been automating and streamlining business processes, so this is just an extra tool in my kit to boost those automations to new levels.

        I built a simple one the other day: a basic prompt integrated into an existing, longer work-automation process. It will probably eliminate an entire FTE’s worth of admin work for that task, and it only took about 3 hours to implement.
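
        Roughly that pattern, as a hypothetical sketch (the OpenAI client, model name, and category list below are placeholders, not the actual integration): one fixed prompt dropped into a single step of an existing intake pipeline, so the model does the sorting a person used to do.

        ```python
        # Hypothetical sketch of "a basic prompt integrated into an existing
        # process": the model categorises and summarises a request that a person
        # used to triage. Client, model name and categories are placeholders.
        import json
        from openai import OpenAI

        client = OpenAI()  # expects OPENAI_API_KEY in the environment

        PROMPT = (
            "Classify the following request as exactly one of BILLING, ACCESS, "
            "HARDWARE or OTHER, and summarise it in one sentence. "
            'Reply as JSON: {"category": "...", "summary": "..."}.\n\nRequest:\n'
        )

        def triage(request_text: str) -> dict:
            """The single new step added to the pre-existing automation."""
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                response_format={"type": "json_object"},
                messages=[{"role": "user", "content": PROMPT + request_text}],
            )
            return json.loads(response.choices[0].message.content)

        # The surrounding (pre-existing) workflow then routes on the result, e.g.:
        #   ticket = triage(email_body)
        #   move_to_queue(ticket["category"])  # hypothetical downstream step
        ```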

        The question then becomes: are the remaining staff on this task “using” Copilot because the process they support has it integrated? They’re not typing or pasting things into Copilot themselves and they’re not developing prompts, but if you removed it, their workload would go up.

        • Windex007@lemmy.world
          +7 · 21 hours ago

          I think that’s a fair comparison.

          The difference was that investment followed realizable value for PCs. Or cell phones. Or iPods. Or “the cloud”. The horse and carriage were in a sane order.

          The internet itself might be an even better comparison, with VC dumping money into anything without an understanding of how to get a return.

          • shalafi@lemmy.world
            +3 · 15 hours ago

            Been out of the IT game for a little over a year. Aren’t the companies using AI betting on a realizable return? My take is that the vendors are betting big time, with no return that I can see.

            VCs aren’t idiots. They obviously want to come out on top, I get that. But how does AI make the investment back? I can’t see how the survivors come out with a profit. They can only charge so much, and the product isn’t anywhere near ready. 🤷🏻

            One afternoon I was standing in a man’s attic, wiring his satellite dish, and we got to talking about the stock market.

            “Google’s about to IPO. I’d suggest you go all in.”

            “Yeah, they’re the best search engine out there, and their speed is impressive, but I don’t see how they ever make any money.”

            I wonder if he remembers that conversation and thinks on it 20 years later. That sort of conversation is what investors are afraid of missing out on.

    • Valmond@lemmy.world
      +7 · 21 hours ago

      That lunch hour is with the CEO, who thinks he can cut 30% of the workforce for this “cheap” AI.

  • idriss@lemmy.ml
    +14 / -2 · 23 hours ago

    People aren’t impressed anymore when you slap AI onto everything.

    I pay for Claude, and the CLI agent helps me a lot with boring stuff; that’s the only AI thing I will ever pay for. I predict price increases, and if it exceeds 50 USD, I’ll be out and do the boring stuff myself.

  • Skibbidi@programming.dev
    +4 · 1 day ago

    Not surprising; these services are largely useless, and there are so many free/self-hostable alternatives already.