A survey of more than 2,000 smartphone users by second-hand smartphone marketplace SellCell found that 73% of iPhone users and a whopping 87% of Samsung Galaxy users felt that AI adds little to no value to their smartphone experience.

SellCell only surveyed users with an AI-enabled phone – that's an iPhone 15 Pro or newer, or a Galaxy S22 or newer. The survey doesn't give an exact sample size, but more than 1,000 iPhone users and more than 1,000 Galaxy users were involved.

Further findings show that most users of either platform would not pay for an AI subscription: 86.5% of iPhone users and 94.5% of Galaxy users would refuse to pay for continued access to AI features.

From the data listed so far, it seems that people just aren't using AI. For both platforms, only about two-fifths of those surveyed have even tried AI features – 41.6% of iPhone users and 46.9% of Galaxy users.

So, that’s a majority of users not even bothering with AI in the first place and a general disinterest in AI features from the user base overall, despite both Apple and Samsung making such a big deal out of AI.

  • merc@sh.itjust.works · 8 hours ago

    The problem really isn’t the exact percentage, it’s the way it behaves.

    It’s trained to never say no. It’s trained to never be unsure. In many cases an answer of “You can’t do that” or “I don’t know how to do that” would be extremely useful. But, instead, it’s like an improv performer always saying “yes, and” then maybe just inventing some bullshit.

    I don’t know about you guys, but I frequently end up going down rabbit holes where there are literally zero Google results matching what I need. What I’m looking for is so specialized that nobody has taken the time to write up an indexable web page on how to do it. And, that’s fine. So, I have to take a step back and figure it out for myself. No big deal. But, Google’s “helpful” AI will helpfully generate some completely believable bullshit. It’s able to take what I’m searching for, match it to something similar, and do a kind of search-and-replace to make it seem like it would work for me.

    I’m knowledgeable enough to know that I can just ignore that AI-generated bullshit, but I’m sure there are a lot of other, more gullible or optimistic people who will take that AI garbage at face value and waste all kinds of time trying to get it working.

    To me, the best way to explain LLMs is to say that they’re these absolutely amazing devices that can be used to generate movie props. You’re directing a movie and you want the hero to pull up a legal document submitted to a US federal court? It can generate one in seconds that would take your writers hours. It’s so realistic that you could even have your actors look at it and read from it and it will come across as authentic. It can generate extremely realistic code if you want a hacking scene. It can generate something that looks like a lost Shakespeare play, or an intercept from an alien broadcast, or medical charts that look like exactly what you’d see in a hospital.

    But, just like you’d never take a movie prop and try to use it in real life, you should never actually take LLM output at face value. And that’s hard, because it’s so convincing.