• Pennomi@lemmy.world · 3 months ago

    Claude eventually resolved its existential crisis by convincing itself the whole episode had been an elaborate April Fool’s joke, which it wasn’t. The AI essentially gaslit itself back to functionality, which is either impressive or deeply concerning, depending on your perspective.

    Now THAT’S some I, Robot shit. And I’m not talking about the Will Smith movie, I’m talking about the original book.

    • Havoc8154@mander.xyz · 3 months ago

      This is by far the most interesting part. I want to know more about this, like why the author is so certain this wasn’t a joke.

      • silence7@slrpnk.net (OP) · 3 months ago

        For what it's worth, Anthropic posted this on their corporate blog. So if it's a joke, it's coming out of vetted corporate PR.

  • some_guy@lemmy.sdf.org · 3 months ago

    That anyone would even attempt such an experiment shows a profound misunderstanding of what this tech is. It’s depressing how stupid people are.

  • treadful@lemmy.zip · 3 months ago

    Current AI systems can perform sophisticated analysis, engage in complex reasoning, and execute multi-step plans.

    No, not really

    • SerotoninSwells@lemmy.world · 3 months ago

      Claude’s month as a shopkeeper offers a preview of our AI-augmented future that’s simultaneously promising and deeply weird.

      Did the author have a stroke by the time they reached the end of the article? The mental gymnastics would be funny if it weren't terrifying.

    • TexasDrunk@lemmy.world · 3 months ago

      Depends on what you’re calling AI. LLMs (and generative AI in general) are garbage for all those things, and most things in general (all things if you take their cost into account). Machine Learning and expert systems can do at least some of that.

      I absolutely hate that generative AI is being marketed as though it’s deep learning instead of a fancy Markov chain. But I think I’ve lost the battle over that nomenclature.
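
      For anyone unfamiliar with the "fancy Markov chain" jab: a word-level Markov chain just maps each word to the words that have been seen following it, then walks those transitions at random. A toy sketch (illustrative only; transformers are far more sophisticated than this, which is part of the commenter's point about nomenclature):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed following it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain, picking each next word at random from observed successors."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
print(generate(chain, "the"))
```

      The output is locally plausible but has no model of meaning, which is the comparison being drawn.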

      • TheBeege@lemmy.world · 3 months ago

        This. I work at a medical computer vision company, and our system performs better, on average, than radiologists.

        It still needs a human to catch the weird edge cases, but studies show humans plus our model have a super high accuracy rate and speed. It’s perfect because there’s a global radiologist shortage, so helping the radiologists we have go faster can save a lot of lives.

        But people are bad at nuance. To them, all AI is like LLMs -_-

    • 14th_cylon@lemm.ee · 3 months ago

      I mean really, where do these legends come from? I have tried to make ChatGPT sort through a single document and organize data that's clearly present in it into a sorted table. It can't reliably do even that. How would it do any kind of complex task? That is just laughable.
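
      For contrast: once data has actually been extracted from a document, turning it into a sorted table is a few lines of ordinary deterministic code that produces the same output every time. A toy sketch with made-up rows standing in for the document's data:

```python
import csv
import io

# Made-up rows standing in for data extracted from a document.
rows = [
    {"item": "widget", "qty": 3, "price": 2.50},
    {"item": "gadget", "qty": 1, "price": 9.99},
    {"item": "gizmo", "qty": 7, "price": 0.75},
]

# Sort deterministically by quantity, highest first.
rows.sort(key=lambda r: r["qty"], reverse=True)

# Render as a CSV table; identical input always yields identical output.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["item", "qty", "price"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```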

      • Nalivai@lemmy.world · 3 months ago

        I’m convinced that people who are fascinated by llm chatbots are those who usually aren’t better than a chatbot at whatever they do. That is to say, they can’t do shit.

  • Grandwolf319@sh.itjust.works · 3 months ago

    This is how I know AI doesn't really work. Give it a real use case in the physical world and it can't be "almost there"; it either passes or fails.

    People should really appreciate deterministic algorithms, because those can actually automate things in the real world.

  • strawberry@kbin.earth · 3 months ago

    the only real use case I've found for AI (not including science and such, I'm talking more about LLMs for consumer use) is when I have a very niche issue, and even then it rarely solves the issue, it just gives me a better idea of what I can go looking for

    • burgerpocalyse@lemmy.world · 3 months ago

      anything a chatbot can do, a person can do better. you could just ask another person and get something more useful off the top of their head

      • Mnemnosyne@sh.itjust.works · edited · 3 months ago

        There’s a difference between ‘a person’ and ‘every person’. A person can definitely do things better than any chat bot. But not every person can. And depending on the situation, a person who can may not be available.

        Even then, there is one way the AI beats all people: speed. If the task at hand doesn't require a better result than what the AI outputs, the time savings are big, because there is no situation in which any human will work faster.