Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • BeigeAgenda@lemmy.ca · 5 hours ago

    Anyone who has knowledge of a specific subject says the same: LLMs are constantly incorrect and hallucinate.

    Everyone else thinks it looks right.

    • IratePirate@feddit.org · 4 hours ago

      A talk on LLMs I listened to recently put it this way:

      If we hear the words of a five-year-old, we assume the knowledge of a five-year-old behind those words, and treat the content with due suspicion.

      We’re not adapted to something with the “mind” of a five-year-old speaking to us in the words of a fifty-year-old, and thus are more likely to assume competence just based on language.

    • zewm@lemmy.world · 4 hours ago

      It is insane to me that anyone can trust LLMs when their information is incorrect 90% of the time.