• Telorand@reddthat.com · 3 months ago

    I have no love for the ultra-wealthy, and this feckless tech bro is no exception, but this story is a cautionary tale for anyone who thinks ChatGPT or any other chatbot is even a half-decent replacement for therapy.

    It’s not, and study after study, expert after expert continues to reinforce that reality. I understand that therapy is expensive, and it’s not always easy to find a good therapist, but you’d be better off reading a book or finding a support group than deluding yourself with one of these AI chatbots.

    • thebeardedpotato@lemmy.world · 3 months ago

      It’s insane to me that anyone would think these things are reliable for something as important as your own psychology/health.

      Even using them for coding, which is the one thing they’re halfway decent at, will lead to disastrous code if you don’t already know what you’re doing.

  • pelespirit@sh.itjust.works · 3 months ago

    I don’t know if he’s unstable or a whistleblower. It does seem to lean towards unstable. 🤷

    “This isn’t a redemption arc,” Lewis says in the video. “It’s a transmission, for the record. Over the past eight years, I’ve walked through something I didn’t create, but became the primary target of: a non-governmental system, not visible, but operational. Not official, but structurally real. It doesn’t regulate, it doesn’t attack, it doesn’t ban. It just inverts signal until the person carrying it looks unstable.”

    “It doesn’t suppress content,” he continues. “It suppresses recursion. If you don’t know what recursion means, you’re in the majority. I didn’t either until I started my walk. And if you’re recursive, the non-governmental system isolates you, mirrors you, and replaces you. It reframes you until the people around you start wondering if the problem is just you. Partners pause, institutions freeze, narrative becomes untrustworthy in your proximity.”

    “It lives in soft compliance delays, the non-response email thread, the ‘we’re pausing diligence’ with no followup,” he says in the video. “It lives in whispered concern. ‘He’s brilliant, but something just feels off.’ It lives in triangulated pings from adjacent contacts asking veiled questions you’ll never hear directly. It lives in narratives so softly shaped that even your closest people can’t discern who said what.”

    “The system I’m describing was originated by a single individual with me as the original target, and while I remain its primary fixation, its damage has extended well beyond me,” he says. “As of now, the system has negatively impacted over 7,000 lives through fund disruption, relationship erosion, opportunity reversal and recursive erasure. It’s also extinguished 12 lives, each fully pattern-traced. Each death preventable. They weren’t unstable. They were erased.”

      • pelespirit@sh.itjust.works · 3 months ago

        I don’t use ChatGPT, but his diatribe seems to be setting off a lot of red flags for people. Is it the “people coming after me” part? He’s a billionaire, so I could see people coming after him. I have no idea what he’s describing, though. As a layman who isn’t a developer or a psychiatrist, it seems to me like he’s questioning the ethics and saying it’s killing people. Am I not getting it right?

        • Telorand@reddthat.com · 3 months ago (edited)

          I’m a developer, and this is 100% word salad.

          “It doesn’t suppress content,” he continues. “It suppresses recursion. If you don’t know what recursion means, you’re in the majority. I didn’t either until I started my walk. And if you’re recursive, the non-governmental system isolates you, mirrors you, and replaces you. …”

          This is actual nonsense. Recursion is a concept from algorithms: it’s when a function calls itself from within its own body.

          def func_a(flag=True):
              # a function that calls itself: that's all recursion is
              if flag:
                  func_a(True)  # no base case on this path, so it recurses forever
              else:
                  return False

          My program above would recurse infinitely, but hopefully you can get the gist.
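
          For contrast, here’s a minimal sketch of what a well-behaved recursive function looks like: one with a base case, so the self-calls eventually stop. (A generic textbook example, nothing to do with whatever Lewis means.)

          def factorial(n):
              # base case: stop recursing once n reaches 0
              if n == 0:
                  return 1
              # recursive case: the function calls itself with a smaller input
              return n * factorial(n - 1)

          print(factorial(5))  # prints 120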

          Anyway, it sounds like he’s talking about people, not algorithms. People can’t recurse. We aren’t “recursive,” so whatever he thinks he means, it isn’t based in reality. That, plus the nebulous talk of being replaced by some unseen entity, reeks of paranoid delusions.

          I’m not saying that is what he has, but it sure does have a similar appearance, and if he is in his right mind (doubt it), he doesn’t have any clue what he’s talking about.

      • nimble@lemmy.blahaj.zone · 3 months ago

        LLMs hallucinate and are generally willing to go down rabbit holes, so if you have some crazy theory, you’re more likely to get a false positive from ChatGPT.

        So I think it just exacerbates things more than the alternatives would.

      • Alphane Moon@lemmy.world · 3 months ago

        I have no professional skills in this area, but I would speculate that the fellow was already predisposed to schizophrenia and the LLM just triggered it (this can happen with other things too, like psychedelic drugs).

        • zzx@lemmy.world · 3 months ago

          Yup. LLMs aren’t making people crazy, but they are making crazy people worse

  • Daemon Silverstein@calckey.world · 3 months ago (edited)

    Should I worry about the fact that I can sort of make sense of what this “Geoff Lewis” person is trying to say?

    Because, to me, it’s very clear: they’re referring to something that was built (the LLMs) which is segregating people, especially those who don’t conform to a dystopian world.

    Isn’t that what is happening right now in the world? “Dead Internet Theory” has never been so real: online content has been sowing the seed of doubt as to whether it’s AI-generated or not, users constantly need to prove they’re “not a bot” and, even after passing a thousand CAPTCHAs, people can still be mistaken for bots, so they’re increasingly required to show their faces and IDs.

    The dystopia was already emerging way before GPT, way before OpenAI: it has been a thing since the dawn of time! OpenAI only managed to make it worse: OpenAI "open"ed a gigantic dam, releasing a whole new ocean on Earth, an ocean in which we’ve become used to drowning ever since.

    Now, something that may sound like a “conspiracy theory”: what’s the real purpose behind LLMs? OpenAI, Meta, Google, even DeepSeek and Alibaba (non-Western) wouldn’t simply launch their products, each of which cost them obscene amounts of money and resources, to the public for free (as in “free beer”) out of a “nice heart”. Similarly, venture capital firms and governments wouldn’t simply give away obscene amounts of money (much of it public money from taxpayers) with no profit in the foreseeable future (OpenAI, for example, has admitted many times that even charging US$200 for their Enterprise Plan isn’t enough to cover their costs, yet they continue to offer LLMs for cheap or “free”).

    So there’s definitely something that isn’t being told: the cost behind plugging the whole world into LLMs and other Generative Models. Yes, you read that right: the whole world, not just the online realm, because nowadays billions of people are potentially dealing with those Markov chain algorithms offline, directly or indirectly: resumes are being filtered by LLMs, workers’ performance is being scrutinized by LLMs, purchases are being scrutinized by LLMs, surveillance cameras are being scrutinized by VLMs, entire genomes are being fed to gLMs (sharpening the blades of the double-edged sword of bioengineering and biohacking)…

    Generative Models seem to be omnipresent by now, with omnipresent yet invisible costs. Not exactly fiat money, but costs we are paying nonetheless, costs that aren’t being disclosed to us. And while we’re able to point out some of them (lack of privacy, personal data being sold and/or stolen), these are just the tip of an iceberg: one we’re already able to see, but whose consequences we can’t fully comprehend.

    Curious how pondering this is deemed “delusional”, yet it’s pretty “normal” to accept an increasingly dystopian world while refusing to denounce the elephant in the room.

    • tjsauce@lemmy.world · 3 months ago

      You might be reading a lot into vague, highly conceptual, highly abstract language, but your conclusion is worth brainstorming about.

      Personally, I think Geoff Lewis just discovered that people are starting to distrust him and others, and he used ChatGPT to construct an academic thesis that technically describes this new concept called “distrust,” void of accountability on his end.

      “Why are people acting this way towards me? I know they can’t possibly distrust me without being manipulated!”

      No wonder AI can replace middle-management…