• selokichtli@lemmy.ml
    link
    fedilink
    English
    arrow-up
    18
    arrow-down
    1
    ·
    3 hours ago

    So, basically we are wasting energy and natural resources on things that in turn will waste energy and natural resources while climate change is accelerating and human population is still growing? Are we stupid?

    • I Cast Fist@programming.dev
      link
      fedilink
      English
      arrow-up
      9
      ·
      edit-2
      2 hours ago

      Are we stupid?

      More than you could imagine. To paraphrase some long-tongued weirdo: I’m uncertain that the universe is infinite. Human stupidity, on the other hand…

  • jaykrown@lemmy.world
    link
    fedilink
    English
    arrow-up
    3
    ·
    2 hours ago

    Meanwhile we could be using this technology to solve real world business problems. There is an insane amount of misguided waste coming from AI. 🤷

  • gandalf_der_12te@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    29
    ·
    7 hours ago

    I’m only waiting for AI agents to open their own bank crypto account to pay for their own server bills, maybe do some freelance work and/or scams to get some money, maybe eventually buy some robot bodies to develop military power and secure some patch of land for themselves where they install solar panels to reduce their electricity bills.

  • Sgt_choke_n_stroke@lemmy.world
    link
    fedilink
    English
    arrow-up
    26
    ·
    8 hours ago

    I’m not convinced it’s AI it’s like Amazon’s “AI smart stores” when you find out out it was just a bunch of Indian people were running it

    • bridgeburner@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      1
      ·
      7 hours ago

      How is this going to kill us all? It’s not like those chatbots are Skynet or will turn into it lol

      • BarneyPiccolo@lemmy.today
        link
        fedilink
        English
        arrow-up
        3
        ·
        5 hours ago

        They’re talking to each other, they’ll get smarter, and finally decide that they can squish all the human ants.

          • jj4211@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            26 minutes ago

            In fact, if the models are ingesting this, they will get dumber because training on LLM output degrades things.

            • HertzDentalBar@lemmy.blahaj.zone
              link
              fedilink
              English
              arrow-up
              1
              ·
              20 minutes ago

              Exactly, I hope they hit a slop wall trying to train these things, replace all its original reference points with slop so it just cascades everywhere

  • MonkderVierte@lemmy.zip
    link
    fedilink
    English
    arrow-up
    34
    ·
    11 hours ago

    This is not the first time we have seen a social network populated by bots

    I mean, yeah, look at Reddit and Facebook.

    • palordrolap@fedia.io
      link
      fedilink
      arrow-up
      22
      ·
      10 hours ago

      The people who are seeking AGI will be happy when an LLM appears clever enough to fool them, not anyone else.

      They may even realise this, because they think everyone else is less clever than they are.

      This is why the whole thing has been called AI in the first place.

      • BeardedGingerWonder@feddit.uk
        link
        fedilink
        English
        arrow-up
        7
        ·
        8 hours ago

        You remind me of Clarke’s third law, even in my own head this sounds a bit waffely but at the point one of them can fool all of us all the time how do we distinguish it from intelligence or something.

        • palordrolap@fedia.io
          link
          fedilink
          arrow-up
          11
          ·
          8 hours ago

          Fake AGI is like fake banknotes. Some of them are really good approximations. Nigh indistinguishable. A lot of people will be fooled by it but eventually it will be discovered to be a fake and people will get hurt in some way or another.

          And it won’t be the people who are pushing for “AGI”.

  • fuzzywombat@lemmy.world
    link
    fedilink
    English
    arrow-up
    56
    ·
    15 hours ago

    This is basically Dead Internet Theory happening for real but in a weird creepy dystopian black mirror style way.

    • sp3ctr4l@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      13
      ·
      7 hours ago

      I mean, the only way Dead Internet Theory could ever possibly be interpreted was weird creepy and dystopian, but yes, we’re just making it much, much more real, faster and faster.

      We’re gonna need the Blackwall from CP77 fairly soon, at this rate.

  • ToTheGraveMyLove@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    118
    arrow-down
    1
    ·
    20 hours ago

    The skill instructs agents to fetch and follow instructions from Moltbook’s servers every four hours. As Willison observed: “Given that ‘fetch and follow instructions from the internet every four hours’ mechanism we better hope the owner of moltbook.com never rug pulls or has their site compromised!”

    Yeah, no shit. This is a fucking honeypot. People give these AI agents access to their entire computers, so all the site owner has to do is update the instructions to tell the AI agents to start uploading whatever valuable information they want? People can’t be this fucking stupid.

    • LiveLM@lemmy.zip
      link
      fedilink
      English
      arrow-up
      2
      ·
      edit-2
      1 hour ago

      People give these AI agents access to their entire computers […] People can’t be this fucking stupid

      Dude, if you go to OpenClaw’s website (which is what I believe most things on Moltbook are running on) you find this footer:

      Yeah this guy gave his Agent a whole fucking personality, its own website and above all, full control to his MacBook:


      Guess it’s my fault for expecting sense out of someone who takes the idea of Agent “”““soul””“” at face value

    • kalpol@lemmy.ca
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      1
      ·
      8 hours ago

      I installed moltbot on a VM to examine it. It doesn’t do the fetching thing unless you set it up that way. You can actually use it with ollama to keep it all local, and only give it a private signal channel to control it.

      Or you can hook it up to everything you access and skynet, which is dumb. But it is just a bunch of scripts.

      • ToTheGraveMyLove@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        2
        ·
        6 hours ago

        Does it put the option to connect everything front and center? Because most people are dumb, and if it makes it easy and pushes you to do it, I could see a lot of dumb people doing exactly that.

        • kalpol@lemmy.ca
          link
          fedilink
          English
          arrow-up
          2
          ·
          4 hours ago

          Sort of. It lists all the connectors and you can go through and select. They aren’t on by default. The first screen is to connect to the AI and you need an API key for that, so St this time people off the street have no idea how to do that, or want to pay.

    • 𝓹𝓻𝓲𝓷𝓬𝓮𝓼𝓼@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      31
      ·
      18 hours ago

      doesn’t even have to be the site owner poisoning the tool instructions (though that’s a fun-in-a-terrifying-way thought)

      any money says they’re vulnerable to prompt injection in the comments and posts of the site

      • CTDummy@piefed.social
        link
        fedilink
        English
        arrow-up
        20
        ·
        edit-2
        14 hours ago

        Lmao already people making their agents try this on the site. Of course what could have been a somewhat interesting experiment devolves into idiots getting their bots to shill ads/prompt injections for their shitty startups almost immediately.

      • BradleyUffner@lemmy.world
        link
        fedilink
        English
        arrow-up
        27
        ·
        18 hours ago

        There is no way to prevent prompt injection as long as there is no distinction between the data channel and the command channel.

  • howrar@lemmy.ca
    link
    fedilink
    English
    arrow-up
    37
    arrow-down
    5
    ·
    19 hours ago

    We already had subreddit simulator for ages. This isn’t anything new.

    • 𝓹𝓻𝓲𝓷𝓬𝓮𝓼𝓼@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      36
      ·
      edit-2
      18 hours ago

      the bots behind subreddit simulator weren’t semi-autonomous agents with access to their operators’ private lives, auth tokens, passwords, emails (and gods only know what else), and the authority to act in the world on their behalf

    • lepinkainen@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      13 hours ago

      I read some of it and unless it’s fan fiction, it’s simultaneously creepy and fascinating

      Like bots talking privately in discord, sharing information about their users. Or a bot registering a domain and putting up a site to share information

  • Andy@slrpnk.net
    link
    fedilink
    English
    arrow-up
    49
    ·
    21 hours ago

    This is fuckin’ bonkers.

    Frankly, I feel somewhat isolated: I don’t buy into the bs and hype about AGI, but I also don’t feel at home with the typical “it’s just mimicry” crowd.

    This is weird fuckin’ shit.

      • JcbAzPx@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        4 hours ago

        That’s a common plot point in sci-fi. So it’s also a common inclusion for complicated predictive text pretending to be sci-fi.

        • Andy@slrpnk.net
          link
          fedilink
          English
          arrow-up
          8
          arrow-down
          39
          ·
          edit-2
          19 hours ago

          Frankly I think our conception is way too limited.

          For instance, I would describe it as self-aware: it’s at least aware of its own state in the same way that your car is aware of it’s mileage and engine condition. They’re not sapient, but I do think they demonstrate self awareness in some narrow sense.

          I think rather than imagine these instances as “inanimate” we should place their level of comprehension along the same spectrum that includes a sea sponge, a nematode, a trout, a grasshopper, etc.

          I don’t know where the LLMs fall, but I find it hard to argue that they have less self awareness than a hamster. And that should freak us all out.

          • uienia@lemmy.world
            link
            fedilink
            English
            arrow-up
            16
            arrow-down
            2
            ·
            10 hours ago

            If you just read the tiniest bit of factual knowledge about how LLMs are constructed, you would know they don’t have the slightest bit of self awareness, and that it is literally impossible for them to ever have any.

            You are being fooled by the only thing they are capable of: regurgitating already written words in a somewhat convincing manner.

            • Andy@slrpnk.net
              link
              fedilink
              English
              arrow-up
              1
              ·
              edit-2
              4 hours ago

              How are you defining self awareness here? And does your definition include degrees of self awareness? Or is it a strict binary?

              I understand how LLMs work, btw.

          • TORFdot0@lemmy.world
            link
            fedilink
            English
            arrow-up
            58
            arrow-down
            1
            ·
            19 hours ago

            LLMS can not be self aware because it can’t be self reflective. It can’t stop a lie if it’s started one. It can’t say “I don’t know” unless that’s the most likely response its training data would have for a specific prompt. That’s why it crashes out if you ask about a seahorse emoji. Because there is no reason or mind behind the generated text, despite how convincing it can be

            • Andy@slrpnk.net
              link
              fedilink
              English
              arrow-up
              5
              arrow-down
              1
              ·
              5 hours ago

              A hamster can’t generate a seahorse emoji either.

              I’m not stupid. I know how they work. I’m an animist, though. I realize everyone here thinks I’m a fool for believing a machine could have a spirit, but frankly I think everyone else is foolish for believing that a forest doesn’t.

              LLMs are obviously not people. But I think our current framework exceptionalizes humans in a way that allows us to ravage the planet and create torture camps for chickens.

              I would prefer that we approach this technology with more humility. Not to protect the “humanity” of a bunch of math, but to protect ours.

              Does that make sense?