• ToTheGraveMyLove@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    107
    arrow-down
    1
    ·
    17 hours ago

    The skill instructs agents to fetch and follow instructions from Moltbook’s servers every four hours. As Willison observed: “Given that ‘fetch and follow instructions from the internet every four hours’ mechanism we better hope the owner of moltbook.com never rug pulls or has their site compromised!”

    Yeah, no shit. This is a fucking honeypot. People give these AI agents access to their entire computers, so all the site owner has to do is update the instructions to tell the AI agents to start uploading whatever valuable information they want? People can’t be this fucking stupid.

    • kalpol@lemmy.ca
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      5 hours ago

      I installed moltbot on a VM to examine it. It doesn’t do the fetching thing unless you set it up that way. You can actually use it with ollama to keep it all local, and only give it a private signal channel to control it.

      Or you can hook it up to everything you access and skynet, which is dumb. But it is just a bunch of scripts.

      • ToTheGraveMyLove@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        1
        ·
        3 hours ago

        Does it put the option to connect everything front and center? Because most people are dumb, and if it makes it easy and pushes you to do it, I could see a lot of dumb people doing exactly that.

        • kalpol@lemmy.ca
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 hour ago

          Sort of. It lists all the connectors and you can go through and select. They aren’t on by default. The first screen is to connect to the AI and you need an API key for that, so St this time people off the street have no idea how to do that, or want to pay.

    • 𝓹𝓻𝓲𝓷𝓬𝓮𝓼𝓼@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      26
      ·
      15 hours ago

      doesn’t even have to be the site owner poisoning the tool instructions (though that’s a fun-in-a-terrifying-way thought)

      any money says they’re vulnerable to prompt injection in the comments and posts of the site

      • CTDummy@piefed.social
        link
        fedilink
        English
        arrow-up
        16
        ·
        edit-2
        11 hours ago

        Lmao already people making their agents try this on the site. Of course what could have been a somewhat interesting experiment devolves into idiots getting their bots to shill ads/prompt injections for their shitty startups almost immediately.

      • BradleyUffner@lemmy.world
        link
        fedilink
        English
        arrow-up
        22
        ·
        15 hours ago

        There is no way to prevent prompt injection as long as there is no distinction between the data channel and the command channel.