• 0 Posts
  • 191 Comments
Joined 3 years ago
Cake day: July 7th, 2023

  • At work we use Meshcentral. It requires you to host your own server, but it’s very powerful, and very reliable. We’re managing something like 400 remote systems with it currently. We also use Netbird as a secondary access layer (I prefer it to Tailscale for the simplicity of setting up ACLs, and the really easy deployment).

    For most home server usage though, I wouldn’t bother with Meshcentral. It’s a lot of overhead if you’re only managing a couple of systems. If you really need remote desktop (why do your servers even have desktops?) use RustDesk instead.




  • She’s talking specifically about the idea of embedding AI agents in operating systems, and allowing them to interact with the OS on the user’s behalf.

    So if you think about something like Signal, the point is that the message is encrypted as it leaves your device, and only gets decrypted when it arrives on the device of the intended recipient. This should shut down most “man-in-the-middle” attacks. It’s like writing your letters in code so that if the FBI opens them, they can’t read any of it.

    But when you add an AI agent in the OS, that’s like dictating your letter to an FBI agent, and then encrypting it. Kind of makes the encryption part pointless.
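    The point above can be sketched in a few lines. This is a toy XOR cipher standing in for a real protocol like Signal’s (which actually uses X3DH key agreement and the Double Ratchet); the variable names and the “agent” are purely illustrative. The takeaway is structural: an OS-level agent sits before the encryption step, so the cipher’s strength is irrelevant to it.

    ```python
    # Toy sketch: end-to-end encryption protects a message in transit,
    # but not from an agent that reads it on-device *before* encryption.
    # XOR with a repeating key is NOT real cryptography -- it just marks
    # where in the pipeline the plaintext exists.

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        # XOR with a repeating key; the same call encrypts and decrypts.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    key = b"shared-secret"        # in reality: per-session negotiated keys
    message = b"meet me at noon"  # plaintext exists only on the endpoints

    ciphertext = xor_cipher(message, key)   # this is what leaves the device
    assert ciphertext != message            # an interceptor sees only this

    received = xor_cipher(ciphertext, key)  # decrypted on the recipient's device
    assert received == message

    # A hypothetical OS-embedded agent hooks in before encryption happens,
    # so it reads `message` itself -- the cipher never protected it from them.
    agent_sees = message
    assert agent_sees == b"meet me at noon"
    ```

    Everything on the wire is ciphertext; everything the on-device agent touches is plaintext. That is the “dictating your letter to an FBI agent, then encrypting it” problem.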





  • Yeah, I fucking detest the way morality systems in games work.

    I don’t think they’re a fundamentally unworkable idea, but very few games have even come close to doing anything good with the concept.

    Most just offer you two equal but different benefits, let you pick between them, and call that morality. See BioShock. And the Mass Effect / KOTOR system always sucked because it punished you for going down the middle (i.e., playing a complex character).

    One of the only good morality systems I’ve ever seen is Metro 2033’s. For those who don’t know, the game has a secret personality tracker. It gives you points for taking actions that are pro-social. You get a lot of opportunities in the game to refuse benefits or give up resources to help others. You are never directly rewarded for this. It doesn’t do the bullshit where you give someone some food and they go “Here’s an old gun I had lying around.” Being kind costs you. It also measures the time you spend interacting with people, listening in on conversations, that kind of thing. Just generally giving a shit about other people.

    By the end of the game, if you’ve played your character like someone who cares about other people, you get an opportunity to make a better choice in a specific situation, one that leads to a better outcome. If you don’t, the choice is never presented to you at all, because the character you portrayed wouldn’t even think there was a choice to be made in that situation. It’s brilliant, and it completely solves the usual Deus Ex / Mass Effect “three buttons” ending where nothing leading up to it matters: to be able to make the good-ending choice, you have to have played the kind of character who would be willing to make that choice in the first place.









  • No, as in the person installing the app to use the service has to edit a config file.

    Yes, I have no issue editing config files. I’m self-hosting, that’s the point. All the technical load should be on me. But my completely non-technical friends should not have to edit config files to be able to access my self-hosted services. Everything, for them, should be as simple as possible.





  • Even if you can somehow get past the absolutely horrendous privacy implications, how the fuck is this even supposed to work? They want to prevent “digital flashing” (e.g., dick pics), but how is any system supposed to tell the difference between consensual and non-consensual content? What if someone wants to see a picture of someone’s dick? Even assuming you could build a model that accurately identifies a dick pic every single time (you can’t), it would also have to infer context at a level requiring effectively human intelligence, plus the ability to make judgements across the entirety of a person’s communications. This is so far beyond impossible, from a purely technical standpoint, that I cannot begin to imagine how it was ever allowed to become law.