

Okay, but your best masons are. Architects and developers are different.


It’s a mixed bag.


Maybe I came off as dismissive or just stupid, but I really did mean to be helpful. Of course you don’t want users to experience bad interactions. I meant only if those interactions happen for an actual, intended reason. So yeah, never mind.
Bummer you’ve had a hard time. I think they and the free software community are trying to put together a good solution.


What server are you using, and with which client most recently? It sounds like your device is unverified, and therefore untrusted, or the key isn’t present.
To expand, “unable to decrypt” would affect a lot of users. That’s a good thing and exactly what you want it to do when a device isn’t correctly trusted.


I use it all the time. There are many mature clients, and Matrix is a protocol, so I don’t know what you mean. Since the sliding sync implementations, I have found it really nice to use.
The sandboxes are different. The embeddable Java plugin sandbox worked quite differently and was susceptible to confused-deputy and other attacks. So yeah, I guess you can say it is iterative, but they’re kind of worlds apart. You can run thousands of wasm modules in a single process and have them all be completely isolated. Its performance and security gains, portability, and usability are all superb.
I guess I can’t really defend it well, but I think it is interesting and important.
It isn’t interesting for being bytecode. Rather, it’s interesting for being the first universal sandboxed runtime for the browser and elsewhere. Being able to write in many languages and compile to wasm targets is awesome. The safety guarantees and performance are both great too. And it can run in tiny environments.
Great article. I think wasm is one of the more interesting things to happen in the last few decades in computer science, though there are many. I think it’s here to stay for sure, but am always curious where the adoption curve will go.
Not at all. Tongue was firmly in cheek. I work with the JVM professionally. I was specifically trying to clarify that I find the BEAM VM exciting, but not the JVM, and was therefore just kidding around when I made the first comment. Not gatekeeping at all. Like whatever you please.
I won’t argue that isn’t true. I’m just saying BEAM is a value prop that speaks to me. The JVM isn’t, though objectively it is one, for sure.
At least there is a good reason to use Elixir: BEAM.


It also means the people operating them will have a high threshold for consequences and maybe not care so much about the community.


What’s funny is that it’s like Mozilla giving away free cookies with a sign that says, “Only accept my cookies.”


Actually going this year. Have wanted to for many. Really looking forward to it.
Modelling how you want to handle trust in your architecture doesn’t really have a single best answer. Many ways to pet a cat, and all that jazz. Some prefer to trust only end to end, meaning not just establishing trust at the API entry, but all the way to the backend. There are arguments to be made for doing it either way. As long as your services behind the API gateway are in a private network, it is maybe okay to establish complete trust there, and you could even terminate TLS and use cleartext communications. Another, more secure pattern is to authenticate the call at the API, authorize which backends can be called, then verify the source caller in the backend as well.
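To make the second pattern concrete, here’s a rough sketch of a backend re-verifying caller identity instead of blindly trusting the private network. Everything here is made up for illustration (the shared secret, the service names, the token shape); real setups would use mTLS, JWTs, or an identity-aware proxy rather than a raw HMAC.

```python
import hmac
import hashlib

# Hypothetical shared secret for the demo; real deployments would use
# asymmetric keys or per-service credentials instead.
SECRET = b"demo-secret"

def sign(caller: str) -> str:
    # The gateway authenticates the caller, then mints a signed identity
    # token that travels with the request into the private network.
    mac = hmac.new(SECRET, caller.encode(), hashlib.sha256).hexdigest()
    return f"{caller}.{mac}"

def verify(token: str) -> str:
    # The backend re-verifies the caller rather than trusting anything
    # that merely arrives from "inside" the network.
    caller, mac = token.rsplit(".", 1)
    expected = hmac.new(SECRET, caller.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        raise PermissionError("untrusted caller")
    return caller

token = sign("billing-service")
print(verify(token))  # billing-service
```

The point is just the shape: authenticate at the edge, then verify again at the service, so a compromised box on the private network can’t impersonate arbitrary callers for free.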


There are many public sector organizations that need programming done. There are also organizations that back FOSS work. However, if it can’t involve devops, cloud, or containers, I don’t know how much will be left for you to do. There are tasks that don’t involve those, but they’re few and far between. And anybody who said those aren’t part of “REAL programming” wouldn’t get a second listen from me in a hiring scenario.


Actually, great questions. Yes and no. There are vulnerabilities if the private key leaks, but public keys are just that: perfectly fine to be public, in anyone’s hands. You only encrypt data with them.
What makes the Signal protocol so awesome, and other algorithms like it, is that it reduces the threat surface further by using one-time keys. So even if a key is leaked, it cannot be used to decrypt old or forthcoming messages, as the keys have already ratcheted to the next pair.
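Here’s a toy sketch of the symmetric half of that idea, a KDF chain: each step derives a fresh one-time message key and then advances the chain, so leaking the current chain key tells you nothing about earlier message keys (hashes can’t be run backwards). The seed and labels are invented for the demo; protecting *future* messages after a compromise is what the separate Diffie-Hellman ratchet adds on top.

```python
import hashlib

def kdf(chain_key: bytes) -> tuple:
    # One ratchet step: derive the next chain key and a one-time
    # message key from the current chain key. Labels are illustrative.
    next_chain = hashlib.sha256(chain_key + b"chain").digest()
    message_key = hashlib.sha256(chain_key + b"message").digest()
    return next_chain, message_key

# Pretend this came from the initial key agreement.
chain = hashlib.sha256(b"shared secret from the handshake").digest()

message_keys = []
for _ in range(3):
    chain, mk = kdf(chain)
    message_keys.append(mk)

# An attacker who steals `chain` now still can't recover
# message_keys[0..2], so already-sent messages stay safe.
print(len(set(message_keys)))  # 3
```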


They share it with you. Their public key is generated by them. You encrypt a message to them with their public key. They use their private key to decrypt it.
I want to add, before I get completely roasted here, that this is intentionally reductive. Signal actually uses a much more interesting multi-key scheme, the Double Ratchet. It uses one-time key pairs and really is worth reading about.
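For the basic public-key step above, the classic textbook RSA example shows the whole round trip in a few lines. These tiny primes are purely illustrative and utterly insecure; real keys are 2048+ bits with proper padding, and Signal uses elliptic-curve keys anyway.

```python
# Toy RSA with textbook-sized primes -- illustrative only, never secure.
p, q = 61, 53
n = p * q        # 3233, the public modulus
e = 17           # public exponent: (n, e) is the public key anyone may hold
d = 2753         # private exponent, kept secret by the recipient

message = 65                   # any number smaller than n
cipher = pow(message, e, n)    # anyone encrypts with the public key
plain = pow(cipher, d, n)      # only the private key decrypts

print(cipher, plain)  # 2790 65
```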


I’m not following. In the WhatsApp case, yes, because we can’t see how those keys are managed. In the Signal case, we can. So the centralized server has zero impact on the privacy of the message. If we trust the keys are possessed only by the generating device, then how does the encrypted message become compromised?
I’m not talking about anonymity, only message privacy. No different than any of the other proxies or routers along the way. If they don’t have the key, the message is not readable.
Ha. I still have an open PR on that.