  • As an experiment / as a bit of a gag, I tried using Claude 3.7 Sonnet with Cline to write some simple cryptography code in Rust - use ECDHE to establish an ephemeral symmetric key, and then use AES-256-GCM (with a counter in the nonce) to encrypt packets from client->server and server->client, using off-the-shelf RustCrypto libraries.

    It got the interface right, but it got some details really wrong:

    • It stored way more information than it needed in the structure tracking state, some of it very sensitive.
    • It repeatedly converted back and forth between byte arrays and the proper types unnecessarily - reducing type safety and making things slower.
    • Instead of using type safe enums it defined integer constants for no good reason.
    • It logged information about failures as variable-length strings, creating a possible timing side channel.
    • Despite having a 96-bit nonce to work with (minus 1 bit to distinguish client->server from server->client), it used a 32-bit integer to represent the sequence number.
    • And it “helpfully” used wrapping_add to increment the 32-bit sequence number! For those who don’t know much Rust and/or much cryptography: the golden rule of using ciphers like GCM is that you must never, ever re-use the same nonce with the same key (otherwise you leak the XOR of the two messages). wrapping_add explicitly means that when you reach the maximum value (and remember, it’s only 32 bits, so there are only about 4.3 billion values) it silently wraps back to 0. The secure implementation is to explicitly fail if the counter would exceed its maximum before attempting to encrypt / decrypt - and the smart choice would be to use at least 64 bits.
    • It also rolled its own bespoke hash-based key derivation function instead of using HKDF (which was available right there in the library, and callable with far less code than it generated).
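    For illustration, two of the fixes above (a type-safe direction enum instead of integer constants, and a wide counter that fails instead of wrapping) might look something like this - a minimal sketch with made-up names, not the code the model generated:

```rust
// Sketch: build 96-bit AES-GCM nonces from a direction flag plus a
// 64-bit sequence counter that refuses to wrap. Names are illustrative.

/// Which way the packet travels; the top bit of the nonce encodes it.
/// A type-safe enum instead of bare integer constants.
#[derive(Clone, Copy)]
enum Direction {
    ClientToServer,
    ServerToClient,
}

struct NonceCounter {
    dir: Direction,
    seq: u64, // 64 bits: exhausting this before rekeying is implausible
}

impl NonceCounter {
    fn new(dir: Direction) -> Self {
        NonceCounter { dir, seq: 0 }
    }

    /// Returns the next 96-bit (12-byte) nonce, or None once the counter
    /// is exhausted - the caller must rekey rather than reuse a nonce.
    fn next_nonce(&mut self) -> Option<[u8; 12]> {
        let seq = self.seq;
        // checked_add fails instead of silently wrapping back to 0,
        // which would repeat a nonce and break GCM's security.
        self.seq = self.seq.checked_add(1)?;
        let mut nonce = [0u8; 12];
        nonce[0] = match self.dir {
            Direction::ClientToServer => 0x00,
            Direction::ServerToClient => 0x80, // top bit marks direction
        };
        nonce[4..12].copy_from_slice(&seq.to_be_bytes());
        Some(nonce)
    }
}

fn main() {
    let mut tx = NonceCounter::new(Direction::ClientToServer);
    let n = tx.next_nonce().expect("counter exhausted; rekey required");
    println!("first client->server nonce: {:02x?}", n);
}
```

    Failing closed here is the whole point: an `Option` forces the caller to handle exhaustion, whereas wrapping_add makes nonce reuse the silent default.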

    To be fair, I didn’t really expect it to work well. Some kind of security-auditor agent that does a pass over all the output might be able to find some of these issues and pass them back to another agent to correct - which could make vibe coding more secure (still to be proven).

    But right now, I’d not put “vibe coded” output into production without someone going over it manually with a fine-toothed comb looking for security and stability issues.


  • The FBI pressured Apple to create an encryption backdoor to bypass their security features

    This was more like a hardware security device backdoor - the key was held in a hardware security device that would only release it after receiving the correct PIN (and only if there hadn’t been too many wrong attempts). But the hardware accepts signed firmware from Apple, and the firmware decides the rules for when to release the key. So this was effectively a backdoor only Apple could use - and the FBI wanted to use it.

    Systems would create a public audit trail whenever a backdoor is used, allowing independent auditors to monitor and report misuse of backdoors.

    This has limits. If there is a trusted central party who makes sure there is an audit log before allowing the backdoor (e.g. the vendor), they could be pressured to allow access without the audit log.

    If it is a non-interactive protocol in a decentralised system, someone can create all the records needed to prove the audit logs were created, use the backdoor, and then simply delete the logs and never submit them to anyone else.

    The only possibility without a trusted central party is an interactive protocol. This could work as follows: for a message (chat message, cryptocurrency transaction, etc.) to be accepted by the other participants, the sender must submit a zero-knowledge proof that it includes an escrow key divided into 12 shares (such that any 8 of the 12 can be combined to recover the key), each encrypted with the public key of one of 12 enrolled ‘jury’ members - who would need to be selected based on something like the hash of all messages up to that point. The jury members would be secret in that the protocol could be designed so the jury keys are not publicly linked to specific users.

    The authority could decrypt data by broadcasting a signed audit log requesting decryption of certain data, and jury members would receive credits for submitting their share of the escrow key (encrypted so only the authority could read it) along with a zero-knowledge proof that it is valid and non-duplicate.

    Of course, the person sending the message could jury shop by waiting until the next message would get the desired jury, and only sending it then. But only 8 of the 12 jurors need to be honest. There is also a risk that jurors would drop out and not care about credits, or be forced to collude with the authority.
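    The 8-of-12 escrow split above is standard Shamir secret sharing. A minimal, self-contained sketch of the idea (illustrative only - the polynomial coefficients are fixed here for reproducibility, whereas a real deployment would draw them from a CSPRNG and use a vetted library):

```rust
// Shamir secret sharing over GF(p), p = 2^61 - 1: split a secret into
// n shares so that any k of them recover it, while k - 1 reveal nothing.
// Demo only: coefficients must come from a cryptographically secure RNG
// in real use.

const P: u128 = (1u128 << 61) - 1; // Mersenne prime

fn add(a: u128, b: u128) -> u128 { (a + b) % P }
fn sub(a: u128, b: u128) -> u128 { (a + P - b) % P }
fn mul(a: u128, b: u128) -> u128 { (a % P) * (b % P) % P }

// Modular exponentiation by squaring; used for inversion below.
fn pow(mut base: u128, mut exp: u128) -> u128 {
    let mut acc = 1;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 { acc = mul(acc, base); }
        base = mul(base, base);
        exp >>= 1;
    }
    acc
}

// Multiplicative inverse via Fermat's little theorem: a^(p-2) mod p.
fn inv(a: u128) -> u128 { pow(a, P - 2) }

/// Evaluate f(x) = secret + c1*x + ... + c_{k-1}*x^{k-1} at x = 1..=n.
/// Any coeffs.len() + 1 of the returned (x, y) shares recover the secret.
fn split(secret: u128, coeffs: &[u128], n: u128) -> Vec<(u128, u128)> {
    (1..=n).map(|x| {
        let mut y = secret % P;
        let mut xp = 1;
        for &c in coeffs {
            xp = mul(xp, x);
            y = add(y, mul(c, xp));
        }
        (x, y)
    }).collect()
}

/// Lagrange interpolation at x = 0 recovers f(0) = secret.
fn recover(shares: &[(u128, u128)]) -> u128 {
    let mut secret = 0;
    for &(xi, yi) in shares {
        let mut num = 1;
        let mut den = 1;
        for &(xj, _) in shares {
            if xj != xi {
                num = mul(num, xj);
                den = mul(den, sub(xj, xi));
            }
        }
        secret = add(secret, mul(yi, mul(num, inv(den))));
    }
    secret
}

fn main() {
    // Degree-7 polynomial -> any 8 of the 12 shares recover the key.
    let escrow_key: u128 = 0x5EC2E7;
    let coeffs: Vec<u128> = (1u128..8).map(|i| i * 123_456_789).collect();
    let shares = split(escrow_key, &coeffs, 12);
    assert_eq!(recover(&shares[0..8]), escrow_key);
    assert_eq!(recover(&shares[4..12]), escrow_key);
    println!("recovered escrow key from 8 of 12 shares");
}
```

    The jury members would each hold one share (encrypted to their key), so the authority needs 8 honest responses before the escrow key comes back together - no smaller coalition, including the authority alone, learns anything.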

    Cryptographic Enforcement: Technical solutions could ensure that the master key is unusable if certain conditions—such as an invalid warrant or missing audit trail—are not met.

    Without a trusted central party (or trusted hardware playing the same role), this seems like it would require something like black-box obfuscation, which has been proven to be impossible. The best possibility would be an interactive protocol that requires enough people to collude before it breaks.