OpenSSL vs Rustls

I think Aux should take some inspiration from SerpentOS, which has dropped OpenSSL in favor of Rustls. To be honest, Aux should try to promote Rust-built solutions over C-based ones for memory safety and to minimize memory exploits. SerpentOS will also use curl with a hyper backend, which is interesting to me as well.


Considering this is the first time I’m hearing of Rustls, I am very concerned about replacing a critical security component that has been battle-tested in the field for decades with something more up-and-coming. I imagine this falls into the Security Committee ballpark - maybe this should be moved there?


+1 for memory safe libraries when possible, especially for code exposed to the network or that needs to parse file formats. Digging into rustls specifically, it’s a mixed bag today, but with a good future:

Rustls does not expose the OpenSSL API afaict; it exposes its own API. Curl only works because they explicitly added support. But! An OpenSSL API compatibility shim is on the rustls roadmap :crossed_fingers:

Rustls only implements TLS 1.2/1.3, so people doing niche stuff with old protocols will break. IMO that’s fine, supporting legacy sadness is what RHEL is for :stuck_out_tongue: and the web was forced onto >=1.2 by chromium and firefox a few years ago.
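For stacks that stay on OpenSSL (or anything else) in the meantime, the same >=1.2 floor can be enforced explicitly. Here’s a minimal sketch using Python’s stdlib `ssl` module purely as an analogy, since rustls enforces this floor by construction:

```python
import ssl

# Mirror rustls's built-in policy in a stack that still allows old
# protocol versions: refuse anything below TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Anything still speaking SSLv3/TLS 1.0/1.1 then fails at the handshake instead of silently negotiating down.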

Rustls requires extra decisions about how to load root certs: “Suggestion: Use rustls/webpki-roots rather than rustls/native-tls for flexibility” (rust-lang/rustup issue #3400 on GitHub) has some context. I think if we’re targeting only linux and darwin, rustls-platform-verifier might match the OpenSSL behavior 1:1? Or rustls-native-certs, not sure.
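To make the two root-store strategies concrete, here’s a sketch using Python’s stdlib `ssl` as an analogy: option A trusts the OS store (roughly what rustls-native-certs / rustls-platform-verifier do), option B ships a fixed bundle with the app (roughly what webpki-roots does). `bundle.pem` is a hypothetical filename:

```python
import os
import ssl

# Option A (analogous to rustls-native-certs / rustls-platform-verifier):
# trust whatever the OS certificate store provides.
native_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
native_ctx.load_default_certs(ssl.Purpose.SERVER_AUTH)

# Option B (analogous to webpki-roots): ship a fixed root bundle with the
# application, independent of the host. "bundle.pem" is hypothetical.
bundled_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
if os.path.exists("bundle.pem"):
    bundled_ctx.load_verify_locations(cafile="bundle.pem")
```

The trade-off: option A automatically honors enterprise or locally-installed roots; option B gives reproducible behavior across hosts but needs the bundle refreshed with each package update.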

Rustls still uses C :frowning: aws-lc and ring both build on BoringSSL C/asm for low-level crypto. Still a win for safety though; the usual scary thing is X.509 parsing and validation logic, and that’s all in safe Rust!

According to the latest benchmark, performance is mostly comparable with OpenSSL, except in one specific case: a mobile device (ChaCha20 cipher) talking to a rustls server on a machine with AVX-512 support, which is 45% slower due to a worse AVX-512 impl. IMO it’s okay to ignore that; it’s an edge case and very fixable upstream.

This does move Rust and LLVM further down the source bootstrap chain, which might make world rebuilds a tiny bit worse… But rust becoming more load bearing is inevitable IMO, and :crossed_fingers: ca-derivations someday.


Very reasonable worry! In this case though, rustls has a very good pedigree: it’s funded by the ISRG (of Let’s Encrypt fame) through its Prossimo memory-safety initiative.

It’s being implemented by some TLS/WebPKI veterans; by default it uses aws-lc for core crypto, which is similarly built by very good applied cryptographers and heavily scrutinized (also fuzzed and partially formally verified!). It also got a code audit by Cure53 (albeit in 2020; they should refresh that…) and passed with flying colors.

Normally I’m extremely suspicious of new crypto libraries, but in the particular case of rustls it’s actual TLS and cryptography experts building it, with enough funding and support to do a really good job. And honestly, that’s way more scrutiny than OpenSSL gets even after a bunch of major scares :cry:

(I’m technically on the security committee, but I’m just speaking for myself as a protocol, crypto and crypto protocol nerd :stuck_out_tongue:)


We should also use coreutils-rs, imo.

To put it mildly: TLS is complicated, very complicated!

Don’t get me wrong, I’m generally in favor of using languages that make it harder to f*** the code up, and Rust is a great candidate in that field.

My past corporate job was doing PKI for one of the largest TLS deployments there is, right when Snowden came along. So I got a bit of experience applying it in the field at scale :wink:

In practice you need to test changes to your TLS deployment carefully, because the combinations of libraries that might communicate with each other on the client & server side are nearly unpredictable.

While you can test popular combinations in a lab to get some sense, there are a bunch more problems that come with TLS interception devices, QUIC, differences in cert chain construction, … so the outcome still has some variance in it.
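A toy model (my own sketch, not anyone’s real test harness) of why that matrix is hard to cover: every client/server pairing negotiates independently, so pairings multiply and failures hide in specific combinations. The implementation names and capability sets below are made up for illustration:

```python
from itertools import product

# Hypothetical capability sets per implementation (illustrative only).
clients = {
    "old-openssl": {"1.0", "1.1", "1.2"},
    "rustls":      {"1.2", "1.3"},
}
servers = {
    "legacy-appliance": {"1.0", "1.1"},
    "modern-nginx":     {"1.2", "1.3"},
}

def negotiated(client_versions, server_versions):
    # Toy negotiation: pick the highest TLS version both sides support.
    common = client_versions & server_versions
    return max(common) if common else None  # None = handshake failure

# Every client/server pairing is its own test case.
matrix = {
    (c, s): negotiated(cv, sv)
    for (c, cv), (s, sv) in product(clients.items(), servers.items())
}
```

Even this tiny 2×2 grid already contains one pairing that fails outright (a 1.2/1.3-only client against a legacy-only server); real deployments have dozens of implementations on each side, plus middleboxes.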

This aspect troubles me personally quite a bit. Combining a change in how certs are loaded and validated with a change of the TLS library itself makes for a larger set of possible error combinations. Ideally those changes should happen separately.

All-in-all I’m saying:
Playing with TLS on the code side is much more than just making it compile. While TLS (especially the newer 1.3 standard) is fairly well and explicitly designed, making it compile and getting connections to work in practice with a low tail-end percentage of failing connections are two different problems.

The two possible paths forward that I’ve seen are:

  • Routing sub-percentages into the new stack for a limited amount of time with extensive logging enabled
  • Starting with connections where you control code on the server and client side of the connection.
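The first option can be sketched as a deterministic canary gate. Everything here is hypothetical (the function name, the permille knob); it just shows the shape: hash a stable per-client key so the same client always lands in the same cohort, which makes correlating handshake failures with the extensive logging tractable:

```python
import hashlib

def use_new_tls_stack(connection_key: str, rollout_permille: int = 10) -> bool:
    """Hypothetical canary gate: deterministically route a small, stable
    fraction of connections (default 1%) through the new TLS stack."""
    digest = hashlib.sha256(connection_key.encode()).digest()
    # Map the key to a stable bucket in [0, 1000).
    bucket = int.from_bytes(digest[:4], "big") % 1000
    return bucket < rollout_permille
```

Because the routing is deterministic, a given client stays in the canary cohort across retries, so a spike of failures in the logs can be pinned to the new stack rather than to flaky re-rolls.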