> Later, moving public key parsing to our own Rust code made end-to-end X.509 path validation 60% faster — just improving key loading led to a 60% end-to-end improvement, that’s how extreme the overhead of key parsing in OpenSSL was.
> The fact that we are able to achieve better performance doing our own parsing makes clear that doing better is practical. And indeed, our performance is not a result of clever SIMD micro-optimizations, it’s the result of doing simple things that work: we avoid copies, allocations, hash tables, indirect calls, and locks — none of which should be required for parsing basic DER structures.
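To make the "simple things that work" claim concrete, here's a minimal sketch of what zero-copy DER parsing looks like in Rust. This is my illustration, not pyca/cryptography's actual parser: the point is just that a DER TLV can be parsed by handing back borrowed subslices of the input, with no copies, allocations, hash tables, indirect calls, or locks anywhere.

```rust
// Minimal sketch (not the library's actual code) of zero-copy DER parsing:
// one tag-length-value is split off the front of the input, and both the
// contents and the remainder are borrowed subslices, so nothing is copied
// or allocated. (Strict DER also requires minimal-length encodings; that
// check is omitted here for brevity.)
fn parse_tlv(input: &[u8]) -> Option<(u8, &[u8], &[u8])> {
    let (&tag, rest) = input.split_first()?;
    let (&first_len, rest) = rest.split_first()?;
    let (len, body) = if first_len < 0x80 {
        // Short form: the byte is the length itself.
        (first_len as usize, rest)
    } else {
        // Long form: the low bits count the big-endian length bytes that follow.
        let n = (first_len & 0x7f) as usize;
        if n == 0 || n > rest.len() || n > core::mem::size_of::<usize>() {
            return None;
        }
        let mut len = 0usize;
        for &b in &rest[..n] {
            len = (len << 8) | b as usize;
        }
        (len, &rest[n..])
    };
    if len > body.len() {
        return None;
    }
    Some((tag, &body[..len], &body[len..]))
}
```

An X.509 parser is then roughly this applied recursively to the SEQUENCEs and BIT STRINGs the RFCs define; the interesting work is in validation, not memory management.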
I was involved in the design/implementation of the X.509 path validation library that PyCA cryptography now uses, and it was nuts to see how much performance was left on the ground by OpenSSL. We went into the design prioritizing ergonomics and safety, and left with a path validation implementation that's both faster and more conformant[1] than what PyCA would have gotten had it bound to OpenSSL's APIs instead.
[1]: https://x509-limbo.com
Also, even if somebody else can go faster by not being correct, what use is the wrong answer? https://nitter.net/magdraws/status/1551612747569299458
> It is extremely common that a correct implementation also has excellent performance.
I think that's true in general, but in the case of X.509 path validation it's not a given: the path construction algorithm is non-trivial, and requires quadratic searches (e.g. of name constraints against subjects/SANs; see the sketch below). An incorrect implementation could be faster by just not doing those things, which is often fine (for example, nothing really explodes if an EE doesn't have a SAN[1]). I think one of the things that's interesting in the PyCA case is that it commits to doing a lot of cross-checking/policy work that is "extra" on paper but still comes out on top of OpenSSL.
[1]: https://x509-limbo.com/testcases/webpki/#webpkisanno-san
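To show why the honest version of this is quadratic, here's a toy sketch with hypothetical types (real matching is per name type, per RFC 5280): every name on the certificate has to be tested against every permitted and excluded constraint, and a lazy implementation could go faster simply by skipping the loop.

```rust
// Toy sketch of RFC 5280 name-constraint checking; `GeneralName` and the
// suffix match are hypothetical stand-ins for the real per-type logic
// (DNS labels, IP prefixes, URIs, ...).
struct GeneralName(String);

fn name_matches(constraint: &GeneralName, name: &GeneralName) -> bool {
    // Real matching is per name type; a suffix check stands in for it here.
    name.0.ends_with(&constraint.0)
}

/// Every name must fall within some permitted subtree (when any are present)
/// and within no excluded subtree: constraints x names, inherently quadratic.
fn check_name_constraints(
    permitted: &[GeneralName],
    excluded: &[GeneralName],
    names: &[GeneralName],
) -> bool {
    names.iter().all(|name| {
        let allowed =
            permitted.is_empty() || permitted.iter().any(|c| name_matches(c, name));
        let denied = excluded.iter().any(|c| name_matches(c, name));
        allowed && !denied
    })
}
```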
I’d say correct *common* path. OpenSSL, through a lot of hand-waving, deals with edge cases that the correct path doesn’t handle. Even libraries like libnss suffer from this.
By the way, pyca/cryptography is a really excellent cryptography library, and I have confidence that they're making the right decisions here. The python-level APIs are well thought-out and well documented. I've made a few minor contributions myself and it was a pleasant experience.
And my personal "new OpenSSL APIs suck" anecdote: https://github.com/openssl/openssl/issues/19612 (not my gh issue but I ran into the exact same thing myself)
> I set out to remove deprecated calls to SHA256_xxx to replace them with the EVP_Digestxxx equivalent in my code. However it seems the EVP code is slow. So I did a quick test (test case B vs C below), and it is indeed about 5x slower.
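The effect is easy to reproduce from Rust too, since the `openssl` crate binds both API families. A rough harness of mine (not the issue's actual test cases) that loosely mirrors the B-vs-C comparison; exact ratios will depend on the OpenSSL version and build:

```rust
// `openssl::sha::sha256` calls the deprecated low-level SHA256_* functions,
// while `openssl::hash::hash` goes through EVP, fetching the algorithm and
// allocating a fresh context on every call. For many small inputs that
// per-call overhead dominates, which is what the ~5x figure is measuring.
use openssl::hash::{hash, MessageDigest};
use openssl::sha::sha256;
use std::time::Instant;

fn main() {
    let msg = [0u8; 64]; // small input, so per-call overhead dominates
    let n = 1_000_000;

    let t = Instant::now();
    for _ in 0..n {
        std::hint::black_box(sha256(&msg)); // low-level SHA256_* path
    }
    println!("SHA256_*   : {:?}", t.elapsed());

    let t = Instant::now();
    for _ in 0..n {
        // EVP path: algorithm fetch + context allocation on every call
        std::hint::black_box(hash(MessageDigest::sha256(), &msg).unwrap());
    }
    println!("EVP_Digest*: {:?}", t.elapsed());
}
```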
Since then, Haproxy has effectively abandoned OpenSSL in favor of AWS-LC. Packages are still built with both, but AWS-LC is clearly the path forward for them.
I'm glad that they're considering getting rid of OpenSSL as a hard dependency. I've built parts of pyca/cryptography with OpenSSL replaced or stripped out for better debugging. OpenSSL's errors just suck tremendously. It shouldn't be terribly difficult for them to do it for the entire package.
Though I'd also love to see parts of pyca/cryptography being usable outside of the context of Python, like the X.509 path validation mentioned in other comments here.
It is honestly surprising that OpenSSL has been the standard for so long given how difficult it is to work with. I think moving the backend to Rust is probably the right move for long-term stability.
Note that all cryptographic primitives are still going to be in C via an OpenSSL-like API for the next while; the current proposal is to migrate from OpenSSL to one of its forks. Various bits of backend logic that aren't cryptographic primitives (e.g., parsing) have been rewritten in Rust; additionally, https://github.com/ctz/graviola is mentioned near the end as a possible implementation of cryptographic primitives in a combination of Rust and assembly (without any C), but it's not especially mature yet.
> Finally, taking an OpenSSL public API and attempting to trace the implementation to see how it is implemented has become an exercise in self-flagellation. Being able to read the source to understand how something works is important both as part of self-improvement in software engineering, but also because as sophisticated consumers there are inevitably things about how an implementation works that aren’t documented, and reading the source gives you ground truth. The number of indirect calls, optional paths, #ifdef, and other obstacles to comprehension is astounding. We cannot overstate the extent to which just reading the OpenSSL source code has become miserable — in a way that both wasn’t true previously, and isn’t true in LibreSSL, BoringSSL, or AWS-LC.
OpenSSL code was not pleasant or easy to read even in v1, though, and figuring out what calls into where, and under which circumstances, when e.g. many optimized implementations exist (or will exist, once the many huge Perl scripts have generated them) was always a headache with only the code itself to go by. I haven't done this since 3.0, but if it has regressed this hard on top of that, it has to be really quite bad.
I have a hacky piece of code that I used with OpenSSL 1.x to inspect the state of digest objects. This was removed from the public API in 3.0, but in the process of finding that out I took a deep dive into the digests API, and I can confirm it's incomprehensible. I imagined there must be some deep reason for the indirection, but it's good to know the Cryptography maintainers don't think so.
Speaking of which, as a library developer relying on both long-established and new Cryptography APIs (like X.509 path validation), I want to say Alex Gaynor and team have done an absolutely terrific job building and maintaining Cryptography. I trust the API design and test methodology of Cryptography and use it as a model to emulate, and I know their work has prevented many vulnerabilities, upleveled the Python ecosystem, and enabled applications that would otherwise be impossible. That's why, when they express an opinion as strong as this one, I'm inclined to trust their judgment.