Rust Crypto: How to specify the Hash/Digest algorithm to be used when computing an ECDSA signature on the P-256 curve?

How do I specify the hash function to be used, i.e. parameter T, with the ECDSA (P-256 curve)?

This is how the ECDSA signature is currently computed:

use p256::{
    SecretKey as EccPrivateKey,
    ecdsa::{
        SigningKey as EccSigningKey, Signature as EccSignature,
        signature::RandomizedSigner as EccRandomizedSigner,
    },
    pkcs8::DecodePrivateKey,
};
use rand::thread_rng; // for the RNG passed to sign_with_rng

fn _create_signature_ecc<T>(private_key: &EccPrivateKey, message: &[u8]) -> Vec<u8> {
    // NOTE: the type parameter T (the intended hash algorithm) is never used!
    EccRandomizedSigner::<EccSignature>::sign_with_rng(&EccSigningKey::from(private_key), &mut thread_rng(), message)
        .to_vec()
}

As you can see, the type parameter T (the hash algorithm) is not currently used in the function!


For comparison, here is the RSA signature implementation:

use rsa::{RsaPrivateKey, pss::SigningKey as RsaSigningKey, signature::SignatureEncoding};
use sha2::digest::{FixedOutputReset, HashMarker};
use rand::thread_rng;

fn _create_signature_rsa<T>(private_key: &RsaPrivateKey, message: &[u8]) -> Vec<u8>
where
    T: HashMarker + FixedOutputReset + Default,
{
    RsaSigningKey::<T>::from(private_key.to_owned())
        .sign_with_rng(&mut thread_rng(), message)
        .to_vec()
}

It is used like this:

_create_signature_rsa::<Sha256>(private_key, message)
                        ^^^^^^

How to do the same/similar thing with ECDSA?

I cannot seem to find any parameter for the hash function in p256::ecdsa::SigningKey :confused:

(...neither in p256::ecdsa::signature::RandomizedSigner)

There is no hash parameter, because ECDSA for p256 is a fully specified algorithm with a specific choice of all parameters, including the hash function. As you can see in the definition:

pub type SigningKey = ecdsa::SigningKey<NistP256>;

There are no type parameters on p256::ecdsa::SigningKey, but there is a generic parameter C on ecdsa::SigningKey. This parameter determines, among other things, the hash function used:

impl<C> SigningKey<C>
where
    C: PrimeCurve + CurveArithmetic + DigestPrimitive,
    Scalar<C>: Invert<Output = CtOption<Scalar<C>>> + SignPrimitive<C>,
    SignatureSize<C>: ArrayLength<u8>,
{
    pub fn sign_recoverable(&self, msg: &[u8]) -> Result<(Signature<C>, RecoveryId)>
}

Notice C: DigestPrimitive. This trait is exactly what controls the specified hash function. The following impl is provided:

impl ecdsa_core::hazmat::DigestPrimitive for NistP256 {
    type Digest = sha2::Sha256;
}
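
For example, the curve's default digest can be named at the type level. A minimal sketch, assuming a direct dependency on the ecdsa crate (with its digest feature enabled):

use ecdsa::hazmat::DigestPrimitive;
use p256::NistP256;

// For P-256 this resolves to sha2::Sha256, per the impl above.
type DefaultDigest = <NistP256 as DigestPrimitive>::Digest;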

If you want to use a different hash function, you need to provide your own type with PrimeCurve + CurveArithmetic + DigestPrimitive impls. You can also use SigningKey::sign_digest_recoverable if you just want to change the hash function in a one-off way, but then the digest algorithm won't be encoded properly in the signature type. This makes it more likely that someone will later incorrectly use the default digest for signature verification, rather than the custom digest that you used when creating the signature.
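
To illustrate the hazard: verification has to reproduce the exact digest used at signing time, because nothing in the Signature type records it. A minimal sketch, using DigestVerifier from the signature crate:

use p256::ecdsa::{Signature, VerifyingKey, signature::DigestVerifier};
use sha2::{Digest, Sha256};

// Succeeds only when verification uses the same digest the signer used;
// nothing in `Signature` records which hash that was.
fn verify_sha256(key: &VerifyingKey, message: &[u8], sig: &Signature) -> bool {
    key.verify_digest(Sha256::new_with_prefix(message), sig).is_ok()
}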

2 Likes

Thank you for the response! This kind of clarifies the missing parameter.

Still trying to grasp this: from all that I have learned so far, the signature algorithm (e.g. ECDSA) is independent of the digest algorithm (e.g. SHA-256). The signature algorithm just signs the given digest value, and it doesn't (and shouldn't) need to care how that value was computed. Also, the EC curve (e.g. P-256) is just a parameter, or set of parameters, for the ECDSA or ECDH algorithms. So how does the mere selection of the curve parameters determine which digest algorithm has to be used in the course of signature (ECDSA) computation?

What if, one day, SHA-256 is no longer considered "secure" (like SHA-1 is already broken) and we would like to use, e.g., SHA-3 or BLAKE2? This would be a big problem for ECDSA with its fixed, curve-specific digest algorithm, whereas an RSA-SSA signature could easily be switched to SHA-3, because the digest algorithm is simply a parameter there...


BTW: In my use case (protocol), the "interface" is defined in such a way that I get the message to be signed, an identifier of the hash algorithm (digest) to be used, and the public key that will later be used for signature verification. So I have to compute the signature using the corresponding private key and the given hash algorithm. The signature algorithm to be used is implied by the key, e.g. RSA-SSA for an RSA key, or ECDSA for an EC (e.g. P-256) key. What do I do if I get an EC key as input, but the hash-algorithm ID doesn't happen to be SHA-256?

(Indeed, this always seems to be SHA-256 for now, but what if that might change in the future?)

Now trying with RandomizedDigestSigner.

Why is it that this works:

use p256::{
    SecretKey as EccPrivateKey,
    ecdsa::{
        SigningKey as EccSigningKey, Signature as EccSignature,
        signature::{RandomizedSigner as EccRandomizedSigner, RandomizedDigestSigner as EccRandomizedDigestSigner},
    },
    pkcs8::DecodePrivateKey,
};
use sha2::{Digest, Sha256};
use rand::thread_rng;

fn _create_signature_ecc(private_key: &EccPrivateKey, message: &[u8]) -> Vec<u8> {
    let sign_key = EccSigningKey::from(private_key);
    EccRandomizedDigestSigner::<Sha256, EccSignature>::sign_digest_with_rng(&sign_key, &mut thread_rng(), Sha256::new_with_prefix(message))
        .to_der()
        .to_vec()
}

...and this works:

use rsa::{
    RsaPrivateKey,
    pss::{SigningKey as RsaSigningKey, Signature as RsaSignature},
    signature::{SignatureEncoding, RandomizedDigestSigner as RsaRandomizedDigestSigner},
};
use sha2::digest::{Digest, FixedOutputReset};
use rand::thread_rng;

fn _create_signature_rsa<D>(private_key: &RsaPrivateKey, digest: D) -> Vec<u8>
where
    D: Digest + FixedOutputReset,
{
    let sign_key = RsaSigningKey::<D>::from(private_key.to_owned());
    RsaRandomizedDigestSigner::<D, RsaSignature>::sign_digest_with_rng(&sign_key, &mut thread_rng(), digest)
        .to_vec()
}

...but this does not:

fn _create_signature_ecc<D>(private_key: &EccPrivateKey, digest: D) -> Vec<u8>
where
    D: Digest + FixedOutputReset
{
    let sign_key = EccSigningKey::from(private_key);
    EccRandomizedDigestSigner::<D, EccSignature>::sign_digest_with_rng(&sign_key, &mut thread_rng(), digest).to_der().to_vec()
}

Error:

type mismatch resolving <H as OutputSizeUser>::OutputSize == UInt<UInt<UInt<UInt<UInt<UInt<UTerm, B1>, B0>, B0>, B0>, ...>, ...>
expected struct UInt<UInt<UInt<UInt<UInt<UInt<UTerm, B1>, B0>, B0>, B0>, B0>, B0>
found associated type <H as OutputSizeUser>::OutputSize
consider constraining the associated type <H as OutputSizeUser>::OutputSize to UInt<UInt<UInt<UInt<UInt<UInt<UTerm, B1>, B0>, B0>, B0>, B0>, B0>

What exactly am I supposed to do here? :thinking:

More modern crypto protocols tend to forgo cryptographic agility in favor of specifying a single combination of primitives, to make it harder to do the wrong thing. This includes ECDSA + P-256, whose FIPS standard mandates that SHA-256 is used, and Ed25519, which always uses SHA-512: RFC 8032 - Edwards-Curve Digital Signature Algorithm (EdDSA)

2 Likes

That's because the algorithm in the p256 crate has already made the full set of choices for you. It fixes not only the curve, but also the digest, and whatever other free parameters are in the signature algorithm (e.g. the precise method of generation for keys and blinding factors). You can't produce a signature without making a specific choice for all of those free parameters, and you can't validate a signature without reproducing that exact set of choices.

One should distinguish the abstract family of signature algorithms (e.g. ECDSA) from its instantiation as a specific concrete algorithm. The latter is what is provided by p256, and it is what you should use if you intend to interoperate with actual real-world systems. The former is what is defined in the ecdsa crate; it allows you a lot of freedom in the choice of parameters, but each choice gives you an entirely different algorithm, and they are not interoperable.

By the way, if you don't intend to interoperate with existing systems but are considering a signature algorithm for an entirely green field application, I advise you to stay away from ECDSA entirely. It is a pretty bad algorithm in many ways, and you should use Ed25519 instead.

Considering the amount of critical systems which use ECDSA/secp256k1/SHA256, you'd have way worse things to worry about if that algorithm is broken.

In any case, cryptographic agility isn't and shouldn't be your concern. It is a concern for government bodies: standardizing a new algorithm takes years, then you need to wait years for quality implementations to appear, and even more years for the actual rollout to the industry. For them, standardizing a sufficiently wide family of algorithms is prudent. For you, it is entirely a non-issue. You don't set standards, you need to interoperate with other systems, and you must use whatever specific algorithms they use.

Cryptographic agility is also a dangerous attack vector. You don't need a dozen algorithms, you need one, and it must be absolutely secure. Having two such algorithms isn't a benefit, it's a cost: you need to maintain several implementations, and avoid vulnerabilities in all of them. If one of the algorithms becomes broken, any algorithm negotiation becomes an attack vector, because an attacker will try to trick you into downgrading to a vulnerable algorithm.

Practically speaking, a new algorithm is a new algorithm. What matters is that the old and new algorithms are not interoperable: neither can validate the other's signatures. If the signatures in your system are long-lived, then any change of algorithm is a problem. There is no practical difference whether you just change the hash function, change the curve, or use an entirely different algorithm, like RSA or SIKE. Some of those changes are more difficult from a library implementation perspective, but your biggest issue will always be backwards compatibility, and that is an issue for any change.

It's up to you. But if you ask me, don't do any of that, you're asking for critical vulnerabilities. Choose one well-implemented misuse-resistant popular well-audited algorithm, and stick to it.

You're supposed to not roll your own ad-hoc signature algorithm, and instead use the one provided by the p256 crate.

But if you want to understand the error message, the issue is that an ECDSA signature expects a digest of a fixed size (256 bits for P-256). This is enforced by this impl:

impl<C, D> RandomizedDigestSigner<D, Signature<C>> for SigningKey<C>
where
    C: PrimeCurve + CurveArithmetic + DigestPrimitive,
    D: Digest + FixedOutput<OutputSize = FieldBytesSize<C>>,
    Scalar<C>: Invert<Output = CtOption<Scalar<C>>> + SignPrimitive<C>,
    SignatureSize<C>: ArrayLength<u8>,

As you can see, D is not just an arbitrary digest, but a fixed-length digest with output size equal to FieldBytesSize<C> for the given curve C. For P-256, this means a 256-bit output. This excludes variable-output constructions, but fixed 256-bit variants (e.g. SHA3-256 or BLAKE2s-256) do satisfy the bound.
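
So your failing generic function compiles once it carries the same bound. A minimal sketch, reusing the aliases from your snippets above and assuming rand::thread_rng is in scope (digest::consts::U32 is the 32-byte output size, i.e. FieldBytesSize<NistP256>):

use sha2::digest::{consts::U32, Digest, FixedOutput};

fn _create_signature_ecc<D>(private_key: &EccPrivateKey, digest: D) -> Vec<u8>
where
    // Requiring a fixed 32-byte (256-bit) output makes the impl above applicable.
    D: Digest + FixedOutput<OutputSize = U32>,
{
    let sign_key = EccSigningKey::from(private_key);
    EccRandomizedDigestSigner::<D, EccSignature>::sign_digest_with_rng(&sign_key, &mut thread_rng(), digest)
        .to_der()
        .to_vec()
}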

For RSA, the relevant impl of RandomizedDigestSigner is this:

impl<D> RandomizedDigestSigner<D, Signature> for SigningKey<D>
where
    D: Digest + FixedOutputReset,

You can see that it doesn't enforce any constraint on the length of the digest's output.

The reason there are different restrictions is the different design of RSA and ECDSA. In ECDSA, the message digest is effectively reduced into a curve scalar. This reduction can potentially lose information about the signed message, allowing hash collisions and thus broken signatures. Of course, this is not an issue in practice, since a truncation of a cryptographic hash is still a cryptographic hash, but at least conceptually there is a possibility of error, and thus a reason to explicitly require you to declare that, yes, you really can produce a 256-bit hash.
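
Schematically, with group order $n$, private key $d$, and per-signature nonce $k$, ECDSA signs

$$ z = \text{leftmost } \lceil \log_2 n \rceil \text{ bits of } H(m), \qquad s = k^{-1}(z + r\,d) \bmod n $$

so two messages whose hashes agree on those leftmost bits produce the same $z$ and are indistinguishable to the signature.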

RSA has no such restriction: an arbitrary-length message can be padded to a multiple of the block size and directly signed. In practice you'd want to pre-hash the message and sign the digest, but there are no conceptual reasons to prefer one digest over another. Both shorter and longer digests can still be signed, regardless of the properties of the hash function. I also imagine that enforcing a specific digest size would be difficult in practice, because you'd have to somehow encode the RSA key length in the type system, and that's not easy to do without tanking your compile times. Also, practically speaking, real-world implementations will likely be based on SHA-1 or a variant of SHA-2, so extra restrictions wouldn't give any real benefit, just extra complex boilerplate code.

4 Likes

Huh, I've always picked RSA when given the option, since key size hasn't been a large concern and the general rule of thumb is to pick the most popular option (that isn't known to be broken, which it otherwise sometimes is!). But I had been eyeing ECDSA for, e.g., our JWT auth tokens because of the smaller size.

Googling quickly, it doesn't seem like there's any explicit weakness in ECDSA, even a theoretical one. Is it rather that it's just algorithmically too complicated, increasing implementation risk, and that there are better options if you want smaller keys; is that essentially what you meant? And thus, if I'm just shopping from the standard options, I shouldn't necessarily avoid ECDSA? (E.g. the JWT baseline algorithms are just RSA and a couple of EC variants.)

1 Like

ECDSA has catastrophic failure in case of nonce reuse: You can recover the private key given only two different messages signed with the same nonce. EdDSA (like Ed25519) fixes this problem. RSA is harder to implement in constant time, is slower than ECDSA/EdDSA for any given security level and has pitfalls like choosing the wrong padding mode exposing you to attacks like Bleichenbacher (this specific one only applies to using RSA for encryption though, not for signatures).
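
For the record, the nonce-reuse break is simple algebra: two signatures $(r, s_1)$ and $(r, s_2)$ over digests $z_1, z_2$ made with the same nonce $k$ share the same $r$, and

$$ s_i = k^{-1}(z_i + r\,d) \bmod n \quad\Rightarrow\quad k = \frac{z_1 - z_2}{s_1 - s_2} \bmod n, \qquad d = \frac{s_1 k - z_1}{r} \bmod n $$

so anyone holding both signatures can recover first $k$ and then the private key $d$.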

5 Likes

You're supposed to not roll your own ad-hoc signature algorithm, and instead use the one provided by the p256 crate.

ECDSA has catastrophic failure in case of nonce reuse: You can recover the private key given only two different messages signed with the same nonce. EdDSA (like Ed25519) fixes this problem. RSA is harder to implement in constant time, is slower than ECDSA/EdDSA

I generally agree with that.

However, this is not a case where I can choose the crypto algorithm to be used. I need to implement a callback for an existing native library (which in turn is based on an existing standard, i.e. TPM 2.0). The input parameters that I get in the callback are: the "challenge" value to be signed, the ID of the hash algorithm to be used in the signature computation (e.g. SHA-256, but it could be something else!), and the public key that will be used to verify the signature. I must sign the given "challenge" with the matching private key that I have. This means that the signature algorithm, e.g. RSA-PSS or ECDSA, is defined implicitly, by the type of the given public key (e.g. RSA or ECC), whereas the digest algorithm is defined explicitly, by algorithm ID.

Again, this is not "my" interface design, but the one that I have to work with.

TTBOMK, as far as ECC keys are concerned, they only ever use NIST curves (e.g. P-256), and the signature algorithm implied by this kind of key is ECDSA. Now, I could simply hope that they will only ever ask for signatures with a SHA-256 digest, and just error out in all other cases. But I'd rather be prepared for processing other algorithms too, along the lines of the dispatch sketched below :+1:
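
Something like this is what I have in mind. A sketch only: the HashAlg enum and its variants are made up (the real IDs come from the TPM library), and the sha3 crate is assumed to be available:

use p256::{
    SecretKey as EccPrivateKey,
    ecdsa::{
        SigningKey as EccSigningKey, Signature as EccSignature,
        signature::RandomizedDigestSigner as EccRandomizedDigestSigner,
    },
};
use sha2::{digest::{consts::U32, Digest, FixedOutput}, Sha256};
use sha3::Sha3_256;
use rand::thread_rng;

// Hypothetical stand-in for the hash-algorithm ID delivered by the callback.
enum HashAlg {
    Sha256,
    Sha3_256,
    Other(u16),
}

// Any digest with a fixed 32-byte output satisfies the P-256 bound.
fn sign_with<D: Digest + FixedOutput<OutputSize = U32>>(key: &EccPrivateKey, msg: &[u8]) -> Vec<u8> {
    let sign_key = EccSigningKey::from(key);
    EccRandomizedDigestSigner::<D, EccSignature>::sign_digest_with_rng(&sign_key, &mut thread_rng(), D::new_with_prefix(msg))
        .to_der()
        .to_vec()
}

fn sign_challenge(key: &EccPrivateKey, alg: HashAlg, challenge: &[u8]) -> Result<Vec<u8>, String> {
    match alg {
        HashAlg::Sha256 => Ok(sign_with::<Sha256>(key, challenge)),
        HashAlg::Sha3_256 => Ok(sign_with::<Sha3_256>(key, challenge)),
        // Error out on anything that isn't a fixed 32-byte digest.
        HashAlg::Other(id) => Err(format!("unsupported hash algorithm ID: {id}")),
    }
}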

The reason there are different restrictions is the different design of RSA and ECDSA. In ECDSA, the message digest is effectively reduced into a curve scalar. This reduction can potentially lose information about the signed message, allowing hash collisions and thus broken signatures. Of course, this is not an issue in practice, since a truncation of a cryptographic hash is still a cryptographic hash, but at least conceptually there is a possibility of error, and thus a reason to explicitly require you to declare that, yes, you really can produce a 256-bit hash.

Okay, I understand that using a digest algorithm with an output size bigger than 256 bits probably doesn't make sense for the P-256 curve, because it would have to be truncated. Using one with an output size smaller than 256 bits is probably undesirable too. But there should be no technical reason why we couldn't use SHA3-256 or BLAKE2s-256 as a drop-in replacement for SHA-256.
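
Indeed, a quick sketch suggests this compiles (assuming the blake2 crate and reusing the imports from the working snippet above; Blake2s256 has a fixed 32-byte output, so it satisfies the FixedOutput<OutputSize = FieldBytesSize<NistP256>> bound):

use blake2::{Blake2s256, Digest};

fn _create_signature_ecc_blake2s(private_key: &EccPrivateKey, message: &[u8]) -> Vec<u8> {
    let sign_key = EccSigningKey::from(private_key);
    EccRandomizedDigestSigner::<Blake2s256, EccSignature>::sign_digest_with_rng(&sign_key, &mut thread_rng(), Blake2s256::new_with_prefix(message))
        .to_der()
        .to_vec()
}

The caveat from above still applies, though: this is no longer the standardized ECDSA-with-SHA-256, so the verifier must be handed the very same digest type.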

But how exactly would I enforce this with the RandomizedDigestSigner in Rust? :thinking:

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.