DSA and ECDSA signature format is incompatible with XMLDSig

Maarten Bodewes maarten.bodewes at xs4all.nl
Sat Jul 17 04:56:07 PDT 2010


On Thu, Jul 15, 2010 at 6:57 PM, Sean Mullan <sean.mullan at oracle.com> wrote:

> I would like to try to fix a long-standing XMLDSig issue with the current
> DSA and ECDSA signature bytes format.
>
> The format of the Signature bytes for these algorithms is an ASN.1 encoded
> sequence of the integers r and s:
>
>  SEQUENCE ::= { r INTEGER, s INTEGER }
>
> Unfortunately, this is not compatible with XMLDSig (and other signature
> formats like .NET), which doesn't ASN.1 encode them and simply base64
> encodes the raw bytes of r and s concatenated (the IEEE P1363 format).
>
>
There are more standards that use the P1363 format. Personally, I'm involved
with the EAC specification for ePassports & Java. You'll find this kind of
signature if you look at the EAC certificates for the inspection systems
(and their CAs).
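The format difference described in the quote above can be sketched as a tiny converter (the class and helper names are my own, not JDK API; short-form DER lengths are assumed, which holds for the usual curve sizes such as P-256/P-384):

```java
// Sketch: convert the ASN.1 SEQUENCE { r INTEGER, s INTEGER } that
// Signature.sign() returns into the raw r||s concatenation that
// XMLDSig / IEEE P1363 expect.
class DerToP1363 {
    static byte[] convert(byte[] der, int fieldSize) {
        int i = 0;
        if (der[i++] != 0x30) throw new IllegalArgumentException("not a SEQUENCE");
        i++;                                        // skip SEQUENCE length (short form assumed)
        byte[] out = new byte[2 * fieldSize];       // r||s, each exactly fieldSize bytes
        for (int part = 0; part < 2; part++) {
            if (der[i++] != 0x02) throw new IllegalArgumentException("not an INTEGER");
            int len = der[i++] & 0xFF;
            int start = i, n = len;
            while (n > fieldSize) { start++; n--; } // drop leading 00h sign bytes
            // left-pad short integers with 00h to exactly fieldSize bytes
            System.arraycopy(der, start, out, part * fieldSize + (fieldSize - n), n);
            i += len;
        }
        return out;
    }
}
```

The reverse direction (needed before `Signature.verify()`) re-adds the tags and sign bytes.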



> So, our XMLDSig implementation always has to strip off, or decode the ASN.1
> stuff after calling Signature.sign() when generating signatures, and ASN.1
> encode the signature bytes before calling Signature.verify() when verifying
> signatures. I could live with this until now because it was limited to DSA
> which wasn't in wide use. But now the same problem comes up with ECDSA.
>
>
That situation is very well known to me :). I don't remember offhand,
though, whether I also had to normalize the integers (stripping leading 00h
bytes, or adding 00h bytes to pad to the correct bit-size of the signature
elements), or whether r & s were encoded as ASN.1 octet strings.
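On that normalization question: `BigInteger.toByteArray()` produces a signed, minimal two's-complement encoding, so a value with its high bit set gains a leading 00h byte, and small values come out shorter than the field size. A P1363 element must be exactly fieldSize unsigned bytes, so both cases need handling. A minimal sketch (helper name is mine):

```java
import java.math.BigInteger;

// Normalize a BigInteger to exactly fieldSize unsigned bytes:
// strip any leading 00h sign byte, or left-pad with 00h as needed.
class Normalize {
    static byte[] toFixedLength(BigInteger v, int fieldSize) {
        byte[] raw = v.toByteArray();
        byte[] out = new byte[fieldSize];
        if (raw.length >= fieldSize) {
            // strip leading 00h sign byte(s)
            System.arraycopy(raw, raw.length - fieldSize, out, 0, fieldSize);
        } else {
            // left-pad with 00h up to the field size
            System.arraycopy(raw, 0, out, fieldSize - raw.length, raw.length);
        }
        return out;
    }
}
```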


> I would really like to clean this up. There seems to be a couple of ways we
> could fix this:
>
> 1. Add new standard signature format strings that identify the format: ex:
>
>  SHA1withDSAandP1363
>  SHA1withECDSAandP1363
>  SHA256withECDSAandP1363
>  SHA384withECDSAandP1363
>  SHA512withECDSAandP1363
>
> I like this the best, but one issue with this is that the "and" extended
> format is reserved for MGF functions, ex: MD5withRSAandMGF1 and this is not
> a mask generation function. My suggestion is that we use a keyword (ex:
> Format) that clearly distinguishes it from an MGF:
>
>  <digest>with<encryption>and<format>Format
>
> ex:
>
>  SHA256withECDSAandP1363Format
>
>
I second this solution, since the new algorithm strings would also be usable
by other applications. I do have one serious concern, though: hardware
providers may not support the new names, and if they don't, you need to work
around it. Fortunately, if I'm not mistaken, you can work around this by
creating a very simple provider that performs the wrapping/unwrapping of the
signature bytes (since it doesn't need to do the signing itself).
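The wrapping such a provider would do, sketched as a stand-alone helper (class name is mine): re-encode the raw r||s bytes as the SEQUENCE { r INTEGER, s INTEGER } an underlying provider expects, before delegating to `Signature.verify()`. Short-form DER lengths are assumed (total content under 128 bytes).

```java
import java.util.Arrays;

// Re-encode a P1363 signature (r||s) as the DER SEQUENCE form.
class P1363ToDer {
    static byte[] convert(byte[] sig) {
        int half = sig.length / 2;
        byte[] r = derInteger(Arrays.copyOfRange(sig, 0, half));
        byte[] s = derInteger(Arrays.copyOfRange(sig, half, sig.length));
        byte[] der = new byte[2 + r.length + s.length];
        der[0] = 0x30;                              // SEQUENCE tag
        der[1] = (byte) (r.length + s.length);      // short-form length
        System.arraycopy(r, 0, der, 2, r.length);
        System.arraycopy(s, 0, der, 2 + r.length, s.length);
        return der;
    }

    private static byte[] derInteger(byte[] v) {
        int i = 0;
        while (i < v.length - 1 && v[i] == 0) i++;  // drop redundant 00h padding
        boolean pad = (v[i] & 0x80) != 0;           // re-add one 00h if high bit set
        int n = v.length - i;
        byte[] out = new byte[2 + n + (pad ? 1 : 0)];
        out[0] = 0x02;                              // INTEGER tag
        out[1] = (byte) (n + (pad ? 1 : 0));
        System.arraycopy(v, i, out, 2 + (pad ? 1 : 0), n);
        return out;
    }
}
```

In the wrapper provider, a `SignatureSpi` subclass would call this in `engineVerify()` and the reverse conversion in `engineSign()`, delegating everything else to the wrapped `Signature` instance.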

Of course, by now the string build-up of the signature format is getting
really complicated (you could say it is starting to imitate life). In the
end it might be a good idea to replace it with something that can be
selected/verified at compile time (e.g. a list of signature parameters).
For now, it might be a good idea to define constants somewhere for these
kinds of strings.



> 2. Add a new AlgorithmParameterSpec subclass that specifies the format, and
> then call Signature.setParameter before generating/verifying the signature.
>
> I'm not thrilled by this option, because this isn't really a standard input
> parameter, and will cause problems if/when you want to use it with an
> algorithm that does require input parameters (like an RSA PSSParameterSpec)
>
>
I can see these problems as well. I would leave the parameter specs alone;
they are more difficult to use. Maybe restrict their use to those places
where varying input (such as integer values) is actually required, as
opposed to simple choices between algorithm variants.


> 3. Add a higher level DSA/ECDSA Signer API that returns the r and s as
> BigIntegers and leaves the encoding of those bytes to the application.
>
> This is a very clean solution, but is more of a significant API change as
> it would be introducing a new higher level API for generating/validating
> signatures.
>
>
Would that not be a *lower* level API, since it does not do the encoding? I
don't directly see the need. If people want to step outside the algorithms
that are standardized or generally used, they can choose a lower level API
such as Bouncy Castle.
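What option 3's return value boils down to can be shown as a stand-alone helper (hypothetical, not a proposed JDK API): pull r and s out of the DER SEQUENCE as BigIntegers and leave the encoding to the caller. Short-form DER lengths are assumed.

```java
import java.math.BigInteger;
import java.util.Arrays;

// Decode SEQUENCE { r INTEGER, s INTEGER } into the two BigIntegers.
class RawRS {
    static BigInteger[] decode(byte[] der) {
        int i = 2;                                  // skip SEQUENCE tag + length
        BigInteger[] rs = new BigInteger[2];
        for (int p = 0; p < 2; p++) {
            i++;                                    // skip INTEGER tag (0x02)
            int len = der[i++] & 0xFF;
            // signum 1: the content bytes are an unsigned magnitude
            rs[p] = new BigInteger(1, Arrays.copyOfRange(der, i, i + len));
            i += len;
        }
        return rs;
    }
}
```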

Of course, in the end we might want to replace the current JCA with one that
uses the factory principle and immutable Signer and Verifier classes, but
that is an entirely different discussion :)



> 4. Do nothing
>
> Live with it :(
>
>
Nah, if you want to go for 1), then go for it. No current code would break;
it's a standardized format you would be implementing, and other people like
me are already using it.

Regards,
Maarten