
[JWT encoding] JWT claim names instead of or in addition to their verifiable credential counterparts #8

Closed
Sakurann opened this issue Jun 10, 2022 · 23 comments

Comments

@Sakurann
Contributor

Sakurann commented Jun 10, 2022

All JWT-VC examples under the tab named "Verifiable Credential (as JWT)" are not compliant with the Proof Formats section 6.3.
Each property has to be transformed into a JWT claim without being duplicated inside the JWT vc claim:

  • issuanceDate -> nbf,
  • issuer -> iss,
  • credentialSubject.id -> sub,
  • id -> jti

These need to change.

For example, Example 4 should be as below:

{
  "vc": {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1"
    ],
    "type": [
      "VerifiableCredential",
      "UniversityDegreeCredential"
    ],
    "credentialSubject": {
      "degree": {
        "type": "BachelorDegree",
        "name": "Bachelor of Science and Arts"
      }
    }
  },
  "iss": "https://example.edu/issuers/565049",
  "nbf": 1262304000,
  "jti": "http://example.edu/credentials/3732",
  "sub": "did:example:ebfeb1f712ebc6f1c276e12ec21"
}
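The "instead of" transformation described above can be sketched in Python. This is illustrative only: the function name is my own, and the date parsing assumes an xsd:dateTime string with a trailing "Z" as in the example.

```python
import copy
from datetime import datetime

def vc_to_jwt_payload_instead_of(credential: dict) -> dict:
    """Map a VC to a JWT payload, moving (not duplicating) the listed claims.
    Illustrative sketch; not from the specification."""
    vc = copy.deepcopy(credential)  # leave the caller's credential untouched
    payload = {}
    if "issuanceDate" in vc:  # issuanceDate -> nbf (NumericDate, seconds since epoch)
        dt = datetime.fromisoformat(vc.pop("issuanceDate").replace("Z", "+00:00"))
        payload["nbf"] = int(dt.timestamp())
    if "issuer" in vc:        # issuer -> iss
        payload["iss"] = vc.pop("issuer")
    if "id" in vc:            # id -> jti
        payload["jti"] = vc.pop("id")
    subject = vc.get("credentialSubject", {})
    if "id" in subject:       # credentialSubject.id -> sub
        payload["sub"] = subject.pop("id")
    payload["vc"] = vc        # the remainder stays inside the vc claim
    return payload
```

Applied to the credential from Example 4, this produces exactly the payload shown above.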

This transformation rule might change in v2 but can we please change this in v1.1 because it is very misleading?

@Sakurann Sakurann changed the title Incorrect JWT-VC example All JWT-VC examples in tab Verifiable Credential (as JWT) are incorrect Jun 10, 2022
@msporny
Member

msporny commented Jun 11, 2022

@Sakurann wrote:

Each property has to get transformed into a JWT claim without being duplicated inside a JWT vc claim.

Can you please provide the specification text that states the above?

One of the common misunderstandings of that section is missing the text that says "instead of or in addition to"... here's the full spec text:

For backward compatibility with JWT processors, the following registered JWT claim names MUST be used, instead of or in addition to, their respective standard verifiable credential counterparts

While I disagreed with that particular wording at the time (I thought it should only be "in addition to", because allowing both creates interoperability issues), there was insistence on leaving it in to allow implementers to decide the best path forward. This is an issue that I hope the people doing JWT implementations fix in v2.0. My preference would be the "in addition to" language, so that we remove the translation that must be done/undone when going to/from JWTs. The examples in the specification use the "in addition to" approach.

... but perhaps I'm missing something. @Sakurann, did you see the specific text I'm referring to above on your initial read? I'll admit that it's very easy to miss.

@Sakurann
Contributor Author

Sakurann commented Jun 13, 2022

ok, I see what you mean, and I now recall the conversation in Issue #4. I agree that both the examples in the tab Verifiable Credential (as JWT) and those in the proof section would be considered as following the specification text as it is written in v1.1.

I think what caused the confusion is that the JWT-encoded example using "JWT claim names instead of their verifiable credential counterparts" has existed in the specification for longer; I was not aware of any JWT-VC implementations that use the in addition to interpretation (obviously I do not claim to know all JWT-VC implementations out there).

I guess this is a discussion point for v2 :) Thanks for clarifying!

@Sakurann Sakurann changed the title All JWT-VC examples in tab Verifiable Credential (as JWT) are incorrect [JWT encoding] JWT claim names instead of or in addition to their verifiable credential counterparts Jun 13, 2022
@TallTed
Member

TallTed commented Jun 13, 2022

Perhaps the title should instead be —

Should VCDM2 mandate that JWT claim names be used { instead of | in addition to | instead of or in addition to } their verifiable credential counterparts?

My opinion is that in most cases, it should be in addition to, but there was specific argument against only that, and I and others had arguments against strict instead of, so we settled on the imperfect and variously objectionable instead of or in addition to for VCDM1.

@David-Chadwick
Contributor

This is an issue that I would like to see resolved as quickly as possible because we will soon be starting conformance testing for the NGI Atlantic project, and we would like to produce JWTs and tests that conform to the v2 data model.

My preferred approach is that for property values that can be directly copied from credential properties into JWT claims, we mandate instead of (i.e. no duplication) and for property values that cannot be directly copied (currently all the time/date properties) we mandate in addition to.

@TallTed
Member

TallTed commented Jul 20, 2022

My preferred approach is that for property values that can be directly copied from credential properties into JWT claims, we mandate instead of (i.e. no duplication) and for property values that cannot be directly copied (currently all the time/date properties) we mandate in addition to.

The major argument I see against this is its inherent complexity for users and developers. "Is this property one that I MUST or MUST NOT duplicate?" Requiring duplication in all cases is the only way I see to avoid this confusion. (Forbidding duplication in all cases has other arguments against it, including but not limited to the fact that some values cannot be directly copied.)

@msporny
Member

msporny commented Jul 20, 2022

This issue needs to be moved to the vc-jwt spec.

@msporny
Member

msporny commented Jul 20, 2022

The major argument I see against this (modifying the original VC) is its inherent complexity for users and developers.

+1 -- I'll note that the "instead of" proposal transforms the VC into a non-conformant VC when paired with JWTs (removal of mandatory "issuer" field, for one... "issuanceDate" is another example).

@David-Chadwick
Contributor

I can live with duplication. The problem I then see is: if the "duplicates" are different, which one takes precedence? I too would not agree with replacement every time.

Concerning the non-conformance issue regarding removal of values, I have already raised an issue on this wrt selective disclosure.

@TallTed
Member

TallTed commented Jul 25, 2022

The problem I then see is if the "duplicates" are different, which one takes precedent?

This would seem to be addressable by specifying that conforming implementations MUST set both/all "duplicates" to the same value. Yes, non-conforming implementations (including humans) could violate this, but the next conforming tool in the stack should fire an error, so it shouldn't last long.

@OR13
Contributor

OR13 commented Aug 3, 2022

Seems related to w3c/vc-data-model#844

@brentzundel brentzundel transferred this issue from w3c/vc-data-model Aug 3, 2022
@bellebaum

One thing to mention:
The IANA registry for JWT Claims is neither static nor versioned. If it changes, how will this affect the processing of VCs?

@OR13
Contributor

OR13 commented Sep 12, 2022

@bellebaum probably the mapping would have to be redefined; I imagine that would be the case for anything that relied on the new/old terms in the registry.

@OR13
Contributor

OR13 commented Sep 26, 2022

For the record, I am very much against the "instead of" path for mapping a credential to a "Verifiable Credential" in JWT format.

Having implemented both... the "in addition to" path is easier to implement, leads to better interoperability, and imposes less "translation burden" when processing the claims after verification on the Verifier / RP side.

@bellebaum

@OR13 Can you explain how exactly there is "less translation burden"? You would still have to check the values for consistency if you do not want to ignore JWT semantics (if you do, then what's the point of using JWT in the first place rather than e.g. JWS directly?).
But it seems to me that this consistency check requires some form of translation. It may use the same direction for translation on both the issuer and verifier side, but otherwise there seems to be no difference. Please provide some insight as an implementor :)

@OR13
Contributor

OR13 commented Sep 27, 2022

Can you explain how exactly there is "less translation burden"?

There is less implementation burden, because the "in addition to path" requires less code.

You would still have to check the values for consistency if you do not want to ignore JWT semantics (If you do, then what's the point of using JWT in the first place rather than e.g. JWS directly?).

Great question.

I prepared this draft proposing 2 new VC security formats:

VC-JWS and VC-JWE... both are much simpler than VC-JWT.

If you are going to use VC-JWT, you should do so as simply as possible, and the issuer should take responsibility for data validation before signing... so the verifier does not have to "check the issuers math"... the verifier trusts the issuer.

But it seems to me that this consistency check requires some form of translation.

Why? You verified the signature... you trust the issuer... In the generic case, you are done.

If you want to add additional schema validation, or business processing... thats probably out of scope for the WG IMO.

Adding normative requirements at this layer adds complexity, and implementation burden.

It is a thing I would expect some verifiers to do... but I don't think it needs to be normatively required to enable interoperability.

It may use the same direction for translation on both the issuer and verifier side, but otherwise there seems to be no difference.

That's a pretty big burden... You are adding complexity to both the forward and backward mappings... and IMO needlessly, since the issuer is trusted to make the claims about the subject... if they chose to sign... they approved the data they are delivering to the verifier... it is the issuer's fault if it's wrong... It's not a thing that needs a super complex mapping to achieve interop and safety, imo.

Please provide some insight as an implementor :)

VC-JWT requires some mapping... the complexity of the mapping is a risk.

The current mapping is IMO completely broken, and I am proposing we start with "the simplest possible" mapping and expand from there... only when necessary.

instead of is equivalent to in addition to + delete the redundant terms.

You need to do in addition to before you sign.
You need to do delete the redundant terms before you sign.

Then after you verify, you need to map back.

IMO, that is really harmful complexity, and it's not necessary since the issuer should be trusted to produce high quality data, that is easy to consume, the verifier checks the signature and applies policies to the result... the verifier should not have to worry about if terms are mapped properly, or deleted or not... etc...

Relying on in addition to is simpler, it should be the starting point.

Based on my experience with VC-JWT and RDF graphs derived from verification, the in addition to path is the best option, and the instead of path should be deprecated in v2.
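The equivalence above can be sketched in code. This assumes a credential that carries issuer, id, issuanceDate, and credentialSubject.id; the function names are illustrative, not from any spec or library.

```python
import copy
from datetime import datetime, timezone

def in_addition_to(vc: dict) -> dict:
    """Duplicate mapped VC properties into registered JWT claims."""
    dt = datetime.fromisoformat(vc["issuanceDate"].replace("Z", "+00:00"))
    return {
        "iss": vc["issuer"],
        "jti": vc["id"],
        "sub": vc["credentialSubject"]["id"],
        "nbf": int(dt.timestamp()),
        "vc": copy.deepcopy(vc),
    }

def instead_of(vc: dict) -> dict:
    """Equivalent to in_addition_to followed by deleting the redundant terms."""
    payload = in_addition_to(vc)
    for key in ("issuer", "id", "issuanceDate"):
        payload["vc"].pop(key, None)
    payload["vc"]["credentialSubject"].pop("id", None)
    return payload

def map_back(payload: dict) -> dict:
    """The extra step a verifier needs only under 'instead of':
    reconstruct the core-data-model credential from the JWT claims."""
    vc = copy.deepcopy(payload["vc"])
    vc["issuer"] = payload["iss"]
    vc["id"] = payload["jti"]
    vc["issuanceDate"] = datetime.fromtimestamp(
        payload["nbf"], tz=timezone.utc
    ).strftime("%Y-%m-%dT%H:%M:%SZ")
    vc.setdefault("credentialSubject", {})["id"] = payload["sub"]
    return vc
```

The delete-then-map-back round trip is where the extra forward and backward code lives; relying on in addition to alone removes map_back entirely.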

@TallTed
Member

TallTed commented Sep 27, 2022

I think this is the key bit --

instead of is equivalent to in addition to + delete the redundant terms.

You need to do in addition to before you sign.
You need to do delete the redundant terms before you sign.

Since in addition to is part of instead of and delivers broader functionality and compatibility, it makes sense to me to drop instead of.

@bellebaum

There is less implementation burden, because the "in addition to path" requires less code.

That is my kind of feature. Nice to see other people are concerned about this :)

I prepared this draft proposing 2 new VC security formats:
https://github.com/OR13/draft-osteele-vc-jose

After briefly looking at this, it seems to want to achieve something similar to JWT. It is simpler from a standards perspective, but lacks sufficient implementations and corresponding security considerations compared to JWT. Besides this, it seems to suffer from some of the same problems as JWT. E.g. it still has to have an iss claim, just moved to the JOSE header instead of in the body. And it still has to match the issuer claim in the credential.

Why? You verified the signature... you trust the issuer... In the generic case, you are done.

I politely disagree because of the interplay of two reasons:

  1. The verifier's trust in the issuer should be limited to what is absolutely necessary. Does a verifier (in general) trust that the issuer is making correct claims about the subject? Yes, if configured as trustworthy; that is what trustworthy means in this context. Should the verifier trust that the issuer is producing consistent credentials? No / only to the point necessary.
  2. Most people using verifiable credentials and this format for transportation will not implement this stuff directly, but use ready-to-pickup libraries (as they should; Just like with JWT, you cannot expect everyone to know all possible security considerations and best practices by heart).

So let us look at an example application taking a vc-jwt (e.g. via some REST endpoint), verifying it, transforming it to the standard JSON-LD form and adding the verified credential to its knowledge database (or feeding it to some process which requires trustworthy data). This scenario in its generic form should not be too uncommon.
A malicious/compromised issuer which managed to get onto the list of trusted issuers for this application could construct a credential whose issuer claim is an entirely different issuer. An application developer would expect the malicious issuer to be unable to sign such a credential in a way that would not be detected by the vc-jwt library. Yet, with the in addition to approach, the malicious issuer can reference its own keys in the iss claim of a vc-jwt.
So what happens? The application receives the vc-jwt, passes it to its vc-jwt-parsing library (presumably along with a list of trusted issuers) and receives the credential in JSON-LD form along with the promise that "the signature for this trusted iss has been validated". The application will then add this credential to their knowledge base, even though the referenced issuer never really issued the credential.

This kind of vulnerability should IMO be at least mentioned in the security considerations, to increase the chance of library designers and users knowing about it when they need to. Of course, I would prefer it if the vulnerability did not exist in the first place, by encoding the library user's expectations in the normative part of the specification.

@OR13
Contributor

OR13 commented Sep 28, 2022

It is simpler from a standards perspective, but lacks sufficient implementations and corresponding security considerations compared to JWT.

yes, it's a draft to motivate interest / generate a reaction... it's also possible that JWS supports multi-sig, whereas JWT does not... not clear to me yet.

Should the verifier trust that the issuer is producing consistent credentials? No / Only to the point necessary.

I am not sure what consistent credentials means. In the case of JWT, the issuer signs over a serialization, and the verifier trusts the entire serialization after verification... The issuer could flip a coin and change the content each time... and the verifier would be trusting that the issuer was using a fair coin... or not... either way, they trust the issuer.

In the case the issuer uses the coin flip to switch between instead of or in addition to... the verifier still trusts the issuer... this is something the issuer could do today, under spec v1.1.

Most people using verifiable credentials and this format for transportation will not implement this stuff directly, but use ready-to-pickup libraries (as they should; Just like with JWT, you cannot expect everyone to know all possible security considerations and best practices by heart).

Agreed, don't roll your own library... use a trusted source... but that source will only ever be as good as the spec... which is why the spec should be simple to implement and reduce optionality.

After you call verify in a "trusted library"... the payload you get back out should be a standard compliant payload without any additional transformations... because that is simpler than the alternative.

Yet, with the in addition to approach, the malicious issuer can reference its own keys in the iss claim of a vc-jwt.

malicious issuer... I am not sure that the JOSE RFCs or the W3C VC Data Model can protect against this case... It's probably best addressed through the legal process in the appropriate jurisdiction. Cryptography isn't a solution to fraud; it is a means to detect it, and start a legal process.

@bellebaum

Let me rephrase the goal another way:
The main objective of verifiable credentials seems to be this:

The verifier can (without/ with minimal trust) reliably verify which issuer X made which claim Y about which subject Z.

That is why we use cryptography. To minimize trust in this step.
Trust in the issuer only plays a role in the subsequent step: Determining to which degree the verifier is willing to believe claim Y.
(Note that these two steps may not happen one after the other. E.g. ruby-jwt requires you to provide the verification key to a JWT, and this is typically the place where you would look at the iss claim, decide whether to trust this issuer, and if so, fetch the key.)

This also seems to be the way people think about cryptographic signature verification (with known keys) in general: An automatic process to validate that the signed message really originates from the private key owner, with the step of determining trust in this same entity not being part of the process. Thus also not part of any cryptographic library.

After you call verify in a "trusted library"... the payload you get back out should be a standard compliant payload without any additional transformations... because that is simpler than the alternative.

We should be explicit about the "payload". In my opinion, it is the JSON-LD W3C VC which is contained in the JWT, not the RFC 7519 JWT payload.

But then: If the verification library at best looks at the iss claim, but the subsequent determine-trust step only has the issuer claim from the payload to work with, the two steps involved in determining whether to trust a claim by an entity no longer use the same data for their decision. Such a system cannot prove that an entity X really did make a claim Y.

As far as I can see, the only three solutions here are:

  • Clarify that issuer is never verified and thus any decision made by an application about trusting claim Y has to be based on the knowledge of the credential being issued by iss.
    • Note: This would break the semantics of W3C VCs
  • Mandate that iss is to be ignored and the verification key to use must be controlled by issuer instead
    • Might not work with off-the-shelf JWT libraries
  • Mandate that iss in the vc-jwt and issuer in the payload after verification must be consistent
    • Can be implemented using either instead of with transforming the claims or in addition to with a consistency check
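The third option's consistency check might look like this. A minimal sketch: the function name and the handling of an object-valued issuer are my own assumptions, not normative text.

```python
def issuer_is_consistent(payload: dict) -> bool:
    """True when the registered iss claim and the embedded issuer agree.
    Illustrative sketch, not from any specification."""
    issuer = payload.get("vc", {}).get("issuer")
    if isinstance(issuer, dict):  # issuer may be an object with an "id" property
        issuer = issuer.get("id")
    if issuer is None:            # "instead of": no embedded issuer to compare against
        return True
    return issuer == payload.get("iss")
```

In the attack scenario above, a library applying this check after signature verification would reject the token whose iss points at the malicious issuer's keys while the embedded issuer names someone else.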

malicious issuer... I am not sure that JOSE RFCs or the W3C VC Data Model can protect against this case... Its probably best addressed though the legal process in the appropriate jurisdiction. Cryptography isn't a solution to fraud, it is a means to detect it, and start a legal process.

You are right, they can not protect against malicious actors. However, any responsible verifier will need to consider compromise of its trusted issuers as a potential security threat. This is a fact independent of any in addition to or instead of paths taken by an underlying specification. It is simply a matter of the "least privileges" paradigm. And the reaction to misuse may be a legal process in unforeseeable circumstances, but any responsible verifier will have defences to prevent misuse in the first place.
Should a verifier have to take into consideration an issuer capable of issuing credentials which make it past the automated verification process with the issuer not being identifiable in the resulting payload? I would argue that no, this is what issuer in VCs is for: Identifying the issuer for use in cryptographic verification and trust decisions alike.

N.B. I am focusing on iss at the moment, but there may be other claims which might require consistency throughout the credential and transport format.

@OR13
Contributor

OR13 commented Sep 28, 2022

(Note that these two steps may not happen one after the other. E.g. ruby-jwt requires you to provide the verification key to a JWT, and this is typically the place where you would look at the iss claim, decide whether to trust this issuer, and if so, fetch the key.)

Yes, it's pretty common for JWT libraries to not decode content until after verification... but that's in conflict with the VC data model, since you need to decode the header to obtain the key to verify the content... This is a known issue in VC-JWT.

We should be explicit about the "payload". In my opinion, it is the JSON-LD W3C VC which is contained in the JWT, not the RFC 7519 JWT payload.

The current normative text describes the required header members (typ: JWT) and the required payload members (vc or vp) + instead of or in addition to rules...

It is the vc member of the payload that is expected to be a W3C TR defined data model... not the payload itself, at least for the current version of vc-jwt.

In my proposed draft for VC-JWS, the payload IS the W3C TR defined data model.

But then: If the verification library at best looks at the iss claim, but the subsequent determine-trust step only has the issuer claim from the payload to work with, the two steps involved in determining whether to trust a claim by an entity no longer use the same data for their decision.

There is no instruction on how the iss field is used to obtain the verification key in VC Data Model 1.1 for VC-JWT... let that sink in :)

That's clearly something we need to fix.

I like your three proposals for addressing this... but I don't know that they are needed, because:

  1. We MUST know how to obtain a public key to verify a JWT
  2. We trust the public key to be authentic and controlled by the issuer, based on 1.
  3. We trust the issuer not to sign obviously wrong data (aka be malicious)
  4. We verify the JWT with their key and the decoded result is imported directly into the claims data base / used for its intended purpose.

However, any responsible verifier will need to consider compromise of its trusted issuers as a potential security thread.

This is a binary decision, IMO. If the issuer is compromised, I don't think the attacker is going to be nice enough to make obvious data validation errors, I think they are going to make their malicious claims indistinguishable from previously legitimate and authentic ones.

This is a fact independent of any in addition to or instead of paths taken by an underlying specification. It is simply a matter of the "least privileges" paradigm.

I agree, compromised issuer => compromised root user... it's game over... nothing can be trusted, there is nothing to do but add them to a block list, or remove them from an allow list... we shouldn't be trying to filter their content... they are toast.

Should a verifier have to take into consideration an issuer capable of issuing credentials which make it past the automated verification process with the issuer not being identifiable in the resulting payload?

No, the verifier should use iss and kid to resolve a public key, and then trust the payload content of vc.

Or the verifier should treat the issuer as malicious for not following the spec.

N.B. I am focussing on iss at the moment, but there may be other claims which might require consistency throughout the credential and transport format.

Yep, the other claims that the verifier could be forced to check are also reserved per the JWT RFC.

They include:

  • sub
  • iat
  • nbf
  • exp
  • jti
  • nonce

IMO if an issuer makes even a single mistake mapping FROM the VC Data Model TO these JWT fields when applying the in addition to rule...

The issuer is malicious or indistinguishably incompetent and should not be trusted.

... because a data mapping like this, is trivial compared to implementing a digital signature properly, or protecting a private key.

As an analogy... If you were to see a soldier, take off his helmet on a live fire exercise, stick his head in front of his team member's firing line and say the words "trust me even though I am doing this"... I hope we would all know which list he/she belongs on.... no amount of "the field manual (spec) said I MUST NOT do this, but I am doing it anyway" should convince anyone to trust that soldier.

Verifiers have to have absolute trust in the issuer... Or they can't process claims.

There are a lot of things an Issuer could do to destroy that trust... The easiest way to destroy trust would be to map JSON members incorrectly... The more trivial the error, the less confident we all are in the issuer.

Option A

We can cover this case by normatively defining the mapping for in addition to, like so:

{ VC Data Model Credential, privateKey } -> { header, payload } -> JWT
{ JWT, publicKey } -> { header, payload } -> VC Data Model Credential

Bi-directional and lossless.

It is a Verifiable Credential when it has an external proof, just like it says in 1.1.

This is the simplest solution, it builds on the absolute trust assumption, and it requires the least amount of code.

Option B

The next step up is a data validation check, which asserts equivalence, but we have to handle type conversions for date strings (as you rightly suggested!).

Option C

The next step up after that is instead of, with direct translation after verification.... This is basically an extra mapping step required on the verification side, to conform to the core data model... So the data you get after verifying is the data you had before signing.

If the WG thinks that "B" is "worth it" or "C" is "worth it", I expect the normative requirements of VC-JWT to reflect the complete set of steps necessary to achieve them... and for all implementations to cover those requirements in both positive and negative tests... it is doable... but it is, in my opinion "not worth it"... and "not a good idea".
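A sketch of what the Option B equivalence check could look like, assuming the in addition to mapping for iss, jti, sub, and nbf. The names and the returned mismatch list are illustrative, not a proposal for normative text.

```python
from datetime import datetime

def to_numeric_date(iso: str) -> int:
    """xsd:dateTime string -> JWT NumericDate (seconds since the epoch)."""
    return int(datetime.fromisoformat(iso.replace("Z", "+00:00")).timestamp())

def option_b_mismatches(payload: dict) -> list:
    """Names of registered JWT claims whose values disagree with their
    duplicates inside the vc claim. Empty list means the duplicates agree."""
    vc = payload.get("vc", {})
    mismatches = []
    pairs = [
        ("iss", vc.get("issuer")),
        ("jti", vc.get("id")),
        ("sub", vc.get("credentialSubject", {}).get("id")),
    ]
    for claim, duplicate in pairs:
        if duplicate is not None and payload.get(claim) != duplicate:
            mismatches.append(claim)
    # nbf needs the date-string-to-NumericDate conversion before comparing
    if "issuanceDate" in vc and payload.get("nbf") != to_numeric_date(vc["issuanceDate"]):
        mismatches.append("nbf")
    return mismatches
```

The nbf line is exactly the type-conversion burden Option B carries that Option A avoids: the comparison cannot be a plain string equality.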

@selfissued
Collaborator

@OR13 wrote:

Yes, its pretty common of JWT libraries to not decode content until after verification... but thats in conflict with the vc data model, since you need to decode the header to obtain the key to verify the content... This is a known issue in VC-JWT.

Very interesting...

@OR13
Contributor

OR13 commented Nov 16, 2022

Perhaps an object that handles sub better, called subject, would be the approach taken here: https://github.com/notaryproject/notaryproject/pull/148/files#r1024340990

@OR13
Contributor

OR13 commented Jun 11, 2023

This language has been removed; closing this issue.

@OR13 OR13 closed this as completed Jun 11, 2023

8 participants