
BOLT 2: Clarifying requirements for multiple TLV record occurrences of the same type #776

Closed

dr-orlovsky opened this issue May 16, 2020 · 3 comments · Fixed by #777


dr-orlovsky commented May 16, 2020

Currently, nothing in the BOLT specification https://github.com/lightningnetwork/lightning-rfc/blob/master/01-messaging.md#type-length-value-format prevents using multiple TLV records of the same type inside a message TLV stream. However, it is unclear whether this is an intentional feature or just a bug of underspecification. I propose to clarify this point. I am prepared to write a PR for this, but I am not sure what the original design intention was, so I will wait for your clarification. Also, if this is an intentional feature, it is then unclear why the init message takes an array of chain_hash values as a single TLV record.

@dr-orlovsky dr-orlovsky changed the title BOLT 2: Clarifying TLV record occurences BOLT 2: Clarifying requirements for multiple TLV record occurrences of the same type May 16, 2020
@cfromknecht
Collaborator

@dr-orlovsky good catch, a PR is definitely welcome! The intention is that duplicate tlv records are forbidden. Reading this again, the requirement should be corrected from

 - MUST order tlv_records in a tlv_stream by monotonically-increasing type.
... 
 - if decoded types are not monotonically-increasing:

to

 - MUST order tlv_records in a tlv_stream by strictly-increasing type.
...
 - if decoded types are not strictly-increasing:

This is actually what is asserted by the test vectors, and AFAIK implemented by
all implementations.

The main reason relates back to the fact that if a decoder doesn't recognize a field, it can't be certain whether there should be only one, or whether multiple are permitted. Asserting this would require either some knowledge of what the field encodes, or a scheme from which it can be inferred knowing only the type, e.g. (a silly example) only prime types can have multiple records. The first defeats the purpose of the proposal, and the latter is cumbersome in practice and would further fragment the type space beyond the existing even/odd rule.

Smaller wins: permitting multiple of the same type is less efficient on the wire due to the overhead of redundant type-length pairs. It is also less efficient when allocating memory for the decoded objects, e.g. one can't allocate the exact size of a vector upfront and will incur more copying overhead when appending each item individually.
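The strictly-increasing rule needs only a single comparison per record while decoding, since it rejects both out-of-order and duplicate types at once. A minimal sketch (not from the spec or any implementation; the real BOLT #1 format encodes type and length as BigSize varints, simplified here to single bytes):

```python
def decode_tlv_stream(data: bytes) -> dict:
    """Decode a simplified TLV stream, enforcing strictly-increasing types.

    Simplification: type and length are each a single byte here; real
    BOLT #1 tlv_streams encode both as BigSize varints.
    """
    records = {}
    last_type = -1
    i = 0
    while i < len(data):
        if i + 2 > len(data):
            raise ValueError("truncated tlv record header")
        t, length = data[i], data[i + 1]
        i += 2
        if t <= last_type:
            # one check rejects both duplicates (t == last_type)
            # and out-of-order records (t < last_type)
            raise ValueError("types not strictly increasing: %d" % t)
        if i + length > len(data):
            raise ValueError("tlv value exceeds stream length")
        records[t] = data[i:i + length]
        i += length
        last_type = t
    return records
```

With a merely monotonic (non-strict) check, `t < last_type`, a stream containing the same type twice would pass, which is exactly the ambiguity described above.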

Hopefully that clarifies some of the intention behind the uniqueness constraint :)

@dr-orlovsky
Contributor Author

The PR is there. I've used your wording to provide both the rationale and the changes to the spec itself.

@t-bast
Collaborator

t-bast commented May 18, 2020

The rationale already states that "The monotonicity constraint ensures that all types are unique and can appear at most once."
But it's true that monotonically-increasing doesn't actually guarantee that; it was an error on our part.
