Replies: 17 comments 72 replies
-
The process of verifying an identifier requires message creation, routing, delivery, and interpretation. If a message originates in KERI, for example, it has to be routed and delivered to some endpoint somewhere. Routing and delivery are things that all layer-one protocols share, and the mechanisms need to be shared as well (a good argument for the hourglass model). The differences between a KERI message and one from DWN or some other protocol matter only in the creation and interpretation operations, which we can delegate to the endpoints. Intermediary systems need not, and should not, be aware of the contents of the payload, and that includes the protocol used to create it. The interface to the TSP can be very simple: give me a message with a destination and I'll make sure it all gets there. The answer to your question, then, is that (I believe) the TSP rides on top of the protocols you name in layer 1 and provides a very simple interface for a single operation, making the TSP layer very thin but very flexible. IP, for example, doesn't even guarantee delivery; TCP, in the next layer up, does that.
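Read literally, that single-operation interface could look something like the following minimal sketch. All names here (Message, TspTransport, deliver) are hypothetical, invented purely for illustration; the real routing and delivery machinery is stubbed with an in-memory registry.

```python
# A minimal sketch of the "very thin" TSP interface described above.
# All names are illustrative, not taken from any spec.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Message:
    destination: str  # endpoint identifier; its meaning is up to the endpoints
    payload: bytes    # opaque to intermediaries: KERI, DWN, or any other protocol

class TspTransport:
    def __init__(self) -> None:
        # In-memory stand-in for real routing and delivery machinery.
        self._endpoints: Dict[str, Callable[[bytes], None]] = {}

    def register(self, destination: str, handler: Callable[[bytes], None]) -> None:
        self._endpoints[destination] = handler

    def deliver(self, msg: Message) -> None:
        # Routing and delivery are shared machinery; the payload is never
        # inspected, so any layer-above protocol can ride on it unchanged.
        self._endpoints[msg.destination](msg.payload)
```

An endpoint registers a handler for its identifier; intermediaries only move opaque bytes, which is the whole point of keeping the waist thin.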
-
Rather than just a packing and routing protocol, perhaps the trust spanning layer should comprise the two operations that everyone will need: create a new ID, and verify a presented ID (with options to indicate the far end of a trust chain -- more later). This will alleviate the need for users to validate these two operations themselves and will allow them to "trust the network" to perform the operations correctly and to know that the results returned can be trusted (this is the "trusting the mechanism" part of a trust decision). On issuance, the issuer's wallet will determine the layer 1 utility to use for creating new VIDs. On verification, the VID for the presented claim will determine which layer 1 utility to use. So putting these two operations into the spanning layer makes some sense. Really, the spanning layer is these two operations, since TCP or UDP will use the IP layer to do the actual packing and routing.
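As a rough illustration of what those two operations might look like as a programmatic surface (the UTILITIES registry and the "did:&lt;method&gt;:" prefix convention below are assumptions for illustration, not from any spec):

```python
# Hedged sketch of the two proposed spanning-layer operations.
from typing import Callable, Dict, Tuple

# Each layer 1 utility registers its own (create, verify) pair.
UTILITIES: Dict[str, Tuple[Callable[[], str], Callable[[str, bytes], bool]]] = {}

def create_vid(utility: str) -> str:
    # On issuance, the issuer's wallet picks which layer 1 utility mints the VID.
    create, _ = UTILITIES[utility]
    return create()

def verify_vid(vid: str, evidence: bytes) -> bool:
    # On verification, the presented VID itself determines the utility.
    method = vid.split(":")[1]  # e.g. "did:keri:..." -> "keri" (assumed convention)
    _, verify = UTILITIES[method]
    return verify(vid, evidence)
```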
-
Neither of the diagrams above matches my mental model. I would say, rather, that each of the "forerunners" has that label because it has features that the final TSP needs. I also hope that each of these protocols evolves toward TSP, in the sense that people starting from one of those protocols end up feeling that it was an easy, obvious evolution to arrive at the eventual TSP.
-
In terms of evaluating the "forerunner" protocols, I highly recommend that TSPTF members read this Medium article by @dhh1128 called Sentries, Confessionals, Vaults, and Envelopes. To quote from the opening paragraphs:

[quoted passage omitted]

IMHO, Daniel's "extended metaphor" is a brilliant framework for understanding the different purposes and designs of these four protocols, and thus exceedingly helpful as we lay the conceptual foundation for the design of the TSP.
-
I have a second very strong recommendation for all TSPTF members: take an hour in the next few days (ideally before our next TSPTF meetings this coming Wednesday) to watch this presentation @SmithSamuelM made about the Hourglass Model at last Wednesday's APAC meeting of the Trust Registry TF. Sam uses a series of diagrams to walk through the history of the Hourglass Model and show precisely how the IP spanning layer works in the TCP/IP stack, with a special emphasis on how the IP layer is (and is not) involved with discovery and routing. Sam gave that talk to set the stage for the TSP proposal he will be presenting in this Wednesday's (Feb 8) TSPTF meetings, so it will be extremely helpful to anyone attending that meeting to have watched Sam's talk first. (Note: I will be cross-posting this message in GitHub, Slack, and email to make sure everyone sees it.)
-
@andorsk Your diagram showing a trust registry protocol illustrates an important point about trust. I believe there are two fundamental types of trust that need to be conveyed. I call these attributional trust and reputational trust. In my conception of a trust spanning layer, the only trust in the spanning layer is attributional trust. A higher layer can convey reputational trust, and a trust registry is one type of reputational trust; there are other mechanisms for conveying reputational trust as well. So these all would expand out above the spanning layer. This keeps the spanning layer thin while supporting reputational mechanisms.
-
@talltree your diagram showing KERI sitting above a TSP is incorrect. KERI is a concrete example of a TSP, whereas DIDComm is more about discovery, routing, and transport than it is about a TSP. And DIDComm does many things: in my opinion it is really several protocols from different layers with different purposes. For example, a routing stack has routing-specific protocols that sit above the TSP, but only for router applications, not trust applications. This is where much confusion exists.
-
@andorsk The devil is in the details. The talk I gave that Drummond references sets the stage, and the talk I am about to give will go into detail on how to answer these questions. It's not as simple as a yes or no (forerunner or not), and I think we should be viewing the concept of a forerunner very loosely and not try to diagram them just yet. The nuances are too many.
-
The field of automated reasoning, which is a subset of what many now call AI, is all about converting human decisions into something a computing machine can do. It requires what we in the field call "reasoning in an environment of uncertainty". But decisions made using appropriate techniques can be highly reasonable and highly certain. So it is not at all beyond the scope of "trust over the internet" to consider upper layers that manage and compute reputational trust. Indeed, applying my expertise in automated reasoning to solve the reputational trust problem is what originally drew me to this field, now 8 years ago and counting. That said, it's pointless IMHO to even consider automating reputational trust until one has solved the attributional trust problem. And the latter is what a TSP is all about.
-
Good comments. For example, the condition of being burned by someone with a "similar" reputation: I am not sure what you mean by "similar". I can trust multiple people for being honest, and in general, if one of them proves not to be honest, I don't assume that all honest people are dishonest. Of course, if I ascribe other attributes like race, age, or cultural background as predictors of honesty, then yes, I could infer, based on a false attribution, that members of a class inherit or disinherit honesty by membership in the class. But that would be a poorly designed reputation system in my view.

More realistic is reputation by reference. In this case I trust someone because someone else whom I trust referred them to me. In general, good reputation systems are reflexive: anything I do to confer reputation on someone else simultaneously confers reputation on me. So there is no free lunch. If I refer someone to someone else and that referral does not live up to my referral reputation, then my reputation as a referrer will suffer proportionately. This makes referrals reflexive and provides a self-normalizing feedback loop.

If you think of reputation more like a control system (I am a published expert in intelligent control systems, which merge automated reasoning and control theory), then you see reputation as a way to modulate interactions between persons, where their actions adaptively train the modulation system via nested feedback loops. So yes, AI modeled as control systems can indeed take into account the reflexive nature of human class reputation systems. And like any real-world system, you have uncertainty that must be properly modeled. I have written on formal reputation system design (see the OpenReputation white papers and presentations in this repo: https://github.com/SmithSamuelM/Papers).

My definition of a reputation is as follows: a reputation is a contextual predictor of future behavior that modulates an interaction (transaction). As a predictor, it measures the risk of engaging in a transaction. One might then weigh the cost-benefit of such a transaction in light of the predictor. A good rule of thumb: don't risk more in a transaction with an entity than that entity risks by losing its reputation should it cheat you. If loss of reputation is not enough, then you have to make that entity put more skin in the game, or you don't take the risk.

Human communities traditionally have very expensive reputation systems, and loss of reputation can be extremely costly. Online we don't yet have anything as good, not because we can't, but because we need secure attribution first. When I can't know you, I can't trust you.
-
"I promise I will explain, but I want to do it concisely and carefully; I don't have time for that today."

I have learned that the wait is always worth it, Daniel.

Thanks, all, for this excellent discussion.
On Wed, Feb 8, 2023 at 15:36 Daniel Hardman wrote:

Your careful description of the principle matches what I got out of the Beck paper when I studied the formal logic myself earlier this week. Perhaps it will save some other people the study time. Thank you.

I now understand what you intended by "universal," which is great. However, your comment does not resolve my concerns. In formal debate, there are direct challenges to an argument on logical or evidentiary grounds, and then there are challenges that step back and question what's called topicality -- whether we are even debating the same question, with compatible assumptions. My concerns are more along the latter lines. I promise I will explain, but I want to do it concisely and carefully; I don't have time for that today.
--
Darrell O'Donnell, P.Eng.
-
This is a good way of describing the concept of "end-verifiability" that a secure identifier system overlay provides. The path doesn't matter because the end can always verify, via the signature, that the data, message, or information came from the identified source. This allows a TSP to use untrustable routing networks, which expands the support and broadens adoption, which is what the hourglass principle tells us to do. That does not mean we can't also have trustable routing in the support as well. The more the better in the support of the TSP.
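For concreteness, here is what end-verifiability reduces to at the endpoint, sketched with Ed25519 from the Python `cryptography` package. In a real system the verifying key would be bound to the source's cryptonym (e.g., via a KERI key event log); that binding is glossed over here.

```python
# Minimal end-verifiability sketch: verification depends only on the
# message, the signature, and the source's key -- never on the route.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # held by the source
verifying_key = signing_key.public_key()     # known to the endpoint

message = b"payload that may traverse any untrusted route"
signature = signing_key.sign(message)

# At the receiving endpoint, after any number of untrusted hops:
try:
    verifying_key.verify(signature, message)
    print("verified: authentic to the source key")
except InvalidSignature:
    print("rejected: not from the claimed source")
```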
-
I will naively weigh in here with a thought that has been bubbling in my head. It hasn't fully formed (they rarely do in my skull) but I'll share my thinking.

Given the communications layer has largely been settled over the internet as TCP or UDP, both over IP, I keep thinking the following:

* TCP works well for most applications because it handles the hard stuff: retrying, packaging, etc.
* UDP works for specialized use cases where the trade-offs of TCP impact the need that UDP solutions serve.

In my mind I want a TCP equivalent that handles the main messaging, retry, etc. for the majority of things that I see us needing. However, I also recognize that UDP cases exist.

So the question to me is this: what is the real "thin waist" of the internet? Given that an address is useless without a communication approach, I'd argue that the TCP/IP and UDP/IP components comprise the thin waist...
On Thu, Feb 9, 2023 at 11:54 AM Daniel Hardman wrote:
"one could argue that the fragmentation feature of IP could be removed and make the layer thinner if it were not for backwards compatibility concerns"

So we have a winning spanning protocol that isn't pure and contains some pragmatic compromises, and it still won handily. Why? Because it was a noticeably better spanning protocol than its alternatives. *That* -- not perfection in alignment to a theoretical model -- is the true predictor of adoption. Insisting on a single feature is not only not what Beck or Clark advised, it is actually not validated by any history. Not by IP, and not by the Unix kernel. (It was the Unix kernel, not its shell, that Beck and Clark cited, BTW.)

In his analysis of the hourglass theorem, Beck himself points out (page 53) that reflexively pushing features out of the spanning layer into higher-level application constructs doesn't necessarily optimize weakness:

"Moving function upward in a layered system can have the effect of removing responsibility for particular functionality required by applications from lower layers. This leaves higher layers free to implement their true requirements without imposing costs or other artifacts due to inappropriate functionality being implemented by lower layer services. However, when applied to the spanning layer, end-to-end arguments do not necessarily lead to a design that is logically weaker, and thus has more possible supports."

Instead of doubling down on the logical purity of authenticity, I invite you to consider a different question for a minute:

*Given two spanning layers that span an identical set of supports, but which differ in strength (one supports more applications than the other), which one is better?*

The answer, I submit, is NOT "whichever one is weaker." The whole point of making something weaker is to increase spannable supports, and I've just said that the difference between the two is irrelevant in this respect.

The answer, I submit, is also NOT "whichever one is stronger." That's too simple. First we would have to evaluate whether the applications atop each one are actually the applications that we intend to support. Beck himself points this out: "The balance between more applications and more supports is achieved by first choosing the set of necessary applications N and then seeking a spanning layer sufficient for N that is as weak as possible. This scenario makes the choice of necessary applications N the most directly consequential element in the process of defining a spanning layer that meets the goals of the hourglass model." (p. 52)

If we had a crisp idea of what kinds of applications we want to enable, then we would tend to say that the stronger of the two *within the set of applications we want* is better. But even THAT rule isn't quite right, because as Beck himself points out, there are other considerations, like simplicity, generality, and resource limitation. Clark points out another consideration like this, which is political rather than technical...

So, I claim that the argument for pure authenticity is reductionist: full of mostly true statements -- and I'm a big fan of authenticity as Sam has implemented it. I just think we need some nuance here, not just mathematical elegance.

I further claim that we need a deep discussion about what kind of applications we want to enable. Sam's proposal treats all possible applications as uninteresting and equal. That's not what Beck said to do.
-
Verification is up to the endpoint. Thus the message must be verifiable (verifiable authenticity to the source cryptonym), or more explicitly, authenticatable. The sender can't ensure that a recipient endpoint actually verified anything, unless someone wants to impose some very hard to build full-duplex, fully acked channel with a proof of verification sent back to the sender. Please no. We want async datagrams as the default.

@darrellodonnell Many protocols provide reliable datagram services on top of UDP (RAET, Reliable Async Event Transport, is one that I wrote, for example). In my experience it is much easier to make UDP reliable than it is to make TCP scalable. But both are independent feature branches in the IP stack and therefore should not be combined to thicken the IP layer.

When we try to think of a protocol stack in the ISO OSI way, we have a fixed set of layers and we have to apportion all features to one of those 7 layers. A spanning layer protocol looks like a tree, and some branches are longer than others, i.e., have more or fewer layers. Some protocols sit at the top of many layers and some sit at the top of only one or two. The many thin layers give a designer freedom to make each layer optimally thin, rather than shoe-horning features in because they have to go in one of a fixed set of layers. This allows the protocol stack to grow organically, which leads to more rapid adoption because it's easier to add features as needs demand: usually just add a thin layer, instead of breaking an existing layer to thicken it.
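To make the reliable-datagrams-over-UDP point above concrete, here is a toy stop-and-wait sender. This is not RAET's design; it only illustrates that reliability can live in its own thin layer above UDP instead of thickening the IP layer.

```python
# Toy stop-and-wait reliability over plain UDP datagrams.
import socket

def send_reliably(payload: bytes, addr: tuple,
                  retries: int = 3, timeout: float = 1.0) -> bool:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        for _ in range(retries):
            sock.sendto(payload, addr)        # fire the datagram
            try:
                ack, _ = sock.recvfrom(1024)  # wait for an app-level ack
                if ack == b"ACK":
                    return True               # delivered and acknowledged
            except socket.timeout:
                continue                      # no ack: retransmit
        return False                          # give up after N attempts
    finally:
        sock.close()
```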
-
Amen! I have spent the better part of the last 3 years trying to shake that OSI 7-layer model. I don't have to catch myself as much now when I get tempted to push features into slots in a rigid layering.

With regard to your diagram: in the old days (circa IP), data in a layered protocol like IP consisted of C structs that provided self-framing headers. One of the fields in the C struct was the length of the header plus payload. Stacking layers meant nesting a layer-specific self-framing header: upper layers nested inside lower layers. So when a protocol referred to data formatting and encoding, it often meant the payload, not the headers. But with a trust protocol, cryptographic primitives need to be included in the "effective" headers. So CESR is used in the headers, not just the payloads. That, coupled with CESR's grouping composition, means that CESR can provide self-framing payloads, etc.
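For readers who never lived through the C-struct era, the self-framing pattern described above looks roughly like this (the field layout is invented for illustration):

```python
# Sketch of a self-framing header: a total-length field lets each layer
# delimit "header + payload" with no external framing.
import struct

HDR = struct.Struct("!BH")  # 1-byte version + 2-byte total length, big-endian

def frame(version: int, payload: bytes) -> bytes:
    return HDR.pack(version, HDR.size + len(payload)) + payload

def deframe(data: bytes) -> tuple:
    version, total = HDR.unpack_from(data)
    return version, data[HDR.size:total]

# Stacking layers = nesting: an upper layer's whole frame rides as the
# opaque payload of the lower layer's frame.
packet = frame(1, frame(2, b"application bytes"))
```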
-
@JoOnGT I am writing up a description of the header and payload model used by the CESR-enabled KERI and ACDC protocol layers. I think this will provide a very useful example as a point of departure for discussion and will crystallize some of the design choices, if for no other reason than that I can explicate in detail why we made those design choices. These protocols are now in production, so the design example is not theoretical but has been battle-tested somewhat. Many of the features you are talking about can fit in layers above. AFAIK the header structure is flexible enough to allow anything we want or need to extend the functionality. The unique features of the CESR encoding for packets, including headers, payloads, and attachments (signatures, receipts, references to anchors, etc.), include:

[feature list omitted]
-
@JoOnGT One payload data layer sitting above the TSP is the ToIP-fostered ACDC protocol. ACDC stands for Authentic Chained Data Containers. It is a container for payload data with some specific features like chaining, revocation registries, and different types of disclosure, including selective disclosure (it is a type of VC), but it takes a layered approach: factual claims in whatever format one wants (JSON-LD or JSON) can be referenced in the payload of an ACDC and conveyed in an authentic manner. ACDCs are opinionated about having very strong security. Using a container with strong authentication means that a less secure W3C VC v1.1 can be conveyed over the wire with strong security using an ACDC, or in some cases replaced by an ACDC. The W3C VCWG (VC spec version 2) is, this week at the Miami face-to-face, going to consider a "big tent" approach to VCs that would open the door to a TSP approach to VCs. Hopefully the community supports it.
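As a toy illustration of that layering (this is not the ACDC format; all field names are invented), a less secure VC payload can be carried opaquely under a strong issuer signature:

```python
# Toy "authentic container" sketch: the wrapped VC's own format is
# untouched; the container layer only adds strong authenticity.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()

vc = {  # the wrapped claim; JSON-LD or plain JSON, as the issuer prefers
    "type": ["VerifiableCredential"],
    "credentialSubject": {"id": "did:example:123"},
}
body = json.dumps(vc, sort_keys=True).encode()

container = {
    "payload": body.hex(),               # opaque to the container layer
    "sig": issuer_key.sign(body).hex(),  # strong authenticity over the wire
}
```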
-
For the candidates for TSP protocols:
Is the TSP supposed to sit on "top" of DWNs, KERI, and DIDComm, or is it the responsibility of these protocols to figure out how to conform with the TSP? How will it interact with these candidates? Looking for some clarification and discussion around this.
I.e.:

[two alternative layering diagrams were attached here]