Token Curated Registries: The New Search?
Token Curated Registries provide a market mechanism for content curation that could complement centralized curation services. Tokens are used as economic incentives to curate lists or to rank the information in such lists, including content feeds in a social network or recommendation algorithms for e-commerce platforms.
Listings and registries have proven to be a useful tool to organize, rank, and share information. We use lists for our daily decision-making processes, such as “best books,” “best restaurants,” “top universities,” “tokens to invest in,” “best movies,” “best classic movies,” “best horror movies,” “best-rated products of a certain category of an e-commerce platform,” or “best budget or luxury hotel in a region.” These lists or registries can be private or public and are usually centrally managed. One can use whitelists or blacklists to filter relevant information. Any newspaper or magazine is also a curated list of relevant information. Whether daily news or a fashion magazine, the content in these publications is carefully selected and sorted, highlighting more important information on the cover and the first pages rather than in the middle or at the end. Such filtering is the result of a third-party curation process, which is useful because readers save a lot of time researching and filtering information themselves. The curation process is outsourced to editors who are trusted to curate with diligence.
Ever since the emergence of the Internet, such listings, rankings, or recommendation services have become more important. The Internet has radically reduced the costs of publishing and sharing information. As a result, it has become difficult to filter meaningful information from all the online noise. The first online lists were websites that collected and sorted information from other websites to help users search for relevant information on the web. Early “search engines” were manually created by people who were paid to categorize online content like books on library shelves, but this process was not scalable. The sheer information load triggered a new form of creating public lists by applying (i) machine learning algorithms and (ii) wisdom-of-the-crowd mechanisms to derive meaningful lists and rankings. Google was one of the first search engines to introduce algorithmic search, and Tripadvisor introduced “wisdom of the crowd” solutions to produce listings such as “best hotel in the region,” aggregating a collection of personal recommendations. Such third-party curation, whether public or private, algorithmic or wisdom-of-the-crowd based, is prone to censorship and manipulation, as it is centrally managed.
Users of online services have to trust that the Internet platform providing such curation services acts honestly, and hope that the platform’s judgement of restaurants or hotels aligns well with their own tastes. In privately managed lists, the owner of the list can arbitrarily add or remove list members or require payments from people who want to be listed. The ranking methods are often undisclosed, can be gamed, or might not coincide with the taste or judgement of the users. Public lists such as Tripadvisor can also be manipulated by a flood of pseudonymous users who spam the list, post fake ratings, or socially engineer the list. To mitigate these problems of collectively curated lists, semi-centralized list moderators are often appointed to manually intervene, which is a point of centralization and does not scale well. Facebook, for example, outsources most of its manual content moderation to low-income countries like the Philippines to save costs.
The methods of third-party curation and recommendation service providers are, for the most part, undisclosed, resulting in opaque filtering algorithms. The curation tasks involve maintaining whitelists or blacklists, managing data feeds, filtering comments, or providing context-specific recommendations. Machine-learning algorithms derive their suggestions by correlating personalized user data with statistical data on the behavior of all other users. E-commerce platforms such as eBay or Amazon use machine learning to rank search results and, once you select an item, to suggest other products that might be relevant to you. Video streaming services such as Netflix use machine learning to suggest movies that might be relevant to you, while music platforms like Soundcloud or Spotify suggest playlists that you might like. Social media platforms such as Twitter, Facebook, or Instagram use machine learning to rank the posts and ads in your data feed. However, only a handful of companies control the curation process of the search engines, social media networks, and other digital services we use today.
Token Curated Registries (TCRs) are a market mechanism introduced by Mike Goldin for collectively curating lists in the absence of third-party coordination. Tokens provide an economic incentive to curate lists that are valuable to consumers. Transactions are settled and cleared autonomously by a distributed ledger. TCRs are designed to represent a public good. Anyone can participate.
Prerequisites: In order to set up a TCR, one needs (i) a defined purpose for the list, (ii) a native token, and (iii) a governance mechanism that ensures all token holders are incentivized to maintain a high-quality list.
Stakeholders: (i) candidates, who provide content for the list; (ii) consumers, who use the list; and (iii) curators (the token holders), who collectively manage the quality of the list.
Process: Candidates have to deposit a certain amount of tokens to apply for the list. Any token holder can participate in the curation process and has a certain amount of time to review a candidate’s application. If they think that the application should be excluded, they can challenge the listing. To do so, they must deposit a certain amount of tokens into a smart contract, locking part of their network stake. Once a challenge has been initiated, all other token holders can vote by also staking their tokens. If, at the end of the voting period, the application is rejected by the majority of token holders, the applicant’s deposit is split between the challenger and all token holders who voted to reject the application. Otherwise, the candidate’s listing is added to the registry, and the smart contract distributes the challenger’s deposit between the applicant and all token holders who voted to accept the listing. It is advised that TCRs divide the voting process into two phases, a commit phase and a reveal phase. Results are only openly broadcast after the commit phase is completed, to avoid “coordination attacks,” where one curator could influence the voting of other curators. Tokens are locked during the commit phase and unlocked during the reveal phase.
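To make this flow more concrete, the following Python sketch simulates the application, challenge, and reward-distribution steps. It is a simplified, hypothetical model under stated assumptions: a fixed minimum deposit, a 50/50 split of the loser’s deposit between the winning counterparty and the winning voters, and token-weighted majority voting. Names such as `Registry`, `apply_listing`, and `DISPENSATION_PCT` are illustrative and not part of Goldin’s specification.

```python
from dataclasses import dataclass, field

MIN_DEPOSIT = 100          # tokens a candidate or challenger must stake (illustrative)
DISPENSATION_PCT = 0.5     # share of the loser's deposit that goes to the winning counterparty

@dataclass
class Listing:
    owner: str
    deposit: int
    listed: bool = False   # becomes True once the listing survives a challenge

@dataclass
class Challenge:
    challenger: str
    deposit: int
    votes_keep: dict = field(default_factory=dict)    # voter -> staked tokens (keep the listing)
    votes_remove: dict = field(default_factory=dict)  # voter -> staked tokens (remove the listing)

class Registry:
    def __init__(self):
        self.listings = {}    # listing name -> Listing
        self.challenges = {}  # listing name -> open Challenge

    def apply_listing(self, name, owner, deposit):
        assert deposit >= MIN_DEPOSIT, "deposit below minimum"
        self.listings[name] = Listing(owner, deposit)

    def challenge(self, name, challenger, deposit):
        assert deposit >= MIN_DEPOSIT, "deposit below minimum"
        self.challenges[name] = Challenge(challenger, deposit)

    def vote(self, name, voter, stake, keep):
        ch = self.challenges[name]
        (ch.votes_keep if keep else ch.votes_remove)[voter] = stake

    def resolve(self, name):
        """Token-weighted majority decides; the loser's deposit is split between
        the winning counterparty and the token holders who voted on the winning side."""
        listing = self.listings[name]
        ch = self.challenges.pop(name)
        if sum(ch.votes_remove.values()) > sum(ch.votes_keep.values()):
            pool, winners, counterparty = listing.deposit, ch.votes_remove, ch.challenger
            del self.listings[name]                   # challenge succeeded: delist
        else:
            pool, winners, counterparty = ch.deposit, ch.votes_keep, listing.owner
            listing.listed = True                     # challenge failed: listing stays
        rewards = {counterparty: pool * DISPENSATION_PCT}
        remainder, total = pool * (1 - DISPENSATION_PCT), sum(winners.values()) or 1
        for voter, stake in winners.items():          # pro-rata reward for winning voters
            rewards[voter] = rewards.get(voter, 0) + remainder * stake / total
        return rewards

# Example: Alice applies, Bob challenges, Carol and Dan vote with their stake.
tcr = Registry()
tcr.apply_listing("alice-restaurant", "alice", deposit=100)
tcr.challenge("alice-restaurant", "bob", deposit=100)
tcr.vote("alice-restaurant", "carol", stake=300, keep=True)
tcr.vote("alice-restaurant", "dan", stake=100, keep=False)
print(tcr.resolve("alice-restaurant"))  # {'alice': 50.0, 'carol': 50.0}
```

In this example run, the challenge fails, so Bob’s deposit is shared between Alice (the applicant) and Carol (the only token holder who voted to keep the listing).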
Token: Tokens are designed to be transferable and fungible (all tokens are designed to be equal). It is assumed that each list needs its own token to give a reliable signal of the quality of the list and the value of the network. The price of a token is a result of supply and demand and, as such, is assumed to be a performance indicator for the collective actions of all token holders. If a TCR accepted a non-native token as a means of payment, such as BTC or ETH, the collective performance of the token holders would not reflect the performance of the list, and the economic incentive mechanisms would therefore not work.
Mechanism Design: The incentive mechanism needs to align incentives in such a way that it pays off for token holders to vote truthfully and does not pay to cheat the system. Candidates who believe they will be rejected are not likely to apply; otherwise, they would lose their tokens. Token holders, on the other hand, could theoretically reject every candidate, but that would collide with their interest in increasing the value of their tokens. An empty list is not interesting for anyone. The profitability of all stakeholders and the quality of the list need to be well aligned so that objective, high-quality lists can be produced.
Design assumptions: The concept of a TCR is based on the assumption that a free market for listings could provide a better mechanism for quality curation than centrally managed lists and data feeds. It is also assumed that economic actors want to maximize their profits and act rationally at all times. Candidates are assumed to have an interest in being included on the list for advertising purposes, and to be willing to pay a listing fee, as placement on such a list serves as a validation of the quality of their services. Curators, who have a stake in the network in the form of network tokens, would make more money from well-maintained lists with a lot of traction, which means that they have an incentive to curate the list truthfully. The vote of token holders is proportional to the number of tokens they own, or stake. Proportional voting rights are based on the idea that those who have the most at stake are most incentivized to act in the network’s best interest. Consumers, on the other hand, seek high-quality information and use lists to make decisions. If the quality of the listing is good, consumers will be interested in consulting the listing, which makes it more attractive for candidates to apply to be listed and strengthens the overall economy of that list.
The economics behind the registry need to be designed to account for all possible attack vectors. A number of attack vectors have been identified, such as “trolling,” “madman attacks,” “registry poisoning,” or “coin flipping.” A solution to each of these potential attacks needs to be reflected in the governance rules of the TCR to guarantee high-quality listings.
Trolls might try to add content to the list that does not satisfy the list’s criteria. Such trolling also happens on current Web2 platforms, such as Amazon, where adding reviews does not cost anything except the effort of writing the review. As a solution, the mechanism needs to make it expensive for a troll to add low-quality listings; losing one’s deposited listing fee is such a mechanism. But even if the listing fee is high enough to deter most users, an attacker with non-economic motives, or an attacker with a lot of funds at their disposal, might still be able to flood the system with irrelevant listings. One could raise the minimum deposit, but this might exclude eligible applicants with little funds at their disposal from applying for a listing, thus creating an economic barrier to entry into the system.
Registry poisoning refers to the problem of what happens to a listing that was once accepted for good reasons, but whose quality has since declined so that it no longer meets the listing requirements. The mechanism needs to be designed to incentivize token holders to find and challenge listings that “poison” the registry.
Free riding: Token holders could decide to free-ride on the system and not actively participate in any of the voting processes, hoping that other token holders will maintain the quality of the list, and therefore also the value of their token holdings.
Coin flipping: There are no direct penalties for making bad decisions, only indirect long-term ones that might be reflected in the token price once the quality of the list goes down. A profit-maximizing token holder might find it more rational in the short term to cast a random vote (coin flipping) instead of investing the time to make a rational assessment of a potential listing. It is assumed that a certain distribution of votes between “coin flippers” and “truthful token holders” can maintain the integrity of the list in spite of such behavior, but if too many curators decide to free-ride the system by coin flipping, the quality of the list could be jeopardized.
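The claim about the tolerable share of coin flippers can be illustrated with a rough Monte Carlo sketch. All numbers are illustrative assumptions (equal stake per voter, truthful voters always judging correctly), not results from any deployed TCR:

```python
import random

def correct_outcome_rate(n_voters=100, flipper_share=0.4, trials=10_000):
    """Estimate how often a simple majority matches the 'true' verdict on a listing
    when a share of voters flips a coin instead of assessing the listing.
    Simplifying assumptions: equal stakes, truthful voters always judge correctly."""
    truthful = int(n_voters * (1 - flipper_share))
    flippers = n_voters - truthful
    wins = 0
    for _ in range(trials):
        correct_votes = truthful + sum(random.random() < 0.5 for _ in range(flippers))
        wins += correct_votes > n_voters / 2
    return wins / trials

for share in (0.2, 0.5, 0.8, 0.95):
    rate = correct_outcome_rate(flipper_share=share)
    print(f"coin flippers: {share:.0%} -> correct outcome in ~{rate:.0%} of challenges")
```

Under these assumptions, the majority outcome stays correct as long as a reasonable share of voters assesses listings truthfully, but the error rate grows as coin flippers come to dominate the vote.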
Madman attack: This refers to a potential manipulation attempt by someone who has an economic reason to undermine the quality of the list and spends a large amount of funds to flood the registry with low-quality listings (a 51-percent attack). The mechanism needs to be designed to make a 51-percent attack expensive. However, given potential “free-rider” problems, only a minority of token holders are likely to actively participate in voting for and against proposals, which means that, in practice, madman attacks may not be as expensive as a theoretical 51-percent attack involving all token holders.
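A back-of-the-envelope comparison, using purely hypothetical numbers, shows why low participation lowers the practical cost of outvoting honest curators:

```python
total_stake = 1_000_000   # hypothetical total token supply held by curators
participation = 0.20      # assume only 20% of that stake is actually voted on a given challenge

honest_votes = total_stake * participation
theoretical_cost = 0.51 * total_stake    # outvoting all token holders (the textbook 51% attack)
practical_cost = honest_votes + 1        # outvoting only the stake that actually shows up to vote

print(f"theoretical: {theoretical_cost:,.0f} tokens, practical: {practical_cost:,.0f} tokens")
# theoretical: 510,000 tokens, practical: 200,001 tokens
```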
Vote memeing refers to the fact that some token holders might copy group behavior in the interest of always being in the majority voting bloc, thus being on the winning side and always earning tokens. To avoid this, commit-reveal schemes have been introduced to make sure that the votes of others are only revealed after the voting period ends.
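Such a commit-reveal scheme can be sketched with a simple hash commitment. This is a minimal Python illustration; an actual TCR would implement it in a smart contract, and the function names here are hypothetical:

```python
import hashlib
import secrets

def commit(vote: str, salt: str) -> str:
    """Commit phase: only the hash of (vote, salt) is published."""
    return hashlib.sha256(f"{vote}:{salt}".encode()).hexdigest()

def reveal(commitment: str, vote: str, salt: str) -> bool:
    """Reveal phase: the vote only counts if it matches the earlier commitment."""
    return commit(vote, salt) == commitment

salt = secrets.token_hex(16)      # a random salt prevents others from guessing the vote by hashing both options
c = commit("reject", salt)        # broadcast during the commit phase; the voter's tokens are locked
print(reveal(c, "reject", salt))  # True  -- honest reveal, tokens unlock
print(reveal(c, "accept", salt))  # False -- the vote cannot be changed after the fact
```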
Token Curated Registries could be a game changer if they manage to provide a manipulation-resistant alternative to centralized curation services. However, critics argue that TCRs that use token-weighted votes (i) cannot provide nuanced curation, (ii) cannot replace subjective reputation systems, and (iii) have a “minimum economy” size problem. They claim that having a stake in a system alone cannot produce quality curation, as token holders are more likely to maximize short-term profits, since they can sell their tokens at any time and exit the system, which is harmful to the collective quality of the list in the long run. Furthermore, any TCR will need a minimum market size to resist manipulation attempts, which means that new lists have a chicken-and-egg problem: consumers will not be interested in a small or half-empty registry, and candidates won’t be interested in applying to a registry that nobody visits. Another issue is that TCRs are not useful for all types of registries. Bulkin, for example, is an outspoken critic and distinguishes between “subjective TCRs” and “objective TCRs.” In his opinion, a TCR can only be successful if (i) an objective answer to the listing question exists and (ii) the answer is publicly observable, such as the air temperature in a certain geographic area.
Bulkin criticizes that token-based voting does not necessarily result in higher-quality curation for subjective lists, and is furthermore tainted by power asymmetries between small and big token holders, especially if registry tokens can be acquired with money rather than reputation. To end up on the winning side, token holders will likely be incentivized to vote for the choices they believe the majority of token holders, or the big token holders, will vote for. Bulkin states that subjective questions cannot be accurately answered by an objective mechanism as proposed by Goldin. Lists that are prone to subjective tastes or opinions need a stronger coordination signal, which would require a well-defined set of curators with well-aligned values. In such a setup, trusting the people curating information and understanding their motives is important. For quality subjective lists, Bulkin suggests that combining TCRs with social reputation systems could add the necessary context to a TCR. As different people have different social values, adding the context of social value is important in curating certain types of lists. He also argues that reputation scores are more likely to be uniformly distributed than wealth, and that TCRs are easier to bootstrap when they include a subjective reputation system, which would resolve the “minimum economy” problem of making a list attractive enough for early adopters. Adding social reputation could also resolve the problem of vote memeing, if bad actors could lose their reputation or accounts could get blacklisted. Such a setup could also mitigate voting rings and some cases of vote-buying attacks.
Furthermore, Mike Goldin’s approach does not account for possible “free-rider” problems, where some token holders might choose to stay passive, simply investing in a token for speculative reasons. Such “free-riders” would hope that other curators will vote in a truthful manner, thus keeping the quality of the network high. “Free-riding” is a typical problem of public goods (read more: Part 4 - Purpose-Driven Tokens). To resolve this, the governance rules could be designed so that token holders are forced to vote. This, however, will very likely result in so-called “vote memeing” (copying someone else’s voting behavior) or “coin flipping” (casting a random vote to save time on research and decision making), which could also reduce the registry’s quality over time. While the concept of TCRs could be used to make a decentralized list manipulation resistant, it will not work without a reputation system.
Since TCRs haven’t been tested publicly yet, it is unclear which governance rules will work in the long run and how to optimally set the variables that govern the internal economy of a list. These variables may vary depending on the type and purpose of the list, such as: (i) the amount of time token holders have to commit votes to a challenge; (ii) the amount of time token holders have to reveal their votes to a challenge; and (iii) the percentage of votes necessary for a certain outcome to take effect. One challenge in defining these variables is the amount of time token holders have to challenge an application: if it is set too long, token holders might forget to cast their votes. Changes to the parameters of the token governance rules could be voted on in a similar fashion to new applications to the registry. To propose a new governance mechanism, token holders could stake tokens and submit the proposal to all other token holders to vote on. Applications for a new governance mechanism could be evaluated the same way that applications to the registry are voted upon, which means that they would be subject to the same attack vectors.
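These variables could be bundled into a single parameter set that governance votes can adjust over time. The sketch below is hypothetical; the parameter names and default values are illustrative and loosely follow the variables discussed above:

```python
from dataclasses import dataclass

DAY = 24 * 3600  # seconds

@dataclass
class TCRParameters:
    min_deposit: int = 100            # tokens a candidate or challenger must stake
    apply_stage_len: int = 3 * DAY    # how long a new application can be challenged
    commit_stage_len: int = 2 * DAY   # how long token holders have to commit votes
    reveal_stage_len: int = 1 * DAY   # how long token holders have to reveal votes
    vote_quorum_pct: int = 50         # % of revealed stake needed for a challenge to succeed
    dispensation_pct: int = 50        # % of the loser's deposit awarded to the winning counterparty

print(TCRParameters())  # a governance vote could update any of these fields
```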
Alternative proposals have been made for modifying the initial concept introduced by Mike Goldin, to mitigate some of the attack vectors described above or to add qualitative information to the listings. The token governance rules of the TCR variations mentioned below cannot be explained in detail in this chapter, but can be researched online (see the references at the end of the chapter).
- Ordered TCRs: Simple TCRs are unordered, which means that they are just a list of entries that have made it into the registry. In an ordered TCR, curators vote not only to include or exclude an entry but also decide on the ranking of each entry in the list. Each listing has an exclusive rank, which means that two listings cannot have the same rank. The number of entries can be limited or unlimited.
- Graded TCRs: A simple variation of ordered TCRs in which two listings can have the same amount of reputation points. Listings can share the same rank and don’t occupy a unique index, which gives a better signal about the qualitative range of a listing.
- Layered TCRs: These introduce different layers of acceptance. In a first qualification round, a listing could qualify via some predefined rules and would have to meet additional criteria to qualify for the next layer, which could be helpful for building a more sophisticated hierarchy, allowing for more diversity or subjectivity. Such an approach could increase the overall quality of a list.
- Nested TCRs: Lists whose entries contain pointers to other lists. Nested TCRs can be used to reflect relationships between attributes rated in one list and attributes of the same listing that are rated in another list.
- Combinatorial TCRs: These allow an array of items to be visualized in one list. Token holders can collectively define acceptable sets, ranges, and parameters.
- Continuous Token-Curated Registries: These combine continuous token models with TCRs to create a liquid market for curation. Instead of generating and pre-selling tokens at one specific point in time, tokens are minted continuously according to a predetermined algorithmic curve. The value of the registry is a function of the usefulness of the list and whether it can act as a natural “Schelling point.” A Schelling point, in this context, refers to a list that most users would agree on in the absence of communication. Continuous TCRs are useful for reflecting the long tail of categorization that was not possible or feasible before.
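As a rough illustration of the continuous token model behind such registries, the following sketch prices newly minted curation tokens along a linear bonding curve. The curve shape, the `slope` parameter, and the numbers are assumptions for illustration only, not part of any specific proposal:

```python
def mint_cost(current_supply: float, amount: float, slope: float = 0.001) -> float:
    """Cost of minting `amount` new curation tokens on a linear bonding curve
    (price = slope * supply), i.e. the area under the curve between old and new supply."""
    new_supply = current_supply + amount
    return slope * (new_supply ** 2 - current_supply ** 2) / 2

# Minting gets progressively more expensive as the registry's token supply grows.
print(mint_cost(current_supply=0, amount=1_000))       # early curator pays 500.0
print(mint_cost(current_supply=10_000, amount=1_000))  # later curator pays 10500.0
```

Because the price rises with supply, early curators of a niche list can buy in cheaply, while later entrants pay more as the list gains traction.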
While the classic proposal of TCRs might have limited use cases, the emergence of more complex and sophisticated proposals is an interesting phenomenon to follow. More and more projects are starting to implement aspects of various TCR proposals in their token design. “Relevant” is building a reputation protocol that combines subjective criteria with TCRs. They want to use this to build a fake-news-resistant social news reader, using token-backed qualitative metrics that value quality over clicks. Other examples of projects that use TCRs in their token design are “AdChain,” “District0x,” and “Messari.”
Chapter Summary

Online lists and recommendation engines use (i) machine learning algorithms and (ii) wisdom-of-the-crowd mechanisms to derive meaningful lists, rankings, and recommendations. Such lists or registries can be private or public and are usually centrally managed. Whitelists or blacklists are used to filter relevant information and save users time researching and filtering information themselves. Third-party curation, however, is prone to censorship and manipulation, as it is centrally managed.
The curation tasks involve managing and maintaining data feeds, filtering comments, or providing context-specific recommendations. Machine-learning algorithms derive their suggestions by correlating personalized user data with statistical data on the behavior of all other users. Their methods are, for the most part, undisclosed, resulting in opaque filtering algorithms.
Token Curated Registries provide a tokenized market mechanism for collectively curating lists in the absence of third-party coordination and centralized list management. Tokens are used as economic incentives to perform curation tasks. Transactions are settled and cleared autonomously by a distributed ledger.
TCRs are designed to represent a public good. Anyone can participate. In order to set up a TCR, one needs (i) a defined purpose for the list, (ii) a native token, and (iii) a governance mechanism that ensures all token holders are incentivized to maintain a high-quality list.
The stakeholders are (i) candidates, who provide content for the list; (ii) consumers, who use the list; and (iii) curators (the token holders), who collectively manage the quality of the list.
Candidates have to deposit a certain amount of tokens to apply for the list. Any token holder can participate in the curation process and has a certain amount of time to review and challenge a candidate’s application. To challenge a listing, they must deposit a certain amount of tokens into a smart contract, locking part of their network stake.
If, at the end of the voting period, the application is rejected by the majority of token holders, the applicant’s deposit is split between the challenger and all token holders who voted to reject the application. Otherwise, the candidate’s listing is added to the registry, and the smart contract distributes the challenger’s deposit between the applicant and all token holders who voted to accept the listing.
Candidates who believe they will be rejected are not likely to apply; otherwise, they would lose their tokens. Token holders, on the other hand, could theoretically reject every candidate, but that would collide with their interest in increasing the value of their tokens. An empty list is not interesting for anyone. The profitability of all stakeholders and the quality of the list need to be well aligned so that objective, high-quality lists can be produced.
The price of a token is a result of supply and demand and, as such, is assumed to be a performance indicator for the collective actions of all token holders. If a TCR accepted a non-native token as a means of payment, the collective performance of the token holders would not reflect the performance of the list, and the economic incentive mechanisms would therefore not work.
The vote of token holders is proportional to the number of tokens they own, or stake. Proportional voting rights are based on the idea that those who have the most at stake are most incentivized to act in the network’s best interest.
A number of attack vectors have been identified, such as “trolling,” “madman attacks,” “registry poisoning,” or “coin flipping.” A solution to each of these potential attacks needs to be reflected in the governance rules of the TCR to guarantee high-quality curation.
A TCR can only be successful if (i) an objective answer to the listing question exists and (ii) the answer is publicly observable. Subjective questions cannot be accurately answered by an objective mechanism. Lists that are prone to subjective tastes or opinions need a stronger coordination signal, which would require a well-defined set of curators with well-aligned values. Combining TCRs with social reputation systems could add the necessary context to a TCR, resolve this problem, and mitigate some attack vectors of classic TCRs.
Alternative proposals to objective and subjective TCRs are: (i) Ordered TCRs, (ii) Graded TCRs, (iii) Layered TCRs, (iv) Nested TCRs, (v) Combinatorial TCRs, and (vi) Continuous Token-Curated Registries. They mitigate some of the attack vectors described above or add qualitative information to the listings. Their token governance rules vary.
References

- Balasanov, Slava: “TCR Design Flaws: Why Blockchain Needs Reputation”, Jul 12, 2018: https://blog.relevant.community/tcr-design-flaws-why-blockchain-needs-reputation-c5771d97b210
- Bulkin, Aleksandr: “Curate This: Token Curated Registries That Don’t Work”, Apr 12, 2018: https://blog.coinfund.io/curate-this-token-curated-registries-that-dont-work-d76370b77150
- De la Rouviere, Simon: “Continuous Token-Curated Registries: The Infinity of Lists”, Oct 21, 2017: https://medium.com/@simondlr/continuous-token-curated-registries-the-infinity-of-lists-69024c9eb70d
- De la Rouviere, Simon: “City Walls & Bo-Taoshi: Exploring the Power of Token-Curated Registries”, Oct 9, 2017: https://medium.com/@simondlr/city-walls-bo-taoshi-exploring-the-power-of-token-curated-registries-588f208c17d5
- De Jonghe, Dimitri: “Curated Governance with Stake Machines”, Dec 4, 2017: https://medium.com/@DimitriDeJonghe/curated-governance-with-stake-machines-8ae290a709b4
- Gajek, Sebastian: “Graded Token-Curated Decisions with Up-/Downvoting — Designing Cryptoeconomic Ranking and Reputation Systems”, Apr 30, 2018: https://medium.com/coinmonks/graded-token-curated-decisions-with-up-downvoting-designing-cryptoeconomic-ranking-and-2ce7c000bb51
- Goldin, Mike: “Token-Curated Registries 1.0”, ConsenSys: https://docs.google.com/document/d/1BWWC-Kmso9b7yCI_R7ysoGFIT9D_sfjH3axQsmB6E/edit
- Goldin, Mike: “Token Curated Registries 1.1, 2.0 TCRs, new theory, and dev updates”, Dec 4, 2017: https://medium.com/@ilovebagels/token-curated-registries-1-1-2-0-tcrs-new-theory-and-dev-updates-34c9f079f33d
- Goldin, Mike: “Token-Curated Registries 1.0”, Sep 14, 2017: https://medium.com/@ilovebagels/token-curated-registries-1-0-61a232f8dac7
- N.N.: “The Token Curated Registry Whitepaper”: https://medium.com/@tokencuratedregistry/the-token-curated-registry-whitepaper-bd2fb29299d
- Goldin, Mike: “Mike’s Cryptosystems Manifesto #4”, https://github.com/kleros/kleros-papers/issues/4
- Gibson, Kyle: “3 Questions About Community Building for TCRs”, Mar 16, 2018: https://medium.com/tokenreport/questions-about-community-building-for-tcrs-d666b70ad3a7
- Lockyer, Matt: “Token Curated Registry (TCR) Design Patterns”, May 21, 2018: https://hackernoon.com/token-curated-registry-tcr-design-patterns-4de6d18efa15
- N.N.: “LTCR (Layered TCR)”: http://tokenengineering.net/ltcr
- McConaghy, Trent: “The Layered TCR”, May 1, 2018: https://blog.oceanprotocol.com/the-layered-tcr-56cc5b4cdc45
- Praver, Moshe: “Subjective vs. Objective TCRs”, Jun 27, 2018: https://medium.com/coinmonks/subjective-vs-objective-tcrs-a21f5d848553
- AdChain: https://adchain.com/
- District0x: https://district0x.io/
- Messari: https://messari.io/
- Relevant: https://relevant.community