ethCC Scalability Report
A few talks covered how to ensure the correctness of a Plasma chain. All were some variant of watchtowers: entities to which a user can outsource the task of watching the chain to ensure correctness. While there is nothing particularly wrong with these kinds of solutions, they introduce extra complexity and risk that zero-knowledge proofs avoid entirely.
Next to 0x, the only other project doing something similar to us is Liquidity Network. They are also working on 2nd layer scalability and have a special kind of front-running/FIFO system with cryptographic puzzles that we should definitely take a look at.
EthFinex Trustless is also working on some 2nd layer scalability solution, probably zero-knowledge based, but they didn't say. I tried to find out more; unfortunately the person from EthFinex I talked to didn't know anything about it.
These guys also did usability studies for DEXs so I think we could learn something from how they are doing things.
These are the guys doing decentralized identity on Ethereum with zk-SNARKs, using JS as the circuit language. For zk-SNARKs they are looking into optimizing their JS compiler and creating proofs using distributed computing, so nothing really new. The overhead of this system also makes it unsuitable for applications that do any significant amount of work inside a SNARK.
Talk was mostly about their general vision on scalability, not in depth about how their DEX works with STARKs. But there was definitely some interesting information given.
- They don't like any scaling solution other than zero-knowledge proofs.
- They also don't like that every dApp would be isolated in its own 2nd layer solution, losing all the power you get from the interconnectivity currently possible on Ethereum.
Estimated timeline for some things they are working on:
Liquidity aggregation -> Network transport (using libp2p) -> Trade execution (TEC) -> Trade Settlement (0x)
Not much information was given about this setup; we already knew they were working on a TEC, and no further details were provided.
The libp2p info was new to me. I don't know much about networking, but using libp2p would allow 2 parties to match orders directly with each other without a relayer and without knowing each other. So as I understand it, centralized relayers wouldn't really be needed anymore (though I assume they would still have their advantages).
The same as above with STARKs before Trade Settlement.
This was still called very experimental (even more so than SNARKs, because STARKs are newer). The verification cost for a STARK starts at 5M gas. 4.5M of that is spent sending the pretty large STARK proof in the calldata; the actual verification computation only costs 500,000 gas (they initially thought these computations were impossible on current Ethereum, but they got them working somehow with Solidity assembly). The verification cost and the proof size increase very slowly with the number of constraints in the circuit. They are trying to lower the gas cost of calldata through EIPs.
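The numbers above suggest the cost is dominated by raw calldata rather than computation. A quick back-of-the-envelope check (the 68 gas per non-zero calldata byte is the gas schedule at the time; the ~65 KB proof size is my own assumption to make the quoted figures line up, not a number from the talk):

```python
# Sanity check of the quoted STARK verification costs.
# Assumptions: calldata priced at 68 gas per non-zero byte (the gas
# schedule at the time), and a proof of roughly 65 KB -- the proof
# size is my estimate, not a figure given in the talk.
GAS_PER_CALLDATA_BYTE = 68
PROOF_SIZE_BYTES = 65_000
VERIFICATION_GAS = 500_000  # on-chain computation cost quoted in the talk

calldata_gas = PROOF_SIZE_BYTES * GAS_PER_CALLDATA_BYTE
total_gas = calldata_gas + VERIFICATION_GAS

print(f"calldata:     {calldata_gas:,} gas")    # roughly the 4.5M quoted
print(f"verification: {VERIFICATION_GAS:,} gas")
print(f"total:        {total_gas:,} gas")       # close to the 5M starting cost
```

This also shows why calldata-repricing EIPs would help STARKs so much: almost 90% of the cost is data, not computation.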
They are not going to depend on Ethereum for data availability. Instead, they are looking into a simple setup with a small group of nodes (the number 20 was mentioned; I think they called it a consortium) which would all have to sign the data necessary for rebuilding the Merkle trees. Data availability is ensured as long as 1 of these nodes actually shares the data with everyone.
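To make the trust assumption explicit, here's a toy model of that consortium scheme (all names and function signatures are mine, purely illustrative; 0x gave no implementation details):

```python
# Toy model of the consortium data-availability idea described above.
# A state update only counts if every consortium member signed it
# (proving each member received the data); afterwards the data stays
# recoverable as long as at least 1 member actually serves it.
# All names here are illustrative assumptions, not 0x's design.

def accepted(signatures: set, consortium: set) -> bool:
    # Valid only if the full consortium signed off on the data.
    return consortium <= signatures

def recoverable(honest_sharers: set) -> bool:
    # Availability needs just a single member to share the data.
    return len(honest_sharers) >= 1

consortium = {f"node{i}" for i in range(20)}          # "the number 20 was mentioned"
print(accepted(consortium, consortium))               # True: everyone signed
print(accepted(consortium - {"node3"}, consortium))   # False: one signature missing
print(recoverable({"node7"}))                         # True: 1 honest node suffices
```

The strictness of requiring all 20 signatures is what buys the weak 1-of-20 honesty assumption for retrieval: every signer provably holds the data.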
Had a long meeting with Alex Gluchowski and Alex Vlasov from Matter Labs. These guys are really pioneering some of the zero-knowledge technology. Some of the things they are working on:
- Sonic: a single trusted setup for ALL circuits. This is a game changer. It is currently not possible on Ethereum; hopefully the hard fork in August will contain the necessary precompiles.
- Recursive SNARKs: limited to a depth of 1
- Optimizing proof generation: they are convinced proving times will not be a problem in the near future (10x-100x improvement via GPUs/ASICs)
With the above technology they want to build a single second-layer scaling side chain for all dApps. In other words, kind of like Ethereum itself, but on the second layer. There are very good reasons to do this:
- dApps use the same state, so users don't have to deposit/withdraw to different contracts for every dApp
- The SNARKs can be batched together in a parent SNARK for even more efficient verification: ParentSnark -> [SnarkDAppA, SnarkDAppB, SnarkDAppC, ...]. It's also more efficient to batch-verify SNARKs onchain.
- Currently only a single operator, but they are working on a multiple operator setup
- Setup for data availability without using Ethereum (similar to what 0x is going to do as far as I remember, though they didn't give many details)
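To illustrate why the batching point matters economically, here is a toy sketch of the cost structure (the ParentSnark/SnarkDApp naming follows the notation above; the gas figure and all code are my own illustration, not Matter Labs' actual API):

```python
# Toy illustration of batching per-dApp SNARKs into one parent SNARK.
# Real recursive aggregation is cryptographic; this only models the
# cost structure: one on-chain verification instead of one per dApp.
# The gas figure and all names are illustrative assumptions.
from dataclasses import dataclass

ONCHAIN_VERIFY_GAS = 600_000  # assumed cost of verifying one SNARK on-chain

@dataclass
class Proof:
    label: str

def aggregate(children: list) -> Proof:
    # Depth-1 recursion: a single parent proof attesting that every
    # child proof verified (matches the depth limit mentioned above).
    return Proof("ParentSnark[" + ", ".join(p.label for p in children) + "]")

children = [Proof("SnarkDAppA"), Proof("SnarkDAppB"), Proof("SnarkDAppC")]
parent = aggregate(children)

naive_cost = len(children) * ONCHAIN_VERIFY_GAS  # each dApp verifies separately
batched_cost = ONCHAIN_VERIFY_GAS                # only the parent is verified
print(parent.label)
print(naive_cost, batched_cost)
```

The on-chain cost stays flat no matter how many dApp SNARKs are folded into the parent, which is exactly what frees up block gas compared to every dApp verifying its own proof.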
If every dApp has its own sidechain, they all fight for the same 8M gas inside a block. Batching the work together (with offchain data availability) greatly improves on this.
Even with recursive SNARKs and an offchain data availability system, we still don't get unlimited throughput. The number of constraints that can be efficiently verified onchain is limited (256M constraints), so only a limited number of SNARKs can be verified in the parent SNARK. Still, the possible throughput will be large. STARKs theoretically have unlimited scalability, but as far as I understand we are still some way off from that being possible (if it is even possible).
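The 256M-constraint ceiling translates directly into a cap on batch size. A rough estimate (only the 256M budget comes from the meeting; the per-child verifier cost below is a placeholder assumption):

```python
# Rough capacity estimate for the parent SNARK described above.
# 256M constraints was the figure given; the constraint cost of
# verifying one child SNARK inside the parent circuit is a
# placeholder assumption, not a number from the meeting.
PARENT_BUDGET_CONSTRAINTS = 256_000_000
CONSTRAINTS_PER_CHILD_VERIFIER = 4_000_000  # assumption

max_children = PARENT_BUDGET_CONSTRAINTS // CONSTRAINTS_PER_CHILD_VERIFIER
print(max_children)  # child SNARKs per parent under these assumptions
```

Whatever the real per-child cost turns out to be, the point stands: the batch size is a fixed quotient, so throughput is large but bounded.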
In practice, Matter Labs would provide an API that abstracts most things away (normally we wouldn't even have to write any circuits ourselves). The drawback is that the Merkle tree structure would be fixed: every dApp needs to use the same Merkle tree structure. I told them what we would need for protocol 3, and I'm sure we can work with them to see what's possible.
- Currently we wouldn't be able to do partial order matching
- Users would need to register an account for every token
- Locking balances to accounts is not really possible for now, but because of the operator setup dApps should be able to monitor the work and avoid conflicts
I think there are definitely good reasons for using their technology instead of rolling our own, as we are currently doing with protocol 3:
- State sharing with other dApps
- Better/more efficient scalability
- We don't have to set up a system ourselves for offchain data availability (which is a delicate subject, especially if it's needed for every dApp independently).
- We don't have to chase the latest and greatest scaling tech ourselves
- Working together with Matter Labs should teach us a lot about this stuff. They have the knowledge and they are building the technology; collaborating while they are still building it should give us an advantage.
- We probably won't be able to compete with dApps using the above tech. They will be able to pay a higher gas price to get their work included onchain because they can do more work inside a single block.
These guys are smart. Nobody else is really doing what they are doing with this. My current opinion would be to go for this.
Daniel: They'd like to have a meeting with you about this.