Write buffer for Dispute Coordinator #3437
Comments
We should be extra careful not to introduce a very big db transaction that could cause issues like #3242.
Do we have any idea of size limits to prevent that? I think even a conservative value of, say, 50-100K would be a huge improvement, as most statements are just a few hundred bytes.
Should we pick this up? It would probably be good to complete this work before we finalize disputes testing. |
It's an optimization issue, so the next obvious action item is to collect I/O and latency data from a network involving hundreds of validators and tens of parachains. That'll come as we scale up in the end phases of disputes testing. The data will tell us how necessary this is. |
It seems to impact dispute import; see #4404.
OK, we figured out that the actual slowdown comes from the quadratic complexity on import, which will be resolved via batching, which in turn also results in less frequent db writes. Therefore we should be able to close this one.
Every `ImportStatements` call to the Dispute Coordinator currently results in a database write. It would make more sense to buffer writes over the course of several seconds and flush to disk based on size & time. We shouldn't go more than, say, 30 seconds without writing to disk, because data could be lost if the node shuts down. But there should be little to no downside to building up at least a few hundred kilobytes of statements in memory over a short period of time before writing to disk.

We should also immediately flush anything which is the subject of an active dispute, because there may be DoS vectors that cause nodes to be taken down, and statements on disputed candidates should be preserved at all costs.
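The flush policy described above (flush when the buffer exceeds a size threshold, when it gets older than ~30 seconds, or immediately for active disputes) could look roughly like the following. This is a minimal standalone sketch, not the actual Dispute Coordinator code: the `WriteBuffer` type, its fields, and the byte/age limits are all illustrative assumptions.

```rust
use std::time::{Duration, Instant};

/// Hypothetical write buffer sketch; names and structure are illustrative,
/// not the real Dispute Coordinator API.
struct WriteBuffer {
    pending: Vec<Vec<u8>>, // serialized statements awaiting flush
    pending_bytes: usize,
    last_flush: Instant,
    max_bytes: usize,   // e.g. a few hundred kilobytes
    max_age: Duration,  // e.g. 30 seconds, to bound data loss on shutdown
}

impl WriteBuffer {
    fn new(max_bytes: usize, max_age: Duration) -> Self {
        Self {
            pending: Vec::new(),
            pending_bytes: 0,
            last_flush: Instant::now(),
            max_bytes,
            max_age,
        }
    }

    /// Queue a statement. `active_dispute` forces an immediate flush so
    /// statements on disputed candidates are never lost. Returns whether
    /// a flush happened.
    fn push(&mut self, statement: Vec<u8>, active_dispute: bool) -> bool {
        self.pending_bytes += statement.len();
        self.pending.push(statement);
        if active_dispute || self.should_flush() {
            self.flush();
            return true;
        }
        false
    }

    /// Flush once the buffer is large enough or old enough.
    fn should_flush(&self) -> bool {
        self.pending_bytes >= self.max_bytes || self.last_flush.elapsed() >= self.max_age
    }

    /// Write all pending statements in one db transaction (stubbed here;
    /// real code would hand `self.pending` to the database backend).
    fn flush(&mut self) -> usize {
        let flushed = self.pending.len();
        self.pending.clear();
        self.pending_bytes = 0;
        self.last_flush = Instant::now();
        flushed
    }
}
```

In a real subsystem the time-based flush would come from the event loop (e.g. a timer firing alongside incoming messages) rather than being checked only on `push`, but the thresholds and the "active dispute bypasses buffering" rule are the same.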