Achieve atomic block save #302
Work in progress. Will likely be included in the next minor release.
Stuck with this? Need help? 😉 The pattern suggested in pg-promise-demo can be achieved gradually, step by step. 😉 The best first step: moving all the SQL into the same kind of structure as in the demo.
Thanks @vitaly-t. Your help and advice is always appreciated 👍
Should largely be resolved after #543. We will then come back and review best practices.
Closed by #543
Reopened by #1169 |
See: https://github.com/vitaly-t/pg-promise/wiki/Chaining-Queries for the reasoning, especially regarding poor connection usage.
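The connection-usage point above can be illustrated with a small self-contained sketch. The pool below is a hypothetical mock (not the real pg-promise API): it only counts connection acquisitions, to show that N independent `query()` calls each take a connection from the pool, while chaining the same queries inside one `task()` uses a single connection for the whole chain.

```javascript
// Hypothetical pool mock illustrating why query chaining matters.
// Real code would use pg-promise's db.task()/db.tx() for the same effect.
function makePool() {
  let acquisitions = 0;
  return {
    get acquisitions() { return acquisitions; },
    // Each standalone query acquires (and releases) its own connection.
    async query(sql) {
      acquisitions += 1;
      return sql;
    },
    // A task acquires one connection shared by every query inside it.
    async task(work) {
      acquisitions += 1;
      const sharedConnection = { query: async (sql) => sql };
      return work(sharedConnection);
    },
  };
}
```

With this mock, two standalone queries cost two acquisitions, while a task running any number of chained queries costs only one — the behavior the wiki article describes.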
The transaction opening
Yes, I will go through all of them and see what the best way to do it is. There are two different things we should keep in mind:
The first one we should avoid entirely; the second one we should improve as much as we can.
I think it can be achieved by downwards sharing of
The main focal point is block processing, where all the database writes are. However, in order to review the rest of the application, mainly in terms of database reads, I scanned the following: https://gist.github.com/karmacoma/80495431baa2a56c47298eeaa8cd2e8f NOTE: This would change when we revert #597 and our round logic is put back in the application layer by #1174.
For a long time, we have had https://github.com/vitaly-t/pg-monitor configured for db event monitoring: https://github.com/LiskHQ/lisk/blob/9598668a45e2f38f1f787b1bee393c39c2827066/config.json#L23-L25 By adding this event: http://vitaly-t.github.io/pg-promise/global.html#event:connect we can reveal the problem we are speaking of more clearly:
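A minimal sketch of wiring that `connect` event (the handler body and its logging are illustrative; exact pg-monitor wiring and the event's parameter shape may differ between pg-promise versions):

```javascript
// Sketch: pg-promise initialization options with a 'connect' handler.
// A useCount of 0 means a freshly created physical connection; a high
// rate of zeros indicates poor connection reuse across queries.
const initOptions = {
  connect(client, dc, useCount) {
    console.log('connect; useCount =', useCount);
  },
};

// In the real application this would be passed to the library:
// const pgp = require('pg-promise')(initOptions);
// pg-monitor can then be attached to these options for colorized logging.
```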
Our parent (Crypti) had only limited support for db transactions, no db connection pooling or sharing, and no query chaining; therefore, what you see here are the results of the first iteration of the work done thus far. Looked at optimistically, the situation is only bad in that there is no shared database transaction between these discrete actions. This issue represents the next step in adhering to best practices more completely.
@karmacoma #302 (comment) Yes, that can be achieved. But that
Parent: #1169
If a node is interrupted during block processing, e.g. due to a reload, there is a high probability of partial writes to the database, leading to corruption of memory tables and a break in consensus with the rest of the network.
Solution: Rewrite the database logic so that transaction application, block save, and round tick are all encompassed within a single atomic database transaction that can be easily rolled back upon interruption or error.
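The all-or-nothing behavior this solution calls for can be sketched at a toy level. Everything below is hypothetical (the in-memory "db", `saveBlockAtomically`, and the `failTick` flag are stand-ins); the real implementation would wrap the three steps in pg-promise's `db.tx`, where the database itself performs the rollback.

```javascript
// Toy "database" whose tx() snapshots state before running the work and
// restores it on error, mimicking a SQL transaction's rollback.
function makeDb() {
  const state = { blocks: [], rounds: 0 };
  return {
    state,
    async tx(work) {
      const snapshot = JSON.parse(JSON.stringify(state));
      try {
        return await work(state);
      } catch (err) {
        Object.assign(state, snapshot); // roll back all partial writes
        throw err;
      }
    },
  };
}

// All three steps run inside one transaction: if the round tick fails,
// the block save is undone too, so memory tables stay consistent.
async function saveBlockAtomically(db, block, { failTick = false } = {}) {
  return db.tx(async (t) => {
    t.blocks.push(block); // stand-in for: apply transactions + save block
    if (failTick) {
      throw new Error('round tick failed'); // stand-in for a tick error
    }
    t.rounds += 1; // stand-in for: round tick
  });
}
```

The key property: after a failed call, the state is identical to the state before the call, which is exactly the "Good" proposed behavior below.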
This should better ensure data integrity across nodes on the network.
The logic rewrite will also make better use of shared connections from the pool, and greatly improve block processing capability.
Current behavior
Bad: In the event of a rollback on error in later db transactions, memory tables will be inconsistent.
Proposed behavior
Good: In the event of a rollback on error, memory tables will remain consistent.
Code Target
An example where we are doing things correctly, but also at the heart of where our database writes start:
https://github.com/LiskHQ/lisk/blob/9598668a45e2f38f1f787b1bee393c39c2827066/modules/blocks/chain.js#L84-L109
Therefore, the actions we need to consider are: apply unconfirmed transaction, apply confirmed transaction, and round ticks.
Task List
- Add "transaction applications" (unconfirmed/confirmed) to the same promise chain as Chain.prototype.saveBlock #1181
- Add "round ticks" (forwards/backwards) to the same promise chain as Chain.prototype.saveBlock #1297