Extend DB Layer & Wallet to support wallet metadata #158
Conversation
This slightly increases the readability and makes future (incoming) extensions easier
Force-pushed from d163142 to 5b3d203
LGTM - really fond of the construct:
newtype ErrSome (operation :: Symbol)
src/Cardano/Wallet/DB.hs (Outdated)
@@ -62,13 +94,18 @@ data DBLayer m s = DBLayer
        :: PrimaryKey WalletId
        -> m (Map (Hash "Tx") (Tx, TxMeta))
        -- ^ Fetch the current transaction history of a known wallet. Returns an
-       -- empty map if the wallet isn't found.
+       -- empty map if the wallet istn't found.
this amendment was not necessary
Erf. Seems like a merge resolution gone wrong.
    = ErrNoSuchWallet WalletId
    deriving (Show, Eq)

-- | Can't perform given operation because there's no wallet
newtype ErrNoSuchWallet (operation :: Symbol)
neat way to encode, as additional info, which operation the error relates to 💯
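To illustrate the pattern being praised here, a minimal, self-contained sketch (the `WalletId` stand-in and the `describe` helper are illustrative, not the project's code): the operation is a type-level `Symbol`, so errors from different operations are distinct types, and the tag can still be recovered at runtime for error messages.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE ScopedTypeVariables #-}

module Main where

import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownSymbol, Symbol, symbolVal)

-- Hypothetical stand-in for the real WalletId type.
newtype WalletId = WalletId String
    deriving (Show, Eq)

-- The operation is carried in the type, so an
-- ErrNoSuchWallet "putCheckpoint" cannot be confused with an
-- ErrNoSuchWallet "readTxHistory" by the type checker.
newtype ErrNoSuchWallet (operation :: Symbol)
    = ErrNoSuchWallet WalletId
    deriving (Show, Eq)

-- The Symbol can be reflected back to a String at runtime.
describe
    :: forall operation. KnownSymbol operation
    => ErrNoSuchWallet operation
    -> String
describe (ErrNoSuchWallet (WalletId wid)) =
    "no such wallet " <> wid <> " while attempting: "
        <> symbolVal (Proxy :: Proxy operation)

main :: IO ()
main = putStrLn (describe err)
  where
    err :: ErrNoSuchWallet "putCheckpoint"
    err = ErrNoSuchWallet (WalletId "wallet-123")
```

The payoff is that the error message names the failing operation without any extra field being threaded through the code.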
data Database s = Database
    { wallet :: Wallet s
    , metadata :: WalletMetadata
    , txHistory :: Map (Hash "Tx") (Tx, TxMeta)
to this day I cannot understand why we need TxMeta
(not related to this PR)
We need them because our users need them ;)
Those pieces of information (direction, timestamp, total amount, etc.) are very valuable to API consumers.
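For illustration, a hypothetical sketch of the kind of record TxMeta could be (the field names `direction`, `slotId`, and `amount` are assumptions, not the project's actual definition). The point is that the wallet serves these facts directly, so each API consumer does not have to re-derive them from raw inputs and outputs.

```haskell
module Main where

import Data.Word (Word64)

-- Illustrative only: not the project's actual TxMeta.
data Direction = Incoming | Outgoing
    deriving (Show, Eq)

data TxMeta = TxMeta
    { direction :: Direction  -- did the wallet receive or send funds?
    , slotId    :: Word64     -- when the tx entered the chain (timestamp proxy)
    , amount    :: Word64     -- total amount, from the wallet's point of view
    } deriving (Show, Eq)

-- An API consumer can render the metadata directly, without
-- inspecting the transaction's inputs and outputs.
summarize :: TxMeta -> String
summarize m = case direction m of
    Incoming -> "received " <> show (amount m)
    Outgoing -> "sent " <> show (amount m)

main :: IO ()
main = putStrLn (summarize (TxMeta Incoming 42 1000))
```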
    Wallet Metadata
-----------------------------------------------------------------------}

, putWalletMeta = \key@(PrimaryKey wid) meta -> ExceptT $ do
I wonder if fine-tuning the type with "Symbol" would do the trick here, letting one implementation of putCheckpoint and readCheckpoint serve both checkpoint and metadata - the implementations are very close...
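As a rough sketch of this suggestion, against a much-simplified in-memory layer (the `WalletId`, `Wallet`, and `alterWallet` names are made up for illustration): both puts reduce to one "update a known wallet or fail" helper, parameterised by a record setter.

```haskell
module Main where

import Control.Concurrent.MVar (MVar, modifyMVar, newMVar, readMVar)

-- Hypothetical, simplified stand-ins for the real types.
type WalletId = String

data Wallet = Wallet
    { checkpoint :: Int     -- stand-in for the real Checkpoint
    , walletMeta :: String  -- stand-in for WalletMetadata
    } deriving (Show, Eq)

newtype DB = DB (MVar [(WalletId, Wallet)])

-- The shared implementation: atomically update a known wallet's
-- record, failing when the wallet does not exist.
alterWallet :: DB -> WalletId -> (Wallet -> Wallet) -> IO (Either String ())
alterWallet (DB var) wid f = modifyMVar var $ \ws ->
    case lookup wid ws of
        Nothing -> pure (ws, Left ("no such wallet: " <> wid))
        Just w  -> pure ((wid, f w) : filter ((/= wid) . fst) ws, Right ())

-- Both operations become one-liners over the same helper.
putCheckpoint :: DB -> WalletId -> Int -> IO (Either String ())
putCheckpoint db wid cp = alterWallet db wid (\w -> w { checkpoint = cp })

putWalletMeta :: DB -> WalletId -> String -> IO (Either String ())
putWalletMeta db wid m = alterWallet db wid (\w -> w { walletMeta = m })

main :: IO ()
main = do
    db <- DB <$> newMVar [("w1", Wallet 0 "")]
    Right () <- putCheckpoint db "w1" 7
    Left _   <- putCheckpoint db "nope" 1   -- unknown wallet fails
    Right () <- putWalletMeta db "w1" "renamed"
    let DB var = db
    ws <- readMVar var
    print (lookup "w1" ws)
```

Deduplicating the lookup-or-fail step this way also guarantees the two operations cannot drift apart in how they report a missing wallet.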
        putCheckpoint db key cp1
        txs' <- readTxHistory db key
        txs' `shouldBe` txs

dbPutCheckpointIsolationProp db key (cp,meta,txs) cp' = monadicIO $ liftIO $ do
there are a number of tests that are permutations of put{TxHistory, WalletMeta, Checkpoint}. Basically, we have:
- CreateWallet
- two puts
- two reads, where one does not overlap with one of the two puts.
I wonder if we can replace all those tests with one, where we create a wallet, then shuffle a list of pairs (an operation and an argument like txs, cp, meta). Moreover, we can make it more general. For instance, generate:
([Wallet DummyState], [WalletMetadata], [Map (Hash "Tx") (Tx, TxMeta)])
and even make sure the lengths of those three lists differ. Then construct operations from the generated values and shuffle them. That way we would exercise a number of what should be independent operations, some of them updates, and at the end we expect the last value of each list to be returned by the corresponding reads.
Of course this can be a separate PR - but I think it would be valuable to have those tests, mainly because we would end up with one sequential test and one parallel test covering a lot of permutations and multiple updates.
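The proposed "shuffle independent puts, then check that the last write wins" property can be sketched against a pure model (all types and names here are illustrative; a real version would generate and shuffle operations with QuickCheck and run them against the actual DBLayer).

```haskell
module Main where

import Data.List (foldl')
import Data.Maybe (mapMaybe)

-- Illustrative operations on two independent resources.
data Op
    = PutCheckpoint Int
    | PutMeta String
    deriving (Show, Eq)

-- A pure model of the database state.
data DBState = DBState
    { cp   :: Maybe Int
    , meta :: Maybe String
    } deriving (Show, Eq)

apply :: DBState -> Op -> DBState
apply s (PutCheckpoint c) = s { cp = Just c }
apply s (PutMeta m)       = s { meta = Just m }

-- Property: after any interleaving, each read returns the *last*
-- value put for that resource; puts on other resources don't interfere.
lastWriteWins :: [Op] -> Bool
lastWriteWins ops =
    cp final == lastOf (\op -> case op of PutCheckpoint c -> Just c; _ -> Nothing)
        && meta final == lastOf (\op -> case op of PutMeta m -> Just m; _ -> Nothing)
  where
    final = foldl' apply (DBState Nothing Nothing) ops
    lastOf f = case mapMaybe f ops of
        [] -> Nothing
        xs -> Just (last xs)

main :: IO ()
main = print
    [ lastWriteWins [PutCheckpoint 1, PutMeta "a", PutCheckpoint 2]
    , lastWriteWins [PutMeta "a", PutMeta "b"]
    , lastWriteWins []
    ]
```

One shuffled-sequence property like this can subsume many hand-written permutation tests, since the expected final state is computed uniformly from the generated operation list.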
Indeed! That's what I mentioned to you on Slack. I am not yet fully satisfied with the testing situation here; although it tests things, we can feel the duplication and weight of the tests.
Although it's tricky to come up with a unified approach for operation-agnostic properties here, I think it's valuable in the long run.
Looks good
    putCheckpoint db (PrimaryKey wid) cp'
    unsafeRunExceptT $ putTxHistory db (PrimaryKey wid) txs -- Safe after ^
-- FIXME
-- Note that, the two calls below are _safe_ under the assumption that
Maybe also add a note that there is no transaction surrounding readCheckpoint and putCheckpoint, because there will be only one thread applying blocks.
Force-pushed from 5b3d203 to 19e9502
… factoring out a few things
Force-pushed from 19e9502 to c3b1a93
@paweljakubas I did take a shot at refactoring the tests to make them easier to read and less redundant. I actually caught an implementation bug doing so :)
Issue Number
#145
Overview
Comments
This is still very drafty; the MVar tests are a lot of the same thing. I'd like to spend some time refactoring them in order to abstract away the core properties we are testing on each resource, and make them more readable.