Never follow bitcoind backwards #3274

Conversation

rustyrussell (Contributor):
This solves the underlying reason why #3243 was such a problem!

```python
sync_blockheight(bitcoind, [l1])
l1.stop()

# Now shrink chain (invalidateblock leaves 'headers' field)
```
Review comment (Member):

Just tried it myself, and indeed the headers remains unchanged after invalidateblock, however if invalidateblock is followed by a restart headers will reset to the expected value. Maybe that's a more stable way of going backwards than blowing away the underlying directory?
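For reference, a toy model (pure Python, not real bitcoind) of the behaviour described in this comment: `invalidateblock` rewinds the active chain (the `blocks` count) but leaves the `headers` count untouched, and a restart brings `headers` back in line with the surviving chain.

```python
# Toy model of the observed bitcoind behaviour -- NOT real bitcoind:
# invalidateblock shrinks 'blocks' but not 'headers'; a restart
# recomputes 'headers' from the active chain.

class ToyBitcoind:
    def __init__(self, height):
        self.blocks = height
        self.headers = height

    def invalidateblock(self, height):
        # The active chain is rewound below the invalidated block,
        # but the headers count is not rewound.
        self.blocks = height - 1

    def restart(self):
        # On restart, headers are rebuilt from the surviving chain.
        self.headers = self.blocks

    def getblockchaininfo(self):
        return {'blocks': self.blocks, 'headers': self.headers}


node = ToyBitcoind(110)
node.invalidateblock(101)
assert node.getblockchaininfo() == {'blocks': 100, 'headers': 110}
node.restart()
assert node.getblockchaininfo() == {'blocks': 100, 'headers': 100}
```

This is why restart-after-invalidateblock is a more stable way to shrink the chain than deleting the data directory: the node ends up internally consistent at the lower height.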

rustyrussell (Contributor, author) replied:

Ah, that would have been easier!

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell (Contributor, author):

Rebased to use the Decker technique for creating reorgs...

Comment on lines +1190 to +1199:

```python
time.sleep(5)
assert l1.rpc.getinfo()['blockheight'] == 110
```
Review comment (Member):

This could be made into a wait_for. Happy to fix it in a cleanup though.

rustyrussell (Contributor, author) replied:

No, it can't: we want to make sure it's still at 110.
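A sketch of the distinction (the helper names `wait_for` and `assert_stays_at` are illustrative, not the test suite's actual helpers): `wait_for` returns as soon as the condition first holds, so it cannot show that the height *stays* at 110; only re-checking over a fixed interval, as the sleep-then-assert does, can.

```python
import time


def wait_for(cond, timeout=10):
    # Returns as soon as cond() first becomes true -- it cannot
    # establish that a value REMAINS unchanged.
    start = time.time()
    while not cond():
        if time.time() - start > timeout:
            raise TimeoutError("condition never became true")
        time.sleep(0.1)


def assert_stays_at(get_value, expected, duration=1.0, interval=0.1):
    # Re-check for the whole duration: a node that briefly reads 110
    # and then moves past it fails here, but would pass wait_for.
    deadline = time.time() + duration
    while time.time() < deadline:
        assert get_value() == expected
        time.sleep(interval)


# A constant height trivially satisfies the stability check.
assert_stays_at(lambda: 110, 110, duration=0.3)
```

So the `time.sleep(5)` before the assertion is doing real work: it gives the node time to (wrongly) move off 110 if it were going to.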

cdecker (Member) commented Nov 20, 2019:

Seems we have a bit of collateral damage:

  * `test_db_dangling_peer_fix` uses a DB that was synced up to block 205 while we migrate on a chain that has only 101 blocks; we need to generate 104 more blocks before starting the node, or specify a rescan.
  * `test_rescan` believes it was once synced to a 500k-block chain, but is being started with a 101-block chain.

Other than that LGTM 👍

rustyrussell (Contributor, author) replied:

> Seems we have a bit of collateral damage: [...] Other than that LGTM 👍

Thanks, fixed.

I actually like the change in behavior (refusing to start) if you ask it to rescan from a not-yet-present block, so I've kept and tested that.
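A hypothetical sketch (names and messages are mine, not lightningd's actual code) of the startup rule described here: if the rescan argument points at a block bitcoind does not yet have, refuse to start rather than silently guess.

```python
# Hypothetical sketch of the "refuse to start" rule -- the function
# name and error text are illustrative, not lightningd's real code.

def check_rescan_height(rescan_height, bitcoind_height):
    """Refuse to start if asked to rescan from a not-yet-present block."""
    if rescan_height > bitcoind_height:
        raise SystemExit(
            "refusing to start: rescan height %d is beyond bitcoind's "
            "current height %d" % (rescan_height, bitcoind_height))
    return rescan_height


# Rescanning from an existing block is fine.
assert check_rescan_height(100, 110) == 100

# Rescanning from a future block refuses to start.
try:
    check_rescan_height(500000, 101)
    raise RuntimeError("should have refused to start")
except SystemExit:
    pass
```

This is the failure mode `test_rescan` tripped over: a node that believes it was synced far past what the current chain provides.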

This leads to all sorts of problems; in particular it's incredibly
slow (days, weeks!) if bitcoind is a long way back.  This also changes
the behaviour of a rescan argument referring to a future block: we now
refuse to start in that case too, which I think is the correct behaviour.

We already ignore bitcoind if it goes backwards while we're running.

Also cover a false positive memleak.

Changelog-Fixed: If bitcoind goes backwards (e.g. reindex) refuse to start (unless forced with --rescan).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell (Contributor, author):

Ack cdafcf3

Fixed all the test collateral damage.

@rustyrussell rustyrussell merged commit 654faa6 into ElementsProject:master Nov 21, 2019
@darosior darosior mentioned this pull request Jan 8, 2020