Improve tombstone management #128

Closed
wants to merge 39 commits into from

Conversation

slfritchie (Contributor)

Addresses #82. See also #102.

This PR is the result of two efforts, the second building upon the first:

  1. A fix for all known merge misbehavior and races with respect to tombstones and merging. (This stopped roughly at the tip of the slf-merge-panopticon branch on 24 December.)
  2. A change to the format of hint files (in a backward-compatible way) to make it possible to avoid storing tombstoned keys in the keydir.

…>youngest

The NIF change fixes a long-standing latent bug: when putting a key
that does not exist, a race with a merge could cause keydir_put_int()
to return 'ok' (incorrect) rather than 'already_exists' (correct).  The
'already_exists' return value is a signal to the read-write owner of
the bitcask that the current append file must be closed and a new one
opened (with a larger fileid than any merge).

The tombstone change adds a new tombstone data format.  Old tombstones
will be handled correctly.  New tombstones for any key K contain
the fileid & offset of the key that it is deleting.  If the fileid
F still exists, then the tombstone will always be merged forward.
If the fileid F does not exist, then merging the tombstone forward is
not necessary: when F was merged, the on-disk representation of key K
was not merged forward, because either K no longer exists in the keydir
(it was deleted by this tombstone) or it was replaced by a newer put.
Originally found with bitcask_pulse, I deconstructed the test case to
help understand what was happening: the new EUnit test is
new_20131217_a_test_.
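
A minimal sketch of the merge-forward rule described above (the names
here are illustrative only, not the actual bitcask_nifs.c symbols):

```c
#include <stdint.h>
#include <stddef.h>

/* New-style tombstone: records which file the deleted key lived in. */
typedef struct {
    uint32_t deleted_file_id;
    uint64_t deleted_offset;
} tombstone_t;

/* 'live_file_ids' stands in for whatever set of data files still exists
 * on disk after earlier merges. */
static int file_still_exists(uint32_t file_id,
                             const uint32_t *live_file_ids, size_t num_live)
{
    for (size_t i = 0; i < num_live; i++)
        if (live_file_ids[i] == file_id)
            return 1;
    return 0;
}

/* Merge a tombstone forward only while the file it deleted from still
 * exists; once that file is gone, the tombstone has done its job. */
static int must_merge_forward(const tombstone_t *t,
                              const uint32_t *live_file_ids, size_t num_live)
{
    return file_still_exists(t->deleted_file_id, live_file_ids, num_live);
}
```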

As a result of the puts, key #13 is written 3x to fileid #1 (normal,
tombstone, normal) and 1x to fileid #2 (normal @ the very beginning
of the file).  The merge creates fileid #3 and copies only the
tombstone (the normal entry isn't copied because it is out-of-date).
Before the close, the internal keydir contains the correct info
about key #13, but after the close and re-open, we see key #13's
entries: normal (and most recent) in fileid #2, and tombstone in
fileid #3 -- oops.

The fix is to remove all of the merge input fileids from the set of fileids
that will survive/exist after the merge is finished.
…th bitcask:delete & NIF usage before continuing
…ll_resize() predicate test for creating a keydir->pending
* Add 'already_exists' return to bitcask_nifs_keydir_remove(): we need
it to signal races with merge, alas.

* Add state to #filestate to be able to 'undo' the last update to both a
data file and its hint file (a minimal sketch of the idea appears after
this list).  This probably means that we're going to have to play some
games with merge file naming, TBD, stay tuned.

* For bitcask:delete(), make the keydir delete conditional: if it fails,
redo the entire delete.

* inner_merge_write() can race in a way that, if a partial merge happens
at the right time afterward, lets an old value reappear.  Fix by
checking the return value of the keydir put and, if it is
'already_exists', undoing the write.

* When do_put() has a race and gets 'already_exists' from the keydir,
undo the write before retrying.  Otherwise, if this key is deleted
sometime later and a partial merge happens after that, we might see
this value reappear after the merge is done.

* Add file_truncate() to bitcask_file.erl.  TODO: do the same for the
NIF style I/O.

* Improve robustness (I hope) of the EUnit tests in
bitcask_merge_delete.erl; this should eliminate a nasty source of
nondeterminism during PULSE testing.
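
A minimal sketch of the "undo the last write" idea from the list above
(the struct and function names are illustrative, not the real #filestate
record or bitcask_fileops code):

```c
#include <stdio.h>
#include <unistd.h>

/* Remember the data- and hint-file sizes before appending, and truncate
 * both files back to those sizes if the conditional keydir update
 * reports 'already_exists'. */
typedef struct {
    FILE *data_fd;
    FILE *hint_fd;
    long  data_size_before_write;
    long  hint_size_before_write;
} writer_state_t;

static void remember_sizes(writer_state_t *ws)
{
    ws->data_size_before_write = ftell(ws->data_fd);
    ws->hint_size_before_write = ftell(ws->hint_fd);
}

static void undo_last_write(writer_state_t *ws)
{
    /* roll both files back so the losing write leaves no trace on disk
     * (the PR adds file_truncate() to bitcask_file.erl for this purpose) */
    fflush(ws->data_fd);
    fflush(ws->hint_fd);
    ftruncate(fileno(ws->data_fd), ws->data_size_before_write);
    ftruncate(fileno(ws->hint_fd), ws->hint_size_before_write);
}
```
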
…th delete

Scenario, with 3 writers, 2 & 3 are racing:

* Writer 1: Put K, write @ {file 1, offset 63}
* Writer 2: Delete operation starts ... but there is no write to disk yet
* Writer 3: Merge scans file 1, sees K @ {1,30} -> not out of date ->
  but there is no write to disk yet
* Writer 2: writes a tombstone @ {3,48}
* Writer 2: Keydir conditional delete @ old location @ {1,63} is ok
* Writer 2: keydir delete returns from NIF-land
* Writer 3: merge copies data from {1, 63} -> {4, 42}
* Writer 3: keydir put {4, 42} conditional on {1,63} succeeds due to
  incorrect conditional validation: the record is gone, but the bug
  permits the put to return 'ok'.
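
A sketch of the conditional-validation fix that the scenario above calls
for (hypothetical types and names; the real check lives in the keydir NIF):

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t file_id;
    uint64_t offset;
} keydir_entry_t;

typedef enum { PUT_OK, PUT_ALREADY_EXISTS } put_result_t;

/* A merge's put is conditional on the entry still being at
 * (old_file_id, old_offset).  If the entry is gone (deleted by a racing
 * bitcask:delete), the condition must FAIL rather than fall through to ok. */
static put_result_t conditional_put(keydir_entry_t *existing, /* NULL if deleted */
                                    uint32_t old_file_id, uint64_t old_offset,
                                    keydir_entry_t new_entry)
{
    if (existing == NULL)
        return PUT_ALREADY_EXISTS;  /* record vanished: refuse the stale merge value */
    if (existing->file_id != old_file_id || existing->offset != old_offset)
        return PUT_ALREADY_EXISTS;  /* someone wrote a newer value first */
    *existing = new_entry;          /* condition holds: accept the merged location */
    return PUT_OK;
}
```
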
When a Bitcask is opened and scan_key_files() is reading data
from disk and loading the RAM keydir, we now detect if the key
is a tombstone and, if so, do not store it in the keydir.

Normally, only hint files are scanned during startup.  However,
hint files have not stored enough information to confirm whether
a key is a tombstone.  I have added such a flag in a
backward-compatible way: the offset field has been reduced from
64 to 63 bits, and the uppermost bit (which is assumed to be
0 in all cases -- we assume nobody has actually written a file
big enough to require 64 bits to describe an offset) is used
to signal tombstone status.
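
A sketch of that encoding (the helper names are illustrative; the actual
reading/writing is done in the Erlang hint-file code and the NIF):

```c
#include <stdint.h>

/* The hint-file offset keeps 63 usable bits; the top bit marks a tombstone. */
#define TOMBSTONE_FLAG_BIT   ((uint64_t)1 << 63)
#define OFFSET_MASK          (~TOMBSTONE_FLAG_BIT)

static uint64_t encode_hint_offset(uint64_t offset, int is_tombstone)
{
    /* assumes no real file ever needs the full 64 bits of offset */
    return (offset & OFFSET_MASK) | (is_tombstone ? TOMBSTONE_FLAG_BIT : 0);
}

static uint64_t decode_hint_offset(uint64_t packed) { return packed & OFFSET_MASK; }
static int      hint_is_tombstone(uint64_t packed)  { return (packed & TOMBSTONE_FLAG_BIT) != 0; }
```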

An optional argument was given to the increment_file_id() NIF
to communicate to the NIF that a data file exists ... a fact
that would otherwise be lost if a hint/data file contains
only tombstones.

For testing purposes, fold() and fold_keys() are extended with
another argument to expose the presence of keydir tombstones.

Adjust timeouts and concurrency limits in bitcask_pulse.erl
to avoid the worst of false-positive errors when using the
PULSE model: {badrpc,timeout} nonsense.
     // If put would resize and iterating, start pending hash
-    if (kh_put_will_resize(entries, keydir->entries) &&
-        keydir->keyfolders != 0 &&
+    if (keydir->keyfolders != 0 &&
Contributor:

This seems to kill the new multiple folds improvement. Is it intentional?

Contributor:

Yeah, we can't do this, given how reliant we now are on AAE.

Contributor Author:

I don't understand, sorry -- based on my sketchy knowledge of khash's internals, on the multi-fold work, and on the fact that if you restore that kh_put_will_resize() check then the PULSE test will start spewing failures very quickly because the keydir isn't properly frozen.

If the put is going to resize the hash, I certainly agree that a freeze is required in order to keep the folders' sorting order stable. However, this function is doing a mutation, and with a mutation of any kind (if there are any keyfolders) we must freeze. My brain can't think of a reason why you'd have keyfolders != 0 and be doing a mutation where you would not want to freeze ... but then again, it's -18F here in Minneapolis today, my brain may be slushy.

Contributor:

Most of the work for multi-folds was aimed at making this possible: delaying the inevitable freeze until a put changed the number of khash slots. Iterators are linked to a timestamp and tolerate finding entries added after the iteration was started. Entries can now be simple entries or linked lists, which can contain multiple versions of an entry. Puts then do not replace the entry, but add another version to this list (or convert a plain entry to a list). Iterators will choose from this list of timestamped entries the one from the snapshot they belong to. When the pending hash is merged (freeze is over!), all entries are merged back to good ol' plain entries. It's a lot of fun, you should try it.
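
A rough sketch of that scheme -- pick, from a key's sibling list, the
newest version visible to an iterator's snapshot (illustrative structs,
not the actual khash/keydir code):

```c
#include <stdint.h>
#include <stddef.h>

/* Each key may hold a newest-to-oldest list of timestamped versions
 * ("siblings"); an iterator picks the newest version that is not newer
 * than its snapshot. */
typedef struct version {
    uint32_t tstamp;         /* when this version was written */
    uint32_t file_id;
    uint64_t offset;
    struct version *next;    /* newest-to-oldest */
} version_t;

static const version_t *version_for_snapshot(const version_t *head,
                                             uint32_t snapshot_tstamp)
{
    for (const version_t *v = head; v != NULL; v = v->next)
        if (v->tstamp <= snapshot_tstamp)
            return v;        /* first (newest) version visible to this snapshot */
    return NULL;             /* key did not exist when the snapshot was taken */
}
```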

Contributor Author:

> It's a lot of fun, you should try it.

Ha, then how about a change to the PULSE model to deal with freezes that aren't really freezes? Or change all of the other logic to compare timestamps in the same way so that real frozenness returns?

Is the goal of delaying the freeze simply a RAM-consumption optimization?

Contributor:

I believe it is there to make sure the number of concurrent keyfolders is not limited by when the next write is coming, which is usually "there, it's already here". In fact, memory consumption could be higher with this, as you can end up with many versions of values. It depends on the initial size of the khash table and the typical growth during a freeze, I suppose. I have not looked at the implications for the PULSE model, but it does sound like we'll need to work on that soon to merge this work before 2.0.

Contributor Author:

I apparently do not understand the goals of the multiple folds work. I'll try to review the original PR & comments today, because I'm stuck in a pit of befuddlement.

Contributor Author:

Alright ... I put the will_resize check back in the code. The bitcask_pulse test finds this problem with a single folder (the fork_merge step in the counterexample):

[[{set,{var,11},{call,bitcask_pulse,bc_open,[true]}},
  {set,{var,13},
       {call,bitcask_pulse,puts,[{var,11},{1,16},<<0>>]}},
  {set,{var,18},{call,bitcask_pulse,bc_close,[{var,11}]}},
  {set,{var,23},{call,bitcask_pulse,incr_clock,[]}},
  {set,{var,24},{call,bitcask_pulse,bc_open,[true]}},
  {set,{var,26},
       {call,bitcask_pulse,puts,[{var,24},{4,13},<<0>>]}},
  {set,{var,31},{call,bitcask_pulse,fork_merge,[{var,24}]}},
  {set,{var,34},{call,bitcask_pulse,fold,[{var,24}]}},
  {set,{var,37},{call,bitcask_pulse,fold,[{var,24}]}}],
 {19747,13791,98974},
 [{events,[]}]]

The problem is that the fold in step #37 sees key #14 twice.

Contributor Author:

So, I've got a different counterexample that skips keys: a fold doesn't see keys that have been put and never deleted.

7> C9c.
[[{set,{var,32},{call,bitcask_pulse,bc_open,[true]}},
  {set,{var,46},
       {call,bitcask_pulse,puts,[{var,32},{1,1},<<0,0,0>>]}},
  {set,{var,48},{call,bitcask_pulse,bc_close,[{var,32}]}},
  {set,{var,64},{call,bitcask_pulse,bc_open,[true]}},
  {set,{var,65},
       {call,bitcask_pulse,puts,[{var,64},{1,22},<<0>>]}},
  {set,{var,66},{call,bitcask_pulse,fork_merge,[{var,64}]}},
  {set,{var,81},{call,bitcask_pulse,incr_clock,[]}},
  {set,{var,82},{call,bitcask_pulse,bc_close,[{var,64}]}},
  {set,{var,85},{call,bitcask_pulse,bc_open,[true]}},
  {set,{var,87},{call,bitcask_pulse,fold,[{var,85}]}}],
 {86273,69841,50357},
 [{events,[]}]]

Here is an annotated timeline of the race:

folding proc:                      merging proc:
-------------------------------    ------------------------------------------

%% Fold starts.
{list_data_files,[1,2,3]}
{subfold,processing_file_number,1}

                                  %% Merge starts
                                  {merge_files,input_list,[2]}
                                  {inner_merge_write,fresh,new_file_is,4}
                                  {merge_single_entry,<<"kk02">>,
                                   old_location,file,2,offset,19}
                                  {merge_single_entry,<<"kk02">>,
                                   not_out_of_date}
                                  {inner_merge_write,<<"kk02">>,before_write}

{subfold,processing_file_number,2}

                                  {inner_merge_write,<<"kk02">>,
                                   new_location,file,4,offset,19,
                                   old_location,file,2,offset,19}
                                  {inner_merge_write,<<"kk02">>,keydir_put,ok}

%% The fold reaches the place in
%% file #2 where key <<"kk02">> is stored.
{fold,<<"kk02">>,keydir_get,not_exact,
 file_number,2,offset,19,
 keydir_location_is_now,file,4,offset,19
 fold_does_not_see_this_entry}

{subfold,processing_file_number,3}

%% key <<"kk02">> is not found in file #3, nor is it found in
%% any other file that the fold can process (in this case, file 3 is
%% the last file in the fold's list of files).

If the keydir were frozen -- that is, if the keydir update by the merge caused the keydir to freeze -- then when the folder reaches <<"kk02">>'s entry in file # 2, the keydir's frozenness would allow the folder to see that <<"kk02">>'s keydir entry still points at that same place in file # 2.

But I'm not seeing an easy way to fix this merge race without consuming more memory ... I need to sleep on this.

Contributor Author:

Hrm, well, my comment about a fix without using more RAM may or may not survive.

For the 2nd problem mentioned above (the one with the annotated timeline): I am not understanding why the mutation made by the "merging proc" isn't visible to the "folding proc" at the place where the fold reaches that key in file # 2.

@engelsanchez (Contributor):

@slfritchie A PR so long in the making, involving so many commits and so many files touched should really have a good high level summary of the changes and a rationale. See any of the LevelDB changes by MvM in the last year. Could you cook up something similar to help with the review?

@slfritchie (Contributor Author):

> [...] would be great to break this up into several different PRs (or at least rebase, logically grouping the commits into groups of unrelated changes).

I'd just squish it all down to a single commit. There's just one goal: eliminate tombstones from the keydir.

In order for the multi-folder work to operate correctly, it
needs to be able to keep track of exactly when a fold started
relative to any mutations that happen during the same 1-second
time interval.  If a merge process cannot tell exactly if a
mutation happened before or after its fold started, then merge
may do the wrong thing: operate on a key zero times, or
operate on a key multiple times.

An epoch counter now subdivides Bitcask timestamps.  The epoch
counter is incremented whenever an iterator is formed.  A new NIF
was added, keydir_fold_is_starting(), to inform a fold what the
current epoch is.  The fold starting timestamp + epoch are used for
all get operations that the folder performs.
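
A sketch of the (timestamp, epoch) ordering this implies (names are
illustrative, not the actual NIF symbols):

```c
#include <stdint.h>

typedef struct {
    uint32_t tstamp;   /* 1-second Bitcask timestamp */
    uint8_t  epoch;    /* subdivides mutations within the same second */
} snap_time_t;

/* A mutation stamped (tstamp, epoch) is treated as visible to a fold whose
 * snapshot is (fold_tstamp, fold_epoch) if it happened no later than the
 * snapshot.  (Whether the epoch comparison is < or <= at the boundary
 * depends on how the real NIF stamps mutations; this only shows the
 * ordering idea.) */
static int visible_to_fold(snap_time_t mut, snap_time_t snap)
{
    if (mut.tstamp != snap.tstamp)
        return mut.tstamp < snap.tstamp;
    return mut.epoch <= snap.epoch;
}
```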

If the keydir contains only a single entry for a key, there's
no need to store the epoch with that key.  The epoch is stored
in the struct bitcask_keydir_entry_sib, when there are multiple
entries per key.

Things are very tricky (alas) when keeping entries in the
siblings 'next' linked list in newest-to-oldest timewise order.
A merge can do something "newer" wall-clock-wise with a mutation
that is "older" by that same wall-clock view.  The 'tstamp'
stored in the keydir is the wall-clock-time when the entry
was written for any reason, including a merge.  However, we
do NOT want a merge to change a key's expiration time, thus a
merge may not change a key's tstamp -- the solution is to have
the keydir also store the key's 'orig_tstamp' to keep a copy
of the key's specified-by-the-client timestamp for expiration
purposes.
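
A sketch of the two-timestamp idea (illustrative field names): 'tstamp'
moves with every write, including merges, while 'orig_tstamp' is the
client-supplied time that drives expiration:

```c
#include <stdint.h>

typedef struct {
    uint32_t tstamp;        /* last written (put OR merge) */
    uint32_t orig_tstamp;   /* original client put time, drives expiration */
} entry_times_t;

static int entry_expired(entry_times_t t, uint32_t now, uint32_t max_age_secs)
{
    /* expiry is computed from orig_tstamp so a merge never pushes a key
     * past its original expiration time */
    return (uint32_t)(now - t.orig_tstamp) > max_age_secs;
}
```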

To avoid mis-behavior of merging when the OS system clock
moves backward across a 1-second boundary, there is new checking
for mutations where Now < keydir->biggest_timestamp.  Operations
are retried when this condition is detected.
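
A sketch of that guard (the retry policy shown is illustrative):

```c
#include <stdint.h>
#include <time.h>

/* Before applying a mutation, compare the current wall-clock second against
 * the largest timestamp the keydir has ever seen ('biggest_timestamp' in the
 * text) and ask the caller to retry if the clock has stepped backwards. */
typedef enum { CLOCK_OK, CLOCK_WENT_BACKWARDS_RETRY } clock_check_t;

static clock_check_t check_clock(uint32_t biggest_timestamp)
{
    uint32_t now = (uint32_t)time(NULL);
    if (now < biggest_timestamp)
        return CLOCK_WENT_BACKWARDS_RETRY;   /* caller retries the mutation */
    return CLOCK_OK;
}
```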

Try to avoid false positives in the
bitcask_pulse:check_no_tombstones() predicate by calling it only
when the cask is opened in read-write mode.

Remove the fold_visits_unfrozen_test_() and replace with a
corrected fold_snapshotX_test_()
@slfritchie (Contributor Author):

Hi, reviewers. I've got a branch that's based on this one that contains the hackery required to add an epoch counter to distinguish writes within Bitcask's 1-second timestamp granularity. Would you prefer to see that work tacked onto this branch and reviewed together, or as a PR to merge into this one, or ... something else?

https://github.com/basho/bitcask/compare/slf-tombstone-management...slf-tombstone-management%2Bsub-second-epochs?expand=1

The amortization mechanism attempts to limit itself to less than
1 msec of additional latency for any get/put/delete call, but as
shown below, it doesn't always stay strictly under that limit when
you're freeing hundreds of megabytes of data.
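
A sketch of how such a time-budgeted sweep might look (the sweep step and
timing source here are stand-ins, not the actual NIF code):

```c
#include <stdint.h>
#include <time.h>

#define SWEEP_BUDGET_NSEC  (1000 * 1000)   /* ~1 msec per get/put/delete */

static uint64_t now_nsec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Stand-in for collapsing one key's sibling list back to a single entry;
 * returns 0 when there is nothing left to sweep. */
extern int sweep_one_entry(void);

static void amortized_sweep(void)
{
    uint64_t start = now_nsec();
    while (sweep_one_entry()) {
        if (now_nsec() - start >= SWEEP_BUDGET_NSEC)
            break;                 /* stay (roughly) under the latency budget */
    }
}
```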

Below are histograms showing the NIF function latency as recorded
on OS X on my MBP for a couple of different mutation rates: 90%
and 9%.  The workload is:

* Put 10M keys
* Create an iterator
* Modify some percentage of the keys (e.g. 90%, 9%)
* Close the iterator
* Fetch all 10M keys, measuring the NIF latency time of each call.

---- snip ---- snip ---- snip ---- snip ---- snip ----

By my back-of-the-envelope calculations (1 word = 8 bytes)

      bitcask_keydir_entry = 3 words + key bytes
    vs.
      bitcask_keydir_entry_head = 2 words + key bytes
      plus
      bitcask_keydir_entry_sib = 5 words

So each 2-sibling entry is using 2 + (2*5) = 12 words, not counting
the key bytes.
After the sibling sweep, we're going from 12 words -> 3 words per key.

So for 10M keys, that's a savings of 9 words per key -> ~687 MBytes.

(RSS for the 10M keys @ 90% mutation tests peaks at 1.45GB of RAM.  By
 comparison, the 10M key @ 0% mutation test peaks at ~850MByte RSS.
 So, those numbers roughly match, yay.)

No wonder a very small number of
bitcask_nifs_keydir_get_int() calls take > 10 msec to
finish: it may be the OS getting involved with side effects from
free(3) calls??

** 90% mutation

10M keys, ~90% mutated, 0% deleted
via:
    bitcask_nifs:yoo(10*1000000, 9*1000*1000, 0).

*** Tracing on for sequence 1...

    bitcask_nifs_keydir_get_int latency with off-cpu (usec)
             value  ------------- Distribution ------------- count
                 8 |                                         0
                16 |@@@@@@@@@@@@@@@                          40
                32 |@@@@@@                                   17
                64 |@@@                                      8
               128 |@@                                       5
               256 |@@                                       5
               512 |@@@@@@@@@@                               27
              1024 |@@                                       5
              2048 |                                         0

    bitcask_nifs_keydir_get_int latency (usec)
             value  ------------- Distribution ------------- count
                 0 |                                         0
                 1 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@         7954469
                 2 |@@@@@@@@                                 2051656
                 4 |                                         10440
                 8 |                                         25446
                16 |                                         209
                32 |                                         5
                64 |                                         0
               128 |                                         9
               256 |                                         0
               512 |                                         11698
              1024 |                                         723
              2048 |                                         12
              4096 |                                         0
              8192 |                                         0
             16384 |                                         1
             32768 |                                         1
             65536 |                                         0

*** Tracing on for sequence 4...

    bitcask_nifs_keydir_get_int latency with off-cpu (usec)
             value  ------------- Distribution ------------- count
                 8 |                                         0
                16 |@@@@@@@@@@@@@@@@@@@@@@@@@@@              56
                32 |@@@@@@                                   13
                64 |@@@@                                     8
               128 |@@                                       5
               256 |                                         0

    bitcask_nifs_keydir_get_int latency (usec)
             value  ------------- Distribution ------------- count
                 0 |                                         0
                 1 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@          7754589
                 2 |@@@@@@@@@                                2211364
                 4 |                                         15906
                 8 |                                         25269
                16 |                                         375
                32 |                                         8
                64 |                                         0

** 9% mutation

10M keys, ~9% mutated, 0% deleted
via:
    bitcask_nifs:yoo(10*1000000, 9 * 100*1000, 0).

*** Tracing on for sequence 1...

    bitcask_nifs_keydir_get_int latency with off-cpu (usec)
             value  ------------- Distribution ------------- count
                 8 |                                         0
                16 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@             64
                32 |@@@@@                                    12
                64 |@@@                                      7
               128 |@@                                       4
               256 |@                                        3
               512 |@                                        2
              1024 |                                         0

    bitcask_nifs_keydir_get_int latency (usec)
             value  ------------- Distribution ------------- count
                 0 |                                         0
                 1 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@           7530608
                 2 |@@@@@@@@@@                               2465556
                 4 |                                         12014
                 8 |                                         24721
                16 |                                         234
                32 |                                         6
                64 |                                         0
               128 |                                         0
               256 |                                         1
               512 |                                         1473
              1024 |                                         3
              2048 |                                         0

*** Tracing on for sequence 4...

    bitcask_nifs_keydir_get_int latency with off-cpu (usec)
             value  ------------- Distribution ------------- count
                 4 |                                         0
                 8 |@                                        3
                16 |@@@@@@@@@@@@@@@@@@@@@@@@@                55
                32 |@@@@@@@                                  15
                64 |@@                                       5
               128 |@@@                                      6
               256 |@                                        2
               512 |                                         1
              1024 |                                         0

    bitcask_nifs_keydir_get_int latency (usec)
             value  ------------- Distribution ------------- count
                 0 |                                         0
                 1 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@           7631794
                 2 |@@@@@@@@@                                2345179
                 4 |                                         13769
                 8 |                                         26056
                16 |                                         324
                32 |                                         22
                64 |                                         2
               128 |                                         0
The EUnit test is freeze_close_reopen_test(), which forces an
actual old-style freeze of the keydir and then checks the sanity
of folds while frozen.

The PULSE model change is something I'm not 100% happy with,
but anytime the PULSE model has false positives, it takes a huge
amount of time to determine that it's a false alarm.  So, this
change should eliminate a rare source of false reports ... but I
hope I haven't introduced something that will also hide a real
error.

The problem comes from having a read-only proc folding a cask
and having it frozen, then the read-write proc closes & reopens
the cask and does a fold.  If the keydir has been frozen the
entire time, the PULSE model doesn't know about the freezing and
thus reports an error in fold/fold_keys results.

This model change will discover if there has ever been a fold
in progress while the 1st/read-write pid opens the cask.  If yes,
then fold/fold_keys mismatches are excused.
…ULSE test

Bitcask's fold semantics are difficult enough to try to predict,
but when a keydir is actually frozen, the job is even more difficult.
This NIF is added to reliably inspect if the keydir was frozen during
a PULSE test case, and if we find a fold or fold_keys problem, we
let it pass.

The new NIF is also tested by an existing EUnit test,
keydir_wait_pending_test().
@slfritchie (Contributor Author):

PR for the above-mentioned branch: #134

@engelsanchez (Contributor):

#134 has now been merged into this. Scott's description of those commits:

Just when I thought I was done last week with this branch-of-a-branch, PULSE found another problem.

The 'epoch' subdivision of Bitcask timestamps: this fixes fold bugs where merge mutations can be seen by folding procs.

The sibling->regular entry conversion sweeper helps reclaim memory by converting siblings back to single entries whenever mutations have happened during a fold and then the keydir is quiescent long enough to allow sweeping during get/put/remove operations. If a new fold starts in the middle of the sweep, the sweep stops and will resume when it's able to.

Model corrections to avoid false positive reports. Until we get a temporal logic hacker skilled enough to build a real model, or fix the silly keydir frozen feature, I'm afraid that this is as good as I can do.

@@ -127,6 +155,7 @@ struct bitcask_keydir_entry_sib
     uint32_t total_sz;
     uint64_t offset;
     uint32_t tstamp;
+    uint32_t tstamp_epoch;
Contributor:

(Deja vu) This should be uint8_t instead of uint32_t like in the other places, right?

Contributor Author:

Yup, will fix in a review-fixer-upper commit.

evanmcc added a commit that referenced this pull request Feb 25, 2014
- pull in more aggressive pulse tunings from #128
- remove all test sleeps related to old 1 second timestamps, so that
  things will break if old code is retained.
evanmcc added a commit that referenced this pull request Mar 5, 2014
- pull in more aggressive pulse tunings from #128
- remove all test sleeps related to old 1 second timestamps, so that
  things will break if old code is retained.
evanmcc added a commit that referenced this pull request Mar 7, 2014
- pull in more aggressive pulse tunings from #128
- remove all test sleeps related to old 1 second timestamps, so that
  things will break if old code is retained.
@evanmcc evanmcc closed this Mar 20, 2014
@seancribbs seancribbs deleted the slf-tombstone-management branch April 1, 2015 22:36