
using per-key basis queue #5420

Merged

Conversation

georgesjamous
Contributor

Currently, the RedisCacheAdapter uses a single global promise chain to handle Redis cache operations:

this.p = this.p.then(() => {

This creates a bottleneck and causes several problems under high load, such as timeouts and delays, since every cache operation is chained behind all the others, as discussed in #5401.
I believe it was initially designed that way to maximize data consistency and prevent operations from overlapping.

A potential solution is to chain cache operations on a per-key basis: consistency is still maintained for each key, while operations on different keys can run in parallel, relieving the single bottleneck.

I have used a custom KeyPromiseQueue class rather than the built-in MemoryCache because a TTL must not be used here: expiring the entry for an active operation would mean the next operations are no longer chained behind it (since the previous ones are not tracked anymore), and data consistency, which is our main concern here, would be lost again. Otherwise we would not use any chain at all.
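As an illustrative sketch (the names below are mine, not the adapter's actual implementation), the idea is to replace the single global chain with one chain per cache key, so operations on the same key still serialize while different keys proceed in parallel:

```javascript
// Hypothetical sketch of per-key chaining, not the adapter's real code.
// Each cache key gets its own promise chain: operations on the same key
// run in order, while operations on different keys do not wait on each
// other.
const chains = new Map(); // cache key -> tail of that key's chain

function enqueue(key, operation) {
  const tail = chains.get(key) || Promise.resolve();
  // Swallow a previous failure so one rejected op does not wedge the key.
  const next = tail.catch(() => {}).then(operation);
  chains.set(key, next);
  return next;
}

// Demo: two ops on "a" serialize; the op on "b" is not blocked by them.
const log = [];
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));
enqueue('a', () => delay(50).then(() => log.push('a1')));
enqueue('a', () => delay(5).then(() => log.push('a2')));
enqueue('b', () => delay(5).then(() => log.push('b')));
```

Note that this minimal version never removes entries from the Map; KeyPromiseQueue additionally needs to track how many operations are pending per key so that idle entries can be deleted.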


Any thoughts or concerns on such modification?

@codecov

codecov bot commented Mar 14, 2019

Codecov Report

Merging #5420 into master will increase coverage by 0.02%.
The diff coverage is 98.46%.


@@            Coverage Diff             @@
##           master    #5420      +/-   ##
==========================================
+ Coverage   93.93%   93.96%   +0.02%     
==========================================
  Files         123      124       +1     
  Lines        9005     9047      +42     
==========================================
+ Hits         8459     8501      +42     
  Misses        546      546
Impacted Files Coverage Δ
src/Adapters/Cache/RedisCacheAdapter/index.js 100% <100%> (ø)
...dapters/Cache/RedisCacheAdapter/KeyPromiseQueue.js 95.45% <95.45%> (ø)
src/Adapters/Storage/Mongo/MongoStorageAdapter.js 91.48% <0%> (-0.73%) ⬇️
src/RestWrite.js 93.07% <0%> (-0.19%) ⬇️
src/triggers.js 94.49% <0%> (+0.18%) ⬆️
src/Routers/PublicAPIRouter.js 92.1% <0%> (+0.92%) ⬆️
src/Controllers/UserController.js 94.33% <0%> (+0.94%) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8622e5c...b70fb05.

@saulogt
Contributor

saulogt commented Mar 14, 2019

Nice work!
I saw you dropped the idea of the LRU cache... it produced much more predictable code 👍

I have two humble suggestions for your PR:

  1. Use a single object to keep track of both the count and the promise chain.
  2. Replace the raw object with a Map to keep track of them, for performance reasons (mainly on delete).

Maybe something like this:

  beforeOp(key) {
    let obj = this.queues.get(key);
    if (!obj) {
      obj = [0, Promise.resolve()];
      this.queues.set(key, obj);
    }
    obj[0]++;
    return obj; // enqueue() needs the [count, chain] pair
  }

  afterOp(key, obj) {
    let count = obj[0];
    count--;
    if (count <= 0) {
      // last pending operation for this key: drop the entry entirely
      this.queues.delete(key);
      return;
    }
    obj[0] = count;
  }

  enqueue(key, operation) {
    const obj = this.beforeOp(key);
    const toAwait = obj[1];
    const nextOperation = toAwait.then(operation);
    const wrappedOperation = nextOperation.then(
      result => {
        this.afterOp(key, obj);
        return result;
      },
      error => {
        // release the slot even if the operation fails
        this.afterOp(key, obj);
        throw error;
      }
    );
    obj[1] = wrappedOperation;
    return wrappedOperation;
  }
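Put together, a condensed, runnable version of this idea (again with illustrative names, not the code as merged) might look like:

```javascript
// Condensed sketch of the suggestion: one Map entry per key holding a
// [pendingCount, chainTail] pair, deleted once the last pending
// operation for that key settles, so the Map does not grow unboundedly.
// Names are illustrative, not the merged implementation.
class KeyPromiseQueue {
  constructor() {
    this.queues = new Map(); // key -> [count, tail]
  }

  enqueue(key, operation) {
    let entry = this.queues.get(key);
    if (!entry) {
      entry = [0, Promise.resolve()];
      this.queues.set(key, entry);
    }
    entry[0]++; // one more pending operation on this key
    const run = entry[1].then(operation);
    // The new tail settles only after cleanup; errors are swallowed on
    // the tail alone, so callers still see rejections via `run`.
    entry[1] = run
      .then(() => {}, () => {})
      .then(() => {
        entry[0]--;
        if (entry[0] <= 0) this.queues.delete(key);
      });
    return run;
  }
}

// Demo: operations on the same key serialize, a second key runs
// independently, and entries are removed once their operations finish.
const queue = new KeyPromiseQueue();
const order = [];
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));
queue.enqueue('user:1', () => delay(50).then(() => order.push('first')));
queue.enqueue('user:1', () => { order.push('second'); });
queue.enqueue('user:2', () => { order.push('other'); });
```

After everything settles, order is ['other', 'first', 'second'] (the second user:1 operation waited for the first, while user:2 did not), and the Map is empty again.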

@georgesjamous
Contributor Author

@saulogt I have modified the code to reflect your suggestions. You're right, using one object is cleaner and faster. Let me know what you think.

@saulogt
Contributor

saulogt commented Mar 15, 2019

@georgesjamous It looks great

@georgesjamous
Contributor Author

The Travis check has passed, but for some reason it's not being updated on the PR. Is there a way to restart the tests without pushing a commit?

@acinader I don't use codecov much, can you be a little bit more specific?

@dplewis
Member

dplewis commented Mar 20, 2019

@georgesjamous This PR looks great.

I've restarted the tests for you. If you want to restart them yourself, just close and reopen the PR.

Codecov shows which parts of your code haven't been tested. The goal is to have as much tested as possible to avoid regression.

Use npm run coverage to generate the report locally.

@georgesjamous
Contributor Author

Alright, I have added additional tests and made some minor alterations.

@georgesjamous
Contributor Author

@acinader @dplewis I guess this is ready to be merged.
I am not sure who is currently accepting PRs, so I have tagged you both.

@acinader acinader merged commit 214aa2e into parse-community:master Apr 2, 2019
UnderratedDev pushed a commit to UnderratedDev/parse-server that referenced this pull request Mar 21, 2020
* adding KeyPromiseQueue

* nit

* removing secondary object and using a tuple

* using array

* nits

* some tests

* Minor refinements

* removing old adapter

* dummy change, travis test not found

* travis test missing, dummy change

* revrting mistake

* reverting mistake

* indentation fix

* additional tests for coverage

* extending coverage

* nits

* fixing mistake

* better code