
RFC: use atomic reference count to track external in-memory cache record usage #779

Open
MrCroxx opened this issue Oct 22, 2024 · 4 comments

@MrCroxx
Collaborator

MrCroxx commented Oct 22, 2024

Atomic Reference Count Management

[RawCache] uses an atomic reference count to manage the release of an entry.

The atomic reference count represents the external references of a cache record.

When the reference count drops to zero, the related cache shard is locked to release the cache record.

It is important to guarantee the correct usage of the atomic reference count, especially when triggering
the release of a record. Without any other synchronization mechanism, there would be dangling pointers or double frees:

Thread 1: [ decrease ARC to 0 ] ============> [ release record ]
Thread 2:                         [ increase ARC to 1 ] =======> dangling!! ==> double free!!

Thankfully, we can prevent this from happening with the help of the shard lock:

The only ops that will increase the atomic reference count are:

  1. Insert/get/fetch operations on the [RawCache] that return external entries. (locked)
  2. Clone an external entry. (lock-free)

Op 1 is always guarded by a mutex/rwlock, which means it cannot happen while a record is being released with
the shard locked.

Op 2 is lock-free, but it only happens when there is at least one external reference. So it cannot happen while
a record is being released, because when a release is in progress there must be no external reference.

So, this case will never happen:

Thread 1: [ decrease ARC to 0 ] ================> [ release record ]
Thread 2:                           [ (op2) increase ARC to 1 ] ==> dangling!! ==> double free!!

When starting to release a record, it is still required to check the atomic reference count again after locking the shard,
because this case can still exist:

Thread 1: [ decrease ARC to 0 ] ====================================> [ release record ]
Thread 2:                           [ (op1) increase ARC to 1 ] =======> dangling!! ==> double free!!

Although op 1 requires the lock to be held, the release operation can be delayed until after it. So the release
operation can simply be skipped if the check finds that the atomic reference count is not zero.

There is no need to be afraid of leaking: there will always be another release operation following the increase, when
the atomic reference count drops to zero again.
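
A minimal sketch in Rust of how the clone/drop paths described above could look. All names here (Record, Shard, clone_entry, drop_entry) are hypothetical and simplified for illustration; this is not foyer's actual implementation:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

/// Hypothetical stand-in for a cache record; not foyer's actual type.
struct Record {
    refs: AtomicUsize,
    // ... key, value, hash, etc.
}

/// Hypothetical stand-in for a cache shard guarded by a mutex.
struct Shard {
    // ... indexer, eviction state, etc.
}

impl Shard {
    /// Called with the shard lock held; removes the record from the shard's
    /// data structures and frees it.
    fn release(&mut self, _record: &Record) {
        // ...
    }
}

/// Op 2: cloning an external entry is lock-free. It is only reachable while
/// at least one external reference exists, so it can never race with a release.
fn clone_entry(record: &Record) {
    record.refs.fetch_add(1, Ordering::SeqCst);
}

/// Dropping an external entry: decrement the atomic reference count, and only
/// touch the shard lock when the count hits zero.
fn drop_entry(shard: &Mutex<Shard>, record: &Record) {
    if record.refs.fetch_sub(1, Ordering::SeqCst) == 1 {
        // We observed the count drop to zero, so we are the candidate releaser.
        let mut guard = shard.lock().unwrap();
        // Re-check under the lock: a locked op 1 (insert/get/fetch) may have
        // revived the record in the gap before the guard was acquired. If so,
        // skip the release; the next drop to zero will try again.
        if record.refs.load(Ordering::SeqCst) == 0 {
            guard.release(record);
        }
    }
}
```

The key point is the second check of refs under the shard lock: if a locked op 1 revived the record in the meantime, the release is skipped and deferred to the next drop to zero.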

Related issues & PRs

#778

@MrCroxx MrCroxx added the RFC Request for Comments label Oct 22, 2024
@MrCroxx MrCroxx added this to the v0.13 milestone Oct 22, 2024
@MrCroxx MrCroxx self-assigned this Oct 22, 2024
@hzxa21
Collaborator

hzxa21 commented Oct 23, 2024

The RFC LGTM.

Let me put the background of this RFC here (@MrCroxx correct me if I am wrong):

  • Prior to this RFC, the reference counter of a cache entry was an int under the protection of the shard lock. That means whenever we needed to do ref++ or ref--, the shard lock had to be acquired.
  • After this RFC, the reference counter of a cache entry is an atomic. Reference counting and the shard lock are decoupled: we don't have to acquire the shard lock while ref > 0, and only need to acquire it when ref drops to 0 (a rough sketch of the difference follows).
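
The difference can be illustrated with a minimal, hypothetical sketch (the types and functions below are simplified stand-ins, not foyer's actual API):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

// Before this RFC (sketch): the count lives behind the shard lock, so even a
// plain clone of an entry has to take the mutex.
struct LockedShard {
    refs: usize, // per-record counters, simplified to a single one here
}

fn clone_entry_locked(shard: &Mutex<LockedShard>) {
    shard.lock().unwrap().refs += 1;
}

// After this RFC (sketch): the count is an atomic on the record itself, so
// clone/drop while refs > 0 never touch the shard lock.
struct AtomicRecord {
    refs: AtomicUsize,
}

fn clone_entry_atomic(record: &AtomicRecord) {
    record.refs.fetch_add(1, Ordering::SeqCst);
}
```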

@wenym1
Collaborator

wenym1 commented Oct 23, 2024

A drawback of using atomic add and atomic sub is that we do not know the value of the variable at the moment the operation is actually applied to it. To resolve this drawback, a feasible solution is to use CAS. With CAS, we can know the value before the operation is applied to the atomic variable.

A general implementation can be (see the sketch after this list):

  • Acquirers ensure that they will not increment the value when the value is 0.
  • Releasers decrement the value anyway. The releaser that decrements the value from 1 to 0 is responsible for releasing the object.
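
A minimal sketch of this CAS-based scheme (hypothetical names, not foyer's actual API): the acquirer loops on compare_exchange and refuses to go from 0 to 1, while the releaser that observes the 1 -> 0 transition takes responsibility for the release:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Try to take an external reference. Returns false if the count is already
/// zero, i.e. the record is being (or about to be) released, so the acquirer
/// must fall back to a slow path (e.g. re-insert under the shard lock).
fn try_acquire(refs: &AtomicUsize) -> bool {
    let mut current = refs.load(Ordering::SeqCst);
    loop {
        if current == 0 {
            return false; // never increment from 0
        }
        match refs.compare_exchange_weak(
            current,
            current + 1,
            Ordering::SeqCst,
            Ordering::SeqCst,
        ) {
            Ok(_) => return true,
            Err(actual) => current = actual,
        }
    }
}

/// Release an external reference. The caller that observes the 1 -> 0
/// transition is responsible for actually releasing the object.
fn release(refs: &AtomicUsize) -> bool {
    refs.fetch_sub(1, Ordering::SeqCst) == 1
}
```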

@Li0k
Collaborator

Li0k commented Oct 23, 2024

After discussion, we found that there are several situations that may lead to an entry clone:

  1. Single flight: after the acquisition, each waiter gets a clone of the entry.

  2. Cache get: a clone of the entry is obtained from the foyer cache.

  3. Clone when the entry is used by an upper-level caller.

Therefore, a clone is not always initiated by the caller (3), but can also come from 1 and 2, so I'm in favour of adopting the current RFC to address all the scenarios involved.

Since the frequency of locking has been greatly reduced by this RFC, using either CAS or a lock is acceptable to me.

Thanks for the efforts

@MrCroxx
Collaborator Author

MrCroxx commented Oct 23, 2024

A drawback of using atomic add and atomic sub is that we do not know the value of the variable at the moment the operation is actually applied to it. To resolve this drawback, a feasible solution is to use CAS. With CAS, we can know the value before the operation is applied to the atomic variable.

A general implementation can be:

  • Acquirers ensure that they will not increment the value when the value is 0.
  • Releasers decrement the value anyway. The releaser that decrements the value from 1 to 0 is responsible for releasing the object.

I think this optimization does not solve the problem. Please correct me if I understand it incorrectly.

When the releaser finds itself responsible for releasing the object, it still needs to acquire the mutex lock of the shard. In the gap before the lock guard is actually acquired, there is a chance that another thread calls "get/fetch" on the same key and increases the refs again.
