
Events based cache could lose events at startup #5151

Closed
faisal-memon opened this issue May 16, 2024 · 3 comments · Fixed by #5289
Labels
priority/backlog Issue is approved and in the backlog

Comments

@faisal-memon Contributor

When the server starts up, there could be a skipped event that hasn't resolved yet, so upon restart we have to get a list of all the events and look for any gaps. This leads to the following issue:

Server 1 starts event 20 but doesn't finish
Server 2 creates event 21
Events < 20 get pruned
Server 3 starts up at event 21
Event 20 completes

In this scenario, event 20 will never get processed by server 3.

  • Version: 1.9.5
  • Platform: all
  • Subsystem: server events based cache
@faisal-memon Contributor Author

The solution I am exploring is to do uncommitted reads for a few minutes after the first event comes in following startup. Consider the below sequence of events:

Server 1 requests next auto inc number, gets 20
Server 2 requests next auto inc number, gets 21
Server 2 commits row
Server 3 reads list of events at start up
Server 1 writes row to transaction buffer

Server 3 would miss event 20. On the next cache refresh it would do an uncommitted read, see event 20, and then wait for it to commit. If server 1 never writes the row, it wouldn't show up in the transaction buffer. The time between when the auto inc number is requested and the row gets written is going to be small, as it's part of the same statement, so we won't need to check for long for a missed event to come in.
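
A minimal sketch of what such an uncommitted read could look like, using Go's database/sql. The table and column names are illustrative, not SPIRE's actual schema, and driver support varies (PostgreSQL, for example, treats READ UNCOMMITTED as READ COMMITTED):

```go
package eventsketch

import (
	"context"
	"database/sql"
)

// readEventIDsUncommitted reads event IDs, including rows written by
// transactions that have not yet committed. The table name is hypothetical.
func readEventIDsUncommitted(ctx context.Context, db *sql.DB) ([]uint, error) {
	tx, err := db.BeginTx(ctx, &sql.TxOptions{
		Isolation: sql.LevelReadUncommitted,
		ReadOnly:  true,
	})
	if err != nil {
		return nil, err
	}
	defer tx.Rollback() // read-only transaction; nothing to commit

	rows, err := tx.QueryContext(ctx, "SELECT id FROM registered_entry_events ORDER BY id ASC")
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var ids []uint
	for rows.Next() {
		var id uint
		if err := rows.Scan(&id); err != nil {
			return nil, err
		}
		ids = append(ids, id)
	}
	return ids, rows.Err()
}
```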

@edwbuck Contributor

edwbuck commented May 22, 2024

Faisal asked me to write up his design.

TLDR: Improve tracking of skipped database events by detecting them directly, instead of inferring them from gaps in the observed event IDs.

The changes in this effort include reading the database with the READ_UNCOMMITTED option set on the read transaction. This presents committed and uncommitted data combined, with an uncommitted row shadowing the older committed row it replaces. A second read is also performed, against only the committed data. By comparing the two, we can determine which EventIDs are not yet committed and add those as tracked elements without potentially skipping any.

The prior solution could skip the first uncommitted elements if they had an EventID that preceded the first committed EventID. That solution relies heavily on the "last seen EventID" as the marker from which the next scan for database events picks up. While the work in #5071 would notice skipped EventIDs and periodically rescan for them each polling loop, it could not detect items skipped before the "last seen EventID" was first set.

By reading uncommitted EventIDs, the "skipped" list of EventIDs is no longer surmised, but read directly from the database. This presents a small issue: the only means of determining which elements are uncommitted is to difference the READ_UNCOMMITTED read against the same read without that option set.

The overall algorithm to determine which elements need periodic polling is roughly:

  1. Read the events with READ_UNCOMMITTED
  2. Read the events without READ_UNCOMMITTED
  3. Difference the two to discover which EventIDs have been issued by the database's AUTOINCREMENT generator for events that are not yet committed.
  4. Add those items to the "uncommitted event id list", which replaces the skipped list.
  5. Act on the uncommitted event id list in the following scenarios:
    A. If an item disappears from the READ_UNCOMMITTED query and doesn't appear in the non-READ_UNCOMMITTED query, the transaction holding it was cancelled. Drop it from the uncommitted event id list.
    B. If an item disappears from the READ_UNCOMMITTED query and appears in the non-READ_UNCOMMITTED query, process it as a new database event, before processing events beyond the last processed event id.

Items that appear in the non-READ_UNCOMMITTED query without ever appearing in the READ_UNCOMMITTED query don't need different processing, as they aren't in a long-lived transaction. The existing algorithm already handles such items properly.
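
For illustration, a rough sketch of the differencing and resolution steps above. All names here are hypothetical, not SPIRE's actual implementation:

```go
package eventsketch

// updateUncommittedList diffs an uncommitted read against a committed read
// to maintain the "uncommitted event id list" (here, the tracked map), and
// returns IDs whose transactions have committed since the last cycle.
func updateUncommittedList(uncommittedRead, committedRead []uint, tracked map[uint]bool) (newEvents []uint) {
	inUncommittedView := make(map[uint]bool, len(uncommittedRead))
	for _, id := range uncommittedRead {
		inUncommittedView[id] = true
	}
	committed := make(map[uint]bool, len(committedRead))
	for _, id := range committedRead {
		committed[id] = true
	}

	// Steps 3-4: IDs visible only under READ_UNCOMMITTED belong to
	// transactions that have not committed yet; track them.
	for id := range inUncommittedView {
		if !committed[id] {
			tracked[id] = true
		}
	}

	// Step 5: resolve previously tracked IDs.
	for id := range tracked {
		switch {
		case committed[id]:
			// 5B: the transaction committed; process as a new event.
			newEvents = append(newEvents, id)
			delete(tracked, id)
		case !inUncommittedView[id]:
			// 5A: gone from both views; the transaction was cancelled.
			delete(tracked, id)
		}
	}
	return newEvents
}
```

The tracked map plays the role of the "uncommitted event id list": IDs enter it when they are visible only under READ_UNCOMMITTED, and leave it when the owning transaction either commits (5B) or is cancelled (5A).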

@edwbuck Contributor

edwbuck commented May 30, 2024

  • We currently have a scanning system for all events after the last seen event id.
  • We currently have a scanning system for all events skipped between the first seen event id and the last seen event id.
  • To close the gap, we need to scan all event ids prior to the first seen event id, but we don't need to keep scanning longer than a transaction can stay uncommitted.

For MySQL that is 8 hours; Postgres (since version 9.6) defaults to 24 hours; SQLite doesn't support a timeout (and is unsupported in a shared database setup). For this reason, the maximum effective scanning time should be 24 hours.

  1. Upon detecting the first database event, that event id will be stored as the "first event".
  2. Upon each "new event" scanning cycle, if that cycle is less than 24 hours after the server start time, a query selecting all database event ids below the "first event" id is executed. The query should be relatively fast, as the event id is the table's indexed primary key, and the response should be small, as it is expected to return zero entries on most scans.
  3. If a response contains database event entries:
    A. The "first event" id is updated.
    B. If gaps between the new "first event" and the prior "first event" are detected, those are added to the skipped id list and processed as skipped events.
    C. Scanning in future cycles uses the new "first event" value.
  4. After 24 hours have passed for the server, the query for events "prior to the first event" can be disabled to conserve database resources, or it can stay active based on developer / maintainer preference (though it could no longer return new records, as any potential "in flight" transaction would have either completed or failed past its timeout). A sketch of this scan follows the list.
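
For illustration, a hypothetical sketch of this startup-window scan. Table/column names and the "?" placeholder style are illustrative; the 24-hour bound comes from the timeout discussion above:

```go
package eventsketch

import (
	"context"
	"database/sql"
	"time"
)

// startupScanWindow matches the 24-hour bound discussed above.
const startupScanWindow = 24 * time.Hour

// scanBeforeFirstEvent queries for event IDs below the "first event" marker
// during the startup window, records any gaps as skipped events, and returns
// the (possibly lowered) marker. All names are hypothetical.
func scanBeforeFirstEvent(ctx context.Context, db *sql.DB, serverStart time.Time, firstEventID uint, skipped map[uint]struct{}) (uint, error) {
	// Step 4: past the window, no in-flight transaction can remain, so
	// the query can be disabled to conserve database resources.
	if time.Since(serverStart) > startupScanWindow {
		return firstEventID, nil
	}

	rows, err := db.QueryContext(ctx,
		"SELECT id FROM registered_entry_events WHERE id < ? ORDER BY id ASC",
		firstEventID)
	if err != nil {
		return firstEventID, err
	}
	defer rows.Close()

	newFirst := firstEventID
	found := map[uint]struct{}{}
	for rows.Next() {
		var id uint
		if err := rows.Scan(&id); err != nil {
			return firstEventID, err
		}
		found[id] = struct{}{}
		if id < newFirst {
			newFirst = id // step 3A: lower the "first event" marker
		}
	}
	if err := rows.Err(); err != nil {
		return firstEventID, err
	}

	// Step 3B: IDs between the new and old markers that were not returned
	// are treated as skipped events and polled for later.
	for id := newFirst + 1; id < firstEventID; id++ {
		if _, ok := found[id]; !ok {
			skipped[id] = struct{}{}
		}
	}

	// Step 3C: future cycles scan below the new marker.
	return newFirst, nil
}
```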
