
server/msgBook: atomic seq updates with respect to order book contents #2

Open · wants to merge 2 commits into master from server-msgBook-consider-atomic-seq-updates
Conversation

@norwnd (Owner) commented Mar 8, 2023

seq updates on the server for msgBook are not atomic with respect to order book contents. This means two clients (or even one client sending two requests in parallel) can see different order book contents for the same seq number: one order book snapshot might already include the effects of the latest book_order/update_remaining/unbook_order update while the other snapshot lacks them, because that update technically belongs to the next seq and leaks in only for a brief window, since there is no atomicity there.

which, in turn, means:

  1. the client has to handle this with extra care to avoid applying the effects of the same update twice (e.g. the client gets a snapshot with the latest update already applied, but can also receive that same update as a notification with seq+1 afterwards); currently dexc resolves such redundant book_order/unbook_order notifications by checking order existence in the OrderBook.orders map, but doesn't seem to do anything that would prevent update_remaining from applying twice (which is another way we can get to app: ignore zero-quantity book_order updates decred/dcrdex#1543); see the sketch after this list
  2. if a client were to issue multiple concurrent subscribe/resubscribe requests, that probably opens up many more ways to get screwed, but we already prevent those requests from running concurrently (in client/core: OrderBook update notifications races #1) for other reasons too; so that's off the table
  3. if somebody takes periodic server order book snapshots and, say, counts how many buy/sell orders there are in the book (for debugging/stats-gathering purposes or whatnot), they can't rely on getting exact results, because of that extra ghost update that might be present in the older of two consecutive snapshots (seq and seq+1); e.g. if the server is running in a VM that gets frozen at just the right instruction for a second and resumes later. That's pretty unlikely to happen in practice, so it's probably off the table too
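
Regarding 1), here's a minimal sketch (hypothetical types and names, not dexc's actual OrderBook) of how a client could deduplicate all three update kinds with a plain seq comparison, once seq is atomic with the book contents:

```go
package book

import "sync"

// OrderBook is a hypothetical client-side book, not dexc's actual type.
type OrderBook struct {
	mtx    sync.Mutex
	seq    uint64            // seq of the snapshot or of the last applied update
	orders map[string]uint64 // order ID -> remaining quantity
}

// applyUpdate applies a book_order/update_remaining/unbook_order notification
// at most once. Once the server bumps seq atomically with book contents (as
// this PR makes it do), an update with seq <= book.seq is guaranteed to be
// already reflected in the snapshot the book was initialized from, so it can
// simply be skipped, including update_remaining, which an order-existence
// check alone does not cover.
func (book *OrderBook) applyUpdate(seq uint64, mutate func(map[string]uint64)) bool {
	book.mtx.Lock()
	defer book.mtx.Unlock()
	if seq <= book.seq {
		return false // redundant: already baked into the snapshot
	}
	// A real client would also resync on a gap (seq > book.seq+1).
	mutate(book.orders)
	book.seq = seq
	return true
}
```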

Is this not an issue per se? I think it is, see 1) above.
Overall, it's a tradeoff that leaves more room for shooting yourself in the foot, for seemingly no good reason.

This PR adds the aforementioned atomicity property.
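
Conceptually, the property looks like this (a sketch with illustrative names, not the literal diff): the seq bump and the book mutation happen inside one critical section, and snapshots read both under the same lock:

```go
package book

import "sync"

// msgBook sketch: seq and contents live under the same lock.
type msgBook struct {
	mtx    sync.RWMutex
	seq    uint64
	orders map[string]uint64 // order ID -> remaining quantity (simplified)
}

// update bumps seq and mutates the book inside one critical section, so no
// reader can ever observe new contents paired with an old seq, or vice versa.
func (book *msgBook) update(mutate func(map[string]uint64)) uint64 {
	book.mtx.Lock()
	defer book.mtx.Unlock()
	book.seq++
	mutate(book.orders)
	return book.seq
}

// snapshot reads seq and contents under the same lock: two snapshots with
// equal seq values are now guaranteed to have identical contents.
func (book *msgBook) snapshot() (uint64, map[string]uint64) {
	book.mtx.RLock()
	defer book.mtx.RUnlock()
	cp := make(map[string]uint64, len(book.orders))
	for id, qty := range book.orders {
		cp[id] = qty
	}
	return book.seq, cp
}
```

With this, two snapshots carrying the same seq can never differ in contents, which is exactly the guarantee the client in 1) relies on.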

@norwnd force-pushed the server-msgBook-consider-atomic-seq-updates branch from 3083cf1 to a6fb6f7 on Mar 8, 2023 06:47
Comment on lines 562 to 565
```diff
-	book.mtx.RLock()
-	defer book.mtx.RUnlock()
-
-	if !book.running {
+	// Don't want to block client requests for too long, so return fast if order book
+	// isn't initialized yet instead of proceeding to wait on mutex below for who
+	// knows how long.
+	if !book.running.Load() {
 		return nil
 	}
```
@norwnd (Owner, Author) commented Mar 8, 2023

If this was the original intent (see the comment I added there) for having the running flag, it didn't work, because a client request would block on book.mtx before learning whether the order book is available or not. Or is there any other purpose for running?

Or maybe it just got out of date. I've moved the running check out from under the book.mtx.RLock()/book.mtx.RUnlock() section for now.
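
For reference, a sketch of the shape this moves toward (illustrative names; atomic.Bool requires Go 1.19+):

```go
package book

import (
	"sync"
	"sync/atomic"
)

type msgBook struct {
	// running is deliberately lock-free: checking a plain bool under
	// book.mtx.RLock() can't "return fast", since acquiring the read lock
	// already blocks behind the write lock held during initialization.
	running atomic.Bool
	mtx     sync.RWMutex
}

// ready reports whether the book is initialized, without touching book.mtx.
func (book *msgBook) ready() bool {
	return book.running.Load()
}
```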

@norwnd (Owner, Author) commented Mar 8, 2023

Just to clarify, this is the expected behavior (it seems to work like this on master now):

  • until msgBook is ready (running=true) we return an error response along the lines of market not running to clients
  • for a suspended market we return an empty book (which isn't exactly the same as returning an error) or a frozen order book, depending on the persist flag value; a sketch of this decision tree follows below
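
A sketch of that decision tree (the suspended/persist fields and names are illustrative, not the actual server code):

```go
package book

import (
	"errors"
	"sync"
	"sync/atomic"
)

type msgBook struct {
	running   atomic.Bool // true once msgBook is ready
	mtx       sync.RWMutex
	suspended bool
	persist   bool     // keep (freeze) the book across suspension?
	orderIDs  []string // simplified contents
}

// orderBookResponse mirrors the behavior described above.
func (book *msgBook) orderBookResponse() ([]string, error) {
	if !book.running.Load() {
		return nil, errors.New("market not running") // error response to clients
	}
	book.mtx.RLock()
	defer book.mtx.RUnlock()
	if book.suspended && !book.persist {
		return []string{}, nil // empty book: a valid response, not an error
	}
	// Running market, or suspended with persist=true: current/frozen book.
	return append([]string(nil), book.orderIDs...), nil
}
```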
