Steps to Reproduce
This is an edge case which shouldn't occur in real life and is somewhat theoretical, although quite relevant for the event reliability we require of the system. #1832 introduced a new, more reliable algorithm for monitoring/filtering new contract events: in normal operation, it ensures the last `range=5` (default) blocks are always scanned in each new-block `getLogs` request for new events (instead of ethers' default heuristic of scanning only the last block). This both gives Infura nodes ~5 blocks of buffer to pick up/sync a new block/event, and gives each block ~5 opportunities to appear in a request.
The above algorithm works fine even if some requests return inconsistent/slightly outdated results (as Infura's may), as long as fewer than 5 blocks are ever skipped by the polling. But if, for some reason (e.g. a temporary loss of connectivity), more than ~5 blocks are skipped, we lose a sizeable portion of blocks in the range:
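The windowed scan described above can be sketched as a pure function (illustrative names only; the actual implementation lives in #1832). Each new block triggers a `getLogs` request over the last `range` blocks rather than only the newest one:

```typescript
// Sketch of the sliding-window scan: on each new block, re-scan the
// last `range` blocks, so each block gets ~`range` chances to appear
// in a getLogs request even if a node is slightly out of sync.
const range = 5; // default number of past blocks re-scanned per poll

// window of blocks passed as [fromBlock, toBlock] to getLogs
function scanWindow(newBlock: number): [number, number] {
  return [newBlock - range, newBlock];
}

console.log(scanWindow(100)); // block 100 arrives → scan [95, 100]
console.log(scanWindow(101)); // block 101 arrives → scan [96, 101]
```

Consecutive windows overlap by `range` blocks, which is what absorbs small inconsistencies between nodes.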
Example: we just received block=100, which caused a `getLogs` over the range [95, 100]; then we lose connectivity and only recover it at block=120; the current algorithm will then scan the range [115, 120], and will have lost any event which happened in the blocks ]100, 115[.
Although losing connection for more than 5 blocks could be considered undefined behavior, we want to avoid losing any range as much as possible and, ideally, ensure every block in the range [95, 120] (in the above example) is given at least `range` opportunities to be scanned, regardless of connection loss.
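One possible approach (a sketch under assumptions, not necessarily the shipped fix) is to remember where the previous scan window started and never let the window's start jump past the blocks skipped during the gap; the window temporarily widens after a connectivity loss and shrinks back as polling catches up:

```typescript
// Gap-aware variant (hypothetical): the window start advances by at
// most one block per poll, so blocks skipped during a connectivity
// gap keep getting re-scanned instead of being silently dropped.
const range = 5;
let prevFrom = -1; // start of the previous scan window, -1 = first poll

function gapAwareWindow(newBlock: number): [number, number] {
  const from =
    prevFrom < 0
      ? newBlock - range // first poll: plain sliding window
      : Math.min(newBlock - range, prevFrom + 1); // don't skip past a gap
  prevFrom = from;
  return [from, newBlock];
}

console.log(gapAwareWindow(100)); // normal operation → [95, 100]
// connectivity lost; next block we hear about is 120:
console.log(gapAwareWindow(120)); // widened → [96, 120], gap ]100, 115[ still covered
console.log(gapAwareWindow(121)); // [97, 121], window gradually shrinks back
```

With this shape, every block in [95, 120] keeps appearing in successive windows until the start pointer passes it, giving each one several scan opportunities even across the gap.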
Of course, if an event arrives at any point, we can still be sure every event before that event's blockNumber was properly fetched, and can continue fetching from the next block onwards, both to avoid duplicate emits and to keep requests small.
Together with the above, exceptions thrown by `getLogs` (e.g. network request errors) would be propagated to the observable, erroring code which doesn't expect it to ever error.
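The desired error behavior can be sketched without any RxJS machinery: transient `getLogs` failures should be retried inside the polling logic instead of reaching subscribers. `fetchLogs` here is a hypothetical stand-in for a `provider.getLogs` call:

```typescript
// Retry transient failures (e.g. network errors) a few times before
// giving up, so subscribers of the events observable never see an
// error for a momentary hiccup.
async function getLogsWithRetry<T>(
  fetchLogs: () => Promise<T[]>, // stand-in for provider.getLogs(filter)
  retries = 3,
): Promise<T[]> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fetchLogs();
    } catch (err) {
      if (attempt >= retries) throw err; // give up only after `retries` tries
      // a real poller would wait/back off here before the next attempt
    }
  }
}
```

In the actual observable pipeline, the equivalent would be a retry/catch stage around the polling step, so only persistent failures (if any) surface to downstream code.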
Expected Result
every block in the range is scanned at least `range` times
Actual Result
some range of blocks inside the connection-loss window is never fetched