Avoid closing connection on channel EOF #816

Merged: etan-status merged 3 commits into unstable from dev/etan/mp-catchpush on Jan 6, 2023

Conversation

etan-status
Contributor

While closing an individual channel (e.g., due to cancellation), there is a race where we may still receive messages before the channel has been deallocated. Handle that case gracefully to avoid closing down the entire underlying connection (which also holds all other active channels).

try:
await channel.pushData(data)
trace "pushed data to channel", m, channel, len = data.len
except LPStreamClosedError as exc:
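  # Sketch of the handling this PR adds (the except body is approximated
  # here, not quoted from the diff): the channel is already closing locally,
  # e.g. after a cancellation, so log and drop the message instead of letting
  # the exception escape the read loop and close the whole connection.
  trace "pushing data to closed channel", m, channel, len = data.len,
    msg = exc.msg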
Contributor

Good catch, I'm surprised this didn't get noticed earlier.

What was the specific condition that triggered it? It should also be possible to test this scenario explicitly.

Contributor Author

--sync-light-client on the Nimbus beacon node sometimes leads to spurious disconnects. There have been multiple fixes w.r.t. that behaviour; hoping that with this fix we are one step closer to being able to support that feature.

What is unique about --sync-light-client is that light_client_manager cancels useless request copies once it has obtained a satisfying response from one of the peers. This cancellation highlights a couple of those issues. The regular request_manager, on the other hand, always waits for all peers to reply, even when one has already provided the response.

If you know where to extend the test suite: you'd have to send a request, then cancel it (the cancellation pushes an EOF), and at the same time answer it from the other side (so that there is still a MsgIn pending). You then also need to get lucky so that chronos schedules the MsgIn before the channel.join inside the cleanup logic. On Nimbus I currently get a repro about once per ~20 hours by spamming local testnets with --sync-light-client.

Contributor

It should be possible to mock this in a similar manner to how it's done here - https://github.com/status-im/nim-libp2p/blob/unstable/tests/testmplex.nim#L379, though simulating the exact raise might be a challenge.

There are a couple of ways this can be potentially tested/reproduced:

  • Use a BufferStream instead of a Transport for more fine-grained control
  • Use chronos stepsAsync, which allows more fine-grained control of the event loop - I think this was added precisely to test cancellations and related issues
  • Use both - BufferStream with stepsAsync

Hope this helps!
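To make the interleaving concrete, here is a minimal, library-agnostic chronos sketch of the ordering such a test needs to force (purely illustrative: ToyChannel, deliverAnswer and demo are made-up stand-ins; a real test would use a BufferStream-backed channel and stepsAsync instead):

import chronos

type ToyChannel = ref object
  closing: bool      # set when the local request gets cancelled
  cleanedUp: bool    # set by the cleanup logic (channel.join in mplex)

proc deliverAnswer(chan: ToyChannel) {.async.} =
  # "MsgIn": a remote answer that the read loop has already scheduled.
  await sleepAsync(0.milliseconds)   # one scheduler hop
  echo "MsgIn delivered: closing=", chan.closing, " cleanedUp=", chan.cleanedUp
  # Before this PR, data arriving here for a closed-but-not-yet-cleaned-up
  # channel could take down the whole connection.

proc demo() {.async.} =
  let chan = ToyChannel()
  let request = sleepAsync(1.hours)   # the request that will be cancelled
  let answer = deliverAnswer(chan)    # the answer is already in flight

  chan.closing = true                 # channel enters its closing phase...
  await request.cancelAndWait()       # ...triggered by cancelling the request

  await answer                        # the MsgIn fires in this window
  chan.cleanedUp = true               # cleanup only completes afterwards

waitFor demo()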

Contributor

Regular request_manager on the other hand always waits for all peers to reply, even when one already provided the response.

unrelated, but @etan-status this sounds like something to fix, no?

Contributor Author

Yeah, the request manager overall needs fixing, primarily in the way it is fed.

There is a loop that feeds a queue whenever a parent block is missing, but it does so at its own pace, irrespective of request manager progression. So it can happen that multiple requests for the same block are pushed into the request manager (while it is stuck processing an existing request). Once the request manager gets to the next request, it then requests all those queued-up copies of the same block multiple times in a single request, which can then time out / fail once more; but as all the retries by the feeder are already exhausted, it ultimately fails to obtain the block (until it is re-enqueued due to a MissingParent error, or the node is so far out of sync that the sync_manager kicks in once more).

The issue in this PR is related to cancellation randomly breaking the underlying connection. It looks much more stable now, so the request manager could now also be adjusted to use cancellation. The thing feeding the request manager seems to be the bigger culprit though (but both issues need to be fixed eventually).

Contributor

I wouldn't rely too heavily on cancellation yet; besides these small bugs in libp2p and elsewhere, we still have fundamental issues like status-im/nim-chronos#280

Contributor Author

Ah, good to know! Yeah, I also noticed quite a few asyncSpawn calls in the libp2p code base for which cancellation (and also stopping behaviour, if that even exists) is rather difficult to analyze.

I guess fixing the request_manager feeder regarding retries could already improve gap fill performance quite a bit though. And for the cancellations, once --sync-light-client is enabled in local CI testnets once more, it at least allows continuous progressive testing of cancellation behaviour.

Contributor Author
etan-status, Jan 6, 2023

Regarding cancellation, there is also status-im/nim-chronos#334

That pattern is used in eth2_network in nimbus-eth2 and leads to a weird ordering of finally / except blocks, e.g.:

proc makeEth2Request(peer: Peer, protocolId: string, requestBytes: Bytes,
                     ResponseMsg: type,
                     timeout: Duration): Future[NetRes[ResponseMsg]]
                    {.async.} =
  let deadline = sleepAsync timeout
  let stream = awaitWithTimeout(peer.network.openStream(peer, protocolId),
                                deadline): return neterr StreamOpenTimeout
  try:
    # Send the request
    # Some clients don't want a length sent for empty requests
    # So don't send anything on empty requests
    if requestBytes.len > 0:
      await stream.writeChunk(none ResponseCode, requestBytes)
    # Half-close the stream to mark the end of the request - if this is not
    # done, the other peer might never send us the response.
    await stream.close()

    nbc_reqresp_messages_sent.inc(1, [shortProtocolId(protocolId)])

    # Read the response
    return await readResponse(stream, peer, ResponseMsg, timeout)
  finally:
    await stream.closeWithEOF()

The readResponse then does

    let nextFut = conn.readResponseChunk(peer, MsgType)
    if not await nextFut.withTimeout(timeout):
      return neterr(ReadResponseTimeout)
    return nextFut.read()

When the nextFut.withTimeout is cancelled, then due to status-im/nim-chronos#334 the await stream.closeWithEOF() may be triggered while nextFut is still in the process of getting cancelled.

And the closeWithEOF has some comments suggesting that, if there is still an ongoing read, it may lead to nasty bugs:

proc closeWithEOF*(s: LPStream): Future[void] {.async, public.} =
  ## Close the stream and wait for EOF - use this with half-closed streams where
  ## an EOF is expected to arrive from the other end.
  ##
  ## Note - this should only be used when there has been an in-protocol
  ## notification that no more data will arrive and that the only thing left
  ## for the other end to do is to close the stream gracefully.
  ##
  ## In particular, it must not be used when there is another concurrent read
  ## ongoing (which may be the case during cancellations)!
  ##

But anyhow, the way it is currently used in practice seems alright for now.
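For illustration, one hedged way to narrow that window would be to let the timed-out read settle its cancellation before the enclosing finally runs (a sketch only, using chronos's cancelAndWait; this is not what the code above currently does):

    let nextFut = conn.readResponseChunk(peer, MsgType)
    if not await nextFut.withTimeout(timeout):
      # Sketch: wait for the cancellation triggered by the timeout to fully
      # complete, so the `finally: await stream.closeWithEOF()` above never
      # overlaps with a still-cancelling read.
      await nextFut.cancelAndWait()
      return neterr(ReadResponseTimeout)
    return nextFut.read()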

@codecov

codecov bot commented Dec 7, 2022

Codecov Report

Merging #816 (e10fbf5) into unstable (64cbbe1) will increase coverage by 0.15%.
The diff coverage is 57.14%.


@@             Coverage Diff              @@
##           unstable     #816      +/-   ##
============================================
+ Coverage     83.57%   83.73%   +0.15%     
============================================
  Files            81       82       +1     
  Lines         14696    14914     +218     
============================================
+ Hits          12282    12488     +206     
- Misses         2414     2426      +12     
Impacted Files Coverage Δ
libp2p/muxers/mplex/lpchannel.nim 83.13% <33.33%> (-0.60%) ⬇️
libp2p/muxers/mplex/mplex.nim 88.52% <75.00%> (-0.56%) ⬇️
libp2p/protocols/pubsub/gossipsub.nim 84.51% <0.00%> (-1.36%) ⬇️
libp2p/dial.nim 50.81% <0.00%> (-0.85%) ⬇️
libp2p/protocols/pubsub/gossipsub/behavior.nim 87.84% <0.00%> (-0.71%) ⬇️
libp2p/protobuf/minprotobuf.nim 82.13% <0.00%> (-0.18%) ⬇️
libp2p/discovery/discoverymngr.nim 97.56% <0.00%> (-0.13%) ⬇️
libp2p/muxers/yamux/yamux.nim 89.45% <0.00%> (-0.13%) ⬇️
libp2p/protocols/connectivity/relay/client.nim 75.00% <0.00%> (-0.12%) ⬇️
libp2p/connmanager.nim 91.13% <0.00%> (-0.10%) ⬇️
... and 12 more

Menduist previously approved these changes Dec 13, 2022
@etan-status
Contributor Author

Currently running a perpetual test on nimbus-eth2 to confirm that this fix solves the reliability issues with the --sync-light-client flag caused by random disconnects:

https://github.com/status-im/nimbus-eth2/commits/dev/etan/z
The job restarts itself forever, until it eventually crashes with a recursive job-start stack overflow in Jenkins (after a couple of days).

@etan-status
Contributor Author

There is another instance of potentially overly aggressive connection closing happening here:
https://github.com/status-im/nim-libp2p/blob/5e3323d43f540d303a52877b6e8492fa7100cf85/libp2p/muxers/mplex/lpchannel.nim#L244-L255

Notably, an LPStreamResetError takes the generic CatchableError path, triggering a full connection closure instead of just resetting the individual (cancelled) stream.

Like LPStreamClosedError, LPStreamResetError is also a descendant type of LPStreamEOFError. In total, there are four distinct LPStreamEOFError descendants:

  • LPStreamResetError* = object of LPStreamEOFError
  • LPStreamClosedError* = object of LPStreamEOFError
  • LPStreamRemoteClosedError* = object of LPStreamEOFError
  • LPStreamConnDownError* = object of LPStreamEOFError

In yamux, all LPStreamEOFError descendants are handled the same way, and the overall problem of spurious connection closure in the face of stream cancellations has not yet appeared when using yamux, suggesting that the mplex logic may be incomplete.

  • Is it alright to update the mplex > lpchannel.nim logic to also treat LPStreamResetError and LPStreamRemoteClosedError the same as LPStreamClosedError, i.e., by simply re-raising them instead of acting on them, roughly as sketched below?
  • What about LPStreamConnDownError? Should this also just be propagated to the caller, or is the existing CatchableError logic correct for that error kind?
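For concreteness, a rough sketch of the shape such a change could take in the lpchannel write path (placeholder names, not the actual diff; see the linked lines for the real code):

  try:
    await writeToUnderlyingConn()   # placeholder for the existing write call
  except LPStreamEOFError as exc:
    # Covers LPStreamResetError, LPStreamClosedError, LPStreamRemoteClosedError
    # and LPStreamConnDownError alike: re-raise so that only this channel's
    # write fails. (Whether LPStreamConnDownError deserves extra handling is
    # the open question above.)
    raise exc
  except CatchableError as exc:
    # Everything else keeps the existing behaviour of closing down the whole
    # underlying connection.
    trace "exception in lpchannel write handler", s, msg = exc.msg
    await s.conn.close()
    raise exc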

@Menduist
Contributor

Menduist commented Jan 3, 2023

I may be mistaken, but I don't think these errors can appear here.
The 4 errors you talk about should only happen on multiplexed streams (with yamux / mplex).
But here, we are writing to the underlying connection (which is a SecureConnection or whatever), so we should only get EOFs.

I don't see any reason not to switch to the more generic error, though

@etan-status
Contributor Author

https://ci.status.im/blue/organizations/jenkins/nimbus-eth2%2Fplatforms%2Flinux%2Fx86_64/detail/dev%2Fetan%2Fz/112/artifacts

This run shows an example of LPStreamResetError occurring:

{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"pushing data to channel","topics":"libp2p mplex","m":"16U*3RAA2m:639f0381a2218e3aed875017","channel":"16U*3RAA2m:639f0451a2218e3aed875021:639f0451a2218e3aed875040","len":35,"msgType":2,"id":5,"initiator":false,"size":35}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"Pushing data","topics":"libp2p bufferstream","s":"16U*3RAA2m:639f0451a2218e3aed875021","data":35}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"pushed data to channel","topics":"libp2p mplex","m":"16U*3RAA2m:639f0381a2218e3aed875017","channel":"16U*3RAA2m:639f0451a2218e3aed875021:639f0451a2218e3aed875040","len":35,"msgType":2,"id":5,"initiator":false,"size":35}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"waiting for data","topics":"libp2p mplex","m":"16U*3RAA2m:639f0381a2218e3aed875017"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"readFrame","topics":"libp2p noise","sconn":"16U*3RAA2m:639f0381a2218e3aed875016","size":18}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"decryptWithAd","topics":"libp2p noise","tagIn":"8f73ec206fcb...6ea7377335c6","tagOut":"8f73ec206fcb...6ea7377335c6","nonce":4785}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"read header varint","topics":"libp2p mplexcoder","varint":44,"conn":"16U*3RAA2m:639f0381a2218e3aed875017"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"read data","topics":"libp2p mplexcoder","dataLen":0,"data":"","conn":"16U*3RAA2m:639f0381a2218e3aed875017"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"read message from connection","topics":"libp2p mplex","m":"16U*3RAA2m:639f0381a2218e3aed875017","data":"","msgType":4,"id":5,"initiator":false,"size":0}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"Processing channel message","topics":"libp2p mplex","m":"16U*3RAA2m:639f0381a2218e3aed875017","channel":"16U*3RAA2m:639f0451a2218e3aed875021:639f0451a2218e3aed875040","data":"","msgType":4,"id":5,"initiator":false,"size":0}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"Pushing EOF","topics":"libp2p bufferstream","s":"16U*3RAA2m:639f0451a2218e3aed875021"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"add leftovers","topics":"libp2p bufferstream","s":"16U*3RAA2m:639f0451a2218e3aed875022","len":66}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"add leftovers","topics":"libp2p bufferstream","s":"16U*3RAA2m:639f0451a2218e3aed875021","len":34}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"readOnce","topics":"libp2p mplexchannel","s":"16U*3RAA2m:639f0451a2218e3aed875022","bytes":1}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"readOnce","topics":"libp2p mplexchannel","s":"16U*3RAA2m:639f0451a2218e3aed875021:639f0451a2218e3aed875040","bytes":1}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"waiting for data","topics":"libp2p mplex","m":"16U*3RAA2m:639f0381a2218e3aed875017"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"readFrame","topics":"libp2p noise","sconn":"16U*3RAA2m:639f0381a2218e3aed875016","size":18}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"decryptWithAd","topics":"libp2p noise","tagIn":"6719e3713ccb...d6b6fd481151","tagOut":"6719e3713ccb...d6b6fd481151","nonce":4786}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"read header varint","topics":"libp2p mplexcoder","varint":46,"conn":"16U*3RAA2m:639f0381a2218e3aed875017"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"read data","topics":"libp2p mplexcoder","dataLen":0,"data":"","conn":"16U*3RAA2m:639f0381a2218e3aed875017"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"read message from connection","topics":"libp2p mplex","m":"16U*3RAA2m:639f0381a2218e3aed875017","data":"","msgType":6,"id":5,"initiator":false,"size":0}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.577+00:00","msg":"Processing channel message","topics":"libp2p mplex","m":"16U*3RAA2m:639f0381a2218e3aed875017","channel":"16U*3RAA2m:639f0451a2218e3aed875021:639f0451a2218e3aed875040","data":"","msgType":6,"id":5,"initiator":false,"size":0}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"Resetting channel","topics":"libp2p mplexchannel","s":"16U*3RAA2m:639f0451a2218e3aed875021:639f0451a2218e3aed875040","len":34}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"sending reset message","topics":"libp2p mplexchannel","s":"16U*3RAA2m:639f0451a2218e3aed875021:639f0451a2218e3aed875040","conn":"16U*3RAA2m:639f0381a2218e3aed875017"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"writing mplex message","topics":"libp2p mplexcoder","conn":"16U*3RAA2m:639f0381a2218e3aed875017","id":5,"msgType":5,"data":0,"encoded":2}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"Closing BufferStream","topics":"libp2p bufferstream","s":"16U*3RAA2m:639f0451a2218e3aed875021","len":34}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"Closed BufferStream","topics":"libp2p bufferstream","s":"16U*3RAA2m:639f0451a2218e3aed875021"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"Closing connection","topics":"libp2p connection","s":"16U*3RAA2m:639f0451a2218e3aed875021"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"Closed connection","topics":"libp2p connection","s":"16U*3RAA2m:639f0451a2218e3aed875021"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"Closing stream","topics":"libp2p lpstream","s":"639f0451a2218e3aed875021","objName":"LPChannel","dir":"In"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"Closed stream","topics":"libp2p lpstream","s":"639f0451a2218e3aed875021","objName":"LPChannel","dir":"In"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"Channel reset","topics":"libp2p mplexchannel","s":"16U*3RAA2m:639f0451a2218e3aed875021:639f0451a2218e3aed875040"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"waiting for data","topics":"libp2p mplex","m":"16U*3RAA2m:639f0381a2218e3aed875017"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"cleaned up channel","topics":"libp2p mplex","m":"16U*3RAA2m:639f0381a2218e3aed875017","chann":"16U*3RAA2m:639f0451a2218e3aed875021:639f0451a2218e3aed875040"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"readOnce","topics":"libp2p mplexchannel","s":"16U*3RAA2m:639f0451a2218e3aed875022","bytes":66}
{"lvl":"DBG","ts":"2022-12-18 12:15:13.578+00:00","msg":"Snappy decompression/read failed","topics":"sync","msg":"Unexpected EOF before snappy header","conn":"16U*3RAA2m:639f0451a2218e3aed875021"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"reading first requested proto","topics":"libp2p multistream","conn":"16U*3RAA2m:639f0451a2218e3aed875022"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"successfully selected ","topics":"libp2p multistream","conn":"16U*3RAA2m:639f0451a2218e3aed875022","proto":"/eth2/beacon_chain/req/light_client_updates_by_range/1/ssz_snappy"}
{"lvl":"DBG","ts":"2022-12-18 12:15:13.578+00:00","msg":"Error processing request","topics":"networking","peer":"16U*3RAA2m","responseCode":1,"errMsg":"Failed to decompress snappy payload"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"exception in lpchannel write handler","topics":"libp2p mplexchannel","s":"16U*3RAA2m:639f0451a2218e3aed875021:639f0451a2218e3aed875040","msg":"Stream Reset!"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"Already closed","topics":"libp2p mplexchannel","s":"16U*3RAA2m:639f0451a2218e3aed875021:639f0451a2218e3aed875040"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.578+00:00","msg":"Closing secure conn","topics":"libp2p secure","s":"16U*3RAA2m:639f0381a2218e3aed875017","dir":1}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.579+00:00","msg":"Shutting down chronos stream","topics":"libp2p chronosstream","address":"127.0.0.1:6001","s":"16U*3RAA2m:639f0381a2218e3aed875016"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.579+00:00","msg":"Shutdown chronos stream","topics":"libp2p chronosstream","address":"127.0.0.1:6001","s":"16U*3RAA2m:639f0381a2218e3aed875016"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.579+00:00","msg":"Closing connection","topics":"libp2p connection","s":"16U*3RAA2m:639f0381a2218e3aed875016"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.579+00:00","msg":"Closed connection","topics":"libp2p connection","s":"16U*3RAA2m:639f0381a2218e3aed875016"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.579+00:00","msg":"Closing stream","topics":"libp2p lpstream","s":"639f0381a2218e3aed875016","objName":"ChronosStream","dir":"Out"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.579+00:00","msg":"Closed stream","topics":"libp2p lpstream","s":"639f0381a2218e3aed875016","objName":"ChronosStream","dir":"Out"}
{"lvl":"TRC","ts":"2022-12-18 12:15:13.579+00:00","msg":"exception in lpchannel write handler","topics":"libp2p mplexchannel","s":"16U*3RAA2m:639f0451a2218e3aed875022","msg":"Stream Underlying Connection Closed!"}

@Menduist
Contributor

Menduist commented Jan 3, 2023

Oh right, they are coming from the prepareWrite, my bad

@etan-status
Contributor Author

OK. Updated to handle all four LPStreamEOFError descendants the same way then.

Is this also alright for LPStreamConnDownError, or should that one still use the previous CatchableError logic?

@Menduist
Contributor

Menduist commented Jan 3, 2023

Ideally the LPStreamConnDownError should reset the stream (that would happen anyway at some point, but better be safe)

@etan-status
Contributor Author

Following up here: I ran about 300 testnets on nimbus-eth2 CI based on the new logic, and they all completed without hiccups. It would be great to get this merged so I can integrate it into nimbus-eth2. Hoping that the spurious disconnects are all gone now :-)

etan-status merged commit ba45119 into unstable on Jan 6, 2023
etan-status deleted the dev/etan/mp-catchpush branch on January 6, 2023 at 14:18
etan-status added a commit to status-im/nimbus-eth2 that referenced this pull request Jan 6, 2023
libp2p issues related to operation cancellations have been addressed in
vacp2p/nim-libp2p#816
This means we can once more enable `--sync-light-client` in CI, without
having to deal with spurious CI failures due to the cancellation issues.