
Possible deadlock with requesting orderbook request when P2P node got disconnected + excessive data usage #910

Open
Milerius opened this issue Apr 16, 2021 · 8 comments
Labels: bug: orders, bug: P2P, priority: high (Important tasks that need attention soon.)

Comments


Milerius commented Apr 16, 2021

Describe the bug

As discussed with Artem, we may have a deadlock; see the logs:

2021-04-16-14-13-55.log
2021-04-16-14-14-06.mm2.log

Screenshots

(screenshot attached)

  • OS: Linux
  • MM2 Version: AtomicDEX MarketMaker 2.1.3329_mm2.1_dd24de2a5_Linux_Release DT 2021-04-12T16:30:36+07:00

cc @TheComputerGenie

@artemii235 artemii235 added the bug label Apr 19, 2021
@artemii235 (Member)

Thanks for the report, checking now.

@artemii235 artemii235 self-assigned this Apr 19, 2021
@artemii235 artemii235 added the P0 label Apr 19, 2021
artemii235 added a commit that referenced this issue Apr 20, 2021
…#911)

* Set explicit 10 seconds timeout for P2P requests in our behavior.
Do not hold orderbook lock during GetOrderbook request.

* Limit Node::wait_peers loop to 10 attempts.
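The second bullet is the key deadlock fix: don't hold the orderbook lock while a P2P request is in flight. The pattern can be sketched with standard-library types (the names `Orderbook`, `p2p_get_orderbook`, and `refresh_orderbook` below are illustrative, not MM2's actual API):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::time::Duration;

// Hypothetical stand-in for MM2's orderbook state.
type Orderbook = HashMap<String, u64>;

fn p2p_get_orderbook(_pair: &str) -> Orderbook {
    // Simulated slow network call; in MM2 this would be the GetOrderbook
    // P2P request that can block for seconds when a peer disconnects.
    std::thread::sleep(Duration::from_millis(50));
    HashMap::from([(String::from("KMD/BTC"), 42u64)])
}

// Deadlock-prone pattern: holding the orderbook lock across the request means
// any other task that needs the lock (e.g. the P2P event loop) blocks until
// the request finishes; if the request itself depends on that task, nothing
// makes progress. Safer pattern: do the slow call with no guard held, then
// re-acquire the lock only for the cheap merge.
fn refresh_orderbook(shared: &Arc<Mutex<Orderbook>>, pair: &str) {
    let fresh = p2p_get_orderbook(pair); // no lock held during the slow call
    let mut book = shared.lock().unwrap(); // lock held only for the merge
    book.extend(fresh);
}
```

Combined with the explicit 10-second request timeout from the first bullet, even a hung peer can no longer pin the lock indefinitely.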
@artemii235 (Member)

Should be fixed now, please try testing with the latest MM2 release - it should be published in a few minutes.

@TheComputerGenie

> Should be fixed now, please try testing with the latest MM2 release - it should be published in a few minutes.

That change lessens how often it happens, but doesn't seem to fix it completely:
2021-04-20-05-40-45.log
2021-04-20-05-40-58.mm2.log

@artemii235 (Member)

@TheComputerGenie Should be something different now. GUI log previously had Failed to read HTTP status line from: batch_balance_and_tx and Failed to read HTTP status line from: process_orderbook, now it has only batch_balance_and_tx messages.

I see that MM2 is getting frequently disconnected from SPACE electrums. Could you please try to disable SPACE and check if there is any difference?


lightspeed393 commented Apr 20, 2021

@TheComputerGenie if there's a SPACE electrum server issue please lmk. They appear okay on my end though.


TheComputerGenie commented Apr 20, 2021

> now it has only batch_balance_and_tx messages.

Which request triggers the failure message is intermittent; on additional runs it includes the orderbook failure as well.
(screenshot attached)
Disabling the heavy UTXO PoS coin seems to have stopped it.

> @TheComputerGenie if there's a SPACE electrum server issue please lmk. They appear okay on my end though.

I'm not sure it's a "SPACE electrum" issue so much as a general electrum issue: the protocol isn't really designed to handle 6500+ UTXOs.
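One mitigation consistent with this observation (a hypothetical sketch, not MM2's actual code) is to cap how many scripthashes go into a single electrum batch, so a wallet with thousands of UTXOs issues several small `blockchain.scripthash.get_balance` batches instead of one enormous one:

```rust
// Split a large set of scripthashes into bounded batches. `chunked_batches`
// and `max_per_batch` are illustrative names; the point is that each batch
// stays small enough for an electrum server to answer before timing out.
fn chunked_batches(scripthashes: &[String], max_per_batch: usize) -> Vec<Vec<String>> {
    scripthashes
        .chunks(max_per_batch)
        .map(|chunk| chunk.to_vec())
        .collect()
}
```

Smaller batches trade a few extra round trips for bounded per-request work on the server, which matters on slow links like the DSL setup described below.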

@TheComputerGenie

The batch_balance_and_tx messages seem to come from having "normal" DSL rather than a high-speed urban connection.

I've gotten several errors like:
· 2021-04-20 12:30:22 -0500 [tx_history THC] utxo_common:1513] Error "utxo_common:1231] JsonRpcError { client_info: \"coin: THC\", request: JsonRpcRequest { jsonrpc: \"2.0\", id: \"2455\", method: \"blockchain.scripthash.get_balance\", params: [String(\"5030111ffcc95ef54f496c5b0549a2f958ce07dadcba0f995a9424272c04227a\")] }, error: Transport(\"rpc_clients:1213] rpc_clients:1215] [\\\"rpc_clients:2027] common:1411] future timed out\\\", \\\"rpc_clients:2027] common:1411] future timed out\\\"]\") }" on getting balance
which is obviously a different issue since I've never used THC with this address
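The `future timed out` transport error in that log is a bounded wait on an RPC call. A minimal sketch of that behavior using only the standard library (`call_with_timeout` is an illustrative name, not an MM2 function):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run a blocking call on a worker thread and give up after a deadline,
// mirroring the bounded waits that produce "future timed out" in the log.
fn call_with_timeout<T, F>(f: F, timeout: Duration) -> Result<T, &'static str>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Receiver may already be gone if we timed out; ignore the send error.
        let _ = tx.send(f());
    });
    rx.recv_timeout(timeout).map_err(|_| "future timed out")
}
```

On a slow DSL link, a `blockchain.scripthash.get_balance` response can simply arrive after the deadline, which matches the intermittent failures described above.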

@artemii235 (Member)

> Disabling the heavy UTXO PoS coin seems to have stopped it

I see, it now looks like a network congestion problem that might be caused by excessive data usage. Will check how we can optimize MM2 for this case.
Changing the priority of the issue since the deadlock seems to be fixed and the current problem is a bit different.

@artemii235 artemii235 added P1 and removed P0 labels Apr 21, 2021
@artemii235 artemii235 changed the title [Bug]: Possible deadlock with requesting orderbook request when P2P node got disconnected [Bug]: Possible deadlock with requesting orderbook request when P2P node got disconnected + excessive data usage Apr 21, 2021
@onur-ozkan onur-ozkan changed the title [Bug]: Possible deadlock with requesting orderbook request when P2P node got disconnected + excessive data usage Possible deadlock with requesting orderbook request when P2P node got disconnected + excessive data usage Dec 12, 2024
@onur-ozkan onur-ozkan added priority: urgent Critical tasks requiring immediate action. bug: P2P bug: orders and removed bug labels Dec 12, 2024
@shamardy shamardy added priority: high Important tasks that need attention soon. and removed priority: urgent Critical tasks requiring immediate action. labels Dec 12, 2024
6 participants