Do not send dash-specific requests to masternodes before we are fully connected #1882

Merged · 6 commits merged into dashpay:develop from UdjinM6:mncon on Feb 1, 2018

Conversation


@UdjinM6 UdjinM6 commented Jan 27, 2018

Open all masternode connections via CConnman::ThreadOpenMasternodeConnections only. Queue requests and process them after some timeout.
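A minimal sketch of the resulting flow, reusing names that appear in the diff below (AddPendingMasternode, dsaQueue, infoMn); the processing side is simplified here to illustrate the idea and is not the exact code from this PR:

// 1) Instead of connecting to the masternode synchronously, register the
//    address with CConnman and queue the request payload:
connman.AddPendingMasternode(infoMn.addr);
dsaQueue.push_back(std::make_pair(GetTime(),
    std::make_pair(infoMn.addr, CDarksendAccept(nSessionDenom, txMyCollateral))));

// 2) CConnman::ThreadOpenMasternodeConnections opens the pending connections
//    in the background; a periodic call then walks the queue and sends each
//    request once enough time has passed for the connection to be fully
//    established.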

@UdjinM6 UdjinM6 added this to the 12.3 milestone Jan 27, 2018
@UdjinM6 UdjinM6 changed the title Do not send anything to masternodes before we are fully connected Do not send dash-specific requests to masternodes before we are fully connected Jan 27, 2018
Commit: Do not send dash-specific requests to masternodes before we are fully connected

Open all masternode connections via CConnman::ThreadOpenMasternodeConnections only. Queue requests and process them after some timeout.
@@ -740,6 +740,48 @@ std::pair<CService, std::set<uint256> > CMasternodeMan::PopScheduledMnbRequestCo
return std::make_pair(pairFront.first, setResult);
}

void CMasternodeMan::ProcessQueuedMnbRequests(CConnman& connman)
{
std::pair<CService, std::set<uint256> > p = mnodeman.PopScheduledMnbRequestConnection();


Use of mnodeman is unneeded here as we are already in CMasternodeMan
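In other words, since this code is already inside a CMasternodeMan member function, the call can be made directly; a small sketch of the suggested change:

// before: goes through the global instance
std::pair<CService, std::set<uint256> > p = mnodeman.PopScheduledMnbRequestConnection();
// after: we are already in CMasternodeMan, so call the member directly
std::pair<CService, std::set<uint256> > p = PopScheduledMnbRequestConnection();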


@codablock codablock left a comment


Please see inline comments

}

connman.AddPendingMasternode(infoMn.addr);
SetState(POOL_STATE_QUEUE);


Not sure if this is still correct. Before this PR, the connection to the MN was synchronous, so at least it was correct in the sense of successful connections (but we still didn't know whether the MN accepted/processed the messages). Now we don't even know if the connection was successful. Maybe we have to introduce a CONNECTING state?


I checked the timeout handling code as well and currently we'd have to wait 30 seconds until we run into a timeout if the connection attempt fails. Maybe we should also have a different timeout for the connection attempt?
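One possible shape for such a connection-attempt timeout; the POOL_STATE_CONNECTING state and the constants below are assumptions for illustration, not code from this PR:

// hypothetical: a dedicated CONNECTING state with its own, shorter timeout
static const int PRIVATESEND_CONNECT_TIMEOUT = 10; // seconds, assumed value
static const int PRIVATESEND_QUEUE_TIMEOUT = 30;   // the existing 30 second behaviour

bool fConnecting = (nState == POOL_STATE_CONNECTING);
int nTimeout = fConnecting ? PRIVATESEND_CONNECT_TIMEOUT : PRIVATESEND_QUEUE_TIMEOUT;
if (GetTimeMillis() - nTimeLastSuccessfulStep >= nTimeout * 1000) {
    LogPrintf("CPrivateSendClient::CheckTimeout -- %s timed out\n",
        fConnecting ? "connection attempt" : "mixing queue");
    SetNull();
    SetState(POOL_STATE_ERROR);
}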

dsaQueue.push_back(std::make_pair(GetTime(), std::make_pair(infoMn.addr, CDarksendAccept(nSessionDenom, txMyCollateral))));
}

SetState(POOL_STATE_QUEUE);


Same as above

connman.AddPendingMasternode(infoMn.addr);
SetState(POOL_STATE_QUEUE);
nTimeLastSuccessfulStep = GetTimeMillis();
LogPrintf("CPrivateSendClient::JoinExistingQueue -- connecting (from queue): nSessionDenom: %d (%s), addr=%s\n",


Can you reword the "connecting (from queue)" part so that it matches the log from StartNewQueue?

return true;
}
strAutoDenomResult = _("Failed to join existing mixing queues");


Would something like "Failed to find mixing queue to join" better describe what went wrong? (we didn't even try to join a queue as I understand it)

}

if (!mnbQueue.empty()) {
// wait at least 5 seconds since we tried to open corresponding connection


Hmm, sounds like this could go wrong, or at least take more time than actually needed, in some cases.

What about the other way around? You always loop through all fully connected nodes and for each node that is in mnbQueue you send the messages and remove the entry from mnbQueue.
After that loop, you go through mnbQueue and remove all entries that haven't been processed for some time (which means the connection failed).
This way the good nodes are processed ASAP, while the bad nodes fall out of the queue (which is not really a queue anymore) after some time.

Same applies to ProcessQueuedMnvRequests and ProcessQueuedDsa
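A sketch of that approach (container and function names here are assumptions, only the shape matters): fully connected nodes are served on the next pass, while entries whose connection never came up expire after a timeout.

// hypothetical container replacing mnbQueue: addr -> (queued hashes, time added)
std::map<CService, std::pair<std::set<uint256>, int64_t> > mapPendingMNB;

void CMasternodeMan::ProcessPendingMnbRequests(CConnman& connman)
{
    // 1) serve every entry whose connection is already fully established
    connman.ForEachNode([&](CNode* pnode) {
        if (!pnode->fSuccessfullyConnected) return;
        auto it = mapPendingMNB.find(pnode->addr);
        if (it == mapPendingMNB.end()) return;
        // ... push the queued mnb request(s) to pnode here ...
        mapPendingMNB.erase(it);
    });

    // 2) expire entries whose connection never came up (timeout value assumed)
    int64_t nNow = GetTime();
    auto it = mapPendingMNB.begin();
    while (it != mapPendingMNB.end()) {
        if (nNow - it->second.second > 15) {
            it = mapPendingMNB.erase(it);
        } else {
            ++it;
        }
    }
}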

@UdjinM6 UdjinM6 (Author) commented Jan 30, 2018

Pushed a few fixes. Not fixing POOL_STATE_QUEUE, adding a TODO instead - it works just fine as it is, while fixing it would require a protobump. I'm proposing to fix this later together with some other fixes that would require a protobump.


@codablock codablock left a comment


utACK

@UdjinM6 UdjinM6 merged commit 1b1a440 into dashpay:develop Feb 1, 2018
andvgal pushed a commit to energicryptocurrency/gen2-energi that referenced this pull request on Jan 6, 2019: Do not send dash-specific requests to masternodes before we are fully connected (dashpay#1882)

* Do not send dash-specific requests to masternodes before we are fully connected

Open all masternode connections via CConnman::ThreadOpenMasternodeConnections only. Queue requests and process them after some timeout.

* drop excessive `mnodeman.`

* switch from queues to maps of pending requests

* adjust few strings, add TODO for POOL_STATE_CONNECTING

* fix

* there can be only one pending dsa request per ps client
@UdjinM6 UdjinM6 deleted the mncon branch November 26, 2020 13:27