
Scalable scheduler changes for #1918 #2073

Closed
wants to merge 2 commits

Conversation


@gregw gregw commented Dec 20, 2017

Fixes #1918 and replaces #1938

Signed-off-by: Greg Wilkins <gregw@webtide.com>

@gregw gregw requested a review from sbordet December 20, 2017 10:45

sbordet commented Dec 21, 2017

@gregw have you just redone this PR, or have you also modified code with respect to my last commit?


gregw commented Dec 21, 2017

@sbordet I just redid the PR


sbordet commented Dec 21, 2017

@gregw ok so you're up for reviewing it.

@sbordet sbordet removed their request for review December 21, 2017 08:38

gregw commented Dec 21, 2017

@sbordet since I blatted over the history... can you remind me which classes you recently changed?


gregw commented Dec 21, 2017

@sbordet never mind... I see the change you made and am reviewing it now.

@gregw gregw left a comment

Need to deal with the lack of HttpChannel recycling

@@ -27,11 +27,13 @@
protected static final Logger LOG = Log.getLogger(HttpChannel.class);

private final HttpDestination _destination;
private final TimeoutCompleteListener _totalTimeout;
gregw commented

The problem with having the TimeoutCompleteListener on the HttpChannel is that, for HTTP/2, the channel is not recycled, so we lose the benefit of the CyclicTimeout.

It is logically the right place, but only if the HttpChannel is recycled!

Also, we probably need to add a destroy method to HttpChannel, so that a busy HTTP/2 server does not end up with many dead HttpChannels being held in memory by their CyclicTimeouts.
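
For context, a minimal sketch of the shape under discussion, assuming the CyclicTimeout API this PR series builds on (a Scheduler-backed timeout with schedule/cancel/destroy and an onTimeoutExpired callback, as in org.eclipse.jetty.io). The ChannelSketch class, its abortExchange() callback, and the destroy() hook are hypothetical, not the actual jetty-client code:

```java
import org.eclipse.jetty.io.CyclicTimeout;
import org.eclipse.jetty.util.thread.Scheduler;

// Illustrative only: a client-side channel owning its total timeout as a
// CyclicTimeout, plus the destroy() hook suggested above. Without such a hook,
// a discarded (non-recycled) HTTP/2 channel stays reachable from any pending
// wakeup until it fires.
abstract class ChannelSketch
{
    private final CyclicTimeout _totalTimeout;

    ChannelSketch(Scheduler scheduler)
    {
        _totalTimeout = new CyclicTimeout(scheduler)
        {
            @Override
            public void onTimeoutExpired()
            {
                abortExchange(); // abort the one active exchange (illustrative)
            }
        };
    }

    // Hypothetical lifecycle hook: release the timeout when the channel is
    // disposed, so dead channels are not held in memory by their timers.
    void destroy()
    {
        _totalTimeout.destroy();
    }

    abstract void abortExchange();
}
```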

gregw commented

@sbordet why did we turn off HttpChannel recycling in h2? Does it still need to be off?

gregw commented

@sbordet cancel these comments. We do recycle the HttpChannel for the HttpClient, so all good with this PR (but we should review the server-side default).

I think this PR is good to merge.

sbordet commented

@gregw I don't see where HttpChannel is recycled for HttpClient. Your first concern is good IMHO: since we don't recycle HttpChannel, we don't get the benefits of CyclicTimeout.

So perhaps we really need to rethink the implementation by having the CyclicTimeout at the connection level; for HTTP/1.1 this will map 1-to-1 with a channel, and the connection is pooled; for HTTP/2 we will use a shared CyclicTimeout for all the channels created by a connection.
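
A rough sketch of that alternative, assuming the same CyclicTimeout API; the ConnectionTotalTimeout name, the exchangeBegun/exchangeCompleted hooks, and the deadline bookkeeping are hypothetical, ignore races, and are not the Jetty implementation:

```java
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.io.CyclicTimeout;
import org.eclipse.jetty.util.thread.Scheduler;

// Illustrative only: one CyclicTimeout owned by the connection and shared by
// every channel it creates. For HTTP/1.1 there is a single channel, so this
// degenerates to the per-channel case; for HTTP/2 it serves all streams.
class ConnectionTotalTimeout extends CyclicTimeout
{
    private final ConcurrentSkipListSet<Long> deadlines = new ConcurrentSkipListSet<>();

    ConnectionTotalTimeout(Scheduler scheduler)
    {
        super(scheduler);
    }

    // Called when a channel of this connection begins an exchange; returns the
    // deadline so the channel can pass it back on completion.
    long exchangeBegun(long timeoutMillis)
    {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        deadlines.add(deadline);
        rearm();
        return deadline;
    }

    void exchangeCompleted(long deadline)
    {
        deadlines.remove(deadline);
        rearm();
    }

    // Keep the single shared timeout armed for the earliest outstanding deadline.
    private void rearm()
    {
        if (deadlines.isEmpty())
            cancel();
        else
            schedule(TimeUnit.NANOSECONDS.toMillis(deadlines.first() - System.nanoTime()), TimeUnit.MILLISECONDS);
    }

    @Override
    public void onTimeoutExpired()
    {
        // Abort exchanges whose deadline has passed, then re-arm for the rest (illustrative).
        rearm();
    }
}
```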

gregw commented

Or perhaps we recycle HttpChannels? They are recycled for HTTP/1 (HttpChannelOverHTTP is a final field of HttpConnectionOverHTTP), so why not for HTTP/2? We never really discovered a problem; we just suspected it.

The main components of the HttpChannel are the Sender and Receiver, both of which are designed to be reused. Can you at least look up the issues we had with recycling and see?

The CyclicTimeout really matches well with the HttpChannel, as we need one timeout per active request and each channel can have only 1 active request. It is the right spot for it.
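
A minimal sketch of that per-request cycle, assuming the same CyclicTimeout API; the class and method names (exchangeAssociated, exchangeTerminated, onTotalTimeout) are hypothetical:

```java
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.io.CyclicTimeout;
import org.eclipse.jetty.util.thread.Scheduler;

// Illustrative only: one CyclicTimeout per channel, armed when an exchange is
// associated and disarmed when it completes. Because a recycled channel serves
// many requests, the same timeout object is reused across requests, which is
// where CyclicTimeout's scalability benefit comes from.
abstract class PerRequestTimeoutSketch
{
    private final CyclicTimeout timeout;

    PerRequestTimeoutSketch(Scheduler scheduler)
    {
        timeout = new CyclicTimeout(scheduler)
        {
            @Override
            public void onTimeoutExpired()
            {
                onTotalTimeout();
            }
        };
    }

    void exchangeAssociated(long totalTimeoutMillis)
    {
        // At most one active exchange per channel, so at most one pending schedule.
        timeout.schedule(totalTimeoutMillis, TimeUnit.MILLISECONDS);
    }

    void exchangeTerminated()
    {
        timeout.cancel();
    }

    abstract void onTotalTimeout();
}
```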

Signed-off-by: Greg Wilkins <gregw@webtide.com>
Signed-off-by: Simone Bordet <simone.bordet@gmail.com>
@sbordet sbordet force-pushed the jetty-9.4.x-1918-scalable-scheduler2 branch from cc155eb to 3e372b1 Compare January 4, 2018 18:36

sbordet commented Jan 4, 2018

@gregw I rebased this work on the latest 9.4.x, so that the fix for #2088 (recycling HttpChannel) is taken into account. You may want to re-review, but overall I think we are OK.

The only concern I have is about the usage of CyclicTimeout.destroy(). Right now HttpChannel does not really have a lifecycle method that signals that it is about to be disposed, where we can destroy the CyclicTimeout.

Typically HttpChannel is created and then unreferenced when it's not needed anymore.

Adding an HttpChannel.dispose() method will require some precise work to find out the right spot to call it. If we don't call CyclicTimeout.destroy(), its Wakeups will be retained by the Scheduler until they are all executed.

Thoughts?
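
A small illustration of the retention concern, under the assumption (stated above) that a wakeup scheduled by a CyclicTimeout stays queued in the Scheduler until it executes; the class name and the 30 second delay are arbitrary:

```java
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.io.CyclicTimeout;
import org.eclipse.jetty.util.thread.ScheduledExecutorScheduler;
import org.eclipse.jetty.util.thread.Scheduler;

public class WakeupRetentionSketch
{
    public static void main(String[] args) throws Exception
    {
        Scheduler scheduler = new ScheduledExecutorScheduler();
        scheduler.start();

        CyclicTimeout timeout = new CyclicTimeout(scheduler)
        {
            @Override
            public void onTimeoutExpired()
            {
                System.err.println("total timeout expired");
            }
        };

        // Arm a 30 second timeout, then simply drop the reference, as happens
        // when an HttpChannel becomes unreferenced without a dispose() hook.
        timeout.schedule(30, TimeUnit.SECONDS);
        timeout = null;

        // The Scheduler still holds the pending wakeup for up to 30 seconds,
        // keeping the timeout (and anything it references) reachable. Calling
        // destroy() before dropping it is what the dispose() hook would do.

        scheduler.stop(); // stopped here only so the sketch exits
    }
}
```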


gregw commented Jan 4, 2018

@sbordet So the issue would be that, if the client disposes of lots of HttpChannels, we could hold them in memory until their timers expire.

But we pool the channels, so there should not be a lot of disposals unless there are a lot of errors. Perhaps we destroy at least the timer when we know we are not pooling?

Also, when a connection is destroyed, it should be easy enough to destroy all its pooled channels?
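
A sketch of that last idea, with hypothetical names (ConnectionSketch, ChannelSketch.destroy()): when a connection is closed or removed from the pool, it destroys the timers of the channels it owns.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative only: a connection that, on close, destroys the CyclicTimeout of
// every channel it pooled, so no dead channel is retained by a pending wakeup.
class ConnectionSketch
{
    private final List<ChannelSketch> channels = new CopyOnWriteArrayList<>();

    void close()
    {
        for (ChannelSketch channel : channels)
            channel.destroy();
        channels.clear();
    }

    // Hypothetical channel with the destroy() hook discussed earlier.
    interface ChannelSketch
    {
        void destroy();
    }
}
```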

Signed-off-by: Greg Wilkins <gregw@webtide.com>

gregw commented Jan 8, 2018

Closed in favour of #2101.

@gregw gregw closed this Jan 8, 2018
@gregw gregw deleted the jetty-9.4.x-1918-scalable-scheduler2 branch January 8, 2018 15:19