Honor task desired state in allocator #435

Merged (1 commit) on Apr 26, 2016
Conversation

@mrjana (Contributor) commented Apr 25, 2016

With the task lifecycle changes and the introduction of DesiredState in the task object, the allocator needs to honor that field and make allocation/deallocation decisions based on it in order to work properly in sync with the other stages of the manager pipeline. Made changes so that allocation/deallocation decisions are based on a combination of the task's DesiredState and its current state.

Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
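
As a rough sketch of what such a combined check can look like on the allocation side, the counterpart to the taskDead helper discussed in the review below might resemble the following; the name taskRunning and the exact comparison are assumptions for illustration, not necessarily the code in this PR:

```go
// taskRunning is a hypothetical counterpart to taskDead: the task is still
// meant to run and, as far as the manager has observed, has not progressed
// past the running state on its node.
func taskRunning(t *api.Task) bool {
	return t.DesiredState == api.TaskStateRunning &&
		(t.Status == nil || t.Status.State <= api.TaskStateRunning)
}
```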

@aaronlehmann (Collaborator) commented:

@mrjana: CI is failing with a race detector warning.


```go
// taskDead checks whether a task is not actively running, as far as the
// allocator is concerned.
func taskDead(t *api.Task) bool {
	return t.DesiredState == api.TaskStateDead && t.Status != nil && t.Status.State > api.TaskStateRunning
}
```
@aaronlehmann (Collaborator) commented on the diff:

I think t.DesiredState == api.TaskStateDead is a sufficient condition. Does the allocator care about the observed state?

@mrjana (Contributor, Author):

Yeah, I thought about that, but decided that we don't want to deallocate the network resources for the task until the task is actually no longer running on the node. Otherwise we might hand the same IP address to more than one running container, even though one of them might be going down.

@aaronlehmann (Collaborator):

Makes sense. I just thought of this after adding the comment.

But doesn't this mean that if a node fails, we will never free the network resources associated with the tasks it was running? What's the right behavior here?

@mrjana (Contributor, Author):

I am assuming that if the node fails, the task will be removed. That is why I am checking for task dead or isDelete here: https://github.com/docker/swarm-v2/pull/435/files#diff-119d353212583d96a59cba8c82b80280R254 when deciding whether to deallocate.
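
For illustration, the decision described here boils down to something like the following sketch; apart from taskDead, the method and helper names are assumptions rather than the PR's actual code:

```go
// maybeDeallocate releases a task's network resources only when the task is
// being deleted or has definitively stopped running, so an IP address is
// never handed out again while an old container may still be using it.
func (a *Allocator) maybeDeallocate(t *api.Task, isDelete bool) {
	if isDelete || taskDead(t) {
		a.releaseTaskNetworkResources(t) // hypothetical helper
	}
}
```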

@aaronlehmann (Collaborator):

Tasks from a failed node won't be deleted immediately, in order to preserve task history. Instead, they generally have DesiredState set to DEAD.

@mrjana (Contributor, Author):

Hmm, when will they be removed? I realized this as well when I was doing further testing with manager restart. It seems like we retain dead nodes, and I was not handling that properly in doNetworkInit.

@aaronlehmann (Collaborator):

I have a PR I'm going to open later today for this. The idea is that we will keep a certain number of old tasks per service instance, and then start deleting the oldest ones.
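
A toy, self-contained sketch of that retention idea (keep at most N historical tasks per service instance and return the oldest for deletion); the types and names below are made up for illustration and are not the swarm-v2 implementation:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// taskRecord stands in for a stored historical task.
type taskRecord struct {
	ID        string
	CreatedAt time.Time
}

// pruneTaskHistory keeps the newest `keep` records and returns the rest
// (the oldest ones) as candidates for deletion.
func pruneTaskHistory(history []taskRecord, keep int) (retained, toDelete []taskRecord) {
	sort.Slice(history, func(i, j int) bool {
		return history[i].CreatedAt.After(history[j].CreatedAt) // newest first
	})
	if len(history) <= keep {
		return history, nil
	}
	return history[:keep], history[keep:]
}

func main() {
	now := time.Now()
	hist := []taskRecord{
		{ID: "t1", CreatedAt: now.Add(-3 * time.Hour)},
		{ID: "t2", CreatedAt: now.Add(-1 * time.Hour)},
		{ID: "t3", CreatedAt: now.Add(-2 * time.Hour)},
	}
	kept, dropped := pruneTaskHistory(hist, 2)
	fmt.Println(len(kept), len(dropped)) // 2 1
}
```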

@aaronlehmann (Collaborator):

As discussed offline, there is no perfect solution to this. The current approach in the PR is the most conservative, since it will favor leaking resources over reusing them in a dangerous way. This seems like the right place to start, but in the future we might have to iterate on the approach.

@aaronlehmann (Collaborator):

LGTM
