Configurable JsonPath for pod owner #72
Conversation
Force-pushed from 151f67e to 10f55ad
Thank you for the patch @IlyaMar! I appreciate all the hard work you put into it.
The overall approach looks good to me. However, I would love to see two issues addressed:
- The biggest challenge is to detect whether we should use ReplicaSet or DeploymentConfig information to distinguish deployments. There's also a third controller, very frequently used with stateful applications - StatefulSets. I believe we might query the Kubernetes API for all 3 types of information and see what we get in return. Then we could automatically decide which one to use to separate the deployments. My main point is that we can get rid of podOwnerJsonPath and make this decision automatically (see the sketch after this list).
- I would like to keep this project without additional dependencies, so it would be great to find a way to implement it without json-path. This makes things a bit easier when deploying as modules in WildFly (and other WildFly-dependent projects).
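For illustration, a minimal sketch of that auto-detection idea, assuming the pod metadata has already been parsed into plain Maps and Lists (the class and method names below are hypothetical, not code from this PR): inspect metadata.ownerReferences for a ReplicaSet or StatefulSet owner first, and fall back to the OpenShift deployment label otherwise.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of owner auto-detection without a JsonPath library.
// "metadata" is assumed to be the pod's metadata already parsed into Maps/Lists.
final class OwnerDetection {

    @SuppressWarnings("unchecked")
    static String detectParent(Map<String, Object> metadata) {
        // 1. Prefer an explicit controller owner: ReplicaSet (Deployment) or StatefulSet.
        List<Map<String, Object>> owners =
                (List<Map<String, Object>>) metadata.get("ownerReferences");
        if (owners != null) {
            for (Map<String, Object> owner : owners) {
                String kind = (String) owner.get("kind");
                if ("ReplicaSet".equals(kind) || "StatefulSet".equals(kind)) {
                    return (String) owner.get("name");
                }
            }
        }
        // 2. Fall back to the OpenShift DeploymentConfig 'deployment' label used today.
        Map<String, Object> labels = (Map<String, Object>) metadata.get("labels");
        return labels == null ? null : (String) labels.get("deployment");
    }
}
```

With something along these lines, podOwnerJsonPath would become unnecessary and no JsonPath library is involved.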
Autodetection sounds reasonable and easy to implement. I have a question about StatefulSets: it's not clear by what criteria we should distinguish new and old pods during a rolling update. |
@IlyaMar Indeed, |
@slaskawi I've updated the PR with controller-revision-hash usage, please review. |
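For context on the controller-revision-hash approach: the StatefulSet controller stamps every pod of a given revision with a controller-revision-hash label, so pods from the old and the new revision can be told apart during a rolling update. A minimal hedged sketch (helper names are illustrative only, the label map is assumed to be already parsed):

```java
import java.util.Map;
import java.util.Objects;

// Hedged sketch: group StatefulSet pods by the controller-revision-hash label,
// which the StatefulSet controller sets on every pod of a given revision.
final class RevisionCheck {

    static String revisionOf(Map<String, String> podLabels) {
        return podLabels.get("controller-revision-hash");
    }

    // Two pods belong to the same rollout generation if their revision hashes match.
    static boolean sameRevision(Map<String, String> labelsA, Map<String, String> labelsB) {
        return Objects.equals(revisionOf(labelsA), revisionOf(labelsB));
    }
}
```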
The changes look good to me, nice work!
Let me test them out a bit more this week. Once I do that, I'll merge the PR.
I've found a new problem just now. When performing a Rolling Update, the new pod is not fully started at the moment KUBE_PING performs discovery, so the new pod is filtered out in org.jgroups.protocols.kubernetes.Client#parseJsonResult. As a result, we cannot determine the senderParentDeployment variable inside org.jgroups.protocols.kubernetes.KUBE_PING#findMembers, and old members are not filtered from the cluster list. The new pod cannot start as it gets stuck communicating with old members. I think we need to move the filtering by readiness into the findMembers function, after senderParentDeployment is determined. |
@IlyaMar So indeed, Client#parseJsonResult checks whether a Pod is running (that in turn invokes Client#podRunning, which checks both flags - is Running and is ready). Then we exclude all Pods that do not pass this predicate from the List<Pod> returned by Client#parseJsonResult.
Then, as you said, the Pods that are not ready (or are not running) are not processed in KUBE_PING#findMembers. But since they are not there, they shouldn't form a cluster. In other words, not-ready Pods should not join any cluster (neither the old nor the new one).
However, as you said, the Pod that is sending the discovery request also might not be ready. Since it's not present in the list returned by Client#parseJsonResult, it cannot grab its own Parent Deployment (stored in senderParentDeployment), it cannot compare it to the other Pods, and we get into trouble.
I believe we have at least 3 choices here:
1. If senderParentDeployment is null, we simply exclude all Pods from the discovery. This means the Pod that is sending the discovery request is not ready, so it doesn't know which cluster to join. The biggest downside is that an application may be in a not-ready state for a longer period of time (e.g. doing a migration or fetching a larger dataset from a database). So clearly, this solution is not perfect.
2. We can modify Client#getPods to always preserve the sender Pod. Later on, we can filter the sender out in KUBE_PING#findMembers. However, sending discovery to self should not do any harm.
3. As you mentioned, move the whole if (split_clusters_during_rolling_update) to Client#parseJsonResult and adjust the readiness check part.
I personally like option no. 3 the most. I think that's the cleanest approach. No. 2 is also "pretty good". |
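For reference, the running-and-ready predicate being discussed boils down to checking the standard pod status fields. A simplified sketch (the types below are illustrative placeholders, not the actual classes used by KUBE_PING):

```java
import java.util.List;

// Illustrative placeholder types; the real code parses the pod JSON directly.
record Condition(String type, String status) {}
record PodStatus(String phase, List<Condition> conditions) {}

final class ReadinessCheck {
    // A pod counts as running-and-ready only if status.phase is "Running"
    // and the "Ready" condition under status.conditions is "True".
    static boolean podRunningAndReady(PodStatus status) {
        boolean running = "Running".equals(status.phase());
        boolean ready = status.conditions() != null && status.conditions().stream()
                .anyMatch(c -> "Ready".equals(c.type()) && "True".equals(c.status()));
        return running && ready;
    }
}
```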
Ok, I'll implement option #3. Change will be included in the same pull request. |
Force-pushed from 93a9ec2 to be3d733
@slaskawi, I finally implemented a slightly different solution. The Client now reads all pods, even not-ready ones, with an isReady flag. It's KUBE_PING.findMembers' responsibility to form the list of members from this pod list. Looks like this resulted in more readable code. Please see the updated PR. |
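Conceptually, that split of responsibilities looks roughly like the sketch below (names simplified and hypothetical; see the actual PR for the real code): the client returns every pod together with an isReady flag, and findMembers both determines the sender's parent deployment and applies the readiness filter.

```java
import java.util.List;

// Rough sketch of the reworked responsibilities described above; names are
// simplified and do not match the project's classes one-to-one.
record Pod(String name, String ip, String parentDeployment, boolean isReady) {}

final class MemberSelection {

    // Client-side: return ALL pods, ready or not, each tagged with isReady.
    // KUBE_PING-side: decide who becomes a member. The sender's own (possibly
    // not-yet-ready) pod is used only to determine its parent deployment.
    static List<Pod> selectMembers(List<Pod> allPods, String senderPodName,
                                   boolean splitClustersDuringRollingUpdate) {
        String senderParent = allPods.stream()
                .filter(p -> p.name().equals(senderPodName))
                .map(Pod::parentDeployment)
                .findFirst()
                .orElse(null);

        return allPods.stream()
                .filter(Pod::isReady) // readiness filtering now happens here
                .filter(p -> !splitClustersDuringRollingUpdate
                        || senderParent == null
                        || senderParent.equals(p.parentDeployment()))
                .toList();
    }
}
```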
LGTM. Let's wait for @belaban to see if he has any concerns around this change.
Hey @slaskawi thanks for looking into this! Can you merge the PR, or do you need further testing? |
@belaban No, I believe it's fine! I just wanted to check if you have any comments regarding the code. Once you approve it, I'll push it and release it. |
done |
@slaskawi I appreciate that you can still find time to spend on Kubernetes/Openshift discovery protocols (KUBE_PING, DNS_PING)! |
This is a fix for #50
A new property 'podOwnerJsonPath' is added, holding a JsonPath to the pod owner (ReplicaSet or OpenShift Deployment) relative to the pod JSON node. The default value is "$.metadata.labels.deployment" to keep backward compatibility.
For Kubernetes without OpenShift, the property value should be set to "$.metadata.ownerReferences[?(@['kind'] == 'ReplicaSet')].name"
Please note that a new dependency is added, com.jayway.jsonpath:json-path. It needs to be placed into a JBoss module somehow, either by adding a new jar or by a shaded jar; please advise how to do it.
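To illustrate how the two JsonPath expressions above resolve against a pod document, here is a small self-contained example using com.jayway.jsonpath (the pod JSON is abbreviated and the names in it are made up):

```java
import com.jayway.jsonpath.JsonPath;
import java.util.List;

public class PodOwnerJsonPathExample {
    public static void main(String[] args) {
        // Abbreviated pod JSON, for illustration only.
        String pod = """
            {"metadata": {
               "labels": {"deployment": "my-app-3"},
               "ownerReferences": [{"kind": "ReplicaSet", "name": "my-app-7d4b9c"}]
            }}""";

        // Default (OpenShift-style) path: the 'deployment' label.
        String openshiftOwner = JsonPath.read(pod, "$.metadata.labels.deployment");

        // Plain Kubernetes: the owning ReplicaSet's name. A filter expression
        // returns a list of matches, so the first element is taken.
        List<String> k8sOwner =
            JsonPath.read(pod, "$.metadata.ownerReferences[?(@['kind'] == 'ReplicaSet')].name");

        System.out.println(openshiftOwner);   // my-app-3
        System.out.println(k8sOwner.get(0));  // my-app-7d4b9c
    }
}
```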