Let kafka-headless service resolve even before pods are ready #56 #58
Conversation
@@ -2,6 +2,8 @@ apiVersion: v1
 kind: Service
 metadata:
   name: kafka-headless
+  annotations:
+    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
Since which version of k8s is this available?
I don't know ... it works at least from Kubernetes 1.6.
@tombentley It was added in Kubernetes 1.3.
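For context, a minimal sketch of a headless Service carrying this annotation (namespace, selector, and port are illustrative assumptions, not taken from the chart). On newer Kubernetes versions the alpha annotation has been superseded by the `spec.publishNotReadyAddresses` field, so a forward-compatible manifest would set both:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
  annotations:
    # Deprecated alpha annotation: include unready pod IPs in DNS answers
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  # Field-based replacement for the annotation on newer Kubernetes
  publishNotReadyAddresses: true
  clusterIP: None      # headless: DNS resolves directly to pod IPs
  selector:
    app: kafka         # illustrative selector
  ports:
    - name: clients
      port: 9092       # illustrative Kafka client port
```

Without this setting, the per-pod DNS names behind a headless Service do not resolve until the pods pass their readiness probes, which is a problem for quorum-style workloads that need to find each other before they can become ready.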
I did use the annotation. It does provision three nodes each (three for ZK, three for Kafka). Now I did the following, locally (w/ console scripts from …), using the …
@matzew What is the timing for the operations? Did you wait until everything was up? Another point which I'm curious about: where do you run the CLI commands from?
@scholzj Yeah, I've waited until all are up (three nodes for ZK, and three for Kafka). I just tried again: new topic, "old" problem. From outside, from the shell on my Fedora26 notebook.
@matzew Hmm, ok. I guess that is related to #50 - it now uses DNS names as advertised hostnames and you will probably not have them resolved outside of OpenShift. The error doesn't really say anything, but this is what normally prints in this situation. I have to think about the best way to deal with this.
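To illustrate why external clients fail in this situation: after #50, each broker advertises its headless-Service DNS name, which only resolves inside the cluster. An outside client connects to the bootstrap address fine, but the metadata response hands back the in-cluster name, and the subsequent broker connection fails. A hypothetical per-broker setting (the exact name depends on the cluster's namespace and service naming, which are assumptions here):

```properties
# Hypothetical advertised listener for broker pod 0; this FQDN is only
# resolvable by cluster-internal DNS, not from a laptop outside the cluster
advertised.listeners=PLAINTEXT://kafka-0.kafka-headless.myproject.svc.cluster.local:9092
```

This matches the observed symptom: console scripts work when run in-cluster but hang or fail with an unhelpful error when run from an external shell.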
Back in September, this was all working fine :)
@scholzj I am happy to test any PR :)
After rolling back #50 this is not needed anymore and can be closed.
Updating NotReady state as a failure Signed-off-by: Paolo Patierno <ppatierno@live.com>
- CSMDS-321: Dump all Kafka resources in report.sh (strimzi#24)
- CSMDS-329: Add all topic describe to report.sh (strimzi#37)
- CSMDS-420: Fix report.sh to not fail when Kafka resource is being deleted during script run (strimzi#39)
- CSMDS-317: Add java_thread_dump.sh to dump Java threads of all containers o… (strimzi#23)
- CSMDS-445: Make cluster arg optional in report.sh (strimzi#47). This will allow using report.sh on a namespace which only contains a cluster operator.
- CSMDS-433: Fix getting a ready kafka broker pod with kubectl when describing topics (strimzi#48). The head command will immediately return with the first line, and if kubectl writes anything to stdout after that, there will be nobody to receive it on the right side of the pipe. Because of that, the command will fail with error code 141.
- CSMDS-450: Get events with -o wide flag in report.sh script (strimzi#51)
- CSMDS-458: Update report.sh to be cluster-wide (strimzi#54). To get a proper diagnostic bundle from a cluster, report.sh should be changed to dump all information. This simplifies the process (it only needs to be called once) and also makes sure that everything needed gets captured for diagnosing issues.
- CSMDS-444: Dump license JSON in report.sh (strimzi#56)
- CSMDS-444: Use secret.data to capture license content (strimzi#73)
- MINOR: Allow report.sh to continue when a resource disappears (strimzi#74)
- CSMDS-418: Fix local build issues (strimzi#58)
- CSMDS-514: Remove --request-timeout flag where it is buggy (strimzi#129)
- CSMDS-600: report.sh fails to collect multiple replicasets (strimzi#149)
- CSMDS-601: Don't export property files when using report.sh (strimzi#151)
- CSMDS-388: Extend report.sh to dump all Kafka Connect CRs and KConnect status (strimzi#150)
- CSMDS-598: Tolerating not found entities in report.sh (strimzi#162). It could happen that between listing by type and the actual retrieval of an entity, the entity is being deleted.
- CSMDS-637: Add k8s version to report.sh (strimzi#167)
- CSMDS-588: Collect kafka-log-dirs output in report.sh (strimzi#172)
- CSMDS-815: Add cluster ID and pod top to report.sh (strimzi#201)
- CSMDS-803: Dump additional volumes in report.sh (strimzi#203)