I'm running into an issue with Consul and AKS for service sync. The service configurations inside AKS are identical in both cases, and I've tried modifying them to rule them out as the cause.
They are set up as NodePort services with ports 80 and 443 exposed (they are APIs).
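For reference, a minimal sketch of the kind of Service definition in use — the name and selector are hypothetical placeholders, not values from the actual cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-api          # hypothetical service name
spec:
  type: NodePort
  selector:
    app: example-api         # hypothetical pod selector
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```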
In a datacenter running 1.3.0 (and no ACLs), created 119 days ago (so prior to #63 being merged), NodePort services show up registered to the nodes with an IP address and port. AKS is configured with the Azure network plugin, which maps pods to routable addresses (each pod gets its own IP rather than sharing IPs). Each Kubernetes node has enough IP addresses allocated to it to give every pod its own discrete IP.
Accessing the service with curl only works over ports 80 and 443, not over the NodePort that Consul provides (31698 in this case).
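To illustrate the symptom — the addresses below are placeholders, not real values from the cluster:

```shell
# Consul DNS returns an address (and an SRV port) for the synced service
dig @127.0.0.1 -p 8600 example-api.service.consul SRV

# curl against the returned address succeeds on the container ports...
curl http://<returned-address>/          # port 80: responds

# ...but not on the NodePort that the registration advertises
curl http://<returned-address>:31698/    # connection refused
```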
This allows for simple DNS resolution and navigation without having to perform more advanced queries or keep track of which ports are in use (which would not work for API<->API communication).
A datacenter running 1.4.0+ent (with ACLs) lists the services as belonging to the `k8s-sync` node. They provide the same details; however, they are only accessible over the specified NodePort rather than the agent port/IP that was being mapped originally. This breaks the simple DNS resolution and API communication.
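The difference is visible in the catalog. A sketch of the kind of query that shows it, assuming a local agent on the default HTTP port — the service name is a placeholder:

```shell
# List which node(s) a synced service is registered against.
# Under the newer sync process, services appear on a single virtual
# node named "k8s-sync" instead of the individual Kubernetes nodes.
curl -s -H "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
  http://127.0.0.1:8500/v1/catalog/service/example-api | jq '.[].Node'
```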
That looks like the NodePort. If you run `kubectl get svc -o wide`, what do you see?
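For anyone hitting this later, the output of that command looks roughly like the following; the `80:31698` entry in the PORT(S) column is the NodePort mapping in question (names and IPs are illustrative):

```shell
kubectl get svc -o wide
# NAME          TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
# example-api   NodePort   10.0.113.25   <none>        80:31698/TCP,443:31699/TCP   21d   app=example-api
```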
I know this ticket is super old and you've probably moved past this problem so I'm going to close it for now but if you still have this issue, please comment and I'll re-open.