supporting multiple ports #151
Just been thinking about this some more. Here's a configuration design that moves ports to an array and gives each one a name:

```yaml
apiVersion: "stable.agones.dev/v1alpha1"
kind: GameServer
metadata:
  name: "gds-example"
spec:
  container: example-server
  ports: # (new) this is an array of ports
    - name: "main" # (optional) name is purely descriptive, so it's easy to know what each port is for
      portPolicy: "dynamic" # still defaults to dynamic, but as an example
      containerPort: 7654
      protocol: UDP
    - name: "beacon"
      containerPort: 7652
      protocol: UDP # can now mix UDP/TCP, if you _really_ want to
```

This would still only apply to the single game server container. I was debating allowing this to be applied to any container in the pod, but I think it will get too confusing overall. WDYT?
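To make the proposal concrete, here is a rough sketch of how the `ports` array might be modelled in Go, including the proposed "defaults to dynamic" behaviour. The type and field names simply mirror the YAML above and are illustrative, not the actual Agones API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// GameServerPort is a hypothetical type mirroring one entry of the
// proposed `ports` array in the GameServer spec.
type GameServerPort struct {
	Name          string `json:"name,omitempty"`       // optional, purely descriptive
	PortPolicy    string `json:"portPolicy,omitempty"` // proposed default: "dynamic"
	ContainerPort int    `json:"containerPort"`
	Protocol      string `json:"protocol"` // UDP or TCP, mixable per entry
}

// applyDefaults fills in the proposed default port policy when unset.
func applyDefaults(p GameServerPort) GameServerPort {
	if p.PortPolicy == "" {
		p.PortPolicy = "dynamic"
	}
	return p
}

func main() {
	ports := []GameServerPort{
		{Name: "main", PortPolicy: "dynamic", ContainerPort: 7654, Protocol: "UDP"},
		{Name: "beacon", ContainerPort: 7652, Protocol: "UDP"},
	}
	for i := range ports {
		ports[i] = applyDefaults(ports[i])
	}
	out, _ := json.Marshal(ports)
	fmt.Println(string(out))
}
```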
+1 for just game server container.
I'm uncertain about how ports are handled in general. Do we intend to run the GameServer pods with hostNetwork=true? I thought that was the case, but it looks like you might actually be detecting the external IP of the node the GameServer pod is running on, and using that in combination with the dynamically generated hostPort value as the address of the GameServer. That avoids requiring hostNetwork=true and ensures no port collisions occur, but I'm uncertain about the performance impact. Also, some dedicated servers advertise the port they are bound to with another service for discovery. If clients connect via a different port using the node's external IP, this could cause connectivity issues.
That's exactly how it works. Right now it goes through iptables (max ~0.5ms extra latency), but it gives us the ability to run sidecars, since there is actual network separation between each game server and its related pods. That gives us simplicity, the ability to still use k8s constructs, and for now, segregation between each game server. Nothing to say we couldn't have an "escape hatch" to switch it over. Down the line, iptables is being replaced by IPVS proxying, which should be much faster, but I should confirm whether that covers not just Services, but hostPort as well (I assume so, but should double check).
I would say that this is currently out of scope. If it's needed, build an adapter that watches Agones GameServer events through the k8s API and broadcasts them that way (and since we would have names on each port, it should be relatively straightforward to know what port does what).
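The adapter idea could be sketched roughly as below, with a buffered channel standing in for a real Kubernetes watch on GameServer objects. All type names here are hypothetical; a real adapter would use the k8s client machinery instead of the fake event source.

```go
package main

import "fmt"

// PortStatus is a hypothetical named-port record, as it might appear
// in a GameServer's status once ports are allocated.
type PortStatus struct {
	Name string
	Port int
}

// GameServerEvent is a stand-in for an event received from a watch
// on GameServer resources.
type GameServerEvent struct {
	Name  string
	Ports []PortStatus
}

// broadcast forwards each named port to an external discovery service;
// the register callback is a placeholder for that integration.
func broadcast(ev GameServerEvent, register func(server, portName string, port int)) {
	for _, p := range ev.Ports {
		register(ev.Name, p.Name, p.Port)
	}
}

func main() {
	// Fake event source standing in for a Kubernetes watch.
	events := make(chan GameServerEvent, 1)
	events <- GameServerEvent{
		Name:  "gds-example",
		Ports: []PortStatus{{Name: "main", Port: 7777}, {Name: "beacon", Port: 7778}},
	}
	close(events)

	for ev := range events {
		broadcast(ev, func(server, portName string, port int) {
			fmt.Printf("%s/%s -> %d\n", server, portName, port)
		})
	}
}
```

Because each port carries a name, the discovery side can tell the game port and the beacon port apart without any extra convention.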
Yep, this is technically true, but realize that it may limit who is able to consider using Agones. This forces people to integrate their system with Agones at a deeper level than they may have resources for. For example, the port situation would mean there's no practical migration path from the infrastructure I used to deploy UE4 dedicated servers over to Agones. Our UE4 dedicated servers registered their port with another service, and that service expects the port to be reachable. In that scenario, Agones might be attractive for its SDK and scale in/out behavior, but the network connectivity issue would have forced us to pass on using the project :/ I think this is fairly simple to solve, though: just add an option that toggles hostNetwork: true on the Pod, and tell people the whole port situation is up to them to handle if they use that option. That should satisfy both use cases. 👍 for the port list though. Looks good!
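The suggested escape hatch could look something like the following on the GameServer spec. The field name and its placement are hypothetical sketches of the idea, not an actual Agones API:

```yaml
apiVersion: "stable.agones.dev/v1alpha1"
kind: GameServer
metadata:
  name: "gds-example"
spec:
  container: example-server
  hostNetwork: true # hypothetical flag: run the backing Pod with host networking,
                    # leaving all port management up to the game server itself
```

With this set, Agones would skip dynamic hostPort allocation entirely, and the dedicated server would bind and advertise whatever ports it likes.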
*thinks* Another potentially interesting idea we could float: have a mechanism that pushes the currently allocated port down to the game server container as an env var / available through the SDK. In that case, it could (in theory) be relatively trivial to tweak the current setup to register with that value, rather than the one the game server process starts on. Would that work?
Interesting. That would satisfy the use case I outlined as well. I'm just not sure how you'd accomplish getting the host port to the SDK like that; I'm not sure k8s exposes that information into the pod. The SDK would maybe have to look up its own pod in the apiserver or something, I dunno... Then you'd have to handle the game server trying to learn about the ports before the SDK learned about them. Maybe it's possible though, good thought!
Agones does the dynamic port allocation step itself; it's not controlled by Kubernetes, so we have lots of control there. Off the top of my head, we could inject a ConfigMap into the pod, exposed as an env var, and then update its value once the port allocation step has completed. That would likely be the easiest. The SDK could then read that (as a convenience function), and/or the game server process could do it itself if it so desired.
Or the SDK could talk to the sidecar, which also has the ability to read the data for the currently running GameServer.
I tend to believe that a ConfigMap with the serialized CRD object is better, since it'd be strictly read-only for the current game pod.
Work on this is starting here: https://github.com/markmandel/agones/tree/feature/multiple-ports
Implemented in #283
This is a feature request. I just wanted to throw out a use case that might not have been identified yet.
Some UE4 dedicated servers need access to both a game port and something called a "beacon" port. It's possible other dedicated servers require the ability to expose multiple ports as well.
It looks like the GameServer CRD is currently only able to express a single port. It might help to make that a list in the future.