sample: update local-drone-control-scala for kubernetes #1015
Conversation
looking good
@@ -8,6 +8,7 @@ include "cluster"

akka {
  loglevel = DEBUG
  remote.artery.canonical.hostname = "127.0.0.1"
Do we have to define this at all? Doesn't it work with the default?
Will try it, was just retaining the config that was there before.
Yeah, shouldn't be needed. And it's a single-node cluster anyway.
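For reference, a minimal sketch of what the artery section could look like with the explicit hostname dropped, keeping only the port and the environment override already used in this sample (whether this is the final shape is an assumption):

# Sketch only: no explicit canonical.hostname, relying on Akka's default
# hostname resolution for the single-node case.
akka.remote.artery {
  canonical.port = 2552
  # optional override from the environment, as in the existing config
  canonical.port = ${?REMOTE_PORT}
}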
@@ -0,0 +1,39 @@
# Production configuration for running the local-drone-control service in Kubernetes,
Maybe we should not name this prod.conf, since the single-node application.conf is also a prod alternative. Maybe application-kubernetes.conf?
Sounds good. Was going to name it kubernetes.conf originally.
Although, I guess the single node with H2 version can run in kubernetes too.
True, but then we should have a rather different deployment yml for that, so that it's not forming a cluster with other nodes.
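To illustrate the naming split being discussed, an application-kubernetes.conf could hold just the Kubernetes-specific bootstrap settings. The keys below are standard Akka Management / Akka Discovery options, but the label value and the exact file contents are assumptions, not the actual file from this PR:

# Sketch only: Kubernetes-specific overrides for the clustered variant.
akka.management.cluster.bootstrap.contact-point-discovery {
  # discover peer pods through the Kubernetes API
  discovery-method = kubernetes-api
}
akka.discovery.kubernetes-api {
  # select peer pods by label; the label value here is hypothetical
  pod-label-selector = "app=local-drone-control"
}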
  canonical.port = 2552
  canonical.port = ${?REMOTE_PORT}
}
How does this config file compare to the other clustered samples, e.g. restaurant-drone-deliveries-service-scala? I'm missing, for example, downing-provider-class.
Ok. I'll make sure it has everything. And it could make sense to consolidate into the cluster.conf file for when it's clustered (and not include it from the single-node application.conf).
@@ -1,5 +1,3 @@
akka {
  actor.provider = cluster
Maybe this config file is just confusing for this sample, and we can put this in application.conf and application-kubernetes.conf?
We could put more cluster config here, for the clustered variants, and then only have provider = cluster in the default single-node application.conf.
Ok, I think something is missing for the single-node one as well: the contact-point-discovery config should be there. That is now only in local-shared.conf, and the local files are intended for local dev mode.
Single-node has its own main, does a self-join there, and doesn't start cluster bootstrap. Local has config discovery.
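To make the config-discovery remark concrete, config-based contact-point discovery for local dev mode looks roughly like this; the service name, host and port are illustrative assumptions, not copied from local-shared.conf:

# Sketch of config-based discovery as used in local dev mode.
akka.management.cluster.bootstrap.contact-point-discovery {
  service-name = "local-drone-control"
  discovery-method = config
}
akka.discovery.config.services {
  "local-drone-control" {
    # a single local contact point; 8558 is the default Akka Management HTTP port
    endpoints = [
      { host = "127.0.0.1", port = 8558 }
    ]
  }
}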
Alright, I'll look over the stuff. It's confusing to use different approaches, but nothing you should spend time on in this PR.
LGTM, I'll look deeper at the naming of configs and the bootstrap of single node.
Config reworked per the suggestions. Native image config updated for the changes (SBR included). Tested in k3s again.
Update local-drone-control config to run in kubernetes. Run with the native image tracing agent to generate updated native image config. Tested with k3s.