
fix: disable runner #64

Merged

bavarianbidi merged 4 commits into main from toggle_runner_reconciliation on Jan 22, 2024
Conversation

@bavarianbidi (Member)

No description provided.

@rafalgalaw (Collaborator) commented Jan 15, 2024

I'm not sure about my implementation... On the one hand, I like that it cleans up runner CRs if the runner-sync flag is set to false and then disables itself; on the other hand, this might be confusing, as there is still something going on for the end user even though the feature is disabled. I also thought about starting the runner controller not with the manager but separately, so one could disable the entire reconciler at runtime...
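A minimal sketch of the flag-gated startup variant under discussion, assuming controller-runtime; the flag name, the RunnerReconciler stub, and the corev1.Pod placeholder for the watched type are illustrative, not taken from this PR:

// main.go (sketch): only register the runner reconciler with the manager
// when the hypothetical --runner-reconciliation flag is enabled, so a
// disabled reconciler never starts instead of disabling itself later.
package main

import (
	"context"
	"flag"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// RunnerReconciler is a stand-in for the project's runner controller.
type RunnerReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

func (r *RunnerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	return ctrl.Result{}, nil // real reconciliation logic would go here
}

func (r *RunnerReconciler) SetupWithManager(mgr ctrl.Manager) error {
	// corev1.Pod is a placeholder; the real controller would watch the Runner CRD.
	return ctrl.NewControllerManagedBy(mgr).For(&corev1.Pod{}).Complete(r)
}

func main() {
	var runnerReconciliation bool
	flag.BoolVar(&runnerReconciliation, "runner-reconciliation", true,
		"reconcile Runner custom resources")
	flag.Parse()

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		os.Exit(1)
	}

	// Skip registration entirely when the flag is off: the controller is
	// never added to the manager, so nothing runs behind the user's back.
	if runnerReconciliation {
		if err := (&RunnerReconciler{
			Client: mgr.GetClient(),
			Scheme: mgr.GetScheme(),
		}).SetupWithManager(mgr); err != nil {
			os.Exit(1)
		}
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}

With that wiring, setting --runner-reconciliation=false means the reconciler simply never starts, which avoids the "something still going on while the feature is disabled" confusion raised above.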

Comment on lines +358 to 355
cache := make(map[string]struct{})
var diff []string

for _, runner := range garmRunners {
	cache[runner] = struct{}{}
}

for _, runnerCR := range runnerCRs {
-	if !slices.Contains(garmRunners, runnerCR) {
+	if _, found := cache[runnerCR]; !found {
		diff = append(diff, runnerCR)
	}
}
@bavarianbidi (Member, Author)

Why is the new cache var needed? Doesn't the old if !slices.Contains(garmRunners, runnerCR) { implementation do exactly the same as the new if _, found := cache[runnerCR]; !found {?

I would personally stick to the funcs of the official slices package.

@rafalgalaw (Collaborator)

This is just about optimization. Before, we had O(n^2) runtime, as there were basically two nested for loops (slices.Contains scans all of garmRunners for every runnerCR). I did some benchmarking on that with Go's built-in benchmark support in the testing package, and this solution is much more efficient because of the map's constant lookup time, but of course it has a slightly higher memory footprint.
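A minimal sketch of such a benchmark, assuming synthetic inputs (the package name, input sizes, and runner names are illustrative, not the PR's actual data):

// diff_bench_test.go (sketch): compares the old slices.Contains approach
// with the new map-based lookup; inputs are synthetic, for illustration only.
package diff

import (
	"fmt"
	"slices"
	"testing"
)

// makeInput builds two half-overlapping string slices of length n.
func makeInput(n int) (garmRunners, runnerCRs []string) {
	for i := 0; i < n; i++ {
		garmRunners = append(garmRunners, fmt.Sprintf("runner-%d", i))
		runnerCRs = append(runnerCRs, fmt.Sprintf("runner-%d", i+n/2))
	}
	return
}

func BenchmarkSlicesContains(b *testing.B) {
	garmRunners, runnerCRs := makeInput(1000)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		var diff []string
		for _, runnerCR := range runnerCRs {
			if !slices.Contains(garmRunners, runnerCR) { // O(len(garmRunners)) scan per CR
				diff = append(diff, runnerCR)
			}
		}
		_ = diff
	}
}

func BenchmarkMapLookup(b *testing.B) {
	garmRunners, runnerCRs := makeInput(1000)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		cache := make(map[string]struct{}, len(garmRunners))
		for _, runner := range garmRunners {
			cache[runner] = struct{}{}
		}
		var diff []string
		for _, runnerCR := range runnerCRs {
			if _, found := cache[runnerCR]; !found { // O(1) lookup per CR
				diff = append(diff, runnerCR)
			}
		}
		_ = diff
	}
}

Running go test -bench=. -benchmem would then show the per-operation time of each variant alongside the map version's extra allocations.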

@bavarianbidi (Member, Author) left a comment

> I'm not sure about my implementation... On the one hand, I like that it cleans up runner CRs if the runner-sync flag is set to false and then disables itself; on the other hand, this might be confusing, as there is still something going on for the end user even though the feature is disabled. I also thought about starting the runner controller not with the manager but separately, so one could disable the entire reconciler at runtime...

I get your idea of cleaning up old/reflected runners, but tbh it feels wrong to do it when the flag is set to not do it.
IMHO it's fine to have orphaned runners in the cluster while the flag is disabled. Once the flag gets enabled again, they will be gone, and in the meantime it's still possible to delete them by hand.

@rafalgalaw marked this pull request as ready for review on January 16, 2024, 07:29
@bavarianbidi (Member, Author) left a comment

LGTM - would approve if I'm allowed to 😅

@rafalgalaw self-requested a review on January 22, 2024, 11:11
@bavarianbidi merged commit b36d49b into main on Jan 22, 2024
3 checks passed
@bavarianbidi deleted the toggle_runner_reconciliation branch on January 22, 2024, 15:38