Improve sync #1102

Merged: dgageot merged 11 commits into GoogleContainerTools:master from improve-sync on Oct 6, 2018

Conversation

dgageot (Contributor) commented Oct 5, 2018

  • Add more logs and fix some existing ones
  • Improve error handling
  • Pass context to commands (see the sketch below)
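
A minimal sketch of what "pass context to commands" enables, assuming the sync commands shell out to kubectl (the command and paths here are illustrative, not this PR's exact code):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// exec.CommandContext kills the child process when ctx is cancelled,
	// so a hung kubectl invocation cannot block the dev loop forever.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Illustrative sync command; the real syncer builds its own arguments.
	cmd := exec.CommandContext(ctx, "kubectl", "cp", "local/file.txt", "my-pod:/app/file.txt")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("sync command failed: %v\n%s", err, out)
	}
}
```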

We can’t use skaffold labels to narrow down the
list of pods.

Signed-off-by: David Gageot <david@gageot.net>

codecov-io commented Oct 5, 2018

Codecov Report

Merging #1102 into master will decrease coverage by 0.2%.
The diff coverage is 48.27%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #1102      +/-   ##
==========================================
- Coverage   43.16%   42.96%   -0.21%     
==========================================
  Files          74       74              
  Lines        3408     3403       -5     
==========================================
- Hits         1471     1462       -9     
- Misses       1799     1801       +2     
- Partials      138      140       +2
Impacted Files Coverage Δ
pkg/skaffold/runner/runner.go 51.47% <0%> (-0.21%) ⬇️
pkg/skaffold/kubernetes/sync.go 52.94% <52.17%> (-15.48%) ⬇️
pkg/skaffold/sync/sync.go 95% <66.66%> (ø) ⬆️

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d65ab29...3e8f198.

```diff
-				if err := r.Syncer.Sync(s); err != nil {
+				color.Default.Fprintf(out, "Syncing %d files for %s\n", len(s.Copy)+len(s.Delete), s.Image)
+				if err := r.Syncer.Sync(ctx, s); err != nil {
 					logrus.Warnln("Skipping build and deploy due to sync error:", err)
 					return nil
```

Contributor:

I forgot to add this, but if we get an error here, we should add that dirty artifact back to the rebuild list.

dgageot (author):

How would you do that?
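
One way it could work, as a hypothetical sketch (nothing here is code from this PR; `dirtyArtifacts` and `s.Artifact` are assumed names): instead of returning from the dev loop, queue the artifact for a full rebuild on the next iteration.

```go
for _, s := range changed.needsResync {
	color.Default.Fprintf(out, "Syncing %d files for %s\n", len(s.Copy)+len(s.Delete), s.Image)
	if err := r.Syncer.Sync(ctx, s); err != nil {
		// Assumed fallback: a failed sync marks the artifact dirty again,
		// so the next dev-loop iteration rebuilds and redeploys it.
		logrus.Warnf("Sync of %s failed, queueing a rebuild: %v", s.Image, err)
		changed.dirtyArtifacts = append(changed.dirtyArtifacts, s.Artifact)
	}
}
```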

```diff
@@ -271,12 +271,12 @@ func (r *SkaffoldRunner) Dev(ctx context.Context, out io.Writer, artifacts []*la
 			return ErrorConfigurationChanged
 		case len(changed.needsResync) > 0:
 			for _, s := range changed.needsResync {
-				if err := r.Syncer.Sync(s); err != nil {
+				color.Default.Fprintf(out, "Syncing %d files for %s\n", len(s.Copy)+len(s.Delete), s.Image)
```

Contributor:

Update to the latest code.

dgageot (author):

You mean I'm rebased on old code?

Contributor:

Yeah, maybe just the last commit. I changed this message to a debug and an info statement.

dgageot (author):

Oh, the one I asked you to add? It didn't work well in practice :-) It didn't give enough feedback.

Contributor:

:) SGTM

```diff
-	pods, err := client.CoreV1().Pods("").List(meta_v1.ListOptions{
-		LabelSelector: labelSelector(),
-	})
+	pods, err := client.CoreV1().Pods("").List(meta_v1.ListOptions{})
```

Contributor:

Use the skaffold label selector here.

dgageot (author):

I removed it because it wasn't working. What I saw is that when I use a Deployment, the Deployment gets the labels but the pods don't.
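
For illustration, a minimal client-go sketch of the two listing strategies (the kubeconfig loading and the label key are assumptions, not skaffold's actual code; the context-free List signature matches the pre-1.18 client-go used in the snippet above):

```go
package main

import (
	"fmt"
	"log"

	meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// A label selector only matches pods that carry the label themselves.
	// Labels added to a Deployment's metadata are not copied into its pod
	// template, so this returns nothing for a labeled Deployment.
	filtered, err := client.CoreV1().Pods("").List(meta_v1.ListOptions{
		LabelSelector: "deployed-by=skaffold", // hypothetical label
	})
	if err != nil {
		log.Fatal(err)
	}

	// Listing without a selector returns every pod; the syncer can then
	// match pods by container image instead.
	all, err := client.CoreV1().Pods("").List(meta_v1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("%d pods matched the label, %d pods total\n", len(filtered.Items), len(all.Items))
}
```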

Contributor:

ah good point 👍

```go
		return errors.Wrap(err, "syncing files")
	}

	if int(performed) != len(files) {
```

Contributor:

I don't think you need to count here; errgroup handles this.

dgageot (author):

The check is to make sure that we actually attempt to sync all n files. It has nothing to do with errors: for example, the code that filtered on skaffold labels always synced zero files without reporting an error.

Contributor:

Hm, I guess this is a bit tricky. What if we have 2 replicas?

dgageot (author):

Good point, I'll see how to fix it. Each file should be synced at least once, I guess.

Contributor:

Yeah, I think that's the best we can do. But making sure we sync at least once is a good check.
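
A sketch of the "at least once" check the thread converges on (names and signatures are assumptions, not the PR's exact code): with N replicas a raw counter can legitimately differ from the number of files, so tracking each file is more robust.

```go
package sync

import "fmt"

// checkSyncedAtLeastOnce verifies that every file reached at least one pod.
// With multiple replicas a plain counter can exceed the number of files,
// so a per-file map is the safer check. `copies` maps a file path to the
// number of pods it was copied into.
func checkSyncedAtLeastOnce(files []string, copies map[string]int) error {
	for _, f := range files {
		if copies[f] == 0 {
			return fmt.Errorf("file %s was not synced to any pod", f)
		}
	}
	return nil
}
```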

```diff
-	if err := perform(s.Image, s.Copy, copyFileFn); err != nil {
-		return errors.Wrap(err, "copying files")
+func (k *KubectlSyncer) Sync(ctx context.Context, s *sync.Item) error {
+	if len(s.Copy) > 0 {
```

Contributor:

Maybe move the len check into perform? It would save a few lines of code.

dgageot (author):

I can do that, yes.
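
The suggested refactoring might look like this (a simplified sketch; the real perform in skaffold takes a command-building callback and its exact signature differs):

```go
package sync

// perform runs cmdFn unless there is nothing to do. Guarding on len(files)
// here lets Sync drop its own `if len(s.Copy) > 0` / `if len(s.Delete) > 0`
// checks at each call site.
func perform(image string, files map[string]string, cmdFn func(string, map[string]string) error) error {
	if len(files) == 0 {
		return nil // nothing to copy or delete
	}
	return cmdFn(image, files)
}
```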

Sync files sequentially. It makes the code
simpler, and the typical use case is to sync
one file to one pod anyway.

Signed-off-by: David Gageot <david@gageot.net>
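
Sequential syncing, as described in the commit message, might look like this (a sketch with assumed names; the parallel alternative it replaces would wrap each call in an errgroup's g.Go):

```go
package sync

import (
	"context"
	"fmt"
)

// syncSequentially copies files one at a time. With the typical case being
// one file and one pod, parallelism adds complexity for little gain, and
// a sequential loop makes errors easy to attribute to a specific file.
func syncSequentially(ctx context.Context, files []string,
	syncFn func(context.Context, string) error) error {

	for _, f := range files {
		if err := syncFn(ctx, f); err != nil {
			return fmt.Errorf("syncing %s: %v", f, err)
		}
	}
	return nil
}
```
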
dgageot merged commit 227ed25 into GoogleContainerTools:master on Oct 6, 2018
dgageot deleted the improve-sync branch on December 28, 2018