Support dependencies between build artifacts - 1 #4828
Conversation
Force-pushed from 5d45010 to 9285542.
Codecov Report

```diff
@@            Coverage Diff             @@
##           master    #4828      +/-   ##
==========================================
+ Coverage   71.85%   71.96%   +0.10%
==========================================
  Files         356      357       +1
  Lines       12215    12327     +112
==========================================
+ Hits         8777     8871      +94
- Misses       2786     2799      +13
- Partials      652      657       +5
```

Continue to review the full report at Codecov.
This is great. A few thoughts:
- Add a test case for `requires` to make sure this works, or instructions for reviewers to check it out.
- I feel `waitForDependencies` should be called in a forever loop. Not sure how it monitors whether all dependent artifacts are built.
pkg/skaffold/build/scheduler.go (outdated)

```go
func (c *scheduler) run(ctx context.Context, a *latest.Artifact, build func() (string, error), onSuccess func(tag string), onError func(err error)) {
	dag := c.dagMap[a.ImageName]
	err := dag.waitForDependencies(ctx)
```
This should happen in a forever loop, right, until all dependent artifacts are built?
PTAL at the `InOrder` function, as I removed this class. Also see my explanation below.
Force-pushed from c357b95 to aa0a5bb.
There are success and failure test cases in …

It tries reading from every channel that represents each of its dependencies. The single for loop over the dependencies slice completes only when all of those channels are closed, so it doesn't need a forever loop.
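For reference, here is a minimal runnable sketch of that channel pattern (the `waitForDeps` name and wiring are illustrative, not this PR's actual code). Each dependency gets a channel that is closed when its build succeeds; since a receive on a closed channel returns immediately, one pass over the slice blocks exactly until every dependency has finished.

```go
package main

import (
	"context"
	"fmt"
)

// waitForDeps blocks until every dependency channel is closed or the
// context is cancelled. No forever loop is needed: a receive on a closed
// channel returns at once, so one pass over the slice suffices.
func waitForDeps(ctx context.Context, deps []chan struct{}) error {
	for _, done := range deps {
		select {
		case <-done: // closed when that dependency's build succeeds
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return nil
}

func main() {
	a, b := make(chan struct{}), make(chan struct{})
	go func() { close(a); close(b) }() // simulate two builds finishing
	if err := waitForDeps(context.Background(), []chan struct{}{a, b}); err != nil {
		fmt.Println("wait failed:", err)
		return
	}
	fmt.Println("all dependencies built")
}
```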
Config schema, config validation, build controller (only parity with existing code)
Co-authored-by: Brian de Alwis <bsd@acm.org>
Integration test added. PTAL @briandealwis, @tejal29
```go
}

func (l *logAggregatorImpl) GetWriter() (io.WriteCloser, error) {
	if err := l.checkCapacity(); err != nil {
```
This condition would never happen, since the channel capacity is the number of artifacts. Can we drop this condition to make things simpler?
I'd like to keep it. I think `logAggregator` could be reused somewhere else if we ever need concurrent runs for something, so I wanted to keep it safe; otherwise there wouldn't be any error message if writer requests exceeded the buffer capacity, and the code would deadlock.
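To illustrate the deadlock concern, here is a hypothetical sketch (the `aggregator` type and its token channel are stand-ins, not Skaffold's actual logAggregator): when writers are backed by a buffered channel of fixed capacity, an explicit capacity check turns over-subscription into a visible error instead of a silent hang on a full channel.

```go
package main

import (
	"errors"
	"fmt"
)

// aggregator hands out at most cap(slots) writers; hypothetical stand-in.
type aggregator struct {
	slots chan struct{} // buffered; one token per allowed writer
}

func newAggregator(maxWriters int) *aggregator {
	return &aggregator{slots: make(chan struct{}, maxWriters)}
}

// checkCapacity fails loudly instead of letting a later send block forever.
func (a *aggregator) checkCapacity() error {
	select {
	case a.slots <- struct{}{}:
		return nil
	default:
		return errors.New("writer requests exceed buffer capacity")
	}
}

func main() {
	agg := newAggregator(2)
	for i := 0; i < 3; i++ {
		if err := agg.checkCapacity(); err != nil {
			fmt.Println("request", i, "rejected:", err) // third request errors
			continue
		}
		fmt.Println("request", i, "granted")
	}
}
```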
Looks really great. Some nit changes to get rid of if-else branches. Looking forward to trying this out now.
Testing notes:

Observations: we are still trying to list images from the Docker daemon. This should have been aborted. On master, I don't see these (of course, with sequential builds).
Added message:

```go
if concurrency > 1 {
	color.Default.Fprintf(out, "Building %d artifacts in parallel\n", concurrency)
}
```
It shouldn't output partial logs for any build, actually. If it starts outputting logs for image 3, it'll wait until all of that image's logs are printed. So the actual builder needs to handle context cancellation to output such a message.

We do allow redirecting build logs into files, so I think it's better to show the full stack trace for both failures. The actual stack trace seems to be a failure of the individual builders to handle context cancellation properly; I can file a bug for that, but I don't want to complicate this PR with those fixes.

This was because the image pruner wasn't handling context cancellation correctly. I've made this one fix. 🙂
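The excerpt doesn't show the pruner fix itself, but cancellation-aware cleanup in Go generally looks like the following sketch (all names here are hypothetical, not the PR's code): check the context before each daemon call so an aborted run stops listing and removing images instead of warning after the fact.

```go
package main

import (
	"context"
	"fmt"
)

// pruneImages checks the context before each removal so a cancelled run
// aborts cleanly; hypothetical sketch, not Skaffold's pruner.
func pruneImages(ctx context.Context, images []string, remove func(context.Context, string) error) error {
	for _, img := range images {
		if err := ctx.Err(); err != nil {
			return err // run was aborted; skip the remaining work
		}
		if err := remove(ctx, img); err != nil {
			return fmt.Errorf("pruning %s: %w", img, err)
		}
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // simulate an aborted build
	err := pruneImages(ctx, []string{"img1", "img2"},
		func(context.Context, string) error { return nil })
	fmt.Println(err) // context canceled
}
```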
Thanks @tejal29 for such detailed test notes. 👏🏽
Sounds good.
Sounds good.
+1
This is now always running builders with their output going to a pipe, so we lose colour in the build logs (since it's not `os.Stdout`, a TTY). If we're at concurrency=1, it would be better to send the provided writer down and skip the logAggregator.

When running with concurrency=0, it seems an image is chosen at random to follow live. It might be better to just hide the build logs and instead provide feedback on the builds as they happen and finish. This can be done as a separate PR.
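One way to realize the concurrency=1 suggestion, as a sketch (this helper is hypothetical; only the `GetWriter` signature matches the interface shown above): route output through the aggregator's pipe only when builds actually run in parallel, so a single sequential build writes straight to the caller's writer and ANSI colour survives when that writer is a TTY.

```go
package build

import "io"

// buildWriter picks where a build's output should go. With one build there
// is no interleaving to prevent, so use out directly (often os.Stdout, a
// TTY) and keep colour; otherwise pipe through the aggregator.
func buildWriter(out io.Writer, agg interface{ GetWriter() (io.WriteCloser, error) }, concurrency int) (io.Writer, func() error, error) {
	if concurrency == 1 {
		return out, func() error { return nil }, nil // nothing to close
	}
	w, err := agg.GetWriter()
	if err != nil {
		return nil, nil, err
	}
	return w, w.Close, nil
}
```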
```go
}

// InOrder builds a list of artifacts in dependency order.
func InOrder(ctx context.Context, out io.Writer, tags tag.ImageTags, artifacts []*latest.Artifact, artifactBuilder ArtifactBuilder, concurrency int) ([]Artifact, error) {
```
Re: colour output: `out` is normally `os.Stdout` here.
…`skaffold dev` and `skaffold debug`
Another round.

I still see the warning when pruning. I tried using …

I see the following:
Just one round of fixing the `Warnf` message on the pruner.
Merge after creating issues for the TODOs and adding them to the TODO comment.
Related: #4713
Part 1 of several PRs.
Implements sections of the design:
The current set of changes does not modify the output behavior.
Couldn't reopen the previous PR as I force-pushed the branch.