add taskrun controller implementation #86
Conversation
EXCITING!!
my main piece of feedback is I'd like to see this go in without most of the status checking: maybe just create when needed, delete when needed (or just create), and deal with status/conditions in the next iteration.
currently there are no tests around the controllers and this piece is not fully tested. once we have #75, where @tejal29 is writing some initial unit tests, we can add them for this logic too.
```diff
 	Outputs *Outputs `json:"outputs,omitempty"`
-	BuildSpec BuildSpec `json:"buildSpec"`
+	BuildSpec *buildv1alpha1.BuildSpec `json:"buildSpec"`
```
switching to the build CRD type will make our code easier
- also let's delete `type BuildSpec struct` in the same file
```go
// AbortedState means prow killed the job early (new commit pushed, perhaps).
AbortedState = "aborted"
// ErrorState means the job could not schedule (bad config, perhaps).
ErrorState = "error"
```
can we use the types we already defined instead of replacing with the prow ones? from what i can see here it doesn't seem like the prow conditions are following the k8s api conventions but what we had before did (and so does most of knative)
As discussed this morning with @tejal29 we could completely remove most of the condition logic here and replace it with something like:
- Does it need to exist? Then create it
- Does it need to be deleted? Then delete it (or even leave this for a later iteration)
p.s. I think in the long term we should use https://github.com/knative/pkg/blob/master/apis/duck/v1alpha1/conditions_types.go but due to a lack of docs im not exactly sure how to just dive in there (maybe the docs exist and i just can't find them @mattmoor ?)
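The "create it if it needs to exist, delete it if not" flow suggested above can be sketched with simplified stand-in types (the real controller works against the generated clientsets; `TaskRun` and `buildStore` here are illustrative only):

```go
package main

import "fmt"

// Hypothetical simplified types standing in for the real TaskRun and Build
// objects; the real controller uses the generated clientsets and listers.
type TaskRun struct {
	Name    string
	Deleted bool
}

type buildStore map[string]bool

// reconcile sketches the reduced logic proposed above: create the Build
// if the TaskRun needs one, delete it if not, and skip condition
// bookkeeping entirely for the first iteration.
func reconcile(tr TaskRun, builds buildStore) string {
	wantBuild := !tr.Deleted
	haveBuild := builds[tr.Name]
	switch {
	case wantBuild && !haveBuild:
		builds[tr.Name] = true
		return "created"
	case !wantBuild && haveBuild:
		delete(builds, tr.Name)
		return "deleted"
	}
	return "no-op"
}

func main() {
	builds := buildStore{}
	fmt.Println(reconcile(TaskRun{Name: "echo-hello"}, builds))                // created
	fmt.Println(reconcile(TaskRun{Name: "echo-hello"}, builds))                // no-op
	fmt.Println(reconcile(TaskRun{Name: "echo-hello", Deleted: true}, builds)) // deleted
}
```

Status/conditions would then layer on top of this skeleton in the next iteration.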
+1 agree with @bobcatfish. Lets add them if we need those and end up using these.
Here is a code example from serving https://github.com/knative/serving/blob/76500a6917ce4aec522651677dd9bac677c6034f/pkg/reconciler/v1alpha1/revision/revision.go#L265
@pivotal-nader-ziada thanks for the `reconcileBuild` links!
I think our plan for Task Status is to adopt the same status conditions as Build and sync the Tasks status to be the same as the Build's status
```go
switch {
case tr.DeletionTimestamp == nil:
	wantBuild = true
}
```
- i think an if statement would be clearer here? or do we already know we need other cases?
+1 for if
is there a possibility to merge the multiple switch cases info one switch with the code actually inside the case? would be easier to read if possible
if the taskRun is deleted, what is the case where you need to create anything? should we just assume it's done?
```go
// Get related task for taskrun
t, err := c.PipelineClientSet.PipelineV1alpha1().Tasks(tr.Namespace).Get(
	tr.Spec.TaskRef.Name, metav1.GetOptions{})
```
- i think this is only used in the `wantBuild && !haveBuild:` case, maybe we could move this logic there? (maybe even in a separate function, so that this function is only about understanding the cases?)
```go
b, err := c.getBuild(namespace, name)
switch {
case errors.IsNotFound(err):
	// Do not have a build
```
- this should probably be an error if we expect the build to already exist right?
- yes, should either return an error or create a build if it needs it
if the error is that a build is not found, that means that we need to create a build (haveBuild = false, which is its default value), which is a passthrough here. In the case that it is any other error (the next switch case) we return an error. The passthrough is required to catch any other kind of error in the next statement.
I'll collapse this into one statement
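The collapsed single-statement version can be sketched like this (self-contained, with a sentinel error standing in for apimachinery's `errors.IsNotFound`):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the apierrors.IsNotFound check used in the
// real controller; everything else here is a simplified sketch.
var errNotFound = errors.New("build not found")

func getBuild(exists bool) (string, error) {
	if !exists {
		return "", errNotFound
	}
	return "build", nil
}

// haveBuild collapses the two switch cases into one statement: a
// not-found error is a passthrough (haveBuild stays false), while any
// other error is returned to the caller.
func haveBuild(exists bool) (bool, error) {
	_, err := getBuild(exists)
	if err != nil && !errors.Is(err, errNotFound) {
		return false, fmt.Errorf("get build: %v", err)
	}
	return err == nil, nil
}

func main() {
	have, err := haveBuild(false)
	fmt.Println(have, err) // false <nil>: not found means "create one", not an error
	have, err = haveBuild(true)
	fmt.Println(have, err) // true <nil>
}
```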
```go
var haveBuild bool

b, err := c.getBuild(namespace, name)
```
- can we add a comment about how we're using the same `name` for `Build` and `TaskRun`?
```go
case errors.IsNotFound(err):
	// Do not have a build
case err != nil:
	return fmt.Errorf("get build: %v", err)
```
- maybe some more detail in this error message, e.g. `expected build %s to exist for taskRun %s but failed to retrieve it: %s`
```go
switch {
case !wantBuild:
	if !haveBuild {
		fmt.Printf("Deleted %s", key)
```
- should this be done via the `logger`? (also maybe more info in the log message)
- same thing for the other `fmt.Printf`s in this review
changed, good catch!
```go
}
switch or := metav1.GetControllerOf(b); {
case or == nil, or.APIVersion != groupVersionKind.GroupVersion().String(), or.Kind != groupVersionKind.Kind:
	return nil // Not controlled by this
```
- what is `or`? I'm kind of confused by this case, maybe more comments?
- `or` i guess is ownerRef
`or` is the ownerRef, which is what is being checked. I've changed the name to `ownerRef`.
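The renamed check can be sketched with a simplified stand-in `OwnerReference` (the real code uses `metav1.OwnerReference` from `metav1.GetControllerOf`; the group/version string below is an assumption for illustration):

```go
package main

import "fmt"

// Hypothetical simplified OwnerReference; the real code uses the
// metav1.OwnerReference returned by metav1.GetControllerOf.
type OwnerReference struct {
	APIVersion string
	Kind       string
}

const (
	taskRunAPIVersion = "pipeline.knative.dev/v1alpha1" // assumed group/version
	taskRunKind       = "TaskRun"
)

// controlledByTaskRun spells out the check with a descriptive name: the
// build is ours only if its controller ownerRef exists and matches our GVK.
func controlledByTaskRun(ownerRef *OwnerReference) bool {
	if ownerRef == nil || ownerRef.APIVersion != taskRunAPIVersion || ownerRef.Kind != taskRunKind {
		return false // not controlled by this controller
	}
	return true
}

func main() {
	fmt.Println(controlledByTaskRun(nil))                                                         // false
	fmt.Println(controlledByTaskRun(&OwnerReference{taskRunAPIVersion, taskRunKind}))             // true
	fmt.Println(controlledByTaskRun(&OwnerReference{taskRunAPIVersion, "Build"}))                 // false
}
```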
```go
descSucceeded = "succeeded"
descFailed = "failed"
descUnknown = "unknown status"
descMissingCondition = "missing end condition"
```
as far as i can tell, these pretty much mirror the statuses, so i don't feel like they add much and we can remove them. in the recommended api conventions, I think the examples we have (which use `reason`, `type` and `message`) are pretty good.
(We can leave most of this for the next iteration tho, but in that case I'd rather leave it out of this PR if we can)
```go
groupVersionKind = schema.GroupVersionKind{
	Group:   v1alpha1.SchemeGroupVersion.Group,
	Version: v1alpha1.SchemeGroupVersion.Version,
	Kind:    "TaskRun",
```
- add constant.
changed groupVersionKind to be a const
```go
case or == nil, or.APIVersion != groupVersionKind.GroupVersion().String(), or.Kind != groupVersionKind.Kind:
	return nil // Not controlled by this
}
fmt.Printf("Delete builds/%s", key)
```
- use logger here and elsewhere.
```go
}

// Ensure taskrun status is correct
haveState := tr.Status.State
```
the `ntr` is not used anywhere? Can we just move lines 219:232 to a function, and use that in L237 in reconcile?
this is an error on my part, ntr should be used in updateStatus. I will need to move some things around for this
I will refactor lines 219:232 to a function
(plz add link to #59 in commit message)
For this to work, would a person need to deploy any Build CRD types to their cluster as well? If so plz update DEVELOPMENT.md re. how to do this
```go
ntr.Status.State = wantState
ntr.Status.Description = wantMsg
}

// Reconcile this copy of the task run and then write back any status
// updates regardless of whether the reconciliation errored out.
err = c.reconcile(ctx, tr)
```
I was expecting the actual logic to check/create the build to be in this private func, see example here: https://github.com/knative/serving/blob/76500a6917ce4aec522651677dd9bac677c6034f/pkg/reconciler/v1alpha1/revision/revision.go#L314:22
otherwise consider if we need this func?
the way that @pivotal-nader-ziada 's example code explicitly expresses phases, then delegates work to them seems pretty slick!
yes, I realize now that I put this in the wrong place (Reconcile vs reconcile). I will move everything to the correct `reconcile` function, thanks for catching this!
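The Reconcile/reconcile split being agreed on here can be sketched as follows (simplified stand-in types; the real exported Reconcile also fetches the object from the lister and patches status through the clientset):

```go
package main

import "fmt"

// Hypothetical simplified TaskRun; illustrative only.
type TaskRun struct {
	Status string
}

// reconcile holds the actual business logic: check/create the build
// and set the desired status on the local copy.
func reconcile(tr *TaskRun) error {
	tr.Status = "BuildCreated"
	return nil
}

// Reconcile is the exported entry point: it delegates the real work to
// reconcile, then writes status back regardless of whether the
// reconciliation errored out.
func Reconcile(tr *TaskRun) error {
	reconcileErr := reconcile(tr)
	fmt.Println("updating status:", tr.Status)
	return reconcileErr
}

func main() {
	tr := &TaskRun{}
	if err := Reconcile(tr); err != nil {
		fmt.Println("error:", err)
	}
}
```

This mirrors the phase-delegation shape in the linked serving revision reconciler.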
```go
	},
},
Spec: *t.Spec.BuildSpec,
}, nil
```
you have to check if the BuildSpec has a template and apply that template.
Applying the template includes:
- checking type of template
- getting appropriate template object using lister
- apply the template (use ApplyTemplate() from Build?)
I have currently omitted this change as I believe the Build controller will do this for us:
https://github.com/knative/build/blob/e69c21ed74a05d83d68fc15bacbc458807be8288/pkg/controller/build/controller.go#L338
I could definitely be misunderstanding something though, let me know if you still believe we need to call ApplyTemplate()
```diff
@@ -136,6 +159,79 @@ func (c *Reconciler) Reconcile(ctx context.Context, key string) error {
 	// Don't modify the informer's copy.
 	tr := original.DeepCopy()

 	// build creation and status update adapted from prow: https://github.com/kubernetes/test-infra/blob/8faa040989cf8552683dcdd4400505ca62a3c408/prow/cmd/build/controller.go#L207
```
There is a tracker defined in the reconciler (meant to track build status) that I don't see used
I think I understand now: a tracker is used to monitor a child object (in this case a Build), and when the tracker sees a change on the tracked object, it runs `reconcile`? I have added a tracker.Track statement to reconcile, thanks for pointing out the tracker!
@pivotal-nader-ziada could you clarify how you thought the tracker would be used? From the comment `// TODO use build tracker to fetch build` it seems that it might be possible to fetch a build with a tracker. My understanding is that a tracker is an event watcher (tracker) that runs `reconcile` when whatever it is tracking changes. Can you shed some light here? Thanks!
You are correct Aaron, the tracker is to watch the build. Sorry for the confusing comment
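The tracker idea settled on here, watching the child Build and re-running the parent's reconcile when it changes, can be sketched without the knative/pkg machinery (names below are illustrative, not the real tracker API):

```go
package main

import "fmt"

// tracker is a minimal sketch: it maps a tracked object's key to the
// callbacks that should fire when that object changes.
type tracker struct {
	watchers map[string][]func()
}

func newTracker() *tracker {
	return &tracker{watchers: map[string][]func(){}}
}

// Track registers interest in a child object (e.g. a Build) on behalf
// of a parent (e.g. a TaskRun).
func (t *tracker) Track(key string, onChange func()) {
	t.watchers[key] = append(t.watchers[key], onChange)
}

// objectChanged simulates an informer event for the tracked object,
// which in the real controller requeues the parent for reconcile.
func (t *tracker) objectChanged(key string) {
	for _, f := range t.watchers[key] {
		f()
	}
}

func main() {
	tk := newTracker()
	tk.Track("builds/echo-hello", func() {
		fmt.Println("requeue taskrun echo-hello for reconcile")
	})
	tk.objectChanged("builds/echo-hello")
}
```

So the tracker is not for fetching the Build (the lister does that); it only wires Build changes back into the TaskRun's reconcile loop.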
```go
}
return &buildv1alpha1.Build{
	ObjectMeta: metav1.ObjectMeta{
		Name: t.Name,
```
probably want this to be the name of the taskrun, since one task could have many task runs
done
```go
)

// TaskRunSpec defines the desired state of TaskRun
type TaskRunSpec struct {
	TaskRef TaskRef     `json:"taskRef"`
	Trigger TaskTrigger `json:"trigger"`
	// +optional
	// +optionalm
```
typo?
done
This is a WIP of adding an integration test to cover #59, which is a simple TaskRun which has no inputs or outputs but just runs a container that echoes helloworld. At this point we have the logic to: - Create the Task and TaskRun - Wait for the TaskRun to have a condition (TODO: actually check the conditions, waiting to rebase onto #86) - Get the Build associated (not yet created, will be via #86) - Get the Pod for the build - TODO: get the logs for that Pod (we've been looking at https://github.com/knative/build/blob/e8c2cb6eb5cb09d9737ca9e6da4a1c68af3247b2/pkg/logs/logs.go#L36:6 for inspiration) This also changes namespaces to be test specific, so each test can create a namespace and tear it down, without needing to keep track of resources created and delete them individually.
all feedback has been addressed
Also had to change some optional resources to be pointers, because json marshalling will still require optional fields to be present if they are not nullable (https://stackoverflow.com/questions/18088294/how-to-not-marshal-an-empty-struct-into-json-with-go)
```go
switch {
case err != nil && !errors.IsNotFound(err):
	return fmt.Errorf("retrieving build %s for taskRun %s: %v", tr.Name, tr.Name, err)
case b.DeletionTimestamp == nil:
```
I believe this will panic when `err != nil` is true and `errors.IsNotFound(err)` is true
```go
b, err := c.getBuild(tr.Namespace, tr.Name)
switch {
// if there is an error that is NOT "build not found"
case err != nil && !errors.IsNotFound(err):
```
this case is for any error that is not "build not found". if "build is not found", haveBuild will remain false from initialization, which is desired. let me know if i am still missing something or misunderstanding, thanks!
the `b.DeletionTimestamp` will panic if the build is not found and `b` is nil
ah of course, I will change it back to the original form
```go
	descMissingCondition = "missing end condition"
)

const (
```
This should be `var` instead of `const`. It's making the code not compile and that's why the tests in the PR are failing.
i've changed this back to `var`
```go
	return fmt.Errorf("delete build: %v", err)
}
return nil
case tr.Status.Conditions[0].Status != corev1.ConditionUnknown:
```
This will panic if the array is empty which what happened when I tried to run it
fixed now, thanks!
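The usual fix for this class of panic is to guard the index before dereferencing; a minimal sketch with a stand-in condition type (the real comparison is against `corev1.ConditionUnknown`):

```go
package main

import "fmt"

// Simplified condition; illustrative stand-in for the real status type.
type Condition struct {
	Status string
}

// reachedTerminalState guards the index so an empty Conditions slice no
// longer panics: having no conditions yet simply means "still running".
func reachedTerminalState(conditions []Condition) bool {
	return len(conditions) > 0 && conditions[0].Status != "Unknown"
}

func main() {
	fmt.Println(reachedTerminalState(nil))                              // false, no panic
	fmt.Println(reachedTerminalState([]Condition{{Status: "Unknown"}})) // false
	fmt.Println(reachedTerminalState([]Condition{{Status: "True"}}))    // true
}
```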
@aaron-prindle can you add at least one test to make sure the code runs with no issues, we can add more tests later
all changes made, changed taskrun.go controller to no longer support delete trigger (mirroring pipelinerun). this simplifies the controller logic. also added taskrun_test.go which mirrors pipelinerun_test.go
```go
descSucceeded = "succeeded"
descFailed = "failed"
descUnknown = "unknown status"
descMissingCondition = "missing end condition"
```
- i think these can be deleted, doesn't seem like they are used anymore
```go
)

var (
	groupVersionKind = schema.GroupVersionKind{
```
- this is misc cleanup but we should change `schema` to `scheme` maybe? kinda weird that we are using both `schema` and `scheme`
I'm not sure what is suggested here, the lib's default name is `schema`. Would you want `schema` to be aliased to something else?
nope my bad, i misunderstood and thought we had made a package called schema
```go
	Name: tr.Name,
}
if err := c.tracker.Track(buildRef, tr); err != nil {
	logger.Errorf("build %s for taskrun %s: %v", buildRef, tr.Name, err)
```
- could the error say something about what it was trying to do? e.g. something like `failed to create tracker for build %s for taskrun %s: %v`
```go
// make build obj from task buildspec
logger.Infof("make build: %s", tr.Name)
if b, err = c.makeBuild(t, tr); err != nil {
	return fmt.Errorf("make build: %v", err)
```
- `failed to create a build for taskrun %s: %v`
```go
func (c *Reconciler) getBuild(namespace, name string) (*buildv1alpha1.Build, error) {
	return c.buildLister.Builds(namespace).Get(name)
}
```
- i dont think we need these wrapper methods, this might have been based on a previous iteration of the pipelinerun controller (for mocking), check out the latest greatness https://github.com/knative/build-pipeline/blob/master/pkg/reconciler/v1alpha1/pipelinerun/pipelinerun.go#L69
One big find here was that the logs in the pods went away too quickly for us to read them, even when trying to read them IMMEDIATELY after the Build completed, so we need a better solution here (@aaron-prindle you or I should make a task in the october milestone for this)
This pr implements a simple TaskRun controller that creates a knative/build Build and updates the TaskRun status to reflect the Build status. We delegate to the knative/build controller to do the work of actually fulfilling the Build itself - meaning we have a hard dependency on knative/build. The integration test doesn't actually assert on the logs output by the build step because the pods disappear immediately after completion, so we need a better solution here (e.g. writing to a PVC in the test) - in the long run we need to implement better log support (#107). Remaining work for #59 is to improve unit test coverage, and add some docs on running + debugging.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: aaron-prindle, bobcatfish
We don't yet have validation implemented for our Runs, but as of #86 we are using the same Conditions as Knative Build: just `Succeeded` (where unknown means running, true means it finished successfully and false means it finished but failed)
This PR implements a simple TaskRun controller that creates a Build and updates the TaskRun status to reflect the Build status. Currently there are no tests around the controllers and this piece is not fully tested.