Update build status to indicate build failure reason #10817

Merged: 1 commit, Dec 1, 2016
40 changes: 40 additions & 0 deletions pkg/build/api/types.go
@@ -286,6 +286,46 @@ const (
// secret for pushing the output image.
// The build will stay in the pending state until the secret is created, or the build times out.
StatusReasonMissingPushSecret StatusReason = "MissingPushSecret"

// StatusReasonPostCommitHookFailed indicates the post-commit hook failed.
StatusReasonPostCommitHookFailed StatusReason = "PostCommitHookFailed"

// StatusReasonPushImageToRegistryFailed indicates that an image failed to be
// pushed to the registry.
StatusReasonPushImageToRegistryFailed StatusReason = "PushImageToRegistryFailed"

// StatusReasonPullBuilderImageFailed indicates that we failed to pull the
// builder image.
StatusReasonPullBuilderImageFailed StatusReason = "PullBuilderImageFailed"

// StatusReasonFetchSourceFailed indicates that fetching the source of the
// build has failed.
StatusReasonFetchSourceFailed StatusReason = "FetchSourceFailed"

// StatusReasonCancelledBuild indicates that the build was cancelled by the
// user.
StatusReasonCancelledBuild StatusReason = "CancelledBuild"

// StatusReasonDockerBuildFailed indicates that the docker build strategy has
// failed.
StatusReasonDockerBuildFailed StatusReason = "DockerBuildFailed"
)

// NOTE: These messages might change.
const (
StatusMessageCannotCreateBuildPodSpec = "Failed to create pod spec"
StatusMessageCannotCreateBuildPod = "Failed creating build pod"
StatusMessageInvalidOutputRef = "Output image could not be resolved"
StatusMessageCancelBuildFailed = "Failed to cancel build"
StatusMessageBuildPodDeleted = "The pod for this build was deleted before the build completed"
StatusMessageExceededRetryTimeout = "Build did not complete and retrying timed out"
StatusMessageMissingPushSecret = "Missing push secret"
StatusMessagePostCommitHookFailed = "Build failed because of post commit hook"
StatusMessagePushImageToRegistryFailed = "Failed to push the image to the registry"
StatusMessagePullBuilderImageFailed = "Failed pulling builder image"
StatusMessageFetchSourceFailed = "Failed to fetch the input source"
StatusMessageCancelledBuild = "The build was cancelled by the user"
StatusMessageDockerBuildFailed = "Docker build strategy has failed"
)
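
For illustration, here is a minimal sketch of how a failure path is meant to pair these constants. The reportFailure helper is hypothetical; the real call sites are in pkg/build/builder further down this diff.

// Sketch only: Reason is the machine-readable code, Message its
// human-readable counterpart; the two are always set together.
func reportFailure(build *api.Build, reason api.StatusReason, message string) {
	build.Status.Reason = reason
	build.Status.Message = message
}

// e.g. after a failed source clone:
// reportFailure(build, api.StatusReasonFetchSourceFailed, api.StatusMessageFetchSourceFailed)
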
Contributor:

what about failures during committing the container and pushing the image?

Contributor:

@bparees do we have a way to know when the commit failed?

Contributor:

we're explicitly invoking commit on the container, so i'd assume so... (for s2i builds. not for docker builds, of course)

Contributor:

nm, I was thinking about the separation between s2i code and the origin builder code, but the commit happens within the same process as the origin builder container.


// BuildSource is the input used for the build.
89 changes: 54 additions & 35 deletions pkg/build/builder/common.go
@@ -10,6 +10,8 @@ import (
"github.com/docker/distribution/reference"
"github.com/fsouza/go-dockerclient"

kclient "k8s.io/kubernetes/pkg/client/unversioned"

"github.com/openshift/origin/pkg/build/api"
"github.com/openshift/origin/pkg/client"
"github.com/openshift/origin/pkg/generate/git"
@@ -20,7 +22,14 @@ import (
// client facing libraries should not be using glog
var glog = utilglog.ToFile(os.Stderr, 2)

const OriginalSourceURLAnnotationKey = "openshift.io/original-source-url"
const (
OriginalSourceURLAnnotationKey = "openshift.io/original-source-url"
Contributor:

@bparees please open a follow up to move this to pkg/build/api and add Godoc. It doesn't follow our API conventions either.

Contributor:

@csrwng it looks like you added this.. is anything using this other than the git server? are we going to break anything meaningful if we change the annotation name?

Contributor:

@bparees so this is used for the case where we have to use a proxy because the password is too long to be used directly with git. True, it was mainly for the use case of the git server because we use service account tokens... but it could be used in other cases where you have a long password/token. It was only a temporary solution until git supported long passwords in centos/rhel. With 7.3 it's no longer necessary.
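
As a minimal sketch of that proxy workaround (the originalSourceURL variable is hypothetical), the annotation simply stashes the pre-rewrite URL on the build object:

// Before pointing the build at the git proxy, remember the real URL.
if build.Annotations == nil {
	build.Annotations = map[string]string{}
}
build.Annotations[OriginalSourceURLAnnotationKey] = originalSourceURL

// Consumers can later recover the original location:
if orig, ok := build.Annotations[OriginalSourceURLAnnotationKey]; ok {
	glog.V(4).Infof("original source URL: %s", orig)
}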

Contributor:

opened issue to track removing this stuff: #11969


// containerNamePrefix prefixes the name of containers launched by a build.
// We cannot reuse the prefix "k8s" because we don't want the containers to
// be managed by a kubelet.
containerNamePrefix = "openshift"
)

// KeyValue can be used to build ordered lists of key-value pairs.
type KeyValue struct {
@@ -66,35 +75,6 @@ func buildInfo(build *api.Build) []KeyValue {
return kv
}

func updateBuildRevision(c client.BuildInterface, build *api.Build, sourceInfo *git.SourceInfo) {
if build.Spec.Revision != nil {
return
}
build.Spec.Revision = &api.SourceRevision{
Git: &api.GitSourceRevision{
Commit: sourceInfo.CommitID,
Message: sourceInfo.Message,
Author: api.SourceControlUser{
Name: sourceInfo.AuthorName,
Email: sourceInfo.AuthorEmail,
},
Committer: api.SourceControlUser{
Name: sourceInfo.CommitterName,
Email: sourceInfo.CommitterEmail,
},
},
}

// Reset ResourceVersion to avoid a conflict with other updates to the build
build.ResourceVersion = ""

glog.V(4).Infof("Setting build revision to %#v", build.Spec.Revision.Git)
_, err := c.UpdateDetails(build)
if err != nil {
glog.V(0).Infof("error: An error occurred saving build revision: %v", err)
}
}

// randomBuildTag generates a random tag used for building images in such a way
// that the built image can be referred to unambiguously even in the face of
// concurrent builds with the same name in the same namespace.
@@ -108,11 +88,6 @@ func randomBuildTag(namespace, name string) string {
return fmt.Sprintf("%s:%s", repo, randomTag)
}

// containerNamePrefix prefixes the name of containers launched by a build. We
// cannot reuse the prefix "k8s" because we don't want the containers to be
// managed by a kubelet.
const containerNamePrefix = "openshift"

// containerName creates names for Docker containers launched by a build. It is
// meant to resemble Kubernetes' pkg/kubelet/dockertools.BuildDockerName.
func containerName(strategyName, buildName, namespace, containerPurpose string) string {
@@ -180,3 +155,47 @@ func execPostCommitHook(client DockerClient, postCommitSpec api.BuildPostCommitS
Stderr: true,
})
}

func updateBuildRevision(build *api.Build, sourceInfo *git.SourceInfo) *api.SourceRevision {
if build.Spec.Revision != nil {
return build.Spec.Revision
}
return &api.SourceRevision{
Git: &api.GitSourceRevision{
Commit: sourceInfo.CommitID,
Message: sourceInfo.Message,
Author: api.SourceControlUser{
Name: sourceInfo.AuthorName,
Email: sourceInfo.AuthorEmail,
},
Committer: api.SourceControlUser{
Name: sourceInfo.CommitterName,
Email: sourceInfo.CommitterEmail,
},
},
}
}

func retryBuildStatusUpdate(build *api.Build, client client.BuildInterface, sourceRev *api.SourceRevision) error {
return kclient.RetryOnConflict(kclient.DefaultBackoff, func() error {
Contributor:

Why isn't this a PATCH?

Contributor Author:

@smarterclayton can you give me an example where we do it with patch so that i can change it accordingly?

Contributor:

I spoke with @mfojtik who told me that @Kargakis says we don't use patch from within controllers. Can someone elaborate on why/why not and clarify our philosophy here?

Certainly we have plenty of other places where we do the pattern shown in the current code.

Contributor:

Sounds like a broken telephone. We added Patch in the idling controller for DCs and RCs. I think Patch in controllers makes sense in cases where the resource to be updated is not the main resource owned by the controller and you always want the update to succeed (eg. you mark a deployment as {inert,cancelled,complete,...} or you adopt a replica set). That being said it doesn't seem that you use retryBuildStatusUpdate in any build controller but in the builder images?

Contributor:

yes technically it's the builder image, but realistically we can consider actions the builder image takes to be direct extensions of actions the controller takes. And it is the primary object (the build object, owned by the controller) that's being updated.

So it seems like using update would follow the existing patterns, such as they are.
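
To make the trade-off concrete, here is a hedged sketch of the two shapes. The Patch call follows the generic Kubernetes client pattern and is an assumption, not origin's actual build client API, which is part of why Update was kept in this PR.

// Update with conflict retry: read the latest object, mutate it, write it
// back, and retry the whole closure on a 409 conflict.
err := kclient.RetryOnConflict(kclient.DefaultBackoff, func() error {
	latest, err := buildClient.Get(build.Name)
	if err != nil {
		return err
	}
	latest.Status.Reason = build.Status.Reason
	latest.Status.Message = build.Status.Message
	_, err = buildClient.UpdateDetails(latest)
	return err
})

// Patch: send only the changed fields, so no read or conflict loop is
// needed for fields nothing else contends on (hypothetical method):
// patch := []byte(`{"status":{"reason":"FetchSourceFailed"}}`)
// _, err := buildClient.Patch(build.Name, kapi.StrategicMergePatchType, patch)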

Contributor Author:

so do i leave it as it is?

Contributor:

@smarterclayton can you weigh back in? is using patch from controllers(or in this case, a process that's subservient to a controller) a pattern we're trying to move to? or should this be left as is since it's aligned w/ what we're doing elsewhere today?

@Kargakis can you point us to the code where you do patch in the idling controller?

Contributor Author:

@smarterclayton ^^ ping!

// before updating, make sure we are using the latest version of the build
latestBuild, err := client.Get(build.Name)
Contributor Author:

@csrwng i'm trying to avoid an update conflict on the build object, so i need to get the latest version of it. the problem is i don't have credentials to do that from the builders; i just need a Get()
tried to add this to bootstrap policy, what am i doing wrong? https://github.com/openshift/origin/pull/10817/files#diff-1d38b34f638332aebf97978ff71d9664R128

error: An error occurred while updating the build status: User "system:serviceaccount:openshift:builder" cannot get builds in project "openshift"

cc: @mfojtik @bparees

Contributor Author:

@liggitt @deads2k
can you tell me how i can get the latest build object from the API from the builder itself?
i've tried something below with adding a bootstrap policy for serviceaccount:builder but it doesn't seem to work. i need to get the latest version of the build object.

the above code fails with

User "system:serviceaccount:openshift:builder" cannot get builds in project "openshift"
also the above code executes here: https://github.com/openshift/origin/pull/10817/files#diff-c63f8d7a6dbdd8d953d6bd2afed3330bR72

i tried adding a serviceaccount:builder like this:
https://github.com/openshift/origin/pull/10817/files#diff-1d38b34f638332aebf97978ff71d9664R453
https://github.com/openshift/origin/pull/10817/files#diff-1d38b34f638332aebf97978ff71d9664R129

i also tried adding a get rule here:
https://github.com/openshift/origin/blob/master/pkg/cmd/server/bootstrappolicy/policy.go#L447
the problem is that that's the ImageBuilderRoleName = "system:image-builder"
not system:builder or something similar.
i'm quite convinced this is not the way to do it, but i don't know what else to try.

Contributor:

adding the get rule to the system:image-builder role would work:

authorizationapi.NewRule("get").Groups(buildGroup).Resources("builds").RuleOrDie(),

you need to run oadm policy reconcile-cluster-roles --confirm after rebuilding to update the loaded policy in your cluster
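
In context, the fix lands roughly like this in bootstrappolicy (a sketch; the surrounding role definition is abridged and the neighboring update rule is an assumption about the existing role):

{
	ObjectMeta: kapi.ObjectMeta{
		Name: ImageBuilderRoleName, // "system:image-builder"
	},
	Rules: []authorizationapi.PolicyRule{
		// Builders could already write build details...
		authorizationapi.NewRule("update").Groups(buildGroup).Resources("builds/details").RuleOrDie(),
		// ...and the new rule lets them fetch the latest Build first,
		// which is what retryBuildStatusUpdate's Get() needs.
		authorizationapi.NewRule("get").Groups(buildGroup).Resources("builds").RuleOrDie(),
	},
},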

Contributor Author:

@liggitt it worked, thank you very much!

if err != nil {
// usually this means we failed to get the latest build due to missing
// privileges
return err
}
if sourceRev != nil {
latestBuild.Spec.Revision = sourceRev
latestBuild.ResourceVersion = ""
}

latestBuild.Status.Reason = build.Status.Reason
latestBuild.Status.Message = build.Status.Message

if _, err := client.UpdateDetails(latestBuild); err != nil {
return err
}
return nil
})
}
44 changes: 38 additions & 6 deletions pkg/build/builder/docker.go
@@ -12,7 +12,9 @@ import (
dockercmd "github.com/docker/docker/builder/dockerfile/command"
"github.com/docker/docker/builder/dockerfile/parser"
docker "github.com/fsouza/go-dockerclient"

kapi "k8s.io/kubernetes/pkg/api"
utilruntime "k8s.io/kubernetes/pkg/util/runtime"

s2iapi "github.com/openshift/source-to-image/pkg/api"
"github.com/openshift/source-to-image/pkg/tar"
@@ -69,12 +71,22 @@ func (d *DockerBuilder) Build() error {
}
sourceInfo, err := fetchSource(d.dockerClient, buildDir, d.build, d.urlTimeout, os.Stdin, d.gitClient)
if err != nil {
d.build.Status.Reason = api.StatusReasonFetchSourceFailed
d.build.Status.Message = api.StatusMessageFetchSourceFailed
if updateErr := retryBuildStatusUpdate(d.build, d.client, nil); updateErr != nil {
utilruntime.HandleError(fmt.Errorf("error: An error occurred while updating the build status: %v", updateErr))
}
return err
}

if sourceInfo != nil {
updateBuildRevision(d.client, d.build, sourceInfo)
glog.V(4).Infof("Setting build revision with details %#v", sourceInfo)
revision := updateBuildRevision(d.build, sourceInfo)
if updateErr := retryBuildStatusUpdate(d.build, d.client, revision); updateErr != nil {
utilruntime.HandleError(fmt.Errorf("error: An error occurred while updating the build status: %v", updateErr))
}
}
if err := d.addBuildParameters(buildDir); err != nil {
if err = d.addBuildParameters(buildDir); err != nil {
return err
}

@@ -91,14 +103,14 @@ func (d *DockerBuilder) Build() error {
dockerfilePath := d.getDockerfilePath(buildDir)
imageNames := getDockerfileFrom(dockerfilePath)
if len(imageNames) == 0 {
return fmt.Errorf("no from image in dockerfile.")
return fmt.Errorf("no FROM image in Dockerfile")
}
for _, imageName := range imageNames {
if imageName == "scratch" {
glog.V(4).Infof("\nSkipping image \"scratch\"")
continue
}
var imageExists bool = true
imageExists := true
_, err = d.dockerClient.InspectImage(imageName)
if err != nil {
if err != docker.ErrNoSuchImage {
@@ -113,18 +125,33 @@
dockercfg.PullAuthType,
)
glog.V(0).Infof("\nPulling image %s ...", imageName)
if err := pullImage(d.dockerClient, imageName, pullAuthConfig); err != nil {
if err = pullImage(d.dockerClient, imageName, pullAuthConfig); err != nil {
d.build.Status.Reason = api.StatusReasonPullBuilderImageFailed
d.build.Status.Message = api.StatusMessagePullBuilderImageFailed
if updateErr := retryBuildStatusUpdate(d.build, d.client, nil); updateErr != nil {
utilruntime.HandleError(fmt.Errorf("error: An error occurred while updating the build status: %v", updateErr))
}
return fmt.Errorf("failed to pull image: %v", err)
}
}
}

if err := d.dockerBuild(buildDir, buildTag, d.build.Spec.Source.Secrets); err != nil {
if err = d.dockerBuild(buildDir, buildTag, d.build.Spec.Source.Secrets); err != nil {
d.build.Status.Reason = api.StatusReasonDockerBuildFailed
d.build.Status.Message = api.StatusMessageDockerBuildFailed
if updateErr := retryBuildStatusUpdate(d.build, d.client, nil); updateErr != nil {
utilruntime.HandleError(fmt.Errorf("error: An error occurred while updating the build status: %v", updateErr))
}
return err
}

cname := containerName("docker", d.build.Name, d.build.Namespace, "post-commit")
if err := execPostCommitHook(d.dockerClient, d.build.Spec.PostCommit, buildTag, cname); err != nil {
d.build.Status.Reason = api.StatusReasonPostCommitHookFailed
d.build.Status.Message = api.StatusMessagePostCommitHookFailed
if updateErr := retryBuildStatusUpdate(d.build, d.client, nil); updateErr != nil {
utilruntime.HandleError(fmt.Errorf("error: An error occurred while updating the build status: %v", updateErr))
}
return err
}

@@ -149,6 +176,11 @@ func (d *DockerBuilder) Build() error {
}
glog.V(0).Infof("\nPushing image %s ...", pushTag)
if err := pushImage(d.dockerClient, pushTag, pushAuthConfig); err != nil {
d.build.Status.Reason = api.StatusReasonPushImageToRegistryFailed
d.build.Status.Message = api.StatusMessagePushImageToRegistryFailed
if updateErr := retryBuildStatusUpdate(d.build, d.client, nil); updateErr != nil {
utilruntime.HandleError(fmt.Errorf("error: An error occurred while updating the build status: %v", updateErr))
}
return reportPushFailure(err, authPresent, pushAuthConfig)
}
glog.V(0).Infof("Push successful")