
Define tag policies for images #171

Open
sbose78 opened this issue May 6, 2020 · 17 comments
Labels: api, beta, design, kind/feature, ship-required
Milestone: Backlog

Comments

@sbose78
Member

sbose78 commented May 6, 2020

Let's use this issue to discuss the exhaustive list of tag policies we would like to support. We'll do the API design as part of a different discussion.

@qu1queee
Contributor

qu1queee commented May 8, 2020

@sbose78 thanks for this. Definitely we should not use latest.

I think I'm more inclined towards a digest, because a digest ensures that a container will always use the same image, unless the digest changes. For the digest I see two options:

  1. :@digest: this can come from the last commit of the source repository, or a combination of things.

  2. :tag@digest: this one is more complex; I know CRI-O didn't support it until recently and Buildah has strong opinions on it. I think with this one you add one more layer of security by making sure that the tag matches the digest.

For the :tag policy, I'm not much of a fan, because then we would need to control incrementing each of the MAJOR.MINOR.PATCH semantic versioning numbers.
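
For illustration, the two digest forms look like this (registry, repository, tag, and digest values here are made up):

```
# Option 1: pin by digest only
us.icr.io/helloworld@sha256:1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b1c2d3e4f5a6b7c8d9e0f1a2b

# Option 2: tag plus digest (the tag must resolve to that same digest)
us.icr.io/helloworld:v0.1.0@sha256:1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b1c2d3e4f5a6b7c8d9e0f1a2b
```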

@zhangtbj
Contributor

zhangtbj commented May 10, 2020

Hi Enrique,

I think these are requirements at two different levels.

  • digest is used internally so that the container can select the unique image; the end user doesn't care about it, and they cannot easily remember it anyway.

  • tag is a configuration in the Build for the end user, so that they can choose which image version they want to build and use. For example, the end user may just want to build a test image, or they may want to bump a new tag for their application's new release version, such as MAJOR.MINOR.PATCH.

For a better design, we should internally use the digest somewhere to make sure the container or the end user gets the exact image they want. But I don't think it is urgent now; it is an enhancement for maybe Q3 or a future delivery. (Knative images also come with a digest.)

The tag, on the other hand, is an important feature for us now; we should allow the end user to build images with different tags without always changing the build definition.

@sbose78
Member Author

sbose78 commented May 15, 2020

The digest isn't something we can set, right? It's a unique identifier which the container registry provides to us upon pushing an image.

From a policy perspective, I see two policies that sound implementable to me (there could be more):

  • semantic versioning x.y.z
  • git hash

@zhangtbj
Contributor

zhangtbj commented May 15, 2020

Yep, but I think we still need the NONE and MANUAL policies as well.

For a paid docker registry account, keeping lots of test images with different tags may not be allowed, and it may cost me lots of money. So a NONE policy that just uses the latest tag is enough.

For the MANUAL policy, the user should have the capability to control which tag they would like, i.e. a custom tag, like docker.io/myimage:test or docker.io/myimage/test:v1-test.

Both of these basic policies are also not hard to implement in our controller logic :)
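
As a rough sketch using the Build fields that appear later in this thread (names illustrative), a MANUAL policy is effectively what output.image already allows today:

```yaml
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: myimage-build
spec:
  output:
    # MANUAL: the user fully controls the tag
    image: docker.io/myimage/test:v1-test
```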

@sbose78
Member Author

sbose78 commented May 15, 2020

The None and Manual policies are both supported today, without being explicit about it. Users may or may not specify tags today.

When we do introduce tag policies in our API/CRD and the user skips that attribute, the current behaviour would kick in without the need to explicitly call it a Manual or None policy.

What do you think?

@zhangtbj
Contributor

Yes, I agree the None and Manual policies are supported now.

But the Manual policy is not convenient today. The end user has to:

  • modify the output.image setting in the Build CR and save
  • run a new BuildRun just for the new tag of the built image.

That is why we are providing a PR: xiujuan95@61f2113

so that we can JUST change to a new output.image or a new tag parameter in the BuildRun; the end user can then trigger a new BuildRun with the new parameter to build an image with a different tag, WITHOUT modifying the Build config...
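
As a hypothetical sketch of what that could look like (the output field on the BuildRun is the proposed addition here, not the current API):

```yaml
apiVersion: build.dev/v1alpha1
kind: BuildRun
metadata:
  name: myimage-buildrun-test
spec:
  buildRef:
    name: myimage-build
  output:
    # hypothetical override: takes precedence over the Build's output.image
    image: docker.io/myimage:test
```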

@sbose78
Member Author

sbose78 commented May 15, 2020

I feel uncomfortable making changes to the BuildRun to be able to support that when we could modify the spec.output.image in the Build CR itself :(

modify the output.image setting in build CR and save

Curious, why is this an issue?

@zhangtbj
Contributor

zhangtbj commented May 15, 2020

It comes from our user experience with the UI.

We won't ask the end user to just kubectl apply the build resource; we should let them work from the UI or CLI.

Then what should we provide to the end user?

  • The end user provides the source and build-related configurations in the UI and creates a Build.
  • A new BuildRun is auto-triggered the first time, or we click a Start Build button to start a new BuildRun for the execution.
  • Next time, if the user wants a new BuildRun with different settings (like resourceLimit or serviceAccount today), we pop up a dialog after the end user clicks the Start Build button; the end user configures some parameters, clicks Build, and we trigger a new BuildRun with the new parameters.
  • That way the end user doesn't need to modify those settings in the Build config from the UI and then trigger the BuildRun by button.

I think it is a good UX for the end user.

The Build should just keep the static source or build-policy-related data, and for the BuildRun we can have some custom parameters to build a new image without the two steps: modify Build -> trigger new BuildRun from the UI for the image tag (this can use output.image, or a new tag parameter in the BuildRun).

But yes, as you see, we haven't opened the PR yet; we can discuss and confirm in this issue first.

Any other comments about the overwrite rule? :)

@smoser-ibm

smoser-ibm commented May 18, 2020

@sbose78 Let me share my viewpoint (using a concrete example). Assume you have a production build setup. You'd have a BuildDefinition that is shared across the teams and that specifies the base parameters: buildFrom: master, buildOutput: myProdImage, CPU: 4, etc.
Every time someone/something triggers a build, a BuildRun is created from that BuildDefinition (the specs are copied from the BuildDefinition to the BuildRun, and I could potentially overwrite something on the BuildRun if I chose to, e.g. use 8 CPUs for that run).
I think so far we agree on that scenario.
Now there are two potential ways to make a more substantial change to the system. Let's assume I want to test something that is sitting on a different branch (myTestBranch).
Option A ("The Kubernetes Way") would be to take the BuildDef, change the branch to myTestBranch, and do a kubectl apply to create a BuildRun of that. In our example, that is bad practice because
a) this BuildDefinition is not only used by me, and
b) it would still potentially output to buildOutput: myProdImage, which I might not want (as there is further automation that picks up that image and deploys it).
So in my view of the world, a BuildDefinition is a higher-level construct than a plain "K8s resource", and different rules apply. Therefore, we are proposing an option B where in my scenario I would NOT touch the build definition at all, but rather create a BuildRun from the standard BuildDef and override branch to myTestBranch and buildOutput to myTestImage instead (see the sketch below). Then there is no need to change anything back in the BuildDefinition (and if my test is successful, I'd use git to merge myBranch into master and let it flow again). Hope this makes sense.
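
A rough sketch of option B in the CRD terms used elsewhere in this thread (the source/output override fields on the BuildRun are the proposal, not the existing API; names are illustrative):

```yaml
apiVersion: build.dev/v1alpha1
kind: BuildRun
metadata:
  name: mytest-buildrun
spec:
  buildRef:
    name: prod-build          # the shared BuildDefinition stays untouched
  # hypothetical overrides, per option B
  source:
    revision: myTestBranch
  output:
    image: myTestImage
```
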
/FYI @qu1queee @zhangtbj

@qu1queee
Contributor

My 2 cents on this:

Purely speaking about overriding via BuildRun, and based on the statement "so that we can JUST change to a new output.image or a new tag parameter in buildRun", this is what I'm thinking:

  • We need a common understanding of what each CRD means. I think a Build CRD hosts all configuration on what to build (e.g. source, output, details on the source like contextDir, etc.). A BuildRun is the executor block, which hosts configuration on how to build (e.g. which service account to use when building, resource constraints for the container that will execute the build, which Build to use, etc.). Modifying the output of the built image in the BuildRun does not fit the meaning of these CRDs. Yes, we can modify configuration on the BuildRun, but only configuration related to the role of that CRD. In the future we might be able to modify more things on the BuildRun, like timeouts, but only because timeouts relate to the execution block, not to the configuration of what to build.

  • For the example of already having a Build and a BuildRun, and the user requiring a change on the Build: the user needs to update the Build, and then two things can happen:

    1. After changing an existing Build, a new BuildRun is automatically triggered. You can change your Build many times; the BuildRun will always keep a snapshot of the Build used for that particular run, so that if you change the Build again later, you can still see the original Build config inside your BuildRun.
    2. After changing an existing Build, a new BuildRun needs to be manually applied.
  • For the example of having a single Build used by other people: I honestly would not do this if I were a developer. There is no need to share a Build; a Build is such a small artifact that everyone can define their own and work on their own Build separately.

  • Regarding "It would still potentially output to buildOutput: myProdImage which I might not want (as there is further automation that picks up that image and deploys it)": I think you should then have a dedicated Build for prod, another Build for stage, another for dev, etc. This gives you more control and is less error-prone (see the sketch below).
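
A sketch of that per-environment split (Build names and registry paths illustrative):

```yaml
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: helloworld-build-stage
spec:
  output:
    image: us.icr.io/helloworld-stage
---
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: helloworld-build-prod
spec:
  output:
    image: us.icr.io/helloworld-prod
```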

@zhangtbj
Contributor

zhangtbj commented May 18, 2020

  • In my opinion, the serviceAccount or timeout can already be defined in the BuildRun, but does the end user really want to set them in each BuildRun OFTEN? I think in most cases a one-time setting is enough for them.
    But for the image tag or even the git revision, from a developer perspective the end user will change them or use a different one MORE often than the serviceAccount, timeout, and resourceLimit we have today.

  • From the Tekton implementation we can see that they also don't allow overwriting arbitrary settings from the TaskRun spec, only setting the serviceAccount, timeout, and resources in the TaskRun. But they provide the params mechanism, so that the end user can define ANYTHING in a Task as a param and override it in the TaskRun with a newly supplied value (see the sketch after this list). So first of all, this should be a valid requirement, but we don't support this kind of params approach.
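
For reference, a minimal sketch of the Tekton params mechanism mentioned above (the Task, TaskRun, and IMAGE_TAG names are illustrative, not from this project):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: print-tag
spec:
  params:
    - name: IMAGE_TAG        # key and type live on the Task
      type: string
      default: latest
  steps:
    - name: print
      image: busybox
      script: |
        echo "building tag $(params.IMAGE_TAG)"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: print-tag-run
spec:
  taskRef:
    name: print-tag
  params:
    - name: IMAGE_TAG        # value supplied per run, overriding the default
      value: v0.2.0
```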

For the example of having a single Build used for other people. I honestly will not do this if I was a developer. There is no need to share a Build, a Build is such an small artifact that everyone can define their own, and work on their own Build separately.

If so, I think we are back to the old Knative Build way, which just provides a Build CR: everyone just has a Build, and to run a new build you modify a parameter in the Build and trigger it. No sharing, everyone defines their own.
That is not good, and it is why the community moved to the Tekton way of providing a Task and a TaskRun.

If we can define everything in the Build, why do we still need the BuildRun? Why do we still need a shared BuildStrategy? Modifying the Build and re-triggering it every time would be enough.

So I also think there are two requirements now:

  • how to run a new build with just a new image tag, from the end-user side
  • how we support parameter overrides

But for the first Beta release, we only require the first one.
And another idea: I am not sure if we can separate output.image in the Build like this:

```yaml
output:
  imageUrl: image-registry.openshift-image-registry.svc:5000/build-examples/nodejs-ex
  tag: v0.11.0   # default is latest
```

so that ONLY output.tag can be set from the BuildRun; then we are also fine with that.
It should also be OK for the style of a Manual image policy: the tag would then be the outcome of the image policy (like the other auto-generated tag policies), which can be part of the execution and be overridden from the BuildRun side.
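
Under that split, a BuildRun overriding only the tag could look like this (hypothetical field, following the proposal above; names illustrative):

```yaml
apiVersion: build.dev/v1alpha1
kind: BuildRun
metadata:
  name: nodejs-ex-buildrun
spec:
  buildRef:
    name: nodejs-ex-build
  output:
    # hypothetical: only the tag is overridden; imageUrl comes from the Build
    tag: v0.11.0
```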

@otaviof
Member

otaviof commented May 18, 2020

Community notes:

  • Since the source code can contain credentials, we should consider not allowing it to be overridden;
  • When multiple users share the same Build, they would want to change the source-code origin;

@qu1queee
Contributor

@zhangtbj I think the Tekton example you wrote is good, but there is a big difference. Tekton params are just a set of key/values: the values are defined on the TaskRun, and the keys (including the type) are defined on the Task. The whole params approach is there to let users parameterize their Task steps by rendering the values during runtime. However, a TaskRun is not overriding a Task API field, while in this discussion we are talking about the BuildRun overriding a Build API field.

@qu1queee
Contributor

I think #184 solves some of your questions around Tekton and overrides, @zhangtbj, but as @sbose78 mentions there, these are params from the Build that can be used in the BuildStrategy directly (which I find very useful).

@ZhuangYuZY

ZhuangYuZY commented May 18, 2020

@sbose78 @SaschaSchwarze0 @qu1queee @zhangtbj Thank you for the discussion. Let me write down in detail for our discussion.

Option 1: update both the Build and the BuildRun to build an image with a specific tag.

  1. UserA defines a Build CR for a helloworld application build:

```yaml
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: helloworld-build
spec:
  source:
    url: https://github.com/helloworld
    revision: master
  strategy:
    name: kaniko
    kind: ClusterBuildStrategy
  output:
    image: us.icr.io/helloworld
```

  2. UserA kicks off multiple builds with consecutive tags, like 0.1, 0.2, ...

```yaml
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: helloworld-build
spec:
  output:
    image: us.icr.io/helloworld:0.1
---
apiVersion: build.dev/v1alpha1
kind: BuildRun
metadata:
  name: helloworld-build-0.1
spec:
  buildRef:
    name: helloworld-build
```

The problem with this is that the Build may be changed by another UserB at the same time. Then UserA and UserB cannot get the correct Build when creating their BuildRuns, as the Build was overwritten by the other user.

And it does not make sense to ask the UI or CLI to orchestrate these two steps. The purpose of Build and BuildRun is to be customer-facing; if we still need the UI or CLI to orchestrate and make the build flow easy, it should make us rethink the design, as it is not user friendly.

Option 2: add a tag property in the BuildRun, like below.

```yaml
apiVersion: build.dev/v1alpha1
kind: BuildRun
metadata:
  name: helloworld-build-0.1
spec:
  buildRef:
    name: helloworld-build
  tag: 0.1
```

I agree that the output image is a Build parameter, but I think the tag is not, because the tag value is only meaningful for the image output by one BuildRun. This would make it easy for the end user to kick off multiple builds with specific tags.

In the future, if we support a tag policy, the Build would look like:

```yaml
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: helloworld-build
spec:
  output:
    image: us.icr.io/helloworld
    tag-policy: semantic
```

In this case, the end user may also want to build an image with a specific tag, e.g. bumping a new release 1.0. The end user could create a BuildRun like below:

```yaml
apiVersion: build.dev/v1alpha1
kind: BuildRun
metadata:
  name: helloworld-build-0.1
spec:
  buildRef:
    name: helloworld-build
  tag: 1.0
```

Only after 1.0 would the tag policy auto-generate 1.1, 1.2, ... It would be easy to only create a new BuildRun; otherwise the user would have to remove tag-policy: semantic from the Build, change the output image, create a BuildRun, and then change the Build back.

@sbose78 you can think about it from the OpenShift UX perspective. We can discuss later. Thank you.

@qu1queee qu1queee added this to the release-v1.0.0 milestone Oct 6, 2020
@qu1queee qu1queee added the beta BETA related issue label Oct 6, 2020
@qu1queee qu1queee mentioned this issue Feb 24, 2021
@qu1queee
Contributor

qu1queee commented Apr 20, 2022

From Grooming: we would like to aim for the following:

  • A SHIP on the feature that includes a well-defined proposal for the API supporting the definition of a tag policy on generated container images.
  • We aim here for the simplest policy and do not expect a full spectrum of policies. That should come in the future, once we have initial API support in Builds & BuildRuns.

The above is what we expect to happen for the v0.10.0 milestone.

@qu1queee qu1queee added the kind/feature Categorizes issue or PR as related to a new feature. label Apr 20, 2022
@qu1queee
Contributor

From Refinement: we are not committing this to a release, due to the lack of a design. Moving it to the Backlog only.

@qu1queee qu1queee modified the milestones: release-v0.13.0, Backlog Jan 11, 2024