Conversation
Codecov Report

@@            Coverage Diff            @@
##           master     #87     +/-   ##
=========================================
- Coverage    4.95%   2.05%    -2.91%
=========================================
  Files          46      45        -1
  Lines        2742    2774       +32
=========================================
- Hits          136      57       -79
- Misses       2604    2717      +113
+ Partials        2       0        -2

Continue to review full report at Codecov.
Force-pushed from 2ba829b to d101ed3
// Package v1 contains API Schema definitions for the kpack v1alpha1 API group
//
// This API group is a forked subset of https://github.com/pivotal/kpack/tree/master/pkg/apis/build/v1alpha1
I know this was a problem for Knative, since the dependencies there were very large. Is that true with kpack as well?
Yeah, I did the same for KEDA, but I'm not sure I understand the rationale, either with respect to size or versions:
- size: only code that is actually used gets linked
- versions: if our dependency depends on a different version of, say, a k8s API, that version is itself packaged under a different import path (assuming compatibility semantics post-alpha, agreed).
The primary reason for this is to avoid version hell later on, not just within system, but also within anything that consumes system (like the CLI, or other tools). There were many times when upgrading Knative was difficult not because of API changes, but because of the dependency graph. In one case, we ended up with a version deadlock because Knative Serving and Knative Build had different dependencies.
By importing the types into our project, we have a clean dependency graph that is centrally managed and can be tested. Since we're only including the structs that represent the public API, these are unlikely to change in significant ways absent an API version bump (of course, alphas will be alphas).
When we introduce a dependency like KEDA or kpack, we're not linking against its runtime, we're consuming its CRD types. If these projects (or we) were not using Go, we'd have to take the same approach of creating structs for the types. I'm avoiding the artificial shortcut that incidentally adds massive dependency graph complexity.
Like it or not, the Kubernetes Go APIs are not perfectly backwards compatible. They do make breaking changes. We should pick the version of the Kubernetes APIs we want to consume.
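To make that concrete, here is a minimal sketch of what forking only the consumed CRD types looks like. The field selection below is illustrative, not the actual kpack API surface:

// Sketch (assumed field subset, not the real kpack types): a forked copy
// keeps only the structs this project reads or writes. The JSON tags keep
// the wire format compatible with the real CRD, while our module graph
// stays free of kpack's runtime dependencies.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Image mirrors just the fields of kpack's Image resource that we consume.
type Image struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ImageSpec   `json:"spec,omitempty"`
	Status ImageStatus `json:"status,omitempty"`
}

type ImageSpec struct {
	// Tag is the image repository the build publishes to.
	Tag string `json:"tag"`
}

type ImageStatus struct {
	// LatestImage is the digest reference of the most recent build.
	LatestImage string `json:"latestImage,omitempty"`
}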
If you need third-party validation of this approach, consider Knative and Istio.
This PR now also includes controllers for Applications and Functions to be backed by kpack.
This is implemented against kpack 0.0.4 (which is pending release). One gotcha is that it requires builders to implement Buildpacks lifecycle 0.4. The current function builder uses 0.3 and is incompatible (functions will build and publish images, but the status will fail to update).
// check if status has changed before updating, unless requeued
if !result.Requeue && !equality.Semantic.DeepEqual(application.Status, originalApplication.Status) {
	// update status
	log.Info("updating application status", "application", application.Name,
Why not have additional logging, maybe at debug level?
We need to address logging. I tend to do better by thinking about a component of the system in a cross-cutting fashion (addressing logging for all controllers at once) rather than addressing it piecemeal, with gaps in one controller but not another.
There are other similar concerns, like tests, that I'd like to defer until we get basic functionality in place and then address comprehensively.
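For reference, a minimal sketch of leveled logging with logr, matching the V(1) pattern in the diff below. The zap setup call is an assumption based on controller-runtime's sigs.k8s.io/controller-runtime/pkg/log/zap package; the resource names are placeholders:

package main

import (
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// Assumed setup: zap.New with UseDevMode emits verbose,
	// human-readable output suitable for development.
	log := zap.New(zap.UseDevMode(true))

	// Info: user-visible state transitions.
	log.Info("updating application status", "application", "my-app")

	// V(1): debug-level detail, emitted only when verbosity >= 1.
	log.V(1).Info("resolved target image",
		"application", "my-app",
		"image", "registry.example.com/my-app")
}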
@@ -101,86 +97,30 @@ func (r *ApplicationReconciler) reconcile(ctx context.Context, log logr.Logger,
}
return ctrl.Result{}, err
}
log.V(1).Info("resolved target image", "application", application.Name, "image", targetImage)
Keep debug-level logging?
Force-pushed from 466a001 to 286f35c
We now have a builder that is compatible with Buildpacks lifecycle 0.4, but are blocked by kpack's incompatibility with unauthenticated registries.
Force-pushed from e6cbb75 to d32defb
Forked from https://github.com/pivotal/kpack/tree/master/pkg/apis/build/v1alpha1
Refs #67 projectriff/riff#1348