diff --git a/docs/proposals/20220611-openyurt-application-delivery-cn.md b/docs/proposals/20220611-openyurt-application-delivery-cn.md
new file mode 100644
index 00000000000..c12de5f6de6
--- /dev/null
+++ b/docs/proposals/20220611-openyurt-application-delivery-cn.md
@@ -0,0 +1,160 @@
---
title: Proposal Template
authors:
  - "@huiwq1990"
reviewers:
  - "@rambohe-ch"
creation-date: 2022-06-11
last-updated: 2022-06-11
status: provisional
---

# Thoughts on OpenYurt Application Delivery

## Deployment Scenarios

1) yurt-app-manager needs to deploy an ingress-controller instance to every nodepool.

2) yurt-edgex-manager needs to deploy an edgex instance to every nodepool.

## Current Approach

Define the edgex and yurtingress CRDs, implement a dedicated controller for each, and reconcile resource creation in those controllers.

## Current Problems

1) The edgex controller and the ingress controller share the same core job: deploying instances to every nodepool.

2) Extensibility is limited; more resource types cannot be supported. For example, when we deploy edge gateways or support upper-layer business workloads in the future, a purpose-built controller has to be developed for each case.

3) The parameters abstracted by a CRD are never enough. For example, yurtingress supports neither imagePullSecret nor disabling webhook creation.

## Analysis

1) If the service to deploy is just a single image, a `yurtdaemonset` is sufficient.

2) If the service consists of multiple resources, they can be packaged as a Helm chart; a chart is inherently templated and configurable, and easy to deploy. Chart deployment can then be handled by a CD system such as FluxCD or ArgoCD. (Note: FluxCD's HelmRelease essentially runs `helm install`; it does not solve the nodepool problem.)

The chart is specified via `spec.chart`, and the instance's values are set via `spec.values`.

```yaml
# https://fluxcd.io/docs/components/helm/helmreleases/
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: backend
  namespace: default
spec:
  interval: 5m
  chart:
    spec:
      chart: podinfo
      version: ">=4.0.0 <5.0.0"
      sourceRef:
        kind: HelmRepository
        name: podinfo
        namespace: default
      interval: 1m
  upgrade:
    remediation:
      remediateLastFailure: true
  test:
    enable: true
  values:
    service:
      grpcService: backend
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
```

3) Going one step further, multiple resources can be treated as a single application. The community already has the OAM model for this, with KubeVela as an implementation.

- KubeVela can use a chart as an application component, deploying it via FluxCD underneath;
- KubeVela's topology policy supports multi-cluster deployment.

https://kubevela.io/docs/tutorials/helm-multi-cluster

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: helm-hello
spec:
  components:
    - name: hello
      type: helm
      properties:
        repoType: "helm"
        url: "https://jhidalgo3.github.io/helm-charts/"
        chart: "hello-kubernetes-chart"
        version: "3.0.0"
  policies:
    - name: topology-local
      type: topology
      properties:
        clusters: ["local"]
    - name: topology-foo
      type: topology
      properties:
        clusters: ["foo"]
    - name: override-local
      type: override
      properties:
        components:
          - name: hello
            properties:
              values:
                configs:
                  MESSAGE: Welcome to Control Plane Cluster!
    - name: override-foo
      type: override
      properties:
        components:
          - name: hello
            properties:
              values:
                configs:
                  MESSAGE: Welcome to Your New Foo Cluster!
  workflow:
    steps:
      - name: deploy2local
        type: deploy
        properties:
          policies: ["topology-local", "override-local"]
      - name: manual-approval
        type: suspend
      - name: deploy2foo
        type: deploy
        properties:
          policies: ["topology-foo", "override-foo"]
```

## Survey of Implementation Options

1) FluxCD's helm-controller (https://github.com/fluxcd/helm-controller) depends on the HelmRepository, HelmChart, and HelmRelease CRDs; introducing this machinery would increase the complexity of deploying and maintaining OpenYurt. Moreover, DevOps products layered on top may use FluxCD themselves, which could cause conflicts.

2) KubeVela deploys chart applications through FluxCD, and FluxCD would have to be deployed in the OpenYurt cluster, so this option is no simpler than using FluxCD directly.

3) OpenYurt implements its own nodepool-based chart deployment model.

Note: Helm's reconciliation logic can be ported over from helm-controller.

## Implementation Steps

1) Package nginx-ingress into the Docker image, to avoid problems in private or intranet deployments where the public network is unreachable or pulling packages requires credentials.

2) Extend yurtingress with a `values` parameter of type `*apiextensionsv1.JSON`, used to store the chart's custom parameters.

3) Develop the nodepool-based reconciliation logic (see the sketch after this list):
   - create, update, or delete the custom resources according to which nodepools and custom resources exist;
   - add finalizers to the resources;
   - generate the chart's default values and merge them with the custom values;
   - run the helm upgrade operation.

   Example implementation: https://github.com/openyurtio/yurt-app-manager/pull/124
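A minimal sketch of step 2) and the value merge in step 3). The type name, field layout, and `mergeValues` helper are illustrative assumptions, not the final API:

```go
package controller

import (
	"encoding/json"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// YurtIngressSpec sketches the extended spec: Values carries arbitrary
// chart values, so new chart options no longer require new CRD fields.
type YurtIngressSpec struct {
	// ... existing yurtingress fields ...
	Values *apiextensionsv1.JSON `json:"values,omitempty"`
}

// mergeValues overlays the user-supplied values onto the generated
// chart defaults, as described in step 3).
func mergeValues(defaults map[string]interface{}, custom *apiextensionsv1.JSON) (map[string]interface{}, error) {
	if custom == nil {
		return defaults, nil
	}
	overlay := map[string]interface{}{}
	if err := json.Unmarshal(custom.Raw, &overlay); err != nil {
		return nil, err
	}
	for k, v := range overlay {
		// shallow merge for brevity; a real implementation would
		// merge nested maps recursively
		defaults[k] = v
	}
	return defaults, nil
}
```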
## Drawbacks

1) Since the deployed resources are not watched, the controller can hardly notice when one of them is updated or deleted.
diff --git a/docs/proposals/20220611-openyurt-application-delivery-oam.md b/docs/proposals/20220611-openyurt-application-delivery-oam.md
new file mode 100644
index 00000000000..e6b9e2da19c
--- /dev/null
+++ b/docs/proposals/20220611-openyurt-application-delivery-oam.md
@@ -0,0 +1,279 @@
---
title: Proposal Template
authors:
  - "@huiwq1990"
reviewers:
  - "@rambohe-ch"
creation-date: 2022-06-11
last-updated: 2022-06-11
status: provisional
---

# Thoughts on Implementing OpenYurt Application Delivery

## Subset Definition

For multi-region management, the industry has already defined models, e.g. OpenYurt's appset and OpenKruise's UnitedDeployment.

### The OpenKruise Model

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: sample-ud
spec:
  replicas: 6
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: sample
  template:
    # statefulSetTemplate or advancedStatefulSetTemplate or cloneSetTemplate or deploymentTemplate
    statefulSetTemplate:
      metadata:
        labels:
          app: sample
      spec:
        selector:
          matchLabels:
            app: sample
        template:
          metadata:
            labels:
              app: sample
          spec:
            containers:
              - image: nginx:alpine
                name: nginx
  topology:
    subsets:
      - name: subset-a
        nodeSelectorTerm:
          matchExpressions:
            - key: node
              operator: In
              values:
                - zone-a
        replicas: 1
      - name: subset-b
        nodeSelectorTerm:
          matchExpressions:
            - key: node
              operator: In
              values:
                - zone-b
        replicas: 50%
      - name: subset-c
        nodeSelectorTerm:
          matchExpressions:
            - key: node
              operator: In
              values:
                - zone-c
```

https://openkruise.io/zh/docs/user-manuals/uniteddeployment/

## OAM Multi-Region

A subset is essentially a group of nodes carrying a specific label. KubeVela already supports multiple clusters; we extend a multi-subset capability on top of multi-cluster, i.e. an instance must be deployed to every subset associated with a cluster, and the workloads generated during deployment get node-affinity settings added.

### User Experience

At the user level, a subset-type policy is added to the Application (a sketch of how such a policy type could be defined follows the example).

The example below means: deploy Redis to the beijing and hangzhou subsets of the local cluster.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: first-vela-app2
spec:
  components:
    - name: redis
      type: helm
      properties:
        repoType: "helm"
        url: "https://charts.bitnami.com/bitnami"
        chart: "redis"
        version: "16.8.5"
  policies:
    - name: target-default
      type: topology
      properties:
        clusters: ["local"]
        namespace: "default"
    - name: default-subsets
      type: subset
      properties:
        nodeSelectorLabel: "apps.openyurt.io/nodepool"
        nodeSelectorValues: ["beijing","hangzhou"]
  workflow:
    steps:
      - name: deploy2default
        type: deploy
        properties:
          policies: ["target-default","default-subsets"]
```
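To make the `subset` policy type above available, a corresponding policy definition would have to be registered in KubeVela. The following is only a sketch of its possible shape, modeled on how KubeVela definitions are written in CUE; the parameter names mirror the example above, and nothing here is an existing KubeVela API:

```cue
subset: {
	annotations: {}
	description: "Deploy the application to the given subsets (nodepools) of each selected cluster."
	attributes: {}
	type: "policy"
}

template: {
	parameter: {
		// node label key that identifies a subset, e.g. apps.openyurt.io/nodepool
		nodeSelectorLabel: string
		// subsets the instances should be deployed to
		nodeSelectorValues: [...string]
	}
}
```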
### Rendered Result

When KubeVela deploys the application, it should obtain the subset information of the current deploy task and pass it down to the Component, so that the Workload associated with the Component can be configured with the corresponding affinity.

```yaml
spec:
  selector:
    matchLabels:
      app: abc
  template:
    metadata:
      labels:
        app: abc
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: apps.openyurt.io/nodepool
                    operator: In
                    values:
                      - hangzhou
```

## Implementation Options

### Option 1

#### Helm Changes

For helm-type Components, satisfying the affinity setting **requires an additional chart packaging convention**: the chart's values.yaml must expose the affinity key and values, i.e. it must contain the properties nodeSelectorLabel and nodeSelectorValues. (Note: an optimization that removes this restriction is discussed later.)

#### FluxCD Changes

The fluxcd addon needs to render the affinity configuration into the values; the affinity values rely on KubeVela injecting them into the context.

```cue
if context.subsetEnable == "true" {
	values: parameter.values & {
		nodeSelectorLabel:  context.nodeSelectorLabel
		nodeSelectorValues: context.nodeSelectorValues
	}
}
if context.subsetEnable == "false" && parameter.values != _|_ {
	values: parameter.values
}
```

#### KubeVela Changes

KubeVela currently generates the task list from the Components and Placements.

```go
// pkg/workflow/providers/multicluster/deploy.go:164
func applyComponents(apply oamProvider.ComponentApply, healthCheck oamProvider.ComponentHealthCheck, components []common.ApplicationComponent, placements []v1alpha1.PlacementDecision, parallelism int) (bool, string, error) {
	var tasks []*applyTask
	for _, comp := range components {
		for _, pl := range placements {
			tasks = append(tasks, &applyTask{component: comp, placement: pl})
		}
	}
	// ....
}
```

After the change, task generation needs to take Component, Placement, and Subset into account (`subsets` below would come from the subset policy decisions).

```go
// pkg/workflow/providers/multicluster/deploy.go:164
func applyComponents(apply oamProvider.ComponentApply, healthCheck oamProvider.ComponentHealthCheck, components []common.ApplicationComponent, placements []v1alpha1.PlacementDecision, parallelism int) (bool, string, error) {
	var tasks []*applyTask
	for _, comp := range components {
		for _, pl := range placements {
			for _, ss := range subsets {
				tasks = append(tasks, &applyTask{component: comp, placement: pl, subset: ss})
			}
		}
	}
	// ....
}
```

#### Internal Logic

In essence this runs `helm install` while setting the affinity values:

1. `helm install redis-beijing redis:16.8.5 --set nodeSelectorLabel="apps.openyurt.io/nodepool" --set nodeSelectorValues="beijing"`

2. `helm install redis-hangzhou redis:16.8.5 --set nodeSelectorLabel="apps.openyurt.io/nodepool" --set nodeSelectorValues="hangzhou"`

### Option 2

The problem with Option 1 is that the chart has to hard-code the affinity keys. **Letting KubeVela support property rendering** removes this restriction.

Values in helm's `values` section can then be taken from the context:

```yaml
  values:
    customNodeSelectorLabel: context.nodeSelectorLabel
    customNodeSelectorValues: context.nodeSelectorValues
```

The overall configuration looks like:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: first-vela-app2
spec:
  components:
    - name: redis
      type: helm
      properties:
        repoType: "helm"
        url: "https://charts.bitnami.com/bitnami"
        chart: "redis"
        version: "16.8.5"
        values:
          customNodeSelectorLabel: context.nodeSelectorLabel
          customNodeSelectorValues: context.nodeSelectorValues
  policies:
    - name: target-default
      type: topology
      properties:
        clusters: ["local"]
        namespace: "default"
    - name: default-subsets
      type: subset
      properties:
        nodeSelectorLabel: "apps.openyurt.io/nodepool"
        nodeSelectorValues: ["beijing","hangzhou"]
  workflow:
    steps:
      - name: deploy2default
        type: deploy
        properties:
          policies: ["target-default","default-subsets"]
```

## CUE Test

```cue
parameter: {
	values?: #nestedmap
}

#nestedmap: {
	...
}

innerValues: {
	nodeSelectorLabel: "apps.openyurt.io/nodepool"
	nodeSelectorValues: ["beijing", "hangzhou"]
}

parameter: {
	values: {"a": "b"} & innerValues
}
```
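For reference, evaluating the snippet above with `cue eval` should unify the two `parameter` declarations into the following (field order may differ), which is exactly the merged values we want to hand to helm:

```cue
parameter: {
	values: {
		a:                  "b"
		nodeSelectorLabel:  "apps.openyurt.io/nodepool"
		nodeSelectorValues: ["beijing", "hangzhou"]
	}
}
```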
diff --git a/docs/proposals/20220611-openyurt-application-delivery.md b/docs/proposals/20220611-openyurt-application-delivery.md
new file mode 100644
index 00000000000..12dc6333742
--- /dev/null
+++ b/docs/proposals/20220611-openyurt-application-delivery.md
@@ -0,0 +1,116 @@
---
title: Proposal Template
authors:
  - "@huiwq1990"
reviewers:
  - "@rambohe-ch"
creation-date: 2022-06-11
last-updated: 2022-06-11
status: provisional
---

# OpenYurt Application Delivery

## Table of Contents

A table of contents is helpful for quickly jumping to sections of a proposal and for highlighting
any additional information provided beyond the standard proposal template.
[Tools for generating](https://github.com/ekalinin/github-markdown-toc) a table of contents from markdown are available.

- [OpenYurt Application Delivery](#openyurt-application-delivery)
  - [Table of Contents](#table-of-contents)
  - [Glossary](#glossary)
  - [Summary](#summary)
  - [Motivation](#motivation)
    - [Goals](#goals)
    - [Non-Goals/Future Work](#non-goalsfuture-work)
  - [Proposal](#proposal)
    - [User Stories](#user-stories)
    - [Implementation Details](#implementation-details)
      - [OpenYurt Self-Defined Method](#openyurt-self-defined-method)
      - [KubeVela Method](#kubevela-method)

## Glossary

Refer to the [Open Application Model](https://oam.dev/).

## Summary

An application is usually a combination of workloads, ingresses, services, and so on. OpenYurt provides some workload controllers, but they are not friendly to application developers.

In this proposal, we would like to introduce an application controller that can deliver applications while taking the features of an OpenYurt cluster into account.

## Motivation

Currently, the `yurt-app-manager` project deploys `ingress-controller` instances to every nodepool, and the `yurt-edgex-manager` project deploys `edgex` instances to every nodepool. The common ground is delivering resources to nodepools, and this capability will also be useful when we develop new modules such as an edge gateway.

Incidentally, `uniteddeployment` has the nodepool feature, but it can only deploy a single deployment or statefulset workload; it does not cover other resources.

The most common way to deploy a collection of resources is a Helm chart. `FluxCD` already implements the `HelmRelease` controller, but it does not support the nodepool feature.

After investigation, we found [OAM](https://oam.dev/) and [KubeVela](https://kubevela.io/). KubeVela already defines an application model and can deliver Helm charts to multiple clusters. So if KubeVela could deploy applications to multiple nodepools, it would satisfy our requirements.

### Goals

- OpenYurt supports application delivery
- Both ingress-controller and edgex can be deployed through this controller

### Non-Goals/Future Work

- Treat the application as a whole; reconciling the application's inner resources is not supported

## Proposal

### User Stories

- Package Kubernetes resources as a Helm chart, and create an Application CRD instance

### Implementation Details

#### OpenYurt Self-Defined Method

Define OpenYurt's own application model and develop the application controller; a sketch of the controller's reconcile loop follows the example.

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: Application
metadata:
  name: helm-hello
spec:
  interval: 5m
  chart:
    spec:
      chart: chartmuseum
      version: "2.14.2"
      url: "https://jhidalgo3.github.io/helm-charts/"
  values: {}
  policies:
    nodepools: ["hangzhou","beijing"]
```
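A rough sketch of the reconciliation this controller could perform, fanning out one helm release per nodepool listed in `spec.policies.nodepools`; all type and helper names (`ApplicationReconciler`, `r.helm`, `defaultValuesFor`) are hypothetical, not an existing API:

```go
// Hypothetical reconcile loop: one helm release per nodepool.
func (r *ApplicationReconciler) reconcile(ctx context.Context, app *v1alpha1.Application) error {
	for _, np := range app.Spec.Policies.NodePools {
		// Merge chart defaults with the user's spec.values, then pin the
		// rendered workloads to the nodepool via the affinity convention
		// used elsewhere in these proposals.
		values := defaultValuesFor(app)
		values["nodeSelectorLabel"] = "apps.openyurt.io/nodepool"
		values["nodeSelectorValues"] = []string{np}

		releaseName := fmt.Sprintf("%s-%s", app.Name, np)
		if err := r.helm.InstallOrUpgrade(ctx, releaseName, app.Spec.Chart, values); err != nil {
			return err
		}
	}
	// Releases belonging to nodepools that no longer exist would be
	// uninstalled here, guarded by a finalizer on the Application.
	return nil
}
```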
#### KubeVela Method

Since KubeVela already implements application delivery and application policies, we could extend it with a nodepool policy type.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: helm-hello
spec:
  components:
    - name: hello
      type: helm
      properties:
        repoType: "helm"
        url: "https://jhidalgo3.github.io/helm-charts/"
        chart: "hello-kubernetes-chart"
        version: "3.0.0"
  policies:
    - name: foo-cluster-only
      type: topology
      properties:
        clusters: ["foo"]
```
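One possible shape of that extension, reusing the structure of the built-in `topology` policy; the `nodepool` policy type and its `nodepools` property are assumptions for illustration, not an existing KubeVela API:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: helm-hello
spec:
  components:
    - name: hello
      type: helm
      properties:
        repoType: "helm"
        url: "https://jhidalgo3.github.io/helm-charts/"
        chart: "hello-kubernetes-chart"
        version: "3.0.0"
  policies:
    - name: foo-cluster-only
      type: topology
      properties:
        clusters: ["foo"]
    # hypothetical extension: fan the component out to these nodepools
    - name: edge-nodepools
      type: nodepool
      properties:
        nodepools: ["hangzhou", "beijing"]
```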