diff --git a/docs/api-types/README.md b/docs/api-types/README.md
index 46bcea71d0..5638927888 100644
--- a/docs/api-types/README.md
+++ b/docs/api-types/README.md
@@ -2,6 +2,9 @@
## API types
+Here we list the API types that have some functionality you can only configure via a JSON/YAML
+definition, rather than via the `ark` CLI (for example, backup hooks).
+
* [Backup][1]
[1]: backup.md
diff --git a/docs/api-types/backup.md b/docs/api-types/backup.md
index feef7c3dcb..40fe71c65d 100644
--- a/docs/api-types/backup.md
+++ b/docs/api-types/backup.md
@@ -60,6 +60,12 @@ spec:
# AWS. Valid values are true, false, and null/unset. If unset, Ark performs snapshots as long as
# a persistent volume provider is configured for Ark.
snapshotVolumes: null
+ # Where to store the tarball and logs.
+ storageLocation: aws-primary
+ # The list of locations in which to store volume snapshots created for this backup.
+ volumeSnapshotLocations:
+ - aws-primary
+ - gcp-primary
# The amount of time before this backup is eligible for garbage collection.
ttl: 24h0m0s
# Actions to perform at different times during a backup. The only hook currently supported is
diff --git a/docs/backupstoragelocation-definition.md b/docs/api-types/backupstoragelocation.md
similarity index 100%
rename from docs/backupstoragelocation-definition.md
rename to docs/api-types/backupstoragelocation.md
diff --git a/docs/api-types/volumesnapshotlocation.md b/docs/api-types/volumesnapshotlocation.md
new file mode 100644
index 0000000000..95744c4c64
--- /dev/null
+++ b/docs/api-types/volumesnapshotlocation.md
@@ -0,0 +1,60 @@
+# Ark Volume Snapshot Location
+
+## Volume Snapshot Location
+
+A volume snapshot location is where Ark stores the volume snapshots created for a backup.
+
+Ark can be configured to take snapshots of volumes from multiple providers. Ark also allows you to configure multiple `VolumeSnapshotLocations` per provider, although you can select only one location per provider at backup time.
+
+Each `VolumeSnapshotLocation` describes a provider and a location, via provider-specific configuration. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Ark must have at least one `VolumeSnapshotLocation` per cloud provider.
+
+A sample YAML `VolumeSnapshotLocation` looks like the following:
+
+```yaml
+apiVersion: ark.heptio.com/v1
+kind: VolumeSnapshotLocation
+metadata:
+ name: aws-default
+ namespace: heptio-ark
+spec:
+ provider: aws
+ config:
+ region: us-west-2
+```
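+
+For programmatic use, the same location can be expressed with the Go API types that back this CRD. The following is an illustrative sketch (the helper name is hypothetical; in YAML-based workflows you would simply apply the manifest above):
+
+```go
+import (
+	arkv1 "github.com/heptio/ark/pkg/apis/ark/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+// newAWSDefaultLocation builds the same object as the sample YAML above.
+func newAWSDefaultLocation() *arkv1.VolumeSnapshotLocation {
+	return &arkv1.VolumeSnapshotLocation{
+		ObjectMeta: metav1.ObjectMeta{
+			Namespace: "heptio-ark",
+			Name:      "aws-default",
+		},
+		Spec: arkv1.VolumeSnapshotLocationSpec{
+			Provider: "aws",
+			Config:   map[string]string{"region": "us-west-2"},
+		},
+	}
+}
+```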
+
+### Parameter Reference
+
+The configurable parameters are as follows:
+
+#### Main config parameters
+
+| Key | Type | Default | Meaning |
+| --- | --- | --- | --- |
+| `provider` | String (Ark natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.) | Required Field | The name of the cloud provider that will be used to create and store the volume snapshots. |
+| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values. See the corresponding [AWS][0], [GCP][1], and [Azure][2] sections, or your provider's documentation. |
+
+#### AWS
+
+##### config
+
+| Key | Type | Default | Meaning |
+| --- | --- | --- | --- |
+| `region` | string | Empty | *Example*: "us-east-1"<br><br>See [AWS documentation][3] for the full list.<br><br>Queried from the AWS S3 API if not provided. |
+
+#### Azure
+
+##### config
+
+| Key | Type | Default | Meaning |
+| --- | --- | --- | --- |
+| `apiTimeout` | metav1.Duration | 2m0s | How long to wait for an Azure API request to complete before timing out. |
+| `resourceGroup` | string | Optional | The name of the resource group where volume snapshots should be stored, if different from the cluster's resource group. |
+
+#### GCP
+
+No parameters required.
+
+[0]: #aws
+[1]: #gcp
+[2]: #azure
+[3]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions
diff --git a/docs/aws-config.md b/docs/aws-config.md
index a943738137..5cc8c40f3d 100644
--- a/docs/aws-config.md
+++ b/docs/aws-config.md
@@ -303,4 +303,4 @@ It can be set up for Ark by creating a role that will have required permissions,
[6]: config-definition.md#aws
[14]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
[20]: faq.md
-[21]: backupstoragelocation-definition.md#aws
+[21]: api-types/backupstoragelocation.md#aws
diff --git a/docs/azure-config.md b/docs/azure-config.md
index ba9d747afc..e0eb503962 100644
--- a/docs/azure-config.md
+++ b/docs/azure-config.md
@@ -165,5 +165,5 @@ In the root of your Ark directory, run:
[18]: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli
[19]: https://docs.microsoft.com/en-us/azure/architecture/best-practices/naming-conventions#storage
[20]: faq.md
-[21]: backupstoragelocation-definition.md#azure
+[21]: api-types/backupstoragelocation.md#azure
[22]: https://azure.microsoft.com/en-us/services/kubernetes-service/
diff --git a/docs/gcp-config.md b/docs/gcp-config.md
index f916da8be4..dc4466cbe9 100644
--- a/docs/gcp-config.md
+++ b/docs/gcp-config.md
@@ -129,7 +129,7 @@ In the root of your Ark directory, run:
```
[0]: namespace.md
- [7]: backupstoragelocation-definition.md#gcp
+ [7]: api-types/backupstoragelocation.md#gcp
[15]: https://cloud.google.com/compute/docs/access/service-accounts
[16]: https://cloud.google.com/sdk/docs/
[20]: faq.md
diff --git a/docs/ibm-config.md b/docs/ibm-config.md
index b27f2c13f2..64b4b700b5 100644
--- a/docs/ibm-config.md
+++ b/docs/ibm-config.md
@@ -78,5 +78,5 @@ In the root of your Ark directory, run:
[3]: https://console.bluemix.net/docs/services/cloud-object-storage/iam/service-credentials.html#service-credentials
[4]: https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0/kc_welcome_containers.html
[5]: https://console.bluemix.net/docs/containers/container_index.html#container_index
- [6]: backupstoragelocation-definition.md#aws
+ [6]: api-types/backupstoragelocation.md#aws
[14]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
diff --git a/examples/aws/06-ark-volumesnapshotlocation.yaml b/examples/aws/06-ark-volumesnapshotlocation.yaml
new file mode 100644
index 0000000000..b93ebabfea
--- /dev/null
+++ b/examples/aws/06-ark-volumesnapshotlocation.yaml
@@ -0,0 +1,24 @@
+# Copyright 2018 the Heptio Ark contributors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+apiVersion: ark.heptio.com/v1
+kind: VolumeSnapshotLocation
+metadata:
+ name: aws-default
+ namespace: heptio-ark
+spec:
+ provider: aws
+ config:
+ region:
\ No newline at end of file
diff --git a/examples/azure/06-ark-volumesnapshotlocation.yaml b/examples/azure/06-ark-volumesnapshotlocation.yaml
new file mode 100644
index 0000000000..14fdd76a60
--- /dev/null
+++ b/examples/azure/06-ark-volumesnapshotlocation.yaml
@@ -0,0 +1,24 @@
+# Copyright 2018 the Heptio Ark contributors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+apiVersion: ark.heptio.com/v1
+kind: VolumeSnapshotLocation
+metadata:
+ name: azure-default
+ namespace: heptio-ark
+spec:
+ provider: azure
+ config:
+ apiTimeout: 2m0s
diff --git a/examples/common/00-prereqs.yaml b/examples/common/00-prereqs.yaml
index c9c096415e..4ffb20b9b6 100644
--- a/examples/common/00-prereqs.yaml
+++ b/examples/common/00-prereqs.yaml
@@ -162,6 +162,21 @@ spec:
plural: backupstoragelocations
kind: BackupStorageLocation
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: volumesnapshotlocations.ark.heptio.com
+ labels:
+ component: ark
+spec:
+ group: ark.heptio.com
+ version: v1
+ scope: Namespaced
+ names:
+ plural: volumesnapshotlocations
+ kind: VolumeSnapshotLocation
+
---
apiVersion: v1
kind: Namespace
diff --git a/examples/gcp/06-ark-volumesnapshotlocation.yaml b/examples/gcp/06-ark-volumesnapshotlocation.yaml
new file mode 100644
index 0000000000..c42c3cfdc6
--- /dev/null
+++ b/examples/gcp/06-ark-volumesnapshotlocation.yaml
@@ -0,0 +1,22 @@
+# Copyright 2018 the Heptio Ark contributors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+apiVersion: ark.heptio.com/v1
+kind: VolumeSnapshotLocation
+metadata:
+ name: gcp-default
+ namespace: heptio-ark
+spec:
+ provider: gcp
\ No newline at end of file
diff --git a/pkg/apis/ark/v1/backup.go b/pkg/apis/ark/v1/backup.go
index 8aef922ca3..1f0e19653f 100644
--- a/pkg/apis/ark/v1/backup.go
+++ b/pkg/apis/ark/v1/backup.go
@@ -61,6 +61,9 @@ type BackupSpec struct {
// StorageLocation is a string containing the name of a BackupStorageLocation where the backup should be stored.
StorageLocation string `json:"storageLocation"`
+
+ // VolumeSnapshotLocations is a list containing names of VolumeSnapshotLocations associated with this backup.
+ VolumeSnapshotLocations []string `json:"volumeSnapshotLocations"`
}
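+
+// For example, a backup that spans two providers might set the following
+// (an illustrative sketch; the location names are examples):
+//
+//	spec := BackupSpec{
+//		StorageLocation:         "aws-primary",
+//		VolumeSnapshotLocations: []string{"aws-primary", "gcp-primary"},
+//	}
+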
// BackupHooks contains custom behaviors that should be executed at different phases of the backup.
diff --git a/pkg/apis/ark/v1/register.go b/pkg/apis/ark/v1/register.go
index 5d70dcc767..5ce3bab1e0 100644
--- a/pkg/apis/ark/v1/register.go
+++ b/pkg/apis/ark/v1/register.go
@@ -59,16 +59,17 @@ func newTypeInfo(pluralName string, itemType, itemListType runtime.Object) typeI
// API group, keyed on Kind.
func CustomResources() map[string]typeInfo {
return map[string]typeInfo{
- "Backup": newTypeInfo("backups", &Backup{}, &BackupList{}),
- "Restore": newTypeInfo("restores", &Restore{}, &RestoreList{}),
- "Schedule": newTypeInfo("schedules", &Schedule{}, &ScheduleList{}),
- "Config": newTypeInfo("configs", &Config{}, &ConfigList{}),
- "DownloadRequest": newTypeInfo("downloadrequests", &DownloadRequest{}, &DownloadRequestList{}),
- "DeleteBackupRequest": newTypeInfo("deletebackuprequests", &DeleteBackupRequest{}, &DeleteBackupRequestList{}),
- "PodVolumeBackup": newTypeInfo("podvolumebackups", &PodVolumeBackup{}, &PodVolumeBackupList{}),
- "PodVolumeRestore": newTypeInfo("podvolumerestores", &PodVolumeRestore{}, &PodVolumeRestoreList{}),
- "ResticRepository": newTypeInfo("resticrepositories", &ResticRepository{}, &ResticRepositoryList{}),
- "BackupStorageLocation": newTypeInfo("backupstoragelocations", &BackupStorageLocation{}, &BackupStorageLocationList{}),
+ "Backup": newTypeInfo("backups", &Backup{}, &BackupList{}),
+ "Restore": newTypeInfo("restores", &Restore{}, &RestoreList{}),
+ "Schedule": newTypeInfo("schedules", &Schedule{}, &ScheduleList{}),
+ "Config": newTypeInfo("configs", &Config{}, &ConfigList{}),
+ "DownloadRequest": newTypeInfo("downloadrequests", &DownloadRequest{}, &DownloadRequestList{}),
+ "DeleteBackupRequest": newTypeInfo("deletebackuprequests", &DeleteBackupRequest{}, &DeleteBackupRequestList{}),
+ "PodVolumeBackup": newTypeInfo("podvolumebackups", &PodVolumeBackup{}, &PodVolumeBackupList{}),
+ "PodVolumeRestore": newTypeInfo("podvolumerestores", &PodVolumeRestore{}, &PodVolumeRestoreList{}),
+ "ResticRepository": newTypeInfo("resticrepositories", &ResticRepository{}, &ResticRepositoryList{}),
+ "BackupStorageLocation": newTypeInfo("backupstoragelocations", &BackupStorageLocation{}, &BackupStorageLocationList{}),
+ "VolumeSnapshotLocation": newTypeInfo("volumesnapshotlocations", &VolumeSnapshotLocation{}, &VolumeSnapshotLocationList{}),
}
}
diff --git a/pkg/apis/ark/v1/volume_snapshot_location.go b/pkg/apis/ark/v1/volume_snapshot_location.go
new file mode 100644
index 0000000000..af14d852d9
--- /dev/null
+++ b/pkg/apis/ark/v1/volume_snapshot_location.go
@@ -0,0 +1,65 @@
+/*
+Copyright 2018 the Heptio Ark contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package v1
+
+import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+
+// +genclient
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+// VolumeSnapshotLocation is a location where Ark stores volume snapshots.
+type VolumeSnapshotLocation struct {
+ metav1.TypeMeta `json:",inline"`
+ metav1.ObjectMeta `json:"metadata"`
+
+ Spec VolumeSnapshotLocationSpec `json:"spec"`
+ Status VolumeSnapshotLocationStatus `json:"status"`
+}
+
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+// VolumeSnapshotLocationList is a list of VolumeSnapshotLocations.
+type VolumeSnapshotLocationList struct {
+ metav1.TypeMeta `json:",inline"`
+ metav1.ListMeta `json:"metadata"`
+ Items []VolumeSnapshotLocation `json:"items"`
+}
+
+// VolumeSnapshotLocationSpec defines the specification for an Ark VolumeSnapshotLocation.
+type VolumeSnapshotLocationSpec struct {
+ // Provider is the provider of the volume storage.
+ Provider string `json:"provider"`
+
+ // Config is for provider-specific configuration fields.
+ Config map[string]string `json:"config"`
+}
+
+// VolumeSnapshotLocationPhase is the lifecycle phase of an Ark VolumeSnapshotLocation.
+type VolumeSnapshotLocationPhase string
+
+const (
+	// VolumeSnapshotLocationPhaseAvailable means the location is available to read from and write to.
+ VolumeSnapshotLocationPhaseAvailable VolumeSnapshotLocationPhase = "Available"
+
+	// VolumeSnapshotLocationPhaseUnavailable means the location is unavailable to read from and write to.
+ VolumeSnapshotLocationPhaseUnavailable VolumeSnapshotLocationPhase = "Unavailable"
+)
+
+// VolumeSnapshotLocationStatus describes the current status of an Ark VolumeSnapshotLocation.
+type VolumeSnapshotLocationStatus struct {
+ Phase VolumeSnapshotLocationPhase `json:"phase,omitempty"`
+}
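
The two phase constants give each location a simple availability lifecycle. A hypothetical consumer, written as if inside this `v1` package (a sketch; `locationAvailable` is not part of this change):

```go
// locationAvailable reports whether a VolumeSnapshotLocation is currently
// usable, based on the phase reported in its status.
func locationAvailable(loc *VolumeSnapshotLocation) bool {
	return loc.Status.Phase == VolumeSnapshotLocationPhaseAvailable
}
```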
diff --git a/pkg/apis/ark/v1/zz_generated.deepcopy.go b/pkg/apis/ark/v1/zz_generated.deepcopy.go
index 2a420dd47e..cbc3b4b7e3 100644
--- a/pkg/apis/ark/v1/zz_generated.deepcopy.go
+++ b/pkg/apis/ark/v1/zz_generated.deepcopy.go
@@ -252,6 +252,11 @@ func (in *BackupSpec) DeepCopyInto(out *BackupSpec) {
}
}
in.Hooks.DeepCopyInto(&out.Hooks)
+ if in.VolumeSnapshotLocations != nil {
+ in, out := &in.VolumeSnapshotLocations, &out.VolumeSnapshotLocations
+ *out = make([]string, len(*in))
+ copy(*out, *in)
+ }
return
}
@@ -1370,3 +1375,103 @@ func (in *VolumeBackupInfo) DeepCopy() *VolumeBackupInfo {
in.DeepCopyInto(out)
return out
}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *VolumeSnapshotLocation) DeepCopyInto(out *VolumeSnapshotLocation) {
+ *out = *in
+ out.TypeMeta = in.TypeMeta
+ in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
+ in.Spec.DeepCopyInto(&out.Spec)
+ out.Status = in.Status
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VolumeSnapshotLocation.
+func (in *VolumeSnapshotLocation) DeepCopy() *VolumeSnapshotLocation {
+ if in == nil {
+ return nil
+ }
+ out := new(VolumeSnapshotLocation)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *VolumeSnapshotLocation) DeepCopyObject() runtime.Object {
+ if c := in.DeepCopy(); c != nil {
+ return c
+ }
+ return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *VolumeSnapshotLocationList) DeepCopyInto(out *VolumeSnapshotLocationList) {
+ *out = *in
+ out.TypeMeta = in.TypeMeta
+ out.ListMeta = in.ListMeta
+ if in.Items != nil {
+ in, out := &in.Items, &out.Items
+ *out = make([]VolumeSnapshotLocation, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VolumeSnapshotLocationList.
+func (in *VolumeSnapshotLocationList) DeepCopy() *VolumeSnapshotLocationList {
+ if in == nil {
+ return nil
+ }
+ out := new(VolumeSnapshotLocationList)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *VolumeSnapshotLocationList) DeepCopyObject() runtime.Object {
+ if c := in.DeepCopy(); c != nil {
+ return c
+ }
+ return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *VolumeSnapshotLocationSpec) DeepCopyInto(out *VolumeSnapshotLocationSpec) {
+ *out = *in
+ if in.Config != nil {
+ in, out := &in.Config, &out.Config
+ *out = make(map[string]string, len(*in))
+ for key, val := range *in {
+ (*out)[key] = val
+ }
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VolumeSnapshotLocationSpec.
+func (in *VolumeSnapshotLocationSpec) DeepCopy() *VolumeSnapshotLocationSpec {
+ if in == nil {
+ return nil
+ }
+ out := new(VolumeSnapshotLocationSpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *VolumeSnapshotLocationStatus) DeepCopyInto(out *VolumeSnapshotLocationStatus) {
+ *out = *in
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VolumeSnapshotLocationStatus.
+func (in *VolumeSnapshotLocationStatus) DeepCopy() *VolumeSnapshotLocationStatus {
+ if in == nil {
+ return nil
+ }
+ out := new(VolumeSnapshotLocationStatus)
+ in.DeepCopyInto(out)
+ return out
+}
diff --git a/pkg/backup/backup.go b/pkg/backup/backup.go
index 909efa401e..4f5139aff5 100644
--- a/pkg/backup/backup.go
+++ b/pkg/backup/backup.go
@@ -46,7 +46,7 @@ import (
type Backupper interface {
// Backup takes a backup using the specification in the api.Backup and writes backup and log data
// to the given writers.
- Backup(logger logrus.FieldLogger, backup *api.Backup, backupFile io.Writer, actions []ItemAction) error
+ Backup(logger logrus.FieldLogger, backup *Request, backupFile io.Writer, actions []ItemAction, blockStoreGetter BlockStoreGetter) error
}
// kubernetesBackupper implements Backupper.
@@ -55,7 +55,6 @@ type kubernetesBackupper struct {
discoveryHelper discovery.Helper
podCommandExecutor podexec.PodCommandExecutor
groupBackupperFactory groupBackupperFactory
- blockStore cloudprovider.BlockStore
resticBackupperFactory restic.BackupperFactory
resticTimeout time.Duration
}
@@ -93,7 +92,6 @@ func NewKubernetesBackupper(
discoveryHelper discovery.Helper,
dynamicFactory client.DynamicFactory,
podCommandExecutor podexec.PodCommandExecutor,
- blockStore cloudprovider.BlockStore,
resticBackupperFactory restic.BackupperFactory,
resticTimeout time.Duration,
) (Backupper, error) {
@@ -102,7 +100,6 @@ func NewKubernetesBackupper(
dynamicFactory: dynamicFactory,
podCommandExecutor: podCommandExecutor,
groupBackupperFactory: &defaultGroupBackupperFactory{},
- blockStore: blockStore,
resticBackupperFactory: resticBackupperFactory,
resticTimeout: resticTimeout,
}, nil
@@ -209,41 +206,43 @@ func getResourceHook(hookSpec api.BackupResourceHookSpec, discoveryHelper discov
return h, nil
}
+type BlockStoreGetter interface {
+ GetBlockStore(name string) (cloudprovider.BlockStore, error)
+}
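+
+// A minimal map-backed implementation of this interface, e.g. for tests
+// (an illustrative sketch, not used by the production code below):
+//
+//	type fakeBlockStoreGetter map[string]cloudprovider.BlockStore
+//
+//	func (g fakeBlockStoreGetter) GetBlockStore(name string) (cloudprovider.BlockStore, error) {
+//		if bs, ok := g[name]; ok {
+//			return bs, nil
+//		}
+//		return nil, errors.Errorf("block store %q not found", name)
+//	}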
+
// Backup backs up the items specified in the Backup, placing them in a gzip-compressed tar file
// written to backupFile. The finalized api.Backup is written to metadata.
-func (kb *kubernetesBackupper) Backup(logger logrus.FieldLogger, backup *api.Backup, backupFile io.Writer, actions []ItemAction) error {
+func (kb *kubernetesBackupper) Backup(logger logrus.FieldLogger, backupRequest *Request, backupFile io.Writer, actions []ItemAction, blockStoreGetter BlockStoreGetter) error {
gzippedData := gzip.NewWriter(backupFile)
defer gzippedData.Close()
tw := tar.NewWriter(gzippedData)
defer tw.Close()
- log := logger.WithField("backup", kubeutil.NamespaceAndName(backup))
+ log := logger.WithField("backup", kubeutil.NamespaceAndName(backupRequest))
log.Info("Starting backup")
- namespaceIncludesExcludes := getNamespaceIncludesExcludes(backup)
- log.Infof("Including namespaces: %s", namespaceIncludesExcludes.IncludesString())
- log.Infof("Excluding namespaces: %s", namespaceIncludesExcludes.ExcludesString())
+ backupRequest.NamespaceIncludesExcludes = getNamespaceIncludesExcludes(backupRequest.Backup)
+ log.Infof("Including namespaces: %s", backupRequest.NamespaceIncludesExcludes.IncludesString())
+ log.Infof("Excluding namespaces: %s", backupRequest.NamespaceIncludesExcludes.ExcludesString())
- resourceIncludesExcludes := getResourceIncludesExcludes(kb.discoveryHelper, backup.Spec.IncludedResources, backup.Spec.ExcludedResources)
- log.Infof("Including resources: %s", resourceIncludesExcludes.IncludesString())
- log.Infof("Excluding resources: %s", resourceIncludesExcludes.ExcludesString())
+ backupRequest.ResourceIncludesExcludes = getResourceIncludesExcludes(kb.discoveryHelper, backupRequest.Spec.IncludedResources, backupRequest.Spec.ExcludedResources)
+ log.Infof("Including resources: %s", backupRequest.ResourceIncludesExcludes.IncludesString())
+ log.Infof("Excluding resources: %s", backupRequest.ResourceIncludesExcludes.ExcludesString())
- resourceHooks, err := getResourceHooks(backup.Spec.Hooks.Resources, kb.discoveryHelper)
+ var err error
+ backupRequest.ResourceHooks, err = getResourceHooks(backupRequest.Spec.Hooks.Resources, kb.discoveryHelper)
if err != nil {
return err
}
- backedUpItems := make(map[itemKey]struct{})
- var errs []error
-
- resolvedActions, err := resolveActions(actions, kb.discoveryHelper)
+ backupRequest.ResolvedActions, err = resolveActions(actions, kb.discoveryHelper)
if err != nil {
return err
}
podVolumeTimeout := kb.resticTimeout
- if val := backup.Annotations[api.PodVolumeOperationTimeoutAnnotation]; val != "" {
+ if val := backupRequest.Annotations[api.PodVolumeOperationTimeoutAnnotation]; val != "" {
parsed, err := time.ParseDuration(val)
if err != nil {
log.WithError(errors.WithStack(err)).Errorf("Unable to parse pod volume timeout annotation %s, using server value.", val)
@@ -257,7 +256,7 @@ func (kb *kubernetesBackupper) Backup(logger logrus.FieldLogger, backup *api.Bac
var resticBackupper restic.Backupper
if kb.resticBackupperFactory != nil {
- resticBackupper, err = kb.resticBackupperFactory.NewBackupper(ctx, backup)
+ resticBackupper, err = kb.resticBackupperFactory.NewBackupper(ctx, backupRequest.Backup)
if err != nil {
return errors.WithStack(err)
}
@@ -265,22 +264,19 @@ func (kb *kubernetesBackupper) Backup(logger logrus.FieldLogger, backup *api.Bac
gb := kb.groupBackupperFactory.newGroupBackupper(
log,
- backup,
- namespaceIncludesExcludes,
- resourceIncludesExcludes,
+ backupRequest,
kb.dynamicFactory,
kb.discoveryHelper,
- backedUpItems,
+ make(map[itemKey]struct{}),
cohabitatingResources(),
- resolvedActions,
kb.podCommandExecutor,
tw,
- resourceHooks,
- kb.blockStore,
resticBackupper,
newPVCSnapshotTracker(),
+ blockStoreGetter,
)
+ var errs []error
for _, group := range kb.discoveryHelper.Resources() {
if err := gb.backupGroup(group); err != nil {
errs = append(errs, err)
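
With this change, callers build a `Request` and supply a `BlockStoreGetter` instead of wiring a single block store into the backupper at construction time. A hypothetical invocation (a sketch; the variable names are illustrative, and `Request` is the wrapper type referenced throughout this change):

```go
req := &backup.Request{
	Backup:            arkBackup, // *v1.Backup, with Spec.VolumeSnapshotLocations naming the desired locations
	SnapshotLocations: locations, // resolved []*v1.VolumeSnapshotLocation, one per provider
}

var backupFile bytes.Buffer
err := backupper.Backup(logger, req, &backupFile, actions, blockStoreGetter)
```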
diff --git a/pkg/backup/backup_test.go b/pkg/backup/backup_test.go
index 9b5181cda0..552f26b0b6 100644
--- a/pkg/backup/backup_test.go
+++ b/pkg/backup/backup_test.go
@@ -38,7 +38,6 @@ import (
"github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/client"
- "github.com/heptio/ark/pkg/cloudprovider"
"github.com/heptio/ark/pkg/discovery"
"github.com/heptio/ark/pkg/podexec"
"github.com/heptio/ark/pkg/restic"
@@ -372,17 +371,16 @@ func parseLabelSelectorOrDie(s string) labels.Selector {
func TestBackup(t *testing.T) {
tests := []struct {
- name string
- backup *v1.Backup
- expectedNamespaces *collections.IncludesExcludes
- expectedResources *collections.IncludesExcludes
- expectedLabelSelector string
- expectedHooks []resourceHook
- backupGroupErrors map[*metav1.APIResourceList]error
- expectedError error
+ name string
+ backup *v1.Backup
+ expectedNamespaces *collections.IncludesExcludes
+ expectedResources *collections.IncludesExcludes
+ expectedHooks []resourceHook
+ backupGroupErrors map[*metav1.APIResourceList]error
+ expectedError error
}{
{
- name: "happy path, no actions, no label selector, no hooks, no errors",
+ name: "happy path, no actions, no hooks, no errors",
backup: &v1.Backup{
Spec: v1.BackupSpec{
// cm - shortcut in legacy api group
@@ -402,25 +400,6 @@ func TestBackup(t *testing.T) {
rbacGroup: nil,
},
},
- {
- name: "label selector",
- backup: &v1.Backup{
- Spec: v1.BackupSpec{
- LabelSelector: &metav1.LabelSelector{
- MatchLabels: map[string]string{"a": "b"},
- },
- },
- },
- expectedNamespaces: collections.NewIncludesExcludes(),
- expectedResources: collections.NewIncludesExcludes(),
- expectedHooks: []resourceHook{},
- expectedLabelSelector: "a=b",
- backupGroupErrors: map[*metav1.APIResourceList]error{
- v1Group: nil,
- certificatesGroup: nil,
- rbacGroup: nil,
- },
- },
{
name: "backupGroup errors",
backup: &v1.Backup{},
@@ -488,6 +467,10 @@ func TestBackup(t *testing.T) {
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
+ req := &Request{
+ Backup: test.backup,
+ }
+
discoveryHelper := &arktest.FakeDiscoveryHelper{
Mapper: &arktest.FakeMapper{
Resources: map[schema.GroupVersionResource]schema.GroupVersionResource{
@@ -503,77 +486,66 @@ func TestBackup(t *testing.T) {
},
}
- dynamicFactory := &arktest.FakeDynamicFactory{}
+ dynamicFactory := new(arktest.FakeDynamicFactory)
podCommandExecutor := &arktest.MockPodCommandExecutor{}
defer podCommandExecutor.AssertExpectations(t)
- b, err := NewKubernetesBackupper(
- discoveryHelper,
- dynamicFactory,
- podCommandExecutor,
- nil,
- nil, // restic backupper factory
- 0, // restic timeout
- )
- require.NoError(t, err)
- kb := b.(*kubernetesBackupper)
-
groupBackupperFactory := &mockGroupBackupperFactory{}
defer groupBackupperFactory.AssertExpectations(t)
- kb.groupBackupperFactory = groupBackupperFactory
groupBackupper := &mockGroupBackupper{}
defer groupBackupper.AssertExpectations(t)
groupBackupperFactory.On("newGroupBackupper",
mock.Anything, // log
- test.backup,
- test.expectedNamespaces,
- test.expectedResources,
+ req,
dynamicFactory,
discoveryHelper,
map[itemKey]struct{}{}, // backedUpItems
cohabitatingResources(),
- mock.Anything,
- kb.podCommandExecutor,
+ podCommandExecutor,
mock.Anything, // tarWriter
- test.expectedHooks,
- mock.Anything,
mock.Anything, // restic backupper
mock.Anything, // pvc snapshot tracker
+ mock.Anything, // block store getter
).Return(groupBackupper)
for group, err := range test.backupGroupErrors {
groupBackupper.On("backupGroup", group).Return(err)
}
- var backupFile bytes.Buffer
+ kb := &kubernetesBackupper{
+ discoveryHelper: discoveryHelper,
+ dynamicFactory: dynamicFactory,
+ podCommandExecutor: podCommandExecutor,
+ groupBackupperFactory: groupBackupperFactory,
+ }
+
+ err := kb.Backup(logging.DefaultLogger(logrus.DebugLevel), req, new(bytes.Buffer), nil, nil)
- err = b.Backup(logging.DefaultLogger(logrus.DebugLevel), test.backup, &backupFile, nil)
+ assert.Equal(t, test.expectedNamespaces, req.NamespaceIncludesExcludes)
+ assert.Equal(t, test.expectedResources, req.ResourceIncludesExcludes)
+ assert.Equal(t, test.expectedHooks, req.ResourceHooks)
if test.expectedError != nil {
assert.EqualError(t, err, test.expectedError.Error())
return
}
assert.NoError(t, err)
+
})
}
}
func TestBackupUsesNewCohabitatingResourcesForEachBackup(t *testing.T) {
- discoveryHelper := &arktest.FakeDiscoveryHelper{
- Mapper: &arktest.FakeMapper{
- Resources: map[schema.GroupVersionResource]schema.GroupVersionResource{},
- },
+ groupBackupperFactory := &mockGroupBackupperFactory{}
+ kb := &kubernetesBackupper{
+ discoveryHelper: new(arktest.FakeDiscoveryHelper),
+ groupBackupperFactory: groupBackupperFactory,
}
- b, err := NewKubernetesBackupper(discoveryHelper, nil, nil, nil, nil, 0)
- require.NoError(t, err)
-
- kb := b.(*kubernetesBackupper)
- groupBackupperFactory := &mockGroupBackupperFactory{}
- kb.groupBackupperFactory = groupBackupperFactory
+ defer groupBackupperFactory.AssertExpectations(t)
// assert that newGroupBackupper() is called with the result of cohabitatingResources()
// passed as an argument.
@@ -582,9 +554,7 @@ func TestBackupUsesNewCohabitatingResourcesForEachBackup(t *testing.T) {
mock.Anything,
mock.Anything,
mock.Anything,
- mock.Anything,
- mock.Anything,
- discoveryHelper,
+ kb.discoveryHelper,
mock.Anything,
firstCohabitatingResources,
mock.Anything,
@@ -592,12 +562,9 @@ func TestBackupUsesNewCohabitatingResourcesForEachBackup(t *testing.T) {
mock.Anything,
mock.Anything,
mock.Anything,
- mock.Anything,
- mock.Anything,
).Return(&mockGroupBackupper{})
- assert.NoError(t, b.Backup(arktest.NewLogger(), &v1.Backup{}, &bytes.Buffer{}, nil))
- groupBackupperFactory.AssertExpectations(t)
+ assert.NoError(t, kb.Backup(arktest.NewLogger(), &Request{Backup: &v1.Backup{}}, &bytes.Buffer{}, nil, nil))
// mutate the cohabitatingResources map that was used in the first backup to simulate
// the first backup process having done so.
@@ -614,9 +581,7 @@ func TestBackupUsesNewCohabitatingResourcesForEachBackup(t *testing.T) {
mock.Anything,
mock.Anything,
mock.Anything,
- mock.Anything,
- mock.Anything,
- discoveryHelper,
+ kb.discoveryHelper,
mock.Anything,
secondCohabitatingResources,
mock.Anything,
@@ -624,16 +589,13 @@ func TestBackupUsesNewCohabitatingResourcesForEachBackup(t *testing.T) {
mock.Anything,
mock.Anything,
mock.Anything,
- mock.Anything,
- mock.Anything,
).Return(&mockGroupBackupper{})
- assert.NoError(t, b.Backup(arktest.NewLogger(), &v1.Backup{}, &bytes.Buffer{}, nil))
+ assert.NoError(t, kb.Backup(arktest.NewLogger(), &Request{Backup: new(v1.Backup)}, new(bytes.Buffer), nil, nil))
assert.NotEqual(t, firstCohabitatingResources, secondCohabitatingResources)
for _, resource := range secondCohabitatingResources {
assert.False(t, resource.seen)
}
- groupBackupperFactory.AssertExpectations(t)
}
type mockGroupBackupperFactory struct {
@@ -642,36 +604,29 @@ type mockGroupBackupperFactory struct {
func (f *mockGroupBackupperFactory) newGroupBackupper(
log logrus.FieldLogger,
- backup *v1.Backup,
- namespaces, resources *collections.IncludesExcludes,
+ backup *Request,
dynamicFactory client.DynamicFactory,
discoveryHelper discovery.Helper,
backedUpItems map[itemKey]struct{},
cohabitatingResources map[string]*cohabitatingResource,
- actions []resolvedAction,
podCommandExecutor podexec.PodCommandExecutor,
tarWriter tarWriter,
- resourceHooks []resourceHook,
- blockStore cloudprovider.BlockStore,
resticBackupper restic.Backupper,
resticSnapshotTracker *pvcSnapshotTracker,
+ blockStoreGetter BlockStoreGetter,
) groupBackupper {
args := f.Called(
log,
backup,
- namespaces,
- resources,
dynamicFactory,
discoveryHelper,
backedUpItems,
cohabitatingResources,
- actions,
podCommandExecutor,
tarWriter,
- resourceHooks,
- blockStore,
resticBackupper,
resticSnapshotTracker,
+ blockStoreGetter,
)
return args.Get(0).(groupBackupper)
}
diff --git a/pkg/backup/group_backupper.go b/pkg/backup/group_backupper.go
index 9ad0ddd5f4..8c1169afc7 100644
--- a/pkg/backup/group_backupper.go
+++ b/pkg/backup/group_backupper.go
@@ -27,31 +27,25 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
kuberrs "k8s.io/apimachinery/pkg/util/errors"
- "github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/client"
- "github.com/heptio/ark/pkg/cloudprovider"
"github.com/heptio/ark/pkg/discovery"
"github.com/heptio/ark/pkg/podexec"
"github.com/heptio/ark/pkg/restic"
- "github.com/heptio/ark/pkg/util/collections"
)
type groupBackupperFactory interface {
newGroupBackupper(
log logrus.FieldLogger,
- backup *v1.Backup,
- namespaces, resources *collections.IncludesExcludes,
+ backupRequest *Request,
dynamicFactory client.DynamicFactory,
discoveryHelper discovery.Helper,
backedUpItems map[itemKey]struct{},
cohabitatingResources map[string]*cohabitatingResource,
- actions []resolvedAction,
podCommandExecutor podexec.PodCommandExecutor,
tarWriter tarWriter,
- resourceHooks []resourceHook,
- blockStore cloudprovider.BlockStore,
resticBackupper restic.Backupper,
resticSnapshotTracker *pvcSnapshotTracker,
+ blockStoreGetter BlockStoreGetter,
) groupBackupper
}
@@ -59,36 +53,30 @@ type defaultGroupBackupperFactory struct{}
func (f *defaultGroupBackupperFactory) newGroupBackupper(
log logrus.FieldLogger,
- backup *v1.Backup,
- namespaces, resources *collections.IncludesExcludes,
+ backupRequest *Request,
dynamicFactory client.DynamicFactory,
discoveryHelper discovery.Helper,
backedUpItems map[itemKey]struct{},
cohabitatingResources map[string]*cohabitatingResource,
- actions []resolvedAction,
podCommandExecutor podexec.PodCommandExecutor,
tarWriter tarWriter,
- resourceHooks []resourceHook,
- blockStore cloudprovider.BlockStore,
resticBackupper restic.Backupper,
resticSnapshotTracker *pvcSnapshotTracker,
+ blockStoreGetter BlockStoreGetter,
) groupBackupper {
return &defaultGroupBackupper{
- log: log,
- backup: backup,
- namespaces: namespaces,
- resources: resources,
- dynamicFactory: dynamicFactory,
- discoveryHelper: discoveryHelper,
- backedUpItems: backedUpItems,
- cohabitatingResources: cohabitatingResources,
- actions: actions,
- podCommandExecutor: podCommandExecutor,
- tarWriter: tarWriter,
- resourceHooks: resourceHooks,
- blockStore: blockStore,
- resticBackupper: resticBackupper,
- resticSnapshotTracker: resticSnapshotTracker,
+ log: log,
+ backupRequest: backupRequest,
+ dynamicFactory: dynamicFactory,
+ discoveryHelper: discoveryHelper,
+ backedUpItems: backedUpItems,
+ cohabitatingResources: cohabitatingResources,
+ podCommandExecutor: podCommandExecutor,
+ tarWriter: tarWriter,
+ resticBackupper: resticBackupper,
+ resticSnapshotTracker: resticSnapshotTracker,
+ blockStoreGetter: blockStoreGetter,
+
resourceBackupperFactory: &defaultResourceBackupperFactory{},
}
}
@@ -99,20 +87,17 @@ type groupBackupper interface {
type defaultGroupBackupper struct {
log logrus.FieldLogger
- backup *v1.Backup
- namespaces, resources *collections.IncludesExcludes
+ backupRequest *Request
dynamicFactory client.DynamicFactory
discoveryHelper discovery.Helper
backedUpItems map[itemKey]struct{}
cohabitatingResources map[string]*cohabitatingResource
- actions []resolvedAction
podCommandExecutor podexec.PodCommandExecutor
tarWriter tarWriter
- resourceHooks []resourceHook
- blockStore cloudprovider.BlockStore
resticBackupper restic.Backupper
resticSnapshotTracker *pvcSnapshotTracker
resourceBackupperFactory resourceBackupperFactory
+ blockStoreGetter BlockStoreGetter
}
// backupGroup backs up a single API group.
@@ -122,20 +107,16 @@ func (gb *defaultGroupBackupper) backupGroup(group *metav1.APIResourceList) erro
log = gb.log.WithField("group", group.GroupVersion)
rb = gb.resourceBackupperFactory.newResourceBackupper(
log,
- gb.backup,
- gb.namespaces,
- gb.resources,
+ gb.backupRequest,
gb.dynamicFactory,
gb.discoveryHelper,
gb.backedUpItems,
gb.cohabitatingResources,
- gb.actions,
gb.podCommandExecutor,
gb.tarWriter,
- gb.resourceHooks,
- gb.blockStore,
gb.resticBackupper,
gb.resticSnapshotTracker,
+ gb.blockStoreGetter,
)
)
diff --git a/pkg/backup/group_backupper_test.go b/pkg/backup/group_backupper_test.go
index fec7e8e538..cd4535c2ee 100644
--- a/pkg/backup/group_backupper_test.go
+++ b/pkg/backup/group_backupper_test.go
@@ -19,104 +19,44 @@ package backup
import (
"testing"
- "github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/client"
- "github.com/heptio/ark/pkg/cloudprovider"
"github.com/heptio/ark/pkg/discovery"
"github.com/heptio/ark/pkg/podexec"
"github.com/heptio/ark/pkg/restic"
- "github.com/heptio/ark/pkg/util/collections"
arktest "github.com/heptio/ark/pkg/util/test"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/runtime/schema"
)
-func TestBackupGroup(t *testing.T) {
- backup := &v1.Backup{}
+func TestBackupGroupBacksUpCorrectResourcesInCorrectOrder(t *testing.T) {
+ resourceBackupperFactory := new(mockResourceBackupperFactory)
+ resourceBackupper := new(mockResourceBackupper)
- namespaces := collections.NewIncludesExcludes().Includes("a")
- resources := collections.NewIncludesExcludes().Includes("b")
-
- dynamicFactory := &arktest.FakeDynamicFactory{}
- defer dynamicFactory.AssertExpectations(t)
-
- discoveryHelper := arktest.NewFakeDiscoveryHelper(true, nil)
-
- backedUpItems := map[itemKey]struct{}{
- {resource: "a", namespace: "b", name: "c"}: {},
- }
-
- cohabitatingResources := map[string]*cohabitatingResource{
- "a": {
- resource: "a",
- groupResource1: schema.GroupResource{Group: "g1", Resource: "a"},
- groupResource2: schema.GroupResource{Group: "g2", Resource: "a"},
- },
- }
-
- actions := []resolvedAction{
- {
- ItemAction: newFakeAction("pods"),
- resourceIncludesExcludes: collections.NewIncludesExcludes().Includes("pods"),
- },
- }
-
- podCommandExecutor := &arktest.MockPodCommandExecutor{}
- defer podCommandExecutor.AssertExpectations(t)
-
- tarWriter := &fakeTarWriter{}
-
- resourceHooks := []resourceHook{
- {name: "myhook"},
- }
-
- gb := (&defaultGroupBackupperFactory{}).newGroupBackupper(
- arktest.NewLogger(),
- backup,
- namespaces,
- resources,
- dynamicFactory,
- discoveryHelper,
- backedUpItems,
- cohabitatingResources,
- actions,
- podCommandExecutor,
- tarWriter,
- resourceHooks,
- nil, // snapshot service
- nil, // restic backupper
- newPVCSnapshotTracker(),
- ).(*defaultGroupBackupper)
-
- resourceBackupperFactory := &mockResourceBackupperFactory{}
defer resourceBackupperFactory.AssertExpectations(t)
- gb.resourceBackupperFactory = resourceBackupperFactory
-
- resourceBackupper := &mockResourceBackupper{}
defer resourceBackupper.AssertExpectations(t)
resourceBackupperFactory.On("newResourceBackupper",
mock.Anything,
- backup,
- namespaces,
- resources,
- dynamicFactory,
- discoveryHelper,
- backedUpItems,
- cohabitatingResources,
- actions,
- podCommandExecutor,
- tarWriter,
- resourceHooks,
- nil,
- mock.Anything, // restic backupper
- mock.Anything, // pvc snapshot tracker
+ mock.Anything,
+ mock.Anything,
+ mock.Anything,
+ mock.Anything,
+ mock.Anything,
+ mock.Anything,
+ mock.Anything,
+ mock.Anything,
+ mock.Anything,
+ mock.Anything,
).Return(resourceBackupper)
+ gb := &defaultGroupBackupper{
+ log: arktest.NewLogger(),
+ resourceBackupperFactory: resourceBackupperFactory,
+ }
+
group := &metav1.APIResourceList{
GroupVersion: "v1",
APIResources: []metav1.APIResource{
@@ -126,9 +66,7 @@ func TestBackupGroup(t *testing.T) {
},
}
- expectedOrder := []string{"pods", "persistentvolumeclaims", "persistentvolumes"}
var actualOrder []string
-
runFunc := func(args mock.Arguments) {
actualOrder = append(actualOrder, args.Get(1).(metav1.APIResource).Name)
}
@@ -137,11 +75,10 @@ func TestBackupGroup(t *testing.T) {
resourceBackupper.On("backupResource", group, metav1.APIResource{Name: "persistentvolumeclaims"}).Return(nil).Run(runFunc)
resourceBackupper.On("backupResource", group, metav1.APIResource{Name: "persistentvolumes"}).Return(nil).Run(runFunc)
- err := gb.backupGroup(group)
- require.NoError(t, err)
+ require.NoError(t, gb.backupGroup(group))
// make sure PVs were last
- assert.Equal(t, expectedOrder, actualOrder)
+ assert.Equal(t, []string{"pods", "persistentvolumeclaims", "persistentvolumes"}, actualOrder)
}
type mockResourceBackupperFactory struct {
@@ -150,37 +87,29 @@ type mockResourceBackupperFactory struct {
func (rbf *mockResourceBackupperFactory) newResourceBackupper(
log logrus.FieldLogger,
- backup *v1.Backup,
- namespaces *collections.IncludesExcludes,
- resources *collections.IncludesExcludes,
+ backup *Request,
dynamicFactory client.DynamicFactory,
discoveryHelper discovery.Helper,
backedUpItems map[itemKey]struct{},
cohabitatingResources map[string]*cohabitatingResource,
- actions []resolvedAction,
podCommandExecutor podexec.PodCommandExecutor,
tarWriter tarWriter,
- resourceHooks []resourceHook,
- blockStore cloudprovider.BlockStore,
resticBackupper restic.Backupper,
resticSnapshotTracker *pvcSnapshotTracker,
+ blockStoreGetter BlockStoreGetter,
) resourceBackupper {
args := rbf.Called(
log,
backup,
- namespaces,
- resources,
dynamicFactory,
discoveryHelper,
backedUpItems,
cohabitatingResources,
- actions,
podCommandExecutor,
tarWriter,
- resourceHooks,
- blockStore,
resticBackupper,
resticSnapshotTracker,
+ blockStoreGetter,
)
return args.Get(0).(resourceBackupper)
}
diff --git a/pkg/backup/item_backupper.go b/pkg/backup/item_backupper.go
index 902d22fcdb..55df74c951 100644
--- a/pkg/backup/item_backupper.go
+++ b/pkg/backup/item_backupper.go
@@ -40,58 +40,49 @@ import (
"github.com/heptio/ark/pkg/kuberesource"
"github.com/heptio/ark/pkg/podexec"
"github.com/heptio/ark/pkg/restic"
- "github.com/heptio/ark/pkg/util/collections"
+ "github.com/heptio/ark/pkg/volume"
)
type itemBackupperFactory interface {
newItemBackupper(
- backup *api.Backup,
- namespaces, resources *collections.IncludesExcludes,
+ backup *Request,
backedUpItems map[itemKey]struct{},
- actions []resolvedAction,
podCommandExecutor podexec.PodCommandExecutor,
tarWriter tarWriter,
- resourceHooks []resourceHook,
dynamicFactory client.DynamicFactory,
discoveryHelper discovery.Helper,
- blockStore cloudprovider.BlockStore,
resticBackupper restic.Backupper,
resticSnapshotTracker *pvcSnapshotTracker,
+ blockStoreGetter BlockStoreGetter,
) ItemBackupper
}
type defaultItemBackupperFactory struct{}
func (f *defaultItemBackupperFactory) newItemBackupper(
- backup *api.Backup,
- namespaces, resources *collections.IncludesExcludes,
+ backupRequest *Request,
backedUpItems map[itemKey]struct{},
- actions []resolvedAction,
podCommandExecutor podexec.PodCommandExecutor,
tarWriter tarWriter,
- resourceHooks []resourceHook,
dynamicFactory client.DynamicFactory,
discoveryHelper discovery.Helper,
- blockStore cloudprovider.BlockStore,
resticBackupper restic.Backupper,
resticSnapshotTracker *pvcSnapshotTracker,
+ blockStoreGetter BlockStoreGetter,
) ItemBackupper {
ib := &defaultItemBackupper{
- backup: backup,
- namespaces: namespaces,
- resources: resources,
- backedUpItems: backedUpItems,
- actions: actions,
- tarWriter: tarWriter,
- resourceHooks: resourceHooks,
- dynamicFactory: dynamicFactory,
- discoveryHelper: discoveryHelper,
- blockStore: blockStore,
+ backupRequest: backupRequest,
+ backedUpItems: backedUpItems,
+ tarWriter: tarWriter,
+ dynamicFactory: dynamicFactory,
+ discoveryHelper: discoveryHelper,
+ resticBackupper: resticBackupper,
+ resticSnapshotTracker: resticSnapshotTracker,
+ blockStoreGetter: blockStoreGetter,
+
itemHookHandler: &defaultItemHookHandler{
podCommandExecutor: podCommandExecutor,
},
- resticBackupper: resticBackupper,
- resticSnapshotTracker: resticSnapshotTracker,
}
// this is for testing purposes
@@ -105,21 +96,18 @@ type ItemBackupper interface {
}
type defaultItemBackupper struct {
- backup *api.Backup
- namespaces *collections.IncludesExcludes
- resources *collections.IncludesExcludes
+ backupRequest *Request
backedUpItems map[itemKey]struct{}
- actions []resolvedAction
tarWriter tarWriter
- resourceHooks []resourceHook
dynamicFactory client.DynamicFactory
discoveryHelper discovery.Helper
- blockStore cloudprovider.BlockStore
resticBackupper restic.Backupper
resticSnapshotTracker *pvcSnapshotTracker
+ blockStoreGetter BlockStoreGetter
- itemHookHandler itemHookHandler
- additionalItemBackupper ItemBackupper
+ itemHookHandler itemHookHandler
+ additionalItemBackupper ItemBackupper
+ snapshotLocationBlockStores map[string]cloudprovider.BlockStore
}
// backupItem backs up an individual item to tarWriter. The item may be excluded based on the
@@ -140,19 +128,19 @@ func (ib *defaultItemBackupper) backupItem(logger logrus.FieldLogger, obj runtim
// NOTE: we have to re-check namespace & resource includes/excludes because it's possible that
// backupItem can be invoked by a custom action.
- if namespace != "" && !ib.namespaces.ShouldInclude(namespace) {
+ if namespace != "" && !ib.backupRequest.NamespaceIncludesExcludes.ShouldInclude(namespace) {
log.Info("Excluding item because namespace is excluded")
return nil
}
// NOTE: we specifically allow namespaces to be backed up even if IncludeClusterResources is
// false.
- if namespace == "" && groupResource != kuberesource.Namespaces && ib.backup.Spec.IncludeClusterResources != nil && !*ib.backup.Spec.IncludeClusterResources {
+ if namespace == "" && groupResource != kuberesource.Namespaces && ib.backupRequest.Spec.IncludeClusterResources != nil && !*ib.backupRequest.Spec.IncludeClusterResources {
log.Info("Excluding item because resource is cluster-scoped and backup.spec.includeClusterResources is false")
return nil
}
- if !ib.resources.ShouldInclude(groupResource.String()) {
+ if !ib.backupRequest.ResourceIncludesExcludes.ShouldInclude(groupResource.String()) {
log.Info("Excluding item because resource is excluded")
return nil
}
@@ -176,7 +164,7 @@ func (ib *defaultItemBackupper) backupItem(logger logrus.FieldLogger, obj runtim
log.Info("Backing up resource")
log.Debug("Executing pre hooks")
- if err := ib.itemHookHandler.handleHooks(log, groupResource, obj, ib.resourceHooks, hookPhasePre); err != nil {
+ if err := ib.itemHookHandler.handleHooks(log, groupResource, obj, ib.backupRequest.ResourceHooks, hookPhasePre); err != nil {
return err
}
@@ -210,7 +198,7 @@ func (ib *defaultItemBackupper) backupItem(logger logrus.FieldLogger, obj runtim
// if there was an error running actions, execute post hooks and return
log.Debug("Executing post hooks")
- if err := ib.itemHookHandler.handleHooks(log, groupResource, obj, ib.resourceHooks, hookPhasePost); err != nil {
+ if err := ib.itemHookHandler.handleHooks(log, groupResource, obj, ib.backupRequest.ResourceHooks, hookPhasePost); err != nil {
backupErrs = append(backupErrs, err)
}
@@ -222,9 +210,7 @@ func (ib *defaultItemBackupper) backupItem(logger logrus.FieldLogger, obj runtim
}
if groupResource == kuberesource.PersistentVolumes {
- if ib.blockStore == nil {
- log.Debug("Skipping Persistent Volume snapshot because they're not enabled.")
- } else if err := ib.takePVSnapshot(obj, ib.backup, log); err != nil {
+ if err := ib.takePVSnapshot(obj, log); err != nil {
backupErrs = append(backupErrs, err)
}
}
@@ -243,7 +229,7 @@ func (ib *defaultItemBackupper) backupItem(logger logrus.FieldLogger, obj runtim
}
log.Debug("Executing post hooks")
- if err := ib.itemHookHandler.handleHooks(log, groupResource, obj, ib.resourceHooks, hookPhasePost); err != nil {
+ if err := ib.itemHookHandler.handleHooks(log, groupResource, obj, ib.backupRequest.ResourceHooks, hookPhasePost); err != nil {
backupErrs = append(backupErrs, err)
}
@@ -294,7 +280,7 @@ func (ib *defaultItemBackupper) backupPodVolumes(log logrus.FieldLogger, pod *co
return nil, nil
}
- return ib.resticBackupper.BackupPodVolumes(ib.backup, pod, log)
+ return ib.resticBackupper.BackupPodVolumes(ib.backupRequest.Backup, pod, log)
}
func (ib *defaultItemBackupper) executeActions(
@@ -304,7 +290,7 @@ func (ib *defaultItemBackupper) executeActions(
name, namespace string,
metadata metav1.Object,
) (runtime.Unstructured, error) {
- for _, action := range ib.actions {
+ for _, action := range ib.backupRequest.ResolvedActions {
if !action.resourceIncludesExcludes.ShouldInclude(groupResource.String()) {
log.Debug("Skipping action because it does not apply to this resource")
continue
@@ -322,7 +308,7 @@ func (ib *defaultItemBackupper) executeActions(
log.Info("Executing custom action")
- updatedItem, additionalItemIdentifiers, err := action.Execute(obj, ib.backup)
+ updatedItem, additionalItemIdentifiers, err := action.Execute(obj, ib.backupRequest.Backup)
if err != nil {
// We want this to show up in the log file at the place where the error occurs. When we return
// the error, it get aggregated with all the other ones at the end of the backup, making it
@@ -358,6 +344,30 @@ func (ib *defaultItemBackupper) executeActions(
return obj, nil
}
+// blockStore instantiates and initializes a BlockStore given a VolumeSnapshotLocation,
+// or returns an existing one if one's already been initialized for the location.
+func (ib *defaultItemBackupper) blockStore(snapshotLocation *api.VolumeSnapshotLocation) (cloudprovider.BlockStore, error) {
+ if bs, ok := ib.snapshotLocationBlockStores[snapshotLocation.Name]; ok {
+ return bs, nil
+ }
+
+ bs, err := ib.blockStoreGetter.GetBlockStore(snapshotLocation.Spec.Provider)
+ if err != nil {
+ return nil, err
+ }
+
+ if err := bs.Init(snapshotLocation.Spec.Config); err != nil {
+ return nil, err
+ }
+
+ if ib.snapshotLocationBlockStores == nil {
+ ib.snapshotLocationBlockStores = make(map[string]cloudprovider.BlockStore)
+ }
+ ib.snapshotLocationBlockStores[snapshotLocation.Name] = bs
+
+ return bs, nil
+}
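+
+// For example (an illustrative sketch), two calls for the same location run
+// Init only once and then share the cached store:
+//
+//	bs1, _ := ib.blockStore(loc) // initializes and caches the block store
+//	bs2, _ := ib.blockStore(loc) // returns the cached instance; bs1 == bs2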
+
// zoneLabel is the label that stores availability-zone info
// on PVs
const zoneLabel = "failure-domain.beta.kubernetes.io/zone"
@@ -365,10 +375,10 @@ const zoneLabel = "failure-domain.beta.kubernetes.io/zone"
// takePVSnapshot triggers a snapshot for the volume/disk underlying a PersistentVolume if the provided
// backup has volume snapshots enabled and the PV is of a compatible type. Also records cloud
// disk type and IOPS (if applicable) to be able to restore to current state later.
-func (ib *defaultItemBackupper) takePVSnapshot(obj runtime.Unstructured, backup *api.Backup, log logrus.FieldLogger) error {
+func (ib *defaultItemBackupper) takePVSnapshot(obj runtime.Unstructured, log logrus.FieldLogger) error {
log.Info("Executing takePVSnapshot")
- if backup.Spec.SnapshotVolumes != nil && !*backup.Spec.SnapshotVolumes {
+ if ib.backupRequest.Spec.SnapshotVolumes != nil && !*ib.backupRequest.Spec.SnapshotVolumes {
log.Info("Backup has volume snapshots disabled; skipping volume snapshot action.")
return nil
}
@@ -392,21 +402,44 @@ func (ib *defaultItemBackupper) takePVSnapshot(obj runtime.Unstructured, backup
return errors.WithStack(err)
}
- name := metadata.GetName()
- var pvFailureDomainZone string
- labels := metadata.GetLabels()
-
- if labels[zoneLabel] != "" {
- pvFailureDomainZone = labels[zoneLabel]
- } else {
+ pvFailureDomainZone := metadata.GetLabels()[zoneLabel]
+ if pvFailureDomainZone == "" {
log.Infof("label %q is not present on PersistentVolume", zoneLabel)
}
- volumeID, err := ib.blockStore.GetVolumeID(obj)
- if err != nil {
- return errors.Wrapf(err, "error getting volume ID for PersistentVolume")
+ var (
+ volumeID, location string
+ blockStore cloudprovider.BlockStore
+ )
+
+ for _, snapshotLocation := range ib.backupRequest.SnapshotLocations {
+ bs, err := ib.blockStore(snapshotLocation)
+ if err != nil {
+ log.WithError(err).WithField("volumeSnapshotLocation", snapshotLocation).Error("Error getting block store for volume snapshot location")
+ continue
+ }
+
+ log := log.WithFields(map[string]interface{}{
+ "volumeSnapshotLocation": snapshotLocation.Name,
+ "persistentVolume": metadata.GetName(),
+ })
+
+ if volumeID, err = bs.GetVolumeID(obj); err != nil {
+ log.WithError(err).Errorf("Error attempting to get volume ID for persistent volume")
+ continue
+ }
+ if volumeID == "" {
+ log.Infof("No volume ID returned by block store for persistent volume")
+ continue
+ }
+
+ log.Infof("Got volume ID for persistent volume")
+ blockStore = bs
+ location = snapshotLocation.Name
+ break
}
- if volumeID == "" {
+
+ if blockStore == nil {
log.Info("PersistentVolume is not a supported volume type for snapshots, skipping.")
return nil
}
@@ -414,34 +447,50 @@ func (ib *defaultItemBackupper) takePVSnapshot(obj runtime.Unstructured, backup
log = log.WithField("volumeID", volumeID)
tags := map[string]string{
- "ark.heptio.com/backup": backup.Name,
+ "ark.heptio.com/backup": ib.backupRequest.Name,
"ark.heptio.com/pv": metadata.GetName(),
}
- log.Info("Snapshotting PersistentVolume")
- snapshotID, err := ib.blockStore.CreateSnapshot(volumeID, pvFailureDomainZone, tags)
- if err != nil {
- // log+error on purpose - log goes to the per-backup log file, error goes to the backup
- log.WithError(err).Error("error creating snapshot")
- return errors.WithMessage(err, "error creating snapshot")
- }
-
- volumeType, iops, err := ib.blockStore.GetVolumeInfo(volumeID, pvFailureDomainZone)
+ log.Info("Getting volume information")
+ volumeType, iops, err := blockStore.GetVolumeInfo(volumeID, pvFailureDomainZone)
if err != nil {
log.WithError(err).Error("error getting volume info")
return errors.WithMessage(err, "error getting volume info")
}
- if backup.Status.VolumeBackups == nil {
- backup.Status.VolumeBackups = make(map[string]*api.VolumeBackupInfo)
- }
+ log.Info("Snapshotting PersistentVolume")
+ snapshot := volumeSnapshot(ib.backupRequest.Backup, metadata.GetName(), volumeID, volumeType, pvFailureDomainZone, location, iops)
- backup.Status.VolumeBackups[name] = &api.VolumeBackupInfo{
- SnapshotID: snapshotID,
- Type: volumeType,
- Iops: iops,
- AvailabilityZone: pvFailureDomainZone,
+ var errs []error
+ snapshotID, err := blockStore.CreateSnapshot(snapshot.Spec.ProviderVolumeID, snapshot.Spec.VolumeAZ, tags)
+ if err != nil {
+ log.WithError(err).Error("error creating snapshot")
+ errs = append(errs, errors.Wrap(err, "error taking snapshot of volume"))
+ snapshot.Status.Phase = volume.SnapshotPhaseFailed
+ } else {
+ snapshot.Status.Phase = volume.SnapshotPhaseCompleted
+ snapshot.Status.ProviderSnapshotID = snapshotID
}
+ ib.backupRequest.VolumeSnapshots = append(ib.backupRequest.VolumeSnapshots, snapshot)
- return nil
+ // nil errors are automatically removed
+ return kubeerrs.NewAggregate(errs)
+}
+
+func volumeSnapshot(backup *api.Backup, volumeName, volumeID, volumeType, az, location string, iops *int64) *volume.Snapshot {
+ return &volume.Snapshot{
+ Spec: volume.SnapshotSpec{
+ BackupName: backup.Name,
+ BackupUID: string(backup.UID),
+ Location: location,
+ PersistentVolumeName: volumeName,
+ ProviderVolumeID: volumeID,
+ VolumeType: volumeType,
+ VolumeAZ: az,
+ VolumeIOPS: iops,
+ },
+ Status: volume.SnapshotStatus{
+ Phase: volume.SnapshotPhaseNew,
+ },
+ }
}
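
Note the shift in bookkeeping here: snapshots are no longer written into `backup.Status.VolumeBackups`; instead, each one is recorded as a `volume.Snapshot` on the request, carrying its location and phase. A test-style sketch of inspecting the result (illustrative; assumes a completed backup request `req`):

```go
for _, snap := range req.VolumeSnapshots {
	fmt.Printf("pv=%s location=%s phase=%s providerSnapshotID=%s\n",
		snap.Spec.PersistentVolumeName,
		snap.Spec.Location,
		snap.Status.Phase,
		snap.Status.ProviderSnapshotID)
}
```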
diff --git a/pkg/backup/item_backupper_test.go b/pkg/backup/item_backupper_test.go
index 67eadc5c0d..505044198a 100644
--- a/pkg/backup/item_backupper_test.go
+++ b/pkg/backup/item_backupper_test.go
@@ -26,6 +26,7 @@ import (
"github.com/heptio/ark/pkg/apis/ark/v1"
api "github.com/heptio/ark/pkg/apis/ark/v1"
+ "github.com/heptio/ark/pkg/cloudprovider"
resticmocks "github.com/heptio/ark/pkg/restic/mocks"
"github.com/heptio/ark/pkg/util/collections"
arktest "github.com/heptio/ark/pkg/util/test"
@@ -107,10 +108,13 @@ func TestBackupItemSkips(t *testing.T) {
for _, test := range tests {
t.Run(test.testName, func(t *testing.T) {
+ req := &Request{
+ NamespaceIncludesExcludes: test.namespaces,
+ ResourceIncludesExcludes: test.resources,
+ }
ib := &defaultItemBackupper{
- namespaces: test.namespaces,
- resources: test.resources,
+ backupRequest: req,
backedUpItems: test.backedUpItems,
}
@@ -134,13 +138,15 @@ func TestBackupItemSkips(t *testing.T) {
func TestBackupItemSkipsClusterScopedResourceWhenIncludeClusterResourcesFalse(t *testing.T) {
f := false
ib := &defaultItemBackupper{
- backup: &v1.Backup{
- Spec: v1.BackupSpec{
- IncludeClusterResources: &f,
+ backupRequest: &Request{
+ Backup: &v1.Backup{
+ Spec: v1.BackupSpec{
+ IncludeClusterResources: &f,
+ },
},
+ NamespaceIncludesExcludes: collections.NewIncludesExcludes(),
+ ResourceIncludesExcludes: collections.NewIncludesExcludes(),
},
- namespaces: collections.NewIncludesExcludes(),
- resources: collections.NewIncludesExcludes(),
}
u := arktest.UnstructuredOrDie(`{"apiVersion":"v1","kind":"Foo","metadata":{"name":"bar"}}`)
@@ -350,15 +356,20 @@ func TestBackupItemNoSkips(t *testing.T) {
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
var (
- actions []resolvedAction
action *fakeAction
- backup = &v1.Backup{}
+ backup = new(Request)
groupResource = schema.ParseGroupResource("resource.group")
backedUpItems = make(map[itemKey]struct{})
- resources = collections.NewIncludesExcludes()
w = &fakeTarWriter{}
)
+ backup.Backup = new(v1.Backup)
+ backup.NamespaceIncludesExcludes = collections.NewIncludesExcludes()
+ backup.ResourceIncludesExcludes = collections.NewIncludesExcludes()
+ backup.SnapshotLocations = []*v1.VolumeSnapshotLocation{
+ new(v1.VolumeSnapshotLocation),
+ }
+
if test.groupResource != "" {
groupResource = schema.ParseGroupResource(test.groupResource)
}
@@ -384,7 +395,7 @@ func TestBackupItemNoSkips(t *testing.T) {
action = &fakeAction{
additionalItems: test.customActionAdditionalItemIdentifiers,
}
- actions = []resolvedAction{
+ backup.ResolvedActions = []resolvedAction{
{
ItemAction: action,
namespaceIncludesExcludes: collections.NewIncludesExcludes(),
@@ -394,8 +405,6 @@ func TestBackupItemNoSkips(t *testing.T) {
}
}
- resourceHooks := []resourceHook{}
-
podCommandExecutor := &arktest.MockPodCommandExecutor{}
defer podCommandExecutor.AssertExpectations(t)
@@ -404,20 +413,18 @@ func TestBackupItemNoSkips(t *testing.T) {
discoveryHelper := arktest.NewFakeDiscoveryHelper(true, nil)
+ blockStoreGetter := &blockStoreGetter{}
+
b := (&defaultItemBackupperFactory{}).newItemBackupper(
backup,
- namespaces,
- resources,
backedUpItems,
- actions,
podCommandExecutor,
w,
- resourceHooks,
dynamicFactory,
discoveryHelper,
- nil, // snapshot service
nil, // restic backupper
newPVCSnapshotTracker(),
+ blockStoreGetter,
).(*defaultItemBackupper)
var blockStore *arktest.FakeBlockStore
@@ -427,7 +434,8 @@ func TestBackupItemNoSkips(t *testing.T) {
VolumeID: "vol-abc123",
Error: test.snapshotError,
}
- b.blockStore = blockStore
+
+ blockStoreGetter.blockStore = blockStore
}
if test.trackedPVCs != nil {
@@ -446,8 +454,8 @@ func TestBackupItemNoSkips(t *testing.T) {
b.additionalItemBackupper = additionalItemBackupper
obj := &unstructured.Unstructured{Object: item}
- itemHookHandler.On("handleHooks", mock.Anything, groupResource, obj, resourceHooks, hookPhasePre).Return(nil)
- itemHookHandler.On("handleHooks", mock.Anything, groupResource, obj, resourceHooks, hookPhasePost).Return(nil)
+ itemHookHandler.On("handleHooks", mock.Anything, groupResource, obj, backup.ResourceHooks, hookPhasePre).Return(nil)
+ itemHookHandler.On("handleHooks", mock.Anything, groupResource, obj, backup.ResourceHooks, hookPhasePost).Return(nil)
for i, item := range test.customActionAdditionalItemIdentifiers {
if test.additionalItemError != nil && i > 0 {
@@ -511,23 +519,21 @@ func TestBackupItemNoSkips(t *testing.T) {
}
require.Equal(t, 1, len(action.backups), "unexpected custom action backups: %#v", action.backups)
- assert.Equal(t, backup, &(action.backups[0]), "backup")
+ assert.Equal(t, backup.Backup, &(action.backups[0]), "backup")
}
if test.snapshottableVolumes != nil {
require.Equal(t, len(test.snapshottableVolumes), len(blockStore.SnapshotsTaken))
+ }
- var expectedBackups []api.VolumeBackupInfo
- for _, vbi := range test.snapshottableVolumes {
- expectedBackups = append(expectedBackups, vbi)
- }
-
- var actualBackups []api.VolumeBackupInfo
- for _, vbi := range backup.Status.VolumeBackups {
- actualBackups = append(actualBackups, *vbi)
- }
+ if len(test.snapshottableVolumes) > 0 {
+ require.Len(t, backup.VolumeSnapshots, 1)
+ snapshot := backup.VolumeSnapshots[0]
- assert.Equal(t, expectedBackups, actualBackups)
+ assert.Equal(t, test.snapshottableVolumes["vol-abc123"].SnapshotID, snapshot.Status.ProviderSnapshotID)
+ assert.Equal(t, test.snapshottableVolumes["vol-abc123"].Type, snapshot.Spec.VolumeType)
+ assert.Equal(t, test.snapshottableVolumes["vol-abc123"].Iops, snapshot.Spec.VolumeIOPS)
+ assert.Equal(t, test.snapshottableVolumes["vol-abc123"].AvailabilityZone, snapshot.Spec.VolumeAZ)
}
if test.expectedTrackedPVCs != nil {
@@ -541,6 +547,17 @@ func TestBackupItemNoSkips(t *testing.T) {
}
}
+type blockStoreGetter struct {
+ blockStore cloudprovider.BlockStore
+}
+
+func (b *blockStoreGetter) GetBlockStore(name string) (cloudprovider.BlockStore, error) {
+ if b.blockStore != nil {
+ return b.blockStore, nil
+ }
+ return nil, errors.New("plugin not found")
+}
+
type addAnnotationAction struct{}
func (a *addAnnotationAction) Execute(item runtime.Unstructured, backup *v1.Backup) (runtime.Unstructured, []ResourceIdentifier, error) {
@@ -578,28 +595,29 @@ func TestItemActionModificationsToItemPersist(t *testing.T) {
},
},
}
- actions = []resolvedAction{
- {
- ItemAction: &addAnnotationAction{},
- namespaceIncludesExcludes: collections.NewIncludesExcludes(),
- resourceIncludesExcludes: collections.NewIncludesExcludes(),
- selector: labels.Everything(),
+ req = &Request{
+ NamespaceIncludesExcludes: collections.NewIncludesExcludes(),
+ ResourceIncludesExcludes: collections.NewIncludesExcludes(),
+ ResolvedActions: []resolvedAction{
+ {
+ ItemAction: &addAnnotationAction{},
+ namespaceIncludesExcludes: collections.NewIncludesExcludes(),
+ resourceIncludesExcludes: collections.NewIncludesExcludes(),
+ selector: labels.Everything(),
+ },
},
}
+
b = (&defaultItemBackupperFactory{}).newItemBackupper(
- &v1.Backup{},
- collections.NewIncludesExcludes(),
- collections.NewIncludesExcludes(),
+ req,
make(map[itemKey]struct{}),
- actions,
nil,
w,
- nil,
&arktest.FakeDynamicFactory{},
arktest.NewFakeDiscoveryHelper(true, nil),
nil,
- nil,
newPVCSnapshotTracker(),
+ nil,
).(*defaultItemBackupper)
)
@@ -633,29 +651,29 @@ func TestResticAnnotationsPersist(t *testing.T) {
},
},
}
- actions = []resolvedAction{
- {
- ItemAction: &addAnnotationAction{},
- namespaceIncludesExcludes: collections.NewIncludesExcludes(),
- resourceIncludesExcludes: collections.NewIncludesExcludes(),
- selector: labels.Everything(),
+ req = &Request{
+ NamespaceIncludesExcludes: collections.NewIncludesExcludes(),
+ ResourceIncludesExcludes: collections.NewIncludesExcludes(),
+ ResolvedActions: []resolvedAction{
+ {
+ ItemAction: &addAnnotationAction{},
+ namespaceIncludesExcludes: collections.NewIncludesExcludes(),
+ resourceIncludesExcludes: collections.NewIncludesExcludes(),
+ selector: labels.Everything(),
+ },
},
}
resticBackupper = &resticmocks.Backupper{}
b = (&defaultItemBackupperFactory{}).newItemBackupper(
- &v1.Backup{},
- collections.NewIncludesExcludes(),
- collections.NewIncludesExcludes(),
+ req,
make(map[itemKey]struct{}),
- actions,
nil,
w,
- nil,
&arktest.FakeDynamicFactory{},
arktest.NewFakeDiscoveryHelper(true, nil),
- nil,
resticBackupper,
newPVCSnapshotTracker(),
+ nil,
).(*defaultItemBackupper)
)
@@ -698,7 +716,6 @@ func TestTakePVSnapshot(t *testing.T) {
expectError bool
expectedVolumeID string
expectedSnapshotsTaken int
- existingVolumeBackups map[string]*v1.VolumeBackupInfo
volumeInfo map[string]v1.VolumeBackupInfo
}{
{
@@ -736,21 +753,6 @@ func TestTakePVSnapshot(t *testing.T) {
"vol-abc123": {Type: "io1", Iops: &iops, SnapshotID: "snap-1", AvailabilityZone: "us-east-1c"},
},
},
- {
- name: "preexisting volume backup info in backup status",
- snapshotEnabled: true,
- pv: `{"apiVersion": "v1", "kind": "PersistentVolume", "metadata": {"name": "mypv"}, "spec": {"gcePersistentDisk": {"pdName": "pd-abc123"}}}`,
- expectError: false,
- expectedSnapshotsTaken: 1,
- expectedVolumeID: "pd-abc123",
- ttl: 5 * time.Minute,
- existingVolumeBackups: map[string]*v1.VolumeBackupInfo{
- "anotherpv": {SnapshotID: "anothersnap"},
- },
- volumeInfo: map[string]v1.VolumeBackupInfo{
- "pd-abc123": {Type: "gp", SnapshotID: "snap-1"},
- },
- },
{
name: "create snapshot error",
snapshotEnabled: true,
@@ -783,9 +785,6 @@ func TestTakePVSnapshot(t *testing.T) {
SnapshotVolumes: &test.snapshotEnabled,
TTL: metav1.Duration{Duration: test.ttl},
},
- Status: v1.BackupStatus{
- VolumeBackups: test.existingVolumeBackups,
- },
}
blockStore := &arktest.FakeBlockStore{
@@ -793,7 +792,13 @@ func TestTakePVSnapshot(t *testing.T) {
VolumeID: test.expectedVolumeID,
}
- ib := &defaultItemBackupper{blockStore: blockStore}
+ ib := &defaultItemBackupper{
+ backupRequest: &Request{
+ Backup: backup,
+ SnapshotLocations: []*v1.VolumeSnapshotLocation{new(v1.VolumeSnapshotLocation)},
+ },
+ blockStoreGetter: &blockStoreGetter{blockStore: blockStore},
+ }
pv, err := arktest.GetAsMap(test.pv)
if err != nil {
@@ -801,7 +806,7 @@ func TestTakePVSnapshot(t *testing.T) {
}
// method under test
- err = ib.takePVSnapshot(&unstructured.Unstructured{Object: pv}, backup, arktest.NewLogger())
+ err = ib.takePVSnapshot(&unstructured.Unstructured{Object: pv}, arktest.NewLogger())
gotErr := err != nil
@@ -817,29 +822,18 @@ func TestTakePVSnapshot(t *testing.T) {
return
}
- expectedVolumeBackups := test.existingVolumeBackups
- if expectedVolumeBackups == nil {
- expectedVolumeBackups = make(map[string]*v1.VolumeBackupInfo)
- }
-
- // we should have one snapshot taken exactly
+	// we should have exactly the expected number of snapshots taken
require.Equal(t, test.expectedSnapshotsTaken, blockStore.SnapshotsTaken.Len())
if test.expectedSnapshotsTaken > 0 {
- // the snapshotID should be the one in the entry in blockStore.SnapshottableVolumes
- // for the volume we ran the test for
- snapshotID, _ := blockStore.SnapshotsTaken.PopAny()
-
- expectedVolumeBackups["mypv"] = &v1.VolumeBackupInfo{
- SnapshotID: snapshotID,
- Type: test.volumeInfo[test.expectedVolumeID].Type,
- Iops: test.volumeInfo[test.expectedVolumeID].Iops,
- AvailabilityZone: test.volumeInfo[test.expectedVolumeID].AvailabilityZone,
- }
+ require.Len(t, ib.backupRequest.VolumeSnapshots, 1)
+ snapshot := ib.backupRequest.VolumeSnapshots[0]
- if e, a := expectedVolumeBackups, backup.Status.VolumeBackups; !reflect.DeepEqual(e, a) {
- t.Errorf("backup.status.VolumeBackups: expected %v, got %v", e, a)
- }
+ snapshotID, _ := blockStore.SnapshotsTaken.PopAny()
+ assert.Equal(t, snapshotID, snapshot.Status.ProviderSnapshotID)
+ assert.Equal(t, test.volumeInfo[test.expectedVolumeID].Type, snapshot.Spec.VolumeType)
+ assert.Equal(t, test.volumeInfo[test.expectedVolumeID].Iops, snapshot.Spec.VolumeIOPS)
+ assert.Equal(t, test.volumeInfo[test.expectedVolumeID].AvailabilityZone, snapshot.Spec.VolumeAZ)
}
})
}
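The `blockStoreGetter` test double above is also a compact illustration of the new seam: anything with a `GetBlockStore(name string) (cloudprovider.BlockStore, error)` method satisfies the `BlockStoreGetter` interface the backuppers now depend on. A hypothetical map-backed getter (not part of this change) could look like:

```go
package backup

import (
	"github.com/pkg/errors"

	"github.com/heptio/ark/pkg/cloudprovider"
)

// mapBlockStoreGetter is a hypothetical BlockStoreGetter that resolves
// block stores by provider name from a static map, standing in for the
// per-backup plugin manager used in production.
type mapBlockStoreGetter struct {
	stores map[string]cloudprovider.BlockStore
}

func (g *mapBlockStoreGetter) GetBlockStore(name string) (cloudprovider.BlockStore, error) {
	if bs, ok := g.stores[name]; ok {
		return bs, nil
	}
	return nil, errors.Errorf("plugin not found: %s", name)
}
```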
diff --git a/pkg/backup/request.go b/pkg/backup/request.go
new file mode 100644
index 0000000000..7f12da180a
--- /dev/null
+++ b/pkg/backup/request.go
@@ -0,0 +1,22 @@
+package backup
+
+import (
+ arkv1api "github.com/heptio/ark/pkg/apis/ark/v1"
+ "github.com/heptio/ark/pkg/util/collections"
+ "github.com/heptio/ark/pkg/volume"
+)
+
+// Request is a request for a backup, with all references to other objects
+// materialized (e.g. backup/snapshot locations, includes/excludes, etc.)
+type Request struct {
+ *arkv1api.Backup
+
+ StorageLocation *arkv1api.BackupStorageLocation
+ SnapshotLocations []*arkv1api.VolumeSnapshotLocation
+ NamespaceIncludesExcludes *collections.IncludesExcludes
+ ResourceIncludesExcludes *collections.IncludesExcludes
+ ResourceHooks []resourceHook
+ ResolvedActions []resolvedAction
+
+ VolumeSnapshots []*volume.Snapshot
+}
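As a rough sketch of how callers assemble one of these (the values below are placeholders; in practice the backup controller resolves them from the API server before handing the request to the backupper):

```go
package main

import (
	arkv1api "github.com/heptio/ark/pkg/apis/ark/v1"
	"github.com/heptio/ark/pkg/backup"
	"github.com/heptio/ark/pkg/util/collections"
)

func main() {
	// Placeholder objects stand in for resources fetched from the cluster.
	req := &backup.Request{
		Backup:                    &arkv1api.Backup{},
		StorageLocation:           &arkv1api.BackupStorageLocation{},
		SnapshotLocations:         []*arkv1api.VolumeSnapshotLocation{{}},
		NamespaceIncludesExcludes: collections.NewIncludesExcludes().Includes("*"),
		ResourceIncludesExcludes:  collections.NewIncludesExcludes().Includes("*"),
	}

	// VolumeSnapshots is populated by the item backupper as snapshots are taken.
	_ = req.VolumeSnapshots
}
```

Note that `ResourceHooks` and `ResolvedActions` have unexported element types, so only code inside `pkg/backup` can populate them; external callers can at most leave them nil.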
diff --git a/pkg/backup/resource_backupper.go b/pkg/backup/resource_backupper.go
index 383378f9eb..c4bee425b7 100644
--- a/pkg/backup/resource_backupper.go
+++ b/pkg/backup/resource_backupper.go
@@ -27,9 +27,7 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
kuberrs "k8s.io/apimachinery/pkg/util/errors"
- api "github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/client"
- "github.com/heptio/ark/pkg/cloudprovider"
"github.com/heptio/ark/pkg/discovery"
"github.com/heptio/ark/pkg/kuberesource"
"github.com/heptio/ark/pkg/podexec"
@@ -40,20 +38,16 @@ import (
type resourceBackupperFactory interface {
newResourceBackupper(
log logrus.FieldLogger,
- backup *api.Backup,
- namespaces *collections.IncludesExcludes,
- resources *collections.IncludesExcludes,
+ backupRequest *Request,
dynamicFactory client.DynamicFactory,
discoveryHelper discovery.Helper,
backedUpItems map[itemKey]struct{},
cohabitatingResources map[string]*cohabitatingResource,
- actions []resolvedAction,
podCommandExecutor podexec.PodCommandExecutor,
tarWriter tarWriter,
- resourceHooks []resourceHook,
- blockStore cloudprovider.BlockStore,
resticBackupper restic.Backupper,
resticSnapshotTracker *pvcSnapshotTracker,
+ blockStoreGetter BlockStoreGetter,
) resourceBackupper
}
@@ -61,38 +55,31 @@ type defaultResourceBackupperFactory struct{}
func (f *defaultResourceBackupperFactory) newResourceBackupper(
log logrus.FieldLogger,
- backup *api.Backup,
- namespaces *collections.IncludesExcludes,
- resources *collections.IncludesExcludes,
+ backupRequest *Request,
dynamicFactory client.DynamicFactory,
discoveryHelper discovery.Helper,
backedUpItems map[itemKey]struct{},
cohabitatingResources map[string]*cohabitatingResource,
- actions []resolvedAction,
podCommandExecutor podexec.PodCommandExecutor,
tarWriter tarWriter,
- resourceHooks []resourceHook,
- blockStore cloudprovider.BlockStore,
resticBackupper restic.Backupper,
resticSnapshotTracker *pvcSnapshotTracker,
+ blockStoreGetter BlockStoreGetter,
) resourceBackupper {
return &defaultResourceBackupper{
log: log,
- backup: backup,
- namespaces: namespaces,
- resources: resources,
+ backupRequest: backupRequest,
dynamicFactory: dynamicFactory,
discoveryHelper: discoveryHelper,
backedUpItems: backedUpItems,
- actions: actions,
cohabitatingResources: cohabitatingResources,
podCommandExecutor: podCommandExecutor,
tarWriter: tarWriter,
- resourceHooks: resourceHooks,
- blockStore: blockStore,
resticBackupper: resticBackupper,
resticSnapshotTracker: resticSnapshotTracker,
- itemBackupperFactory: &defaultItemBackupperFactory{},
+ blockStoreGetter: blockStoreGetter,
+
+ itemBackupperFactory: &defaultItemBackupperFactory{},
}
}
@@ -102,21 +89,17 @@ type resourceBackupper interface {
type defaultResourceBackupper struct {
log logrus.FieldLogger
- backup *api.Backup
- namespaces *collections.IncludesExcludes
- resources *collections.IncludesExcludes
+ backupRequest *Request
dynamicFactory client.DynamicFactory
discoveryHelper discovery.Helper
backedUpItems map[itemKey]struct{}
cohabitatingResources map[string]*cohabitatingResource
- actions []resolvedAction
podCommandExecutor podexec.PodCommandExecutor
tarWriter tarWriter
- resourceHooks []resourceHook
- blockStore cloudprovider.BlockStore
resticBackupper restic.Backupper
resticSnapshotTracker *pvcSnapshotTracker
itemBackupperFactory itemBackupperFactory
+ blockStoreGetter BlockStoreGetter
}
// backupResource backs up all the objects for a given group-version-resource.
@@ -142,8 +125,8 @@ func (rb *defaultResourceBackupper) backupResource(
// If the resource we are backing up is NOT namespaces, and it is cluster-scoped, check to see if
// we should include it based on the IncludeClusterResources setting.
if gr != kuberesource.Namespaces && clusterScoped {
- if rb.backup.Spec.IncludeClusterResources == nil {
- if !rb.namespaces.IncludeEverything() {
+ if rb.backupRequest.Spec.IncludeClusterResources == nil {
+ if !rb.backupRequest.NamespaceIncludesExcludes.IncludeEverything() {
// when IncludeClusterResources == nil (auto), only directly
// back up cluster-scoped resources if we're doing a full-cluster
// (all namespaces) backup. Note that in the case of a subset of
@@ -154,13 +137,13 @@ func (rb *defaultResourceBackupper) backupResource(
log.Info("Skipping resource because it's cluster-scoped and only specific namespaces are included in the backup")
return nil
}
- } else if !*rb.backup.Spec.IncludeClusterResources {
+ } else if !*rb.backupRequest.Spec.IncludeClusterResources {
log.Info("Skipping resource because it's cluster-scoped")
return nil
}
}
- if !rb.resources.ShouldInclude(grString) {
+ if !rb.backupRequest.ResourceIncludesExcludes.ShouldInclude(grString) {
log.Infof("Resource is excluded")
return nil
}
@@ -179,22 +162,18 @@ func (rb *defaultResourceBackupper) backupResource(
}
itemBackupper := rb.itemBackupperFactory.newItemBackupper(
- rb.backup,
- rb.namespaces,
- rb.resources,
+ rb.backupRequest,
rb.backedUpItems,
- rb.actions,
rb.podCommandExecutor,
rb.tarWriter,
- rb.resourceHooks,
rb.dynamicFactory,
rb.discoveryHelper,
- rb.blockStore,
rb.resticBackupper,
rb.resticSnapshotTracker,
+ rb.blockStoreGetter,
)
- namespacesToList := getNamespacesToList(rb.namespaces)
+ namespacesToList := getNamespacesToList(rb.backupRequest.NamespaceIncludesExcludes)
// Check if we're backing up namespaces, and only certain ones
if gr == kuberesource.Namespaces && namespacesToList[0] != "" {
@@ -204,8 +183,8 @@ func (rb *defaultResourceBackupper) backupResource(
}
var labelSelector labels.Selector
- if rb.backup.Spec.LabelSelector != nil {
- labelSelector, err = metav1.LabelSelectorAsSelector(rb.backup.Spec.LabelSelector)
+ if rb.backupRequest.Spec.LabelSelector != nil {
+ labelSelector, err = metav1.LabelSelectorAsSelector(rb.backupRequest.Spec.LabelSelector)
if err != nil {
// This should never happen...
return errors.Wrap(err, "invalid label selector")
@@ -246,7 +225,7 @@ func (rb *defaultResourceBackupper) backupResource(
}
var labelSelector string
- if selector := rb.backup.Spec.LabelSelector; selector != nil {
+ if selector := rb.backupRequest.Spec.LabelSelector; selector != nil {
labelSelector = metav1.FormatLabelSelector(selector)
}
@@ -276,7 +255,7 @@ func (rb *defaultResourceBackupper) backupResource(
continue
}
- if gr == kuberesource.Namespaces && !rb.namespaces.ShouldInclude(metadata.GetName()) {
+ if gr == kuberesource.Namespaces && !rb.backupRequest.NamespaceIncludesExcludes.ShouldInclude(metadata.GetName()) {
log.WithField("name", metadata.GetName()).Info("skipping namespace because it is excluded")
continue
}
diff --git a/pkg/backup/resource_backupper_test.go b/pkg/backup/resource_backupper_test.go
index dd8b01a57d..f6c3a4f3e2 100644
--- a/pkg/backup/resource_backupper_test.go
+++ b/pkg/backup/resource_backupper_test.go
@@ -21,7 +21,6 @@ import (
"github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/client"
- "github.com/heptio/ark/pkg/cloudprovider"
"github.com/heptio/ark/pkg/discovery"
"github.com/heptio/ark/pkg/kuberesource"
"github.com/heptio/ark/pkg/podexec"
@@ -220,10 +219,23 @@ func TestBackupResource(t *testing.T) {
}
for _, test := range tests {
- backup := &v1.Backup{
- Spec: v1.BackupSpec{
- IncludeClusterResources: test.includeClusterResources,
+ req := &Request{
+ Backup: &v1.Backup{
+ Spec: v1.BackupSpec{
+ IncludeClusterResources: test.includeClusterResources,
+ },
},
+ ResolvedActions: []resolvedAction{
+ {
+ ItemAction: newFakeAction("pods"),
+ resourceIncludesExcludes: collections.NewIncludesExcludes().Includes("pods"),
+ },
+ },
+ ResourceHooks: []resourceHook{
+ {name: "myhook"},
+ },
+ ResourceIncludesExcludes: test.resources,
+ NamespaceIncludesExcludes: test.namespaces,
}
dynamicFactory := &arktest.FakeDynamicFactory{}
@@ -240,17 +252,6 @@ func TestBackupResource(t *testing.T) {
"networkpolicies": newCohabitatingResource("networkpolicies", "extensions", "networking.k8s.io"),
}
- actions := []resolvedAction{
- {
- ItemAction: newFakeAction("pods"),
- resourceIncludesExcludes: collections.NewIncludesExcludes().Includes("pods"),
- },
- }
-
- resourceHooks := []resourceHook{
- {name: "myhook"},
- }
-
podCommandExecutor := &arktest.MockPodCommandExecutor{}
defer podCommandExecutor.AssertExpectations(t)
@@ -259,20 +260,16 @@ func TestBackupResource(t *testing.T) {
t.Run(test.name, func(t *testing.T) {
rb := (&defaultResourceBackupperFactory{}).newResourceBackupper(
arktest.NewLogger(),
- backup,
- test.namespaces,
- test.resources,
+ req,
dynamicFactory,
discoveryHelper,
backedUpItems,
cohabitatingResources,
- actions,
podCommandExecutor,
tarWriter,
- resourceHooks,
- nil, // snapshot service
nil, // restic backupper
newPVCSnapshotTracker(),
+ nil,
).(*defaultResourceBackupper)
itemBackupperFactory := &mockItemBackupperFactory{}
@@ -284,14 +281,10 @@ func TestBackupResource(t *testing.T) {
defer itemBackupper.AssertExpectations(t)
itemBackupperFactory.On("newItemBackupper",
- backup,
- test.namespaces,
- test.resources,
+ req,
backedUpItems,
- actions,
podCommandExecutor,
tarWriter,
- resourceHooks,
dynamicFactory,
discoveryHelper,
mock.Anything,
@@ -382,19 +375,29 @@ func TestBackupResourceCohabitation(t *testing.T) {
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
- backup := &v1.Backup{
- Spec: v1.BackupSpec{
- LabelSelector: &metav1.LabelSelector{
- MatchLabels: map[string]string{
- "foo": "bar",
+ req := &Request{
+ Backup: &v1.Backup{
+ Spec: v1.BackupSpec{
+ LabelSelector: &metav1.LabelSelector{
+ MatchLabels: map[string]string{
+ "foo": "bar",
+ },
},
},
},
+ NamespaceIncludesExcludes: collections.NewIncludesExcludes().Includes("*"),
+ ResourceIncludesExcludes: collections.NewIncludesExcludes().Includes("*"),
+ ResolvedActions: []resolvedAction{
+ {
+ ItemAction: newFakeAction("pods"),
+ resourceIncludesExcludes: collections.NewIncludesExcludes().Includes("pods"),
+ },
+ },
+ ResourceHooks: []resourceHook{
+ {name: "myhook"},
+ },
}
- namespaces := collections.NewIncludesExcludes().Includes("*")
- resources := collections.NewIncludesExcludes().Includes("*")
-
dynamicFactory := &arktest.FakeDynamicFactory{}
defer dynamicFactory.AssertExpectations(t)
@@ -409,17 +412,6 @@ func TestBackupResourceCohabitation(t *testing.T) {
"networkpolicies": newCohabitatingResource("networkpolicies", "extensions", "networking.k8s.io"),
}
- actions := []resolvedAction{
- {
- ItemAction: newFakeAction("pods"),
- resourceIncludesExcludes: collections.NewIncludesExcludes().Includes("pods"),
- },
- }
-
- resourceHooks := []resourceHook{
- {name: "myhook"},
- }
-
podCommandExecutor := &arktest.MockPodCommandExecutor{}
defer podCommandExecutor.AssertExpectations(t)
@@ -427,20 +419,16 @@ func TestBackupResourceCohabitation(t *testing.T) {
rb := (&defaultResourceBackupperFactory{}).newResourceBackupper(
arktest.NewLogger(),
- backup,
- namespaces,
- resources,
+ req,
dynamicFactory,
discoveryHelper,
backedUpItems,
cohabitatingResources,
- actions,
podCommandExecutor,
tarWriter,
- resourceHooks,
- nil, // snapshot service
nil, // restic backupper
newPVCSnapshotTracker(),
+ nil,
).(*defaultResourceBackupper)
itemBackupperFactory := &mockItemBackupperFactory{}
@@ -451,19 +439,16 @@ func TestBackupResourceCohabitation(t *testing.T) {
defer itemBackupper.AssertExpectations(t)
itemBackupperFactory.On("newItemBackupper",
- backup,
- namespaces,
- resources,
+ req,
backedUpItems,
- actions,
podCommandExecutor,
tarWriter,
- resourceHooks,
dynamicFactory,
discoveryHelper,
- mock.Anything, // snapshot service
mock.Anything, // restic backupper
mock.Anything, // pvc snapshot tracker
+			mock.Anything, // block store getter
).Return(itemBackupper)
client := &arktest.FakeDynamicClient{}
@@ -471,7 +456,7 @@ func TestBackupResourceCohabitation(t *testing.T) {
// STEP 1: make sure the initial backup goes through
dynamicFactory.On("ClientForGroupVersionResource", test.groupVersion1, test.apiResource, "").Return(client, nil)
- client.On("List", metav1.ListOptions{LabelSelector: metav1.FormatLabelSelector(backup.Spec.LabelSelector)}).Return(&unstructured.UnstructuredList{}, nil)
+ client.On("List", metav1.ListOptions{LabelSelector: metav1.FormatLabelSelector(req.Backup.Spec.LabelSelector)}).Return(&unstructured.UnstructuredList{}, nil)
// STEP 2: do the backup
err := rb.backupResource(test.apiGroup1, test.apiResource)
@@ -485,10 +470,11 @@ func TestBackupResourceCohabitation(t *testing.T) {
}
func TestBackupResourceOnlyIncludesSpecifiedNamespaces(t *testing.T) {
- backup := &v1.Backup{}
-
- namespaces := collections.NewIncludesExcludes().Includes("ns-1")
- resources := collections.NewIncludesExcludes().Includes("*")
+ req := &Request{
+ Backup: &v1.Backup{},
+ NamespaceIncludesExcludes: collections.NewIncludesExcludes().Includes("ns-1"),
+ ResourceIncludesExcludes: collections.NewIncludesExcludes().Includes("*"),
+ }
backedUpItems := map[itemKey]struct{}{}
@@ -499,10 +485,6 @@ func TestBackupResourceOnlyIncludesSpecifiedNamespaces(t *testing.T) {
cohabitatingResources := map[string]*cohabitatingResource{}
- actions := []resolvedAction{}
-
- resourceHooks := []resourceHook{}
-
podCommandExecutor := &arktest.MockPodCommandExecutor{}
defer podCommandExecutor.AssertExpectations(t)
@@ -510,20 +492,16 @@ func TestBackupResourceOnlyIncludesSpecifiedNamespaces(t *testing.T) {
rb := (&defaultResourceBackupperFactory{}).newResourceBackupper(
arktest.NewLogger(),
- backup,
- namespaces,
- resources,
+ req,
dynamicFactory,
discoveryHelper,
backedUpItems,
cohabitatingResources,
- actions,
podCommandExecutor,
tarWriter,
- resourceHooks,
- nil, // snapshot service
nil, // restic backupper
newPVCSnapshotTracker(),
+ nil,
).(*defaultResourceBackupper)
itemBackupperFactory := &mockItemBackupperFactory{}
@@ -534,27 +512,19 @@ func TestBackupResourceOnlyIncludesSpecifiedNamespaces(t *testing.T) {
defer itemHookHandler.AssertExpectations(t)
itemBackupper := &defaultItemBackupper{
- backup: backup,
- namespaces: namespaces,
- resources: resources,
+ backupRequest: req,
backedUpItems: backedUpItems,
- actions: actions,
tarWriter: tarWriter,
- resourceHooks: resourceHooks,
dynamicFactory: dynamicFactory,
discoveryHelper: discoveryHelper,
itemHookHandler: itemHookHandler,
}
itemBackupperFactory.On("newItemBackupper",
- backup,
- namespaces,
- resources,
+ req,
backedUpItems,
- actions,
podCommandExecutor,
tarWriter,
- resourceHooks,
dynamicFactory,
discoveryHelper,
mock.Anything,
@@ -570,8 +540,8 @@ func TestBackupResourceOnlyIncludesSpecifiedNamespaces(t *testing.T) {
ns1 := arktest.UnstructuredOrDie(`{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"ns-1"}}`)
client.On("Get", "ns-1", metav1.GetOptions{}).Return(ns1, nil)
- itemHookHandler.On("handleHooks", mock.Anything, schema.GroupResource{Group: "", Resource: "namespaces"}, ns1, resourceHooks, hookPhasePre).Return(nil)
- itemHookHandler.On("handleHooks", mock.Anything, schema.GroupResource{Group: "", Resource: "namespaces"}, ns1, resourceHooks, hookPhasePost).Return(nil)
+ itemHookHandler.On("handleHooks", mock.Anything, schema.GroupResource{Group: "", Resource: "namespaces"}, ns1, req.ResourceHooks, hookPhasePre).Return(nil)
+ itemHookHandler.On("handleHooks", mock.Anything, schema.GroupResource{Group: "", Resource: "namespaces"}, ns1, req.ResourceHooks, hookPhasePost).Return(nil)
err := rb.backupResource(v1Group, namespacesResource)
require.NoError(t, err)
@@ -581,19 +551,20 @@ func TestBackupResourceOnlyIncludesSpecifiedNamespaces(t *testing.T) {
}
func TestBackupResourceListAllNamespacesExcludesCorrectly(t *testing.T) {
- backup := &v1.Backup{
- Spec: v1.BackupSpec{
- LabelSelector: &metav1.LabelSelector{
- MatchLabels: map[string]string{
- "foo": "bar",
+ req := &Request{
+ Backup: &v1.Backup{
+ Spec: v1.BackupSpec{
+ LabelSelector: &metav1.LabelSelector{
+ MatchLabels: map[string]string{
+ "foo": "bar",
+ },
},
},
},
+ NamespaceIncludesExcludes: collections.NewIncludesExcludes().Excludes("ns-1"),
+ ResourceIncludesExcludes: collections.NewIncludesExcludes().Includes("*"),
}
- namespaces := collections.NewIncludesExcludes().Excludes("ns-1")
- resources := collections.NewIncludesExcludes().Includes("*")
-
backedUpItems := map[itemKey]struct{}{}
dynamicFactory := &arktest.FakeDynamicFactory{}
@@ -603,10 +574,6 @@ func TestBackupResourceListAllNamespacesExcludesCorrectly(t *testing.T) {
cohabitatingResources := map[string]*cohabitatingResource{}
- actions := []resolvedAction{}
-
- resourceHooks := []resourceHook{}
-
podCommandExecutor := &arktest.MockPodCommandExecutor{}
defer podCommandExecutor.AssertExpectations(t)
@@ -614,20 +581,16 @@ func TestBackupResourceListAllNamespacesExcludesCorrectly(t *testing.T) {
rb := (&defaultResourceBackupperFactory{}).newResourceBackupper(
arktest.NewLogger(),
- backup,
- namespaces,
- resources,
+ req,
dynamicFactory,
discoveryHelper,
backedUpItems,
cohabitatingResources,
- actions,
podCommandExecutor,
tarWriter,
- resourceHooks,
- nil, // snapshot service
nil, // restic backupper
newPVCSnapshotTracker(),
+ nil,
).(*defaultResourceBackupper)
itemBackupperFactory := &mockItemBackupperFactory{}
@@ -641,14 +604,10 @@ func TestBackupResourceListAllNamespacesExcludesCorrectly(t *testing.T) {
defer itemBackupper.AssertExpectations(t)
itemBackupperFactory.On("newItemBackupper",
- backup,
- namespaces,
- resources,
+ req,
backedUpItems,
- actions,
podCommandExecutor,
tarWriter,
- resourceHooks,
dynamicFactory,
discoveryHelper,
mock.Anything,
@@ -667,7 +626,7 @@ func TestBackupResourceListAllNamespacesExcludesCorrectly(t *testing.T) {
list := &unstructured.UnstructuredList{
Items: []unstructured.Unstructured{*ns1, *ns2},
}
- client.On("List", metav1.ListOptions{LabelSelector: metav1.FormatLabelSelector(backup.Spec.LabelSelector)}).Return(list, nil)
+ client.On("List", metav1.ListOptions{LabelSelector: metav1.FormatLabelSelector(req.Backup.Spec.LabelSelector)}).Return(list, nil)
itemBackupper.On("backupItem", mock.AnythingOfType("*logrus.Entry"), ns2, kuberesource.Namespaces).Return(nil)
@@ -680,33 +639,26 @@ type mockItemBackupperFactory struct {
}
func (ibf *mockItemBackupperFactory) newItemBackupper(
- backup *v1.Backup,
- namespaces, resources *collections.IncludesExcludes,
+ backup *Request,
backedUpItems map[itemKey]struct{},
- actions []resolvedAction,
podCommandExecutor podexec.PodCommandExecutor,
tarWriter tarWriter,
- resourceHooks []resourceHook,
dynamicFactory client.DynamicFactory,
discoveryHelper discovery.Helper,
- blockStore cloudprovider.BlockStore,
resticBackupper restic.Backupper,
resticSnapshotTracker *pvcSnapshotTracker,
+ blockStoreGetter BlockStoreGetter,
) ItemBackupper {
args := ibf.Called(
backup,
- namespaces,
- resources,
backedUpItems,
- actions,
podCommandExecutor,
tarWriter,
- resourceHooks,
dynamicFactory,
discoveryHelper,
- blockStore,
resticBackupper,
resticSnapshotTracker,
+ blockStoreGetter,
)
return args.Get(0).(ItemBackupper)
}
diff --git a/pkg/cloudprovider/azure/block_store.go b/pkg/cloudprovider/azure/block_store.go
index 8112c15831..07ee822793 100644
--- a/pkg/cloudprovider/azure/block_store.go
+++ b/pkg/cloudprovider/azure/block_store.go
@@ -40,18 +40,21 @@ import (
const (
resourceGroupEnvVar = "AZURE_RESOURCE_GROUP"
+
apiTimeoutConfigKey = "apiTimeout"
- snapshotsResource = "snapshots"
- disksResource = "disks"
+
+ snapshotsResource = "snapshots"
+ disksResource = "disks"
)
type blockStore struct {
- log logrus.FieldLogger
- disks *disk.DisksClient
- snaps *disk.SnapshotsClient
- subscription string
- resourceGroup string
- apiTimeout time.Duration
+ log logrus.FieldLogger
+ disks *disk.DisksClient
+ snaps *disk.SnapshotsClient
+ subscription string
+ disksResourceGroup string
+ snapsResourceGroup string
+ apiTimeout time.Duration
}
type snapshotIdentifier struct {
@@ -106,7 +109,16 @@ func (b *blockStore) Init(config map[string]string) error {
b.disks = &disksClient
b.snaps = &snapsClient
b.subscription = envVars[subscriptionIDEnvVar]
- b.resourceGroup = envVars[resourceGroupEnvVar]
+ b.disksResourceGroup = envVars[resourceGroupEnvVar]
+ b.snapsResourceGroup = config[resourceGroupConfigKey]
+
+ // if no resource group was explicitly specified in 'config',
+ // use the value from the env var (i.e. the same one as where
+ // the cluster & disks are)
+ if b.snapsResourceGroup == "" {
+ b.snapsResourceGroup = envVars[resourceGroupEnvVar]
+ }
+
b.apiTimeout = apiTimeout
return nil
@@ -142,7 +154,7 @@ func (b *blockStore) CreateVolumeFromSnapshot(snapshotID, volumeType, volumeAZ s
ctx, cancel := context.WithTimeout(context.Background(), b.apiTimeout)
defer cancel()
- _, errChan := b.disks.CreateOrUpdate(b.resourceGroup, *disk.Name, disk, ctx.Done())
+ _, errChan := b.disks.CreateOrUpdate(b.disksResourceGroup, *disk.Name, disk, ctx.Done())
err = <-errChan
@@ -153,7 +165,7 @@ func (b *blockStore) CreateVolumeFromSnapshot(snapshotID, volumeType, volumeAZ s
}
func (b *blockStore) GetVolumeInfo(volumeID, volumeAZ string) (string, *int64, error) {
- res, err := b.disks.Get(b.resourceGroup, volumeID)
+ res, err := b.disks.Get(b.disksResourceGroup, volumeID)
if err != nil {
return "", nil, errors.WithStack(err)
}
@@ -163,12 +175,12 @@ func (b *blockStore) GetVolumeInfo(volumeID, volumeAZ string) (string, *int64, e
func (b *blockStore) CreateSnapshot(volumeID, volumeAZ string, tags map[string]string) (string, error) {
// Lookup disk info for its Location
- diskInfo, err := b.disks.Get(b.resourceGroup, volumeID)
+ diskInfo, err := b.disks.Get(b.disksResourceGroup, volumeID)
if err != nil {
return "", errors.WithStack(err)
}
- fullDiskName := getComputeResourceName(b.subscription, b.resourceGroup, disksResource, volumeID)
+ fullDiskName := getComputeResourceName(b.subscription, b.disksResourceGroup, disksResource, volumeID)
// snapshot names must be <= 80 characters long
var snapshotName string
suffix := "-" + uuid.NewV4().String()
@@ -194,14 +206,14 @@ func (b *blockStore) CreateSnapshot(volumeID, volumeAZ string, tags map[string]s
ctx, cancel := context.WithTimeout(context.Background(), b.apiTimeout)
defer cancel()
- _, errChan := b.snaps.CreateOrUpdate(b.resourceGroup, *snap.Name, snap, ctx.Done())
+ _, errChan := b.snaps.CreateOrUpdate(b.snapsResourceGroup, *snap.Name, snap, ctx.Done())
err = <-errChan
if err != nil {
return "", errors.WithStack(err)
}
- return getComputeResourceName(b.subscription, b.resourceGroup, snapshotsResource, snapshotName), nil
+ return getComputeResourceName(b.subscription, b.snapsResourceGroup, snapshotsResource, snapshotName), nil
}
func getSnapshotTags(arkTags map[string]string, diskTags *map[string]*string) *map[string]*string {
@@ -279,8 +291,11 @@ func (b *blockStore) parseSnapshotName(name string) (*snapshotIdentifier, error)
// legacy format - name only (not fully-qualified)
case !strings.Contains(name, "/"):
return &snapshotIdentifier{
- subscription: b.subscription,
- resourceGroup: b.resourceGroup,
+ subscription: b.subscription,
+ // use the disksResourceGroup here because Ark only
+ // supported storing snapshots in that resource group
+ // when the legacy snapshot format was used.
+ resourceGroup: b.disksResourceGroup,
name: name,
}, nil
// current format - fully qualified
@@ -341,7 +356,7 @@ func (b *blockStore) SetVolumeID(pv runtime.Unstructured, volumeID string) (runt
}
azure["diskName"] = volumeID
- azure["diskURI"] = getComputeResourceName(b.subscription, b.resourceGroup, disksResource, volumeID)
+ azure["diskURI"] = getComputeResourceName(b.subscription, b.disksResourceGroup, disksResource, volumeID)
return pv, nil
}
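The resource-group fallback introduced above is small enough to isolate. A standalone sketch of the same decision, using the env-var and config-key names from this diff:

```go
package main

import (
	"fmt"
	"os"
)

// pickSnapshotsResourceGroup mirrors the fallback in (*blockStore).Init:
// prefer the resourceGroup key from the VolumeSnapshotLocation config, and
// fall back to AZURE_RESOURCE_GROUP (where the cluster and disks live).
func pickSnapshotsResourceGroup(config map[string]string) string {
	if rg := config["resourceGroup"]; rg != "" {
		return rg
	}
	return os.Getenv("AZURE_RESOURCE_GROUP")
}

func main() {
	os.Setenv("AZURE_RESOURCE_GROUP", "cluster-rg")
	fmt.Println(pickSnapshotsResourceGroup(nil))                                           // cluster-rg
	fmt.Println(pickSnapshotsResourceGroup(map[string]string{"resourceGroup": "snap-rg"})) // snap-rg
}
```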
diff --git a/pkg/cloudprovider/azure/block_store_test.go b/pkg/cloudprovider/azure/block_store_test.go
index 42b25c73e4..85e2849143 100644
--- a/pkg/cloudprovider/azure/block_store_test.go
+++ b/pkg/cloudprovider/azure/block_store_test.go
@@ -56,8 +56,8 @@ func TestGetVolumeID(t *testing.T) {
func TestSetVolumeID(t *testing.T) {
b := &blockStore{
- resourceGroup: "rg",
- subscription: "sub",
+ disksResourceGroup: "rg",
+ subscription: "sub",
}
pv := &unstructured.Unstructured{
@@ -99,8 +99,8 @@ func TestSetVolumeID(t *testing.T) {
// format
func TestParseSnapshotName(t *testing.T) {
b := &blockStore{
- subscription: "default-sub",
- resourceGroup: "default-rg",
+ subscription: "default-sub",
+ disksResourceGroup: "default-rg-legacy",
}
// invalid name
@@ -123,7 +123,7 @@ func TestParseSnapshotName(t *testing.T) {
snap, err = b.parseSnapshotName(fullName)
require.NoError(t, err)
assert.Equal(t, b.subscription, snap.subscription)
- assert.Equal(t, b.resourceGroup, snap.resourceGroup)
+ assert.Equal(t, b.disksResourceGroup, snap.resourceGroup)
assert.Equal(t, fullName, snap.name)
}
diff --git a/pkg/cloudprovider/azure/common.go b/pkg/cloudprovider/azure/common.go
index 9e445df498..54dc9ae4f5 100644
--- a/pkg/cloudprovider/azure/common.go
+++ b/pkg/cloudprovider/azure/common.go
@@ -29,6 +29,8 @@ const (
subscriptionIDEnvVar = "AZURE_SUBSCRIPTION_ID"
clientIDEnvVar = "AZURE_CLIENT_ID"
clientSecretEnvVar = "AZURE_CLIENT_SECRET"
+
+ resourceGroupConfigKey = "resourceGroup"
)
// GetResticEnvVars gets the environment variables that restic
diff --git a/pkg/cloudprovider/azure/object_store.go b/pkg/cloudprovider/azure/object_store.go
index 3ffacc28fb..6fe4262e0a 100644
--- a/pkg/cloudprovider/azure/object_store.go
+++ b/pkg/cloudprovider/azure/object_store.go
@@ -33,7 +33,6 @@ import (
)
const (
- resourceGroupConfigKey = "resourceGroup"
storageAccountConfigKey = "storageAccount"
)
diff --git a/pkg/cmd/ark/ark.go b/pkg/cmd/ark/ark.go
index ddf4532dbd..c84526893a 100644
--- a/pkg/cmd/ark/ark.go
+++ b/pkg/cmd/ark/ark.go
@@ -35,6 +35,7 @@ import (
"github.com/heptio/ark/pkg/cmd/cli/restic"
"github.com/heptio/ark/pkg/cmd/cli/restore"
"github.com/heptio/ark/pkg/cmd/cli/schedule"
+ "github.com/heptio/ark/pkg/cmd/cli/snapshotlocation"
"github.com/heptio/ark/pkg/cmd/server"
runplugin "github.com/heptio/ark/pkg/cmd/server/plugin"
"github.com/heptio/ark/pkg/cmd/version"
@@ -73,6 +74,7 @@ operations can also be performed as 'ark backup get' and 'ark schedule create'.`
restic.NewCommand(f),
bug.NewCommand(),
backuplocation.NewCommand(f),
+ snapshotlocation.NewCommand(f),
)
// add the glog flags
diff --git a/pkg/cmd/cli/backup/create.go b/pkg/cmd/cli/backup/create.go
index 837a87db7c..2f32f81386 100644
--- a/pkg/cmd/cli/backup/create.go
+++ b/pkg/cmd/cli/backup/create.go
@@ -70,6 +70,7 @@ type CreateOptions struct {
IncludeClusterResources flag.OptionalBool
Wait bool
StorageLocation string
+ SnapshotLocations []string
client arkclient.Interface
}
@@ -92,6 +93,7 @@ func (o *CreateOptions) BindFlags(flags *pflag.FlagSet) {
flags.Var(&o.ExcludeResources, "exclude-resources", "resources to exclude from the backup, formatted as resource.group, such as storageclasses.storage.k8s.io")
flags.Var(&o.Labels, "labels", "labels to apply to the backup")
flags.StringVar(&o.StorageLocation, "storage-location", "", "location in which to store the backup")
+ flags.StringSliceVar(&o.SnapshotLocations, "volume-snapshot-locations", o.SnapshotLocations, "list of locations (at most one per provider) where volume snapshots should be stored")
flags.VarP(&o.Selector, "selector", "l", "only back up resources matching this label selector")
f := flags.VarPF(&o.SnapshotVolumes, "snapshot-volumes", "", "take snapshots of PersistentVolumes as part of the backup")
// this allows the user to just specify "--snapshot-volumes" as shorthand for "--snapshot-volumes=true"
@@ -119,6 +121,12 @@ func (o *CreateOptions) Validate(c *cobra.Command, args []string, f client.Facto
}
}
+ for _, loc := range o.SnapshotLocations {
+ if _, err := o.client.ArkV1().VolumeSnapshotLocations(f.Namespace()).Get(loc, metav1.GetOptions{}); err != nil {
+ return err
+ }
+ }
+
return nil
}
@@ -133,7 +141,6 @@ func (o *CreateOptions) Complete(args []string, f client.Factory) error {
}
func (o *CreateOptions) Run(c *cobra.Command, f client.Factory) error {
-
backup := &api.Backup{
ObjectMeta: metav1.ObjectMeta{
Namespace: f.Namespace(),
@@ -150,6 +157,7 @@ func (o *CreateOptions) Run(c *cobra.Command, f client.Factory) error {
TTL: metav1.Duration{Duration: o.TTL},
IncludeClusterResources: o.IncludeClusterResources.Value,
StorageLocation: o.StorageLocation,
+ VolumeSnapshotLocations: o.SnapshotLocations,
},
}
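The new `--volume-snapshot-locations` flag is registered with pflag's `StringSliceVar`, so a single comma-separated value expands into one entry per provider. A quick demonstration (the location names are examples only):

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	var locations []string
	fs := pflag.NewFlagSet("example", pflag.ContinueOnError)
	fs.StringSliceVar(&locations, "volume-snapshot-locations", nil,
		"list of locations (at most one per provider) where volume snapshots should be stored")

	// pflag splits the comma-separated value into slice elements.
	_ = fs.Parse([]string{"--volume-snapshot-locations", "aws-primary,gcp-primary"})
	fmt.Println(locations) // [aws-primary gcp-primary]
}
```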
diff --git a/pkg/cmd/cli/get/get.go b/pkg/cmd/cli/get/get.go
index 27b3eee556..1bfc7c0f5e 100644
--- a/pkg/cmd/cli/get/get.go
+++ b/pkg/cmd/cli/get/get.go
@@ -24,6 +24,7 @@ import (
"github.com/heptio/ark/pkg/cmd/cli/backuplocation"
"github.com/heptio/ark/pkg/cmd/cli/restore"
"github.com/heptio/ark/pkg/cmd/cli/schedule"
+ "github.com/heptio/ark/pkg/cmd/cli/snapshotlocation"
)
func NewCommand(f client.Factory) *cobra.Command {
@@ -45,11 +46,15 @@ func NewCommand(f client.Factory) *cobra.Command {
backupLocationCommand := backuplocation.NewGetCommand(f, "backup-locations")
backupLocationCommand.Aliases = []string{"backup-location"}
+ snapshotLocationCommand := snapshotlocation.NewGetCommand(f, "snapshot-locations")
+ snapshotLocationCommand.Aliases = []string{"snapshot-location"}
+
c.AddCommand(
backupCommand,
scheduleCommand,
restoreCommand,
backupLocationCommand,
+ snapshotLocationCommand,
)
return c
diff --git a/pkg/cmd/cli/schedule/create.go b/pkg/cmd/cli/schedule/create.go
index be0c92e929..58d137bdd4 100644
--- a/pkg/cmd/cli/schedule/create.go
+++ b/pkg/cmd/cli/schedule/create.go
@@ -117,6 +117,7 @@ func (o *CreateOptions) Run(c *cobra.Command, f client.Factory) error {
SnapshotVolumes: o.BackupOptions.SnapshotVolumes.Value,
TTL: metav1.Duration{Duration: o.BackupOptions.TTL},
StorageLocation: o.BackupOptions.StorageLocation,
+ VolumeSnapshotLocations: o.BackupOptions.SnapshotLocations,
},
Schedule: o.Schedule,
},
diff --git a/pkg/cmd/cli/snapshotlocation/create.go b/pkg/cmd/cli/snapshotlocation/create.go
new file mode 100644
index 0000000000..24a730f47b
--- /dev/null
+++ b/pkg/cmd/cli/snapshotlocation/create.go
@@ -0,0 +1,120 @@
+/*
+Copyright 2018 the Heptio Ark contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package snapshotlocation
+
+import (
+ "fmt"
+
+ "github.com/pkg/errors"
+ "github.com/spf13/cobra"
+ "github.com/spf13/pflag"
+
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+
+ api "github.com/heptio/ark/pkg/apis/ark/v1"
+ "github.com/heptio/ark/pkg/client"
+ "github.com/heptio/ark/pkg/cmd"
+ "github.com/heptio/ark/pkg/cmd/util/flag"
+ "github.com/heptio/ark/pkg/cmd/util/output"
+)
+
+func NewCreateCommand(f client.Factory, use string) *cobra.Command {
+ o := NewCreateOptions()
+
+ c := &cobra.Command{
+ Use: use + " NAME",
+ Short: "Create a volume snapshot location",
+ Args: cobra.ExactArgs(1),
+ Run: func(c *cobra.Command, args []string) {
+ cmd.CheckError(o.Complete(args, f))
+ cmd.CheckError(o.Validate(c, args, f))
+ cmd.CheckError(o.Run(c, f))
+ },
+ }
+
+ o.BindFlags(c.Flags())
+ output.BindFlags(c.Flags())
+ output.ClearOutputFlagDefault(c)
+
+ return c
+}
+
+type CreateOptions struct {
+ Name string
+ Provider string
+ Config flag.Map
+ Labels flag.Map
+}
+
+func NewCreateOptions() *CreateOptions {
+ return &CreateOptions{
+ Config: flag.NewMap(),
+ }
+}
+
+func (o *CreateOptions) BindFlags(flags *pflag.FlagSet) {
+ flags.StringVar(&o.Provider, "provider", o.Provider, "name of the volume snapshot provider (e.g. aws, azure, gcp)")
+ flags.Var(&o.Config, "config", "configuration key-value pairs")
+ flags.Var(&o.Labels, "labels", "labels to apply to the volume snapshot location")
+}
+
+func (o *CreateOptions) Validate(c *cobra.Command, args []string, f client.Factory) error {
+ if err := output.ValidateFlags(c); err != nil {
+ return err
+ }
+
+ if o.Provider == "" {
+ return errors.New("--provider is required")
+ }
+
+ return nil
+}
+
+func (o *CreateOptions) Complete(args []string, f client.Factory) error {
+ o.Name = args[0]
+ return nil
+}
+
+func (o *CreateOptions) Run(c *cobra.Command, f client.Factory) error {
+ volumeSnapshotLocation := &api.VolumeSnapshotLocation{
+ ObjectMeta: metav1.ObjectMeta{
+ Namespace: f.Namespace(),
+ Name: o.Name,
+ Labels: o.Labels.Data(),
+ },
+ Spec: api.VolumeSnapshotLocationSpec{
+ Provider: o.Provider,
+ Config: o.Config.Data(),
+ },
+ }
+
+ if printed, err := output.PrintWithFormat(c, volumeSnapshotLocation); printed || err != nil {
+ return err
+ }
+
+ client, err := f.Client()
+ if err != nil {
+ return err
+ }
+
+ if _, err := client.ArkV1().VolumeSnapshotLocations(volumeSnapshotLocation.Namespace).Create(volumeSnapshotLocation); err != nil {
+ return errors.WithStack(err)
+ }
+
+ fmt.Printf("Snapshot volume location %q configured successfully.\n", volumeSnapshotLocation.Name)
+ return nil
+}
diff --git a/pkg/cmd/cli/snapshotlocation/get.go b/pkg/cmd/cli/snapshotlocation/get.go
new file mode 100644
index 0000000000..7b4108761e
--- /dev/null
+++ b/pkg/cmd/cli/snapshotlocation/get.go
@@ -0,0 +1,57 @@
+/*
+Copyright 2018 the Heptio Ark contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package snapshotlocation
+
+import (
+ api "github.com/heptio/ark/pkg/apis/ark/v1"
+ "github.com/heptio/ark/pkg/client"
+ "github.com/heptio/ark/pkg/cmd"
+ "github.com/heptio/ark/pkg/cmd/util/output"
+ "github.com/spf13/cobra"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func NewGetCommand(f client.Factory, use string) *cobra.Command {
+ var listOptions metav1.ListOptions
+ c := &cobra.Command{
+ Use: use,
+ Short: "Get snapshot locations",
+ Run: func(c *cobra.Command, args []string) {
+ err := output.ValidateFlags(c)
+ cmd.CheckError(err)
+ arkClient, err := f.Client()
+ cmd.CheckError(err)
+ var locations *api.VolumeSnapshotLocationList
+ if len(args) > 0 {
+ locations = new(api.VolumeSnapshotLocationList)
+ for _, name := range args {
+				location, err := arkClient.ArkV1().VolumeSnapshotLocations(f.Namespace()).Get(name, metav1.GetOptions{})
+ cmd.CheckError(err)
+ locations.Items = append(locations.Items, *location)
+ }
+ } else {
+ locations, err = arkClient.ArkV1().VolumeSnapshotLocations(f.Namespace()).List(listOptions)
+ cmd.CheckError(err)
+ }
+ _, err = output.PrintWithFormat(c, locations)
+ cmd.CheckError(err)
+ },
+ }
+ c.Flags().StringVarP(&listOptions.LabelSelector, "selector", "l", listOptions.LabelSelector, "only show items matching this label selector")
+ output.BindFlags(c.Flags())
+ return c
+}
diff --git a/pkg/cmd/cli/snapshotlocation/snapshot_location.go b/pkg/cmd/cli/snapshotlocation/snapshot_location.go
new file mode 100644
index 0000000000..e7d7d05b9d
--- /dev/null
+++ b/pkg/cmd/cli/snapshotlocation/snapshot_location.go
@@ -0,0 +1,38 @@
+/*
+Copyright 2018 the Heptio Ark contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package snapshotlocation
+
+import (
+ "github.com/spf13/cobra"
+
+ "github.com/heptio/ark/pkg/client"
+)
+
+func NewCommand(f client.Factory) *cobra.Command {
+ c := &cobra.Command{
+ Use: "snapshot-location",
+ Short: "Work with snapshot locations",
+ Long: "Work with snapshot locations",
+ }
+
+ c.AddCommand(
+ NewCreateCommand(f, "create"),
+ NewGetCommand(f, "get"),
+ )
+
+ return c
+}
diff --git a/pkg/cmd/server/server.go b/pkg/cmd/server/server.go
index 4764dbf79b..b886d5c4b4 100644
--- a/pkg/cmd/server/server.go
+++ b/pkg/cmd/server/server.go
@@ -53,8 +53,8 @@ import (
"github.com/heptio/ark/pkg/backup"
"github.com/heptio/ark/pkg/buildinfo"
"github.com/heptio/ark/pkg/client"
- "github.com/heptio/ark/pkg/cloudprovider"
"github.com/heptio/ark/pkg/cmd"
+ "github.com/heptio/ark/pkg/cmd/util/flag"
"github.com/heptio/ark/pkg/cmd/util/signals"
"github.com/heptio/ark/pkg/controller"
arkdiscovery "github.com/heptio/ark/pkg/discovery"
@@ -80,19 +80,22 @@ type serverConfig struct {
pluginDir, metricsAddress, defaultBackupLocation string
backupSyncPeriod, podVolumeOperationTimeout time.Duration
restoreResourcePriorities []string
+ defaultVolumeSnapshotLocations map[string]string
restoreOnly bool
}
func NewCommand() *cobra.Command {
var (
- logLevelFlag = logging.LogLevelFlag(logrus.InfoLevel)
- config = serverConfig{
- pluginDir: "/plugins",
- metricsAddress: defaultMetricsAddress,
- defaultBackupLocation: "default",
- backupSyncPeriod: defaultBackupSyncPeriod,
- podVolumeOperationTimeout: defaultPodVolumeOperationTimeout,
- restoreResourcePriorities: defaultRestorePriorities,
+ volumeSnapshotLocations = flag.NewMap().WithKeyValueDelimiter(":")
+ logLevelFlag = logging.LogLevelFlag(logrus.InfoLevel)
+ config = serverConfig{
+ pluginDir: "/plugins",
+ metricsAddress: defaultMetricsAddress,
+ defaultBackupLocation: "default",
+ defaultVolumeSnapshotLocations: make(map[string]string),
+ backupSyncPeriod: defaultBackupSyncPeriod,
+ podVolumeOperationTimeout: defaultPodVolumeOperationTimeout,
+ restoreResourcePriorities: defaultRestorePriorities,
}
)
@@ -128,6 +131,10 @@ func NewCommand() *cobra.Command {
}
namespace := getServerNamespace(namespaceFlag)
+ if volumeSnapshotLocations.Data() != nil {
+ config.defaultVolumeSnapshotLocations = volumeSnapshotLocations.Data()
+ }
+
s, err := newServer(namespace, fmt.Sprintf("%s-%s", c.Parent().Name(), c.Name()), config, logger)
cmd.CheckError(err)
@@ -143,6 +150,7 @@ func NewCommand() *cobra.Command {
command.Flags().BoolVar(&config.restoreOnly, "restore-only", config.restoreOnly, "run in a mode where only restores are allowed; backups, schedules, and garbage-collection are all disabled")
command.Flags().StringSliceVar(&config.restoreResourcePriorities, "restore-resource-priorities", config.restoreResourcePriorities, "desired order of resource restores; any resource not in the list will be restored alphabetically after the prioritized resources")
command.Flags().StringVar(&config.defaultBackupLocation, "default-backup-storage-location", config.defaultBackupLocation, "name of the default backup storage location")
+ command.Flags().Var(&volumeSnapshotLocations, "default-volume-snapshot-locations", "list of unique volume providers and default volume snapshot location (provider1:location-01,provider2:location-02,...)")
return command
}
@@ -167,7 +175,6 @@ type server struct {
kubeClientConfig *rest.Config
kubeClient kubernetes.Interface
arkClient clientset.Interface
- blockStore cloudprovider.BlockStore
discoveryClient discovery.DiscoveryInterface
discoveryHelper arkdiscovery.Helper
dynamicClient dynamic.Interface
@@ -277,28 +284,60 @@ func (s *server) run() error {
Warnf("Default backup storage location %q not found; backups must explicitly specify a location", s.config.defaultBackupLocation)
}
- if config.PersistentVolumeProvider == nil {
- s.logger.Info("PersistentVolumeProvider config not provided, volume snapshots and restores are disabled")
- } else {
- s.logger.Info("Configuring cloud provider for snapshot service")
- blockStore, err := getBlockStore(*config.PersistentVolumeProvider, s.pluginManager)
- if err != nil {
- return err
- }
- s.blockStore = blockStore
+ defaultVolumeSnapshotLocations, err := getDefaultVolumeSnapshotLocations(s.arkClient, s.namespace, s.config.defaultVolumeSnapshotLocations)
+ if err != nil {
+ return err
}
if err := s.initRestic(); err != nil {
return err
}
- if err := s.runControllers(config); err != nil {
+ if err := s.runControllers(config, defaultVolumeSnapshotLocations); err != nil {
return err
}
return nil
}
+func getDefaultVolumeSnapshotLocations(arkClient clientset.Interface, namespace string, defaultVolumeSnapshotLocations map[string]string) (map[string]*api.VolumeSnapshotLocation, error) {
+ providerDefaults := make(map[string]*api.VolumeSnapshotLocation)
+ if len(defaultVolumeSnapshotLocations) == 0 {
+ return providerDefaults, nil
+ }
+
+ volumeSnapshotLocations, err := arkClient.ArkV1().VolumeSnapshotLocations(namespace).List(metav1.ListOptions{})
+ if err != nil {
+ return providerDefaults, errors.WithStack(err)
+ }
+
+ providerLocations := make(map[string][]*api.VolumeSnapshotLocation)
+ for i, vsl := range volumeSnapshotLocations.Items {
+ locations := providerLocations[vsl.Spec.Provider]
+ providerLocations[vsl.Spec.Provider] = append(locations, &volumeSnapshotLocations.Items[i])
+ }
+
+ for provider, locations := range providerLocations {
+ defaultLocation, ok := defaultVolumeSnapshotLocations[provider]
+ if !ok {
+ return providerDefaults, errors.Errorf("missing provider %s. When using default volume snapshot locations, one must exist for every known provider.", provider)
+ }
+
+ for _, location := range locations {
+ if location.ObjectMeta.Name == defaultLocation {
+ providerDefaults[provider] = location
+ break
+ }
+ }
+
+ if _, ok := providerDefaults[provider]; !ok {
+ return providerDefaults, errors.Errorf("%s is not a valid volume snapshot location for %s", defaultLocation, provider)
+ }
+ }
+
+ return providerDefaults, nil
+}
+
func (s *server) applyConfigDefaults(c *api.Config) {
if s.config.backupSyncPeriod == 0 {
s.config.backupSyncPeriod = defaultBackupSyncPeriod
@@ -507,23 +546,6 @@ func (s *server) watchConfig(config *api.Config) {
})
}
-func getBlockStore(cloudConfig api.CloudProviderConfig, manager plugin.Manager) (cloudprovider.BlockStore, error) {
- if cloudConfig.Name == "" {
- return nil, errors.New("block storage provider name must not be empty")
- }
-
- blockStore, err := manager.GetBlockStore(cloudConfig.Name)
- if err != nil {
- return nil, err
- }
-
- if err := blockStore.Init(cloudConfig.Config); err != nil {
- return nil, err
- }
-
- return blockStore, nil
-}
-
func (s *server) initRestic() error {
// warn if restic daemonset does not exist
if _, err := s.kubeClient.AppsV1().DaemonSets(s.namespace).Get(restic.DaemonSet, metav1.GetOptions{}); apierrors.IsNotFound(err) {
@@ -572,7 +594,7 @@ func (s *server) initRestic() error {
return nil
}
-func (s *server) runControllers(config *api.Config) error {
+func (s *server) runControllers(config *api.Config, defaultVolumeSnapshotLocations map[string]*api.VolumeSnapshotLocation) error {
s.logger.Info("Starting controllers")
ctx := s.ctx
@@ -619,7 +641,6 @@ func (s *server) runControllers(config *api.Config) error {
s.discoveryHelper,
client.NewDynamicFactory(s.dynamicClient),
podexec.NewPodCommandExecutor(s.kubeClientConfig, s.kubeClient.CoreV1().RESTClient()),
- s.blockStore,
s.resticManager,
s.config.podVolumeOperationTimeout,
)
@@ -629,13 +650,14 @@ func (s *server) runControllers(config *api.Config) error {
s.sharedInformerFactory.Ark().V1().Backups(),
s.arkClient.ArkV1(),
backupper,
- s.blockStore != nil,
s.logger,
s.logLevel,
newPluginManager,
backupTracker,
s.sharedInformerFactory.Ark().V1().BackupStorageLocations(),
s.config.defaultBackupLocation,
+ s.sharedInformerFactory.Ark().V1().VolumeSnapshotLocations(),
+ defaultVolumeSnapshotLocations,
s.metrics,
)
wg.Add(1)
@@ -675,13 +697,13 @@ func (s *server) runControllers(config *api.Config) error {
s.sharedInformerFactory.Ark().V1().DeleteBackupRequests(),
s.arkClient.ArkV1(), // deleteBackupRequestClient
s.arkClient.ArkV1(), // backupClient
- s.blockStore,
s.sharedInformerFactory.Ark().V1().Restores(),
s.arkClient.ArkV1(), // restoreClient
backupTracker,
s.resticManager,
s.sharedInformerFactory.Ark().V1().PodVolumeBackups(),
s.sharedInformerFactory.Ark().V1().BackupStorageLocations(),
+ s.sharedInformerFactory.Ark().V1().VolumeSnapshotLocations(),
newPluginManager,
)
wg.Add(1)
@@ -695,9 +717,7 @@ func (s *server) runControllers(config *api.Config) error {
restorer, err := restore.NewKubernetesRestorer(
s.discoveryHelper,
client.NewDynamicFactory(s.dynamicClient),
- s.blockStore,
s.config.restoreResourcePriorities,
- s.arkClient.ArkV1(),
s.kubeClient.CoreV1().Namespaces(),
s.resticManager,
s.config.podVolumeOperationTimeout,
@@ -713,7 +733,8 @@ func (s *server) runControllers(config *api.Config) error {
restorer,
s.sharedInformerFactory.Ark().V1().Backups(),
s.sharedInformerFactory.Ark().V1().BackupStorageLocations(),
- s.blockStore != nil,
+ s.sharedInformerFactory.Ark().V1().VolumeSnapshotLocations(),
+ false,
s.logger,
s.logLevel,
newPluginManager,
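For reference, the `--default-volume-snapshot-locations` value consumed by `getDefaultVolumeSnapshotLocations` above is an Ark `flag.Map` with `:` as the key/value delimiter. Assuming it splits entries on commas, as its help text indicates, the parsing works roughly like this (provider and location names are placeholders):

```go
package main

import (
	"fmt"

	"github.com/heptio/ark/pkg/cmd/util/flag"
)

func main() {
	// Mirrors the server flag: provider:location pairs separated by commas.
	m := flag.NewMap().WithKeyValueDelimiter(":")
	if err := m.Set("aws:aws-east,gcp:gcp-central"); err != nil {
		panic(err)
	}
	fmt.Println(m.Data()) // map[aws:aws-east gcp:gcp-central]
}
```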
diff --git a/pkg/cmd/server/server_test.go b/pkg/cmd/server/server_test.go
index 69478b0f07..0128485430 100644
--- a/pkg/cmd/server/server_test.go
+++ b/pkg/cmd/server/server_test.go
@@ -25,6 +25,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/heptio/ark/pkg/apis/ark/v1"
+ fakeclientset "github.com/heptio/ark/pkg/generated/clientset/versioned/fake"
arktest "github.com/heptio/ark/pkg/util/test"
)
@@ -97,3 +98,59 @@ func TestArkResourcesExist(t *testing.T) {
arkAPIResourceList.APIResources = arkAPIResourceList.APIResources[:3]
assert.Error(t, server.arkResourcesExist())
}
+
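+// TestDefaultVolumeSnapshotLocations exercises getDefaultVolumeSnapshotLocations with
+// missing, invalid, and valid provider/location combinations.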
+func TestDefaultVolumeSnapshotLocations(t *testing.T) {
+ namespace := "heptio-ark"
+ arkClient := fakeclientset.NewSimpleClientset()
+
+ location := &v1.VolumeSnapshotLocation{ObjectMeta: metav1.ObjectMeta{Name: "location1"}, Spec: v1.VolumeSnapshotLocationSpec{Provider: "provider1"}}
+ arkClient.ArkV1().VolumeSnapshotLocations(namespace).Create(location)
+
+ defaultVolumeSnapshotLocations := make(map[string]string)
+
+ // No defaults
+ volumeSnapshotLocations, err := getDefaultVolumeSnapshotLocations(arkClient, namespace, defaultVolumeSnapshotLocations)
+ assert.Equal(t, 0, len(volumeSnapshotLocations))
+ assert.NoError(t, err)
+
+ // Bad location
+ defaultVolumeSnapshotLocations["provider1"] = "badlocation"
+ volumeSnapshotLocations, err = getDefaultVolumeSnapshotLocations(arkClient, namespace, defaultVolumeSnapshotLocations)
+ assert.Equal(t, 0, len(volumeSnapshotLocations))
+ assert.Error(t, err)
+
+ // Bad provider
+ defaultVolumeSnapshotLocations["provider2"] = "badlocation"
+ volumeSnapshotLocations, err = getDefaultVolumeSnapshotLocations(arkClient, namespace, defaultVolumeSnapshotLocations)
+ assert.Equal(t, 0, len(volumeSnapshotLocations))
+ assert.Error(t, err)
+
+ // Good provider, good location
+ delete(defaultVolumeSnapshotLocations, "provider2")
+ defaultVolumeSnapshotLocations["provider1"] = "location1"
+ volumeSnapshotLocations, err = getDefaultVolumeSnapshotLocations(arkClient, namespace, defaultVolumeSnapshotLocations)
+ assert.Equal(t, 1, len(volumeSnapshotLocations))
+ assert.NoError(t, err)
+
+ location2 := &v1.VolumeSnapshotLocation{ObjectMeta: metav1.ObjectMeta{Name: "location2"}, Spec: v1.VolumeSnapshotLocationSpec{Provider: "provider2"}}
+ arkClient.ArkV1().VolumeSnapshotLocations(namespace).Create(location2)
+
+	// Multiple providers/locations: one good, one bad
+ defaultVolumeSnapshotLocations["provider2"] = "badlocation"
+ volumeSnapshotLocations, err = getDefaultVolumeSnapshotLocations(arkClient, namespace, defaultVolumeSnapshotLocations)
+ assert.Error(t, err)
+
+ location21 := &v1.VolumeSnapshotLocation{ObjectMeta: metav1.ObjectMeta{Name: "location2-1"}, Spec: v1.VolumeSnapshotLocationSpec{Provider: "provider2"}}
+ arkClient.ArkV1().VolumeSnapshotLocations(namespace).Create(location21)
+
+ location11 := &v1.VolumeSnapshotLocation{ObjectMeta: metav1.ObjectMeta{Name: "location1-1"}, Spec: v1.VolumeSnapshotLocationSpec{Provider: "provider1"}}
+ arkClient.ArkV1().VolumeSnapshotLocations(namespace).Create(location11)
+
+	// Multiple providers/locations: all good
+ defaultVolumeSnapshotLocations["provider2"] = "location2"
+ volumeSnapshotLocations, err = getDefaultVolumeSnapshotLocations(arkClient, namespace, defaultVolumeSnapshotLocations)
+ assert.Equal(t, 2, len(volumeSnapshotLocations))
+ assert.NoError(t, err)
+	assert.Equal(t, "location1", volumeSnapshotLocations["provider1"].ObjectMeta.Name)
+	assert.Equal(t, "location2", volumeSnapshotLocations["provider2"].ObjectMeta.Name)
+}
diff --git a/pkg/cmd/util/output/output.go b/pkg/cmd/util/output/output.go
index 02c651054a..3d6802e84d 100644
--- a/pkg/cmd/util/output/output.go
+++ b/pkg/cmd/util/output/output.go
@@ -145,6 +145,8 @@ func printTable(cmd *cobra.Command, obj runtime.Object) (bool, error) {
printer.Handler(resticRepoColumns, nil, printResticRepoList)
printer.Handler(backupStorageLocationColumns, nil, printBackupStorageLocation)
printer.Handler(backupStorageLocationColumns, nil, printBackupStorageLocationList)
+ printer.Handler(volumeSnapshotLocationColumns, nil, printVolumeSnapshotLocation)
+ printer.Handler(volumeSnapshotLocationColumns, nil, printVolumeSnapshotLocationList)
err = printer.PrintObj(obj, os.Stdout)
if err != nil {
diff --git a/pkg/cmd/util/output/volume_snapshot_location_printer.go b/pkg/cmd/util/output/volume_snapshot_location_printer.go
new file mode 100644
index 0000000000..29d831d220
--- /dev/null
+++ b/pkg/cmd/util/output/volume_snapshot_location_printer.go
@@ -0,0 +1,65 @@
+/*
+Copyright 2018 the Heptio Ark contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package output
+
+import (
+ "fmt"
+ "io"
+
+ "k8s.io/kubernetes/pkg/printers"
+
+ "github.com/heptio/ark/pkg/apis/ark/v1"
+)
+
+var (
+ volumeSnapshotLocationColumns = []string{"NAME", "PROVIDER"}
+)
+
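+// printVolumeSnapshotLocationList prints one row per VolumeSnapshotLocation in the list.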
+func printVolumeSnapshotLocationList(list *v1.VolumeSnapshotLocationList, w io.Writer, options printers.PrintOptions) error {
+ for i := range list.Items {
+ if err := printVolumeSnapshotLocation(&list.Items[i], w, options); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
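+// printVolumeSnapshotLocation writes a single tab-separated row for a VolumeSnapshotLocation:
+// optional namespace, then name, provider, and any requested label columns.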
+func printVolumeSnapshotLocation(location *v1.VolumeSnapshotLocation, w io.Writer, options printers.PrintOptions) error {
+ name := printers.FormatResourceName(options.Kind, location.Name, options.WithKind)
+
+ if options.WithNamespace {
+ if _, err := fmt.Fprintf(w, "%s\t", location.Namespace); err != nil {
+ return err
+ }
+ }
+
+ if _, err := fmt.Fprintf(
+ w,
+ "%s\t%s",
+ name,
+ location.Spec.Provider,
+ ); err != nil {
+ return err
+ }
+
+ if _, err := fmt.Fprint(w, printers.AppendLabels(location.Labels, options.ColumnLabels)); err != nil {
+ return err
+ }
+
+ _, err := fmt.Fprint(w, printers.AppendAllLabels(options.ShowLabels, location.Labels))
+ return err
+}
diff --git a/pkg/controller/backup_controller.go b/pkg/controller/backup_controller.go
index 4cc5ee23d3..e70f1b5379 100644
--- a/pkg/controller/backup_controller.go
+++ b/pkg/controller/backup_controller.go
@@ -37,7 +37,7 @@ import (
"k8s.io/client-go/tools/cache"
api "github.com/heptio/ark/pkg/apis/ark/v1"
- "github.com/heptio/ark/pkg/backup"
+ pkgbackup "github.com/heptio/ark/pkg/backup"
arkv1client "github.com/heptio/ark/pkg/generated/clientset/versioned/typed/ark/v1"
informers "github.com/heptio/ark/pkg/generated/informers/externalversions/ark/v1"
listers "github.com/heptio/ark/pkg/generated/listers/ark/v1"
@@ -55,46 +55,49 @@ const backupVersion = 1
type backupController struct {
*genericController
- backupper backup.Backupper
- pvProviderExists bool
- lister listers.BackupLister
- client arkv1client.BackupsGetter
- clock clock.Clock
- backupLogLevel logrus.Level
- newPluginManager func(logrus.FieldLogger) plugin.Manager
- backupTracker BackupTracker
- backupLocationLister listers.BackupStorageLocationLister
- defaultBackupLocation string
- metrics *metrics.ServerMetrics
- newBackupStore func(*api.BackupStorageLocation, persistence.ObjectStoreGetter, logrus.FieldLogger) (persistence.BackupStore, error)
+ backupper pkgbackup.Backupper
+ lister listers.BackupLister
+ client arkv1client.BackupsGetter
+ clock clock.Clock
+ backupLogLevel logrus.Level
+ newPluginManager func(logrus.FieldLogger) plugin.Manager
+ backupTracker BackupTracker
+ backupLocationLister listers.BackupStorageLocationLister
+ defaultBackupLocation string
+ snapshotLocationLister listers.VolumeSnapshotLocationLister
+ defaultSnapshotLocations map[string]*api.VolumeSnapshotLocation
+ metrics *metrics.ServerMetrics
+ newBackupStore func(*api.BackupStorageLocation, persistence.ObjectStoreGetter, logrus.FieldLogger) (persistence.BackupStore, error)
}
func NewBackupController(
backupInformer informers.BackupInformer,
client arkv1client.BackupsGetter,
- backupper backup.Backupper,
- pvProviderExists bool,
+ backupper pkgbackup.Backupper,
logger logrus.FieldLogger,
backupLogLevel logrus.Level,
newPluginManager func(logrus.FieldLogger) plugin.Manager,
backupTracker BackupTracker,
backupLocationInformer informers.BackupStorageLocationInformer,
defaultBackupLocation string,
+ volumeSnapshotLocationInformer informers.VolumeSnapshotLocationInformer,
+ defaultSnapshotLocations map[string]*api.VolumeSnapshotLocation,
metrics *metrics.ServerMetrics,
) Interface {
c := &backupController{
- genericController: newGenericController("backup", logger),
- backupper: backupper,
- pvProviderExists: pvProviderExists,
- lister: backupInformer.Lister(),
- client: client,
- clock: &clock.RealClock{},
- backupLogLevel: backupLogLevel,
- newPluginManager: newPluginManager,
- backupTracker: backupTracker,
- backupLocationLister: backupLocationInformer.Lister(),
- defaultBackupLocation: defaultBackupLocation,
- metrics: metrics,
+ genericController: newGenericController("backup", logger),
+ backupper: backupper,
+ lister: backupInformer.Lister(),
+ client: client,
+ clock: &clock.RealClock{},
+ backupLogLevel: backupLogLevel,
+ newPluginManager: newPluginManager,
+ backupTracker: backupTracker,
+ backupLocationLister: backupLocationInformer.Lister(),
+ defaultBackupLocation: defaultBackupLocation,
+ snapshotLocationLister: volumeSnapshotLocationInformer.Lister(),
+ defaultSnapshotLocations: defaultSnapshotLocations,
+ metrics: metrics,
newBackupStore: persistence.NewObjectBackupStore,
}
@@ -103,6 +106,7 @@ func NewBackupController(
c.cacheSyncWaiters = append(c.cacheSyncWaiters,
backupInformer.Informer().HasSynced,
backupLocationInformer.Informer().HasSynced,
+ volumeSnapshotLocationInformer.Informer().HasSynced,
)
backupInformer.Informer().AddEventHandler(
@@ -144,7 +148,7 @@ func (c *backupController) processBackup(key string) error {
}
log.Debug("Getting backup")
- backup, err := c.lister.Backups(ns).Get(name)
+ original, err := c.lister.Backups(ns).Get(name)
if err != nil {
return errors.Wrap(err, "error getting backup")
}
@@ -157,66 +161,53 @@ func (c *backupController) processBackup(key string) error {
// informer sees the update. In the latter case, after the informer has seen the update to
// InProgress, we still need this check so we can return nil to indicate we've finished processing
// this key (even though it was a no-op).
- switch backup.Status.Phase {
+ switch original.Status.Phase {
case "", api.BackupPhaseNew:
// only process new backups
default:
return nil
}
- log.Debug("Cloning backup")
- // store ref to original for creating patch
- original := backup
- // don't modify items in the cache
- backup = backup.DeepCopy()
+ log.Debug("Preparing backup request")
+ request := c.prepareBackupRequest(original)
- // set backup version
- backup.Status.Version = backupVersion
-
- // calculate expiration
- if backup.Spec.TTL.Duration > 0 {
- backup.Status.Expiration = metav1.NewTime(c.clock.Now().Add(backup.Spec.TTL.Duration))
- }
-
- var backupLocation *api.BackupStorageLocation
- // validation
- if backupLocation, backup.Status.ValidationErrors = c.getLocationAndValidate(backup, c.defaultBackupLocation); len(backup.Status.ValidationErrors) > 0 {
- backup.Status.Phase = api.BackupPhaseFailedValidation
+ if len(request.Status.ValidationErrors) > 0 {
+ request.Status.Phase = api.BackupPhaseFailedValidation
} else {
- backup.Status.Phase = api.BackupPhaseInProgress
+ request.Status.Phase = api.BackupPhaseInProgress
}
// update status
- updatedBackup, err := patchBackup(original, backup, c.client)
+ updatedBackup, err := patchBackup(original, request.Backup, c.client)
if err != nil {
- return errors.Wrapf(err, "error updating Backup status to %s", backup.Status.Phase)
+ return errors.Wrapf(err, "error updating Backup status to %s", request.Status.Phase)
}
// store ref to just-updated item for creating patch
original = updatedBackup
- backup = updatedBackup.DeepCopy()
+ request.Backup = updatedBackup.DeepCopy()
- if backup.Status.Phase == api.BackupPhaseFailedValidation {
+ if request.Status.Phase == api.BackupPhaseFailedValidation {
return nil
}
- c.backupTracker.Add(backup.Namespace, backup.Name)
- defer c.backupTracker.Delete(backup.Namespace, backup.Name)
+ c.backupTracker.Add(request.Namespace, request.Name)
+ defer c.backupTracker.Delete(request.Namespace, request.Name)
log.Debug("Running backup")
// execution & upload of backup
- backupScheduleName := backup.GetLabels()["ark-schedule"]
+ backupScheduleName := request.GetLabels()["ark-schedule"]
c.metrics.RegisterBackupAttempt(backupScheduleName)
- if err := c.runBackup(backup, backupLocation); err != nil {
+ if err := c.runBackup(request); err != nil {
log.WithError(err).Error("backup failed")
- backup.Status.Phase = api.BackupPhaseFailed
+ request.Status.Phase = api.BackupPhaseFailed
c.metrics.RegisterBackupFailed(backupScheduleName)
} else {
c.metrics.RegisterBackupSuccess(backupScheduleName)
}
log.Debug("Updating backup's final status")
- if _, err := patchBackup(original, backup, c.client); err != nil {
+ if _, err := patchBackup(original, request.Backup, c.client); err != nil {
log.WithError(err).Error("error updating backup's final status")
}
@@ -247,41 +238,107 @@ func patchBackup(original, updated *api.Backup, client arkv1client.BackupsGetter
return res, nil
}
-func (c *backupController) getLocationAndValidate(itm *api.Backup, defaultBackupLocation string) (*api.BackupStorageLocation, []string) {
- var validationErrors []string
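+// prepareBackupRequest wraps a deep copy of the backup in a pkgbackup.Request, applies
+// defaults (version, expiration, storage location), and records any validation errors
+// on the request's status.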
+func (c *backupController) prepareBackupRequest(backup *api.Backup) *pkgbackup.Request {
+ request := &pkgbackup.Request{
+ Backup: backup.DeepCopy(), // don't modify items in the cache
+ }
- for _, err := range collections.ValidateIncludesExcludes(itm.Spec.IncludedResources, itm.Spec.ExcludedResources) {
- validationErrors = append(validationErrors, fmt.Sprintf("Invalid included/excluded resource lists: %v", err))
+ // set backup version
+ request.Status.Version = backupVersion
+
+ // calculate expiration
+ if request.Spec.TTL.Duration > 0 {
+ request.Status.Expiration = metav1.NewTime(c.clock.Now().Add(request.Spec.TTL.Duration))
}
- for _, err := range collections.ValidateIncludesExcludes(itm.Spec.IncludedNamespaces, itm.Spec.ExcludedNamespaces) {
- validationErrors = append(validationErrors, fmt.Sprintf("Invalid included/excluded namespace lists: %v", err))
+ // default storage location if not specified
+ if request.Spec.StorageLocation == "" {
+ request.Spec.StorageLocation = c.defaultBackupLocation
}
- if !c.pvProviderExists && itm.Spec.SnapshotVolumes != nil && *itm.Spec.SnapshotVolumes {
- validationErrors = append(validationErrors, "Server is not configured for PV snapshots")
+ // add the storage location as a label for easy filtering later.
+ if request.Labels == nil {
+ request.Labels = make(map[string]string)
}
+ request.Labels[api.StorageLocationLabel] = request.Spec.StorageLocation
- if itm.Spec.StorageLocation == "" {
- itm.Spec.StorageLocation = defaultBackupLocation
+ // validate the included/excluded resources and namespaces
+ for _, err := range collections.ValidateIncludesExcludes(request.Spec.IncludedResources, request.Spec.ExcludedResources) {
+ request.Status.ValidationErrors = append(request.Status.ValidationErrors, fmt.Sprintf("Invalid included/excluded resource lists: %v", err))
}
- // add the storage location as a label for easy filtering later.
- if itm.Labels == nil {
- itm.Labels = make(map[string]string)
+ for _, err := range collections.ValidateIncludesExcludes(request.Spec.IncludedNamespaces, request.Spec.ExcludedNamespaces) {
+ request.Status.ValidationErrors = append(request.Status.ValidationErrors, fmt.Sprintf("Invalid included/excluded namespace lists: %v", err))
}
- itm.Labels[api.StorageLocationLabel] = itm.Spec.StorageLocation
- var backupLocation *api.BackupStorageLocation
- backupLocation, err := c.backupLocationLister.BackupStorageLocations(itm.Namespace).Get(itm.Spec.StorageLocation)
- if err != nil {
- validationErrors = append(validationErrors, fmt.Sprintf("Error getting backup storage location: %v", err))
+ // validate the storage location, and store the BackupStorageLocation API obj on the request
+ if storageLocation, err := c.backupLocationLister.BackupStorageLocations(request.Namespace).Get(request.Spec.StorageLocation); err != nil {
+ request.Status.ValidationErrors = append(request.Status.ValidationErrors, fmt.Sprintf("Error getting backup storage location: %v", err))
+ } else {
+ request.StorageLocation = storageLocation
}
- return backupLocation, validationErrors
+ // validate and get the backup's VolumeSnapshotLocations, and store the
+ // VolumeSnapshotLocation API objs on the request
+ if locs, errs := c.validateAndGetSnapshotLocations(request.Backup); len(errs) > 0 {
+ request.Status.ValidationErrors = append(request.Status.ValidationErrors, errs...)
+ } else {
+ request.Spec.VolumeSnapshotLocations = nil
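+			// initialize at most one block store per snapshot location, reusing it
+			// for subsequent snapshots stored in the same location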
+ for _, loc := range locs {
+ request.Spec.VolumeSnapshotLocations = append(request.Spec.VolumeSnapshotLocations, loc.Name)
+ request.SnapshotLocations = append(request.SnapshotLocations, loc)
+ }
+ }
+
+ return request
+}
+
+// validateAndGetSnapshotLocations gets a collection of VolumeSnapshotLocation objects that
+// this backup will use (returned as a map of provider name -> VSL), and ensures:
+// - each location name in .spec.volumeSnapshotLocations exists as a location
+// - exactly 1 location per provider
+// - a given provider's default location name is added to .spec.volumeSnapshotLocations if one
+// is not explicitly specified for the provider
+func (c *backupController) validateAndGetSnapshotLocations(backup *api.Backup) (map[string]*api.VolumeSnapshotLocation, []string) {
+ errors := []string{}
+ providerLocations := make(map[string]*api.VolumeSnapshotLocation)
+
+ for _, locationName := range backup.Spec.VolumeSnapshotLocations {
+ // validate each locationName exists as a VolumeSnapshotLocation
+ location, err := c.snapshotLocationLister.VolumeSnapshotLocations(backup.Namespace).Get(locationName)
+ if err != nil {
+ errors = append(errors, fmt.Sprintf("error getting volume snapshot location named %s: %v", locationName, err))
+ continue
+ }
+
+ // ensure we end up with exactly 1 location *per provider*
+ if providerLocation, ok := providerLocations[location.Spec.Provider]; ok {
+			// a second, different location name for the same provider (e.g. "aws-us-east-1" and "aws-us-west-1") is an error
+ if providerLocation.Name != locationName {
+ errors = append(errors, fmt.Sprintf("more than one VolumeSnapshotLocation name specified for provider %s: %s; unexpected name was %s", location.Spec.Provider, locationName, providerLocation.Name))
+ continue
+ }
+ } else {
+ // keep track of all valid existing locations, per provider
+ providerLocations[location.Spec.Provider] = location
+ }
+ }
+
+ if len(errors) > 0 {
+ return nil, errors
+ }
+
+ for provider, defaultLocation := range c.defaultSnapshotLocations {
+ // if a location name for a given provider does not already exist, add the provider's default
+ if _, ok := providerLocations[provider]; !ok {
+ providerLocations[provider] = defaultLocation
+ }
+ }
+
+ return providerLocations, nil
}
-func (c *backupController) runBackup(backup *api.Backup, backupLocation *api.BackupStorageLocation) error {
+func (c *backupController) runBackup(backup *pkgbackup.Request) error {
log := c.logger.WithField("backup", kubeutil.NamespaceAndName(backup))
log.Info("Starting backup")
backup.Status.StartTimestamp.Time = c.clock.Now()
@@ -318,62 +375,87 @@ func (c *backupController) runBackup(backup *api.Backup, backupLocation *api.Bac
return err
}
- backupStore, err := c.newBackupStore(backupLocation, pluginManager, log)
+ backupStore, err := c.newBackupStore(backup.StorageLocation, pluginManager, log)
if err != nil {
return err
}
var errs []error
- var backupJSONToUpload, backupFileToUpload io.Reader
-
// Do the actual backup
- if err := c.backupper.Backup(log, backup, backupFile, actions); err != nil {
+ if err := c.backupper.Backup(log, backup, backupFile, actions, pluginManager); err != nil {
errs = append(errs, err)
-
backup.Status.Phase = api.BackupPhaseFailed
} else {
backup.Status.Phase = api.BackupPhaseCompleted
}
+ if err := gzippedLogFile.Close(); err != nil {
+ c.logger.WithError(err).Error("error closing gzippedLogFile")
+ }
+
// Mark completion timestamp before serializing and uploading.
// Otherwise, the JSON file in object storage has a CompletionTimestamp of 'null'.
backup.Status.CompletionTimestamp.Time = c.clock.Now()
- backupJSON := new(bytes.Buffer)
- if err := encode.EncodeTo(backup, "json", backupJSON); err != nil {
- errs = append(errs, errors.Wrap(err, "error encoding backup"))
- } else {
- // Only upload the json and backup tarball if encoding to json succeeded.
- backupJSONToUpload = backupJSON
- backupFileToUpload = backupFile
- }
+ errs = append(errs, persistBackup(backup, backupFile, logFile, backupStore, c.logger)...)
+ errs = append(errs, recordBackupMetrics(backup.Backup, backupFile, c.metrics))
+
+ log.Info("Backup completed")
+
+ return kerrors.NewAggregate(errs)
+}
+
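+// recordBackupMetrics sets the backup tarball-size gauge and registers the backup's
+// duration, keyed by its "ark-schedule" label.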
+func recordBackupMetrics(backup *api.Backup, backupFile *os.File, serverMetrics *metrics.ServerMetrics) error {
+ backupScheduleName := backup.GetLabels()["ark-schedule"]
 	var backupSizeBytes int64
+	var err error
-	if backupFileStat, err := backupFile.Stat(); err != nil {
-		errs = append(errs, errors.Wrap(err, "error getting file info"))
+	// use a separate variable for Stat's error so the outer err is not shadowed
+	// and the wrapped error is actually returned to the caller
+	if backupFileStat, statErr := backupFile.Stat(); statErr != nil {
+		err = errors.Wrap(statErr, "error getting file info")
} else {
backupSizeBytes = backupFileStat.Size()
}
+ serverMetrics.SetBackupTarballSizeBytesGauge(backupScheduleName, backupSizeBytes)
- if err := gzippedLogFile.Close(); err != nil {
- c.logger.WithError(err).Error("error closing gzippedLogFile")
- }
+ backupDuration := backup.Status.CompletionTimestamp.Time.Sub(backup.Status.StartTimestamp.Time)
+ backupDurationSeconds := float64(backupDuration / time.Second)
+ serverMetrics.RegisterBackupDuration(backupScheduleName, backupDurationSeconds)
- if err := backupStore.PutBackup(backup.Name, backupJSONToUpload, backupFileToUpload, logFile); err != nil {
- errs = append(errs, err)
+ return err
+}
+
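+// persistBackup uploads the backup's JSON metadata, contents tarball, log file, and
+// gzipped volume snapshot list to the backup store, collecting any errors.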
+func persistBackup(backup *pkgbackup.Request, backupContents, backupLog *os.File, backupStore persistence.BackupStore, log logrus.FieldLogger) []error {
+ errs := []error{}
+ backupJSON := new(bytes.Buffer)
+
+ if err := encode.EncodeTo(backup.Backup, "json", backupJSON); err != nil {
+ errs = append(errs, errors.Wrap(err, "error encoding backup"))
}
- backupScheduleName := backup.GetLabels()["ark-schedule"]
- c.metrics.SetBackupTarballSizeBytesGauge(backupScheduleName, backupSizeBytes)
+ volumeSnapshots := new(bytes.Buffer)
+ gzw := gzip.NewWriter(volumeSnapshots)
+ defer gzw.Close()
- backupDuration := backup.Status.CompletionTimestamp.Time.Sub(backup.Status.StartTimestamp.Time)
- backupDurationSeconds := float64(backupDuration / time.Second)
- c.metrics.RegisterBackupDuration(backupScheduleName, backupDurationSeconds)
+ if err := json.NewEncoder(gzw).Encode(backup.VolumeSnapshots); err != nil {
+ errs = append(errs, errors.Wrap(err, "error encoding list of volume snapshots"))
+ }
+ if err := gzw.Close(); err != nil {
+ errs = append(errs, errors.Wrap(err, "error closing gzip writer"))
+ }
- log.Info("Backup completed")
+ if len(errs) > 0 {
+		// Don't upload anything if encoding the backup metadata or the snapshot list failed.
+ backupJSON = nil
+ backupContents = nil
+ volumeSnapshots = nil
+ }
- return kerrors.NewAggregate(errs)
+ if err := backupStore.PutBackup(backup.Name, backupJSON, backupContents, backupLog, volumeSnapshots); err != nil {
+ errs = append(errs, err)
+ }
+
+ return errs
}
func closeAndRemoveFile(file *os.File, log logrus.FieldLogger) {
diff --git a/pkg/controller/backup_controller_test.go b/pkg/controller/backup_controller_test.go
index 8896cc63d9..be16ce0573 100644
--- a/pkg/controller/backup_controller_test.go
+++ b/pkg/controller/backup_controller_test.go
@@ -18,23 +18,24 @@ package controller
import (
"bytes"
- "encoding/json"
+ "fmt"
"io"
+ "sort"
"strings"
"testing"
"time"
+ "github.com/pkg/errors"
"github.com/sirupsen/logrus"
+ "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/clock"
- core "k8s.io/client-go/testing"
"github.com/heptio/ark/pkg/apis/ark/v1"
- "github.com/heptio/ark/pkg/backup"
+ pkgbackup "github.com/heptio/ark/pkg/backup"
"github.com/heptio/ark/pkg/generated/clientset/versioned/fake"
informers "github.com/heptio/ark/pkg/generated/informers/externalversions"
"github.com/heptio/ark/pkg/metrics"
@@ -42,7 +43,6 @@ import (
persistencemocks "github.com/heptio/ark/pkg/persistence/mocks"
"github.com/heptio/ark/pkg/plugin"
pluginmocks "github.com/heptio/ark/pkg/plugin/mocks"
- "github.com/heptio/ark/pkg/util/collections"
"github.com/heptio/ark/pkg/util/logging"
arktest "github.com/heptio/ark/pkg/util/test"
)
@@ -51,374 +51,433 @@ type fakeBackupper struct {
mock.Mock
}
-func (b *fakeBackupper) Backup(logger logrus.FieldLogger, backup *v1.Backup, backupFile io.Writer, actions []backup.ItemAction) error {
- args := b.Called(logger, backup, backupFile, actions)
+func (b *fakeBackupper) Backup(logger logrus.FieldLogger, backup *pkgbackup.Request, backupFile io.Writer, actions []pkgbackup.ItemAction, blockStoreGetter pkgbackup.BlockStoreGetter) error {
+ args := b.Called(logger, backup, backupFile, actions, blockStoreGetter)
return args.Error(0)
}
-func TestProcessBackup(t *testing.T) {
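+// TestProcessBackupNonProcessedItems verifies that bad keys, missing backups, and
+// backups in in-progress or terminal phases are skipped without further processing.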
+func TestProcessBackupNonProcessedItems(t *testing.T) {
tests := []struct {
- name string
- key string
- expectError bool
- expectedIncludes []string
- expectedExcludes []string
- backup *arktest.TestBackup
- expectBackup bool
- allowSnapshots bool
+ name string
+ key string
+ backup *v1.Backup
+ expectedErr string
}{
{
- name: "bad key",
+ name: "bad key returns error",
key: "bad/key/here",
- expectError: true,
+ expectedErr: "error splitting queue key: unexpected key format: \"bad/key/here\"",
},
{
- name: "lister failed",
- key: "heptio-ark/backup1",
- expectError: true,
+ name: "backup not found in lister returns error",
+ key: "nonexistent/backup",
+ expectedErr: "error getting backup: backup.ark.heptio.com \"backup\" not found",
},
{
- name: "do not process phase FailedValidation",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseFailedValidation),
- expectBackup: false,
+ name: "FailedValidation backup is not processed",
+ key: "heptio-ark/backup-1",
+ backup: arktest.NewTestBackup().WithName("backup-1").WithPhase(v1.BackupPhaseFailedValidation).Backup,
},
{
- name: "do not process phase InProgress",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseInProgress),
- expectBackup: false,
+ name: "InProgress backup is not processed",
+ key: "heptio-ark/backup-1",
+ backup: arktest.NewTestBackup().WithName("backup-1").WithPhase(v1.BackupPhaseInProgress).Backup,
},
{
- name: "do not process phase Completed",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseCompleted),
- expectBackup: false,
+ name: "Completed backup is not processed",
+ key: "heptio-ark/backup-1",
+ backup: arktest.NewTestBackup().WithName("backup-1").WithPhase(v1.BackupPhaseCompleted).Backup,
},
{
- name: "do not process phase Failed",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseFailed),
- expectBackup: false,
- },
- {
- name: "do not process phase other",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase("arg"),
- expectBackup: false,
- },
- {
- name: "invalid included/excluded resources fails validation",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew).WithIncludedResources("foo").WithExcludedResources("foo"),
- expectBackup: false,
- },
- {
- name: "invalid included/excluded namespaces fails validation",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew).WithIncludedNamespaces("foo").WithExcludedNamespaces("foo"),
- expectBackup: false,
- },
- {
- name: "make sure specified included and excluded resources are honored",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew).WithIncludedResources("i", "j").WithExcludedResources("k", "l"),
- expectedIncludes: []string{"i", "j"},
- expectedExcludes: []string{"k", "l"},
- expectBackup: true,
- },
- {
- name: "if includednamespaces are specified, don't default to *",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew).WithIncludedNamespaces("ns-1"),
- expectBackup: true,
+ name: "Failed backup is not processed",
+ key: "heptio-ark/backup-1",
+ backup: arktest.NewTestBackup().WithName("backup-1").WithPhase(v1.BackupPhaseFailed).Backup,
},
+ }
+
+ for _, test := range tests {
+ t.Run(test.name, func(t *testing.T) {
+ var (
+ sharedInformers = informers.NewSharedInformerFactory(fake.NewSimpleClientset(), 0)
+ logger = logging.DefaultLogger(logrus.DebugLevel)
+ )
+
+ c := &backupController{
+ genericController: newGenericController("backup-test", logger),
+ lister: sharedInformers.Ark().V1().Backups().Lister(),
+ }
+
+ if test.backup != nil {
+ require.NoError(t, sharedInformers.Ark().V1().Backups().Informer().GetStore().Add(test.backup))
+ }
+
+ err := c.processBackup(test.key)
+ if test.expectedErr != "" {
+ require.Error(t, err)
+ assert.Equal(t, test.expectedErr, err.Error())
+ } else {
+ assert.Nil(t, err)
+ }
+
+ // Any backup that would actually proceed to validation will cause a segfault because this
+ // test hasn't set up the necessary controller dependencies for validation/etc. So the lack
+			// of segfaults during test execution here implies that backups are not being processed, which
+ // is what we expect.
+ })
+ }
+}
+
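+// TestProcessBackupValidationFailures verifies that backups failing validation are
+// marked FailedValidation with the expected error messages.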
+func TestProcessBackupValidationFailures(t *testing.T) {
+ defaultBackupLocation := arktest.NewTestBackupStorageLocation().WithName("loc-1").BackupStorageLocation
+
+ tests := []struct {
+ name string
+ backup *v1.Backup
+ backupLocation *v1.BackupStorageLocation
+ expectedErrs []string
+ }{
{
- name: "ttl",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew).WithTTL(10 * time.Minute),
- expectBackup: true,
+ name: "invalid included/excluded resources fails validation",
+ backup: arktest.NewTestBackup().WithName("backup-1").WithIncludedResources("foo").WithExcludedResources("foo").Backup,
+ backupLocation: defaultBackupLocation,
+ expectedErrs: []string{"Invalid included/excluded resource lists: excludes list cannot contain an item in the includes list: foo"},
},
{
- name: "backup with SnapshotVolumes when allowSnapshots=false fails validation",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew).WithSnapshotVolumes(true),
- expectBackup: false,
+ name: "invalid included/excluded namespaces fails validation",
+ backup: arktest.NewTestBackup().WithName("backup-1").WithIncludedNamespaces("foo").WithExcludedNamespaces("foo").Backup,
+ backupLocation: defaultBackupLocation,
+ expectedErrs: []string{"Invalid included/excluded namespace lists: excludes list cannot contain an item in the includes list: foo"},
},
{
- name: "backup with SnapshotVolumes when allowSnapshots=true gets executed",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew).WithSnapshotVolumes(true),
- allowSnapshots: true,
- expectBackup: true,
+ name: "non-existent backup location fails validation",
+ backup: arktest.NewTestBackup().WithName("backup-1").WithStorageLocation("nonexistent").Backup,
+ expectedErrs: []string{"Error getting backup storage location: backupstoragelocation.ark.heptio.com \"nonexistent\" not found"},
},
+ }
+
+ for _, test := range tests {
+ t.Run(test.name, func(t *testing.T) {
+ var (
+ clientset = fake.NewSimpleClientset(test.backup)
+ sharedInformers = informers.NewSharedInformerFactory(clientset, 0)
+ logger = logging.DefaultLogger(logrus.DebugLevel)
+ )
+
+ c := &backupController{
+ genericController: newGenericController("backup-test", logger),
+ client: clientset.ArkV1(),
+ lister: sharedInformers.Ark().V1().Backups().Lister(),
+ backupLocationLister: sharedInformers.Ark().V1().BackupStorageLocations().Lister(),
+ defaultBackupLocation: defaultBackupLocation.Name,
+ }
+
+ require.NotNil(t, test.backup)
+ require.NoError(t, sharedInformers.Ark().V1().Backups().Informer().GetStore().Add(test.backup))
+
+ if test.backupLocation != nil {
+ _, err := clientset.ArkV1().BackupStorageLocations(test.backupLocation.Namespace).Create(test.backupLocation)
+ require.NoError(t, err)
+
+ require.NoError(t, sharedInformers.Ark().V1().BackupStorageLocations().Informer().GetStore().Add(test.backupLocation))
+ }
+
+ require.NoError(t, c.processBackup(fmt.Sprintf("%s/%s", test.backup.Namespace, test.backup.Name)))
+
+ res, err := clientset.ArkV1().Backups(test.backup.Namespace).Get(test.backup.Name, metav1.GetOptions{})
+ require.NoError(t, err)
+
+ assert.Equal(t, v1.BackupPhaseFailedValidation, res.Status.Phase)
+ assert.Equal(t, test.expectedErrs, res.Status.ValidationErrors)
+
+ // Any backup that would actually proceed to processing will cause a segfault because this
+ // test hasn't set up the necessary controller dependencies for running backups. So the lack
+			// of segfaults during test execution here implies that backups are not being processed, which
+ // is what we expect.
+ })
+ }
+}
+
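+// TestProcessBackupCompletions runs new backups through processBackup with mocked
+// dependencies and asserts the final patched Backup object matches expectations.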
+func TestProcessBackupCompletions(t *testing.T) {
+ defaultBackupLocation := arktest.NewTestBackupStorageLocation().WithName("loc-1").BackupStorageLocation
+
+ now, err := time.Parse(time.RFC1123Z, time.RFC1123Z)
+ require.NoError(t, err)
+ now = now.Local()
+
+ tests := []struct {
+ name string
+ backup *v1.Backup
+ backupLocation *v1.BackupStorageLocation
+ expectedResult *v1.Backup
+ }{
{
- name: "Backup without a location will have it set to the default",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew),
- expectBackup: true,
+ name: "backup with no backup location gets the default",
+ backup: arktest.NewTestBackup().WithName("backup-1").Backup,
+ backupLocation: defaultBackupLocation,
+ expectedResult: &v1.Backup{
+ ObjectMeta: metav1.ObjectMeta{
+ Namespace: v1.DefaultNamespace,
+ Name: "backup-1",
+ Labels: map[string]string{
+ "ark.heptio.com/storage-location": "loc-1",
+ },
+ },
+ Spec: v1.BackupSpec{
+ StorageLocation: defaultBackupLocation.Name,
+ },
+ Status: v1.BackupStatus{
+ Phase: v1.BackupPhaseCompleted,
+ Version: 1,
+ StartTimestamp: metav1.NewTime(now),
+ CompletionTimestamp: metav1.NewTime(now),
+ },
+ },
},
{
- name: "Backup with a location completes",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew).WithStorageLocation("loc1"),
- expectBackup: true,
+ name: "backup with a specific backup location keeps it",
+ backup: arktest.NewTestBackup().WithName("backup-1").WithStorageLocation("alt-loc").Backup,
+ backupLocation: arktest.NewTestBackupStorageLocation().WithName("alt-loc").BackupStorageLocation,
+ expectedResult: &v1.Backup{
+ ObjectMeta: metav1.ObjectMeta{
+ Namespace: v1.DefaultNamespace,
+ Name: "backup-1",
+ Labels: map[string]string{
+ "ark.heptio.com/storage-location": "alt-loc",
+ },
+ },
+ Spec: v1.BackupSpec{
+ StorageLocation: "alt-loc",
+ },
+ Status: v1.BackupStatus{
+ Phase: v1.BackupPhaseCompleted,
+ Version: 1,
+ StartTimestamp: metav1.NewTime(now),
+ CompletionTimestamp: metav1.NewTime(now),
+ },
+ },
},
{
- name: "Backup with non-existent location will fail validation",
- key: "heptio-ark/backup1",
- backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew).WithStorageLocation("loc2"),
- expectBackup: false,
+ name: "backup with a TTL has expiration set",
+ backup: arktest.NewTestBackup().WithName("backup-1").WithTTL(10 * time.Minute).Backup,
+ backupLocation: defaultBackupLocation,
+ expectedResult: &v1.Backup{
+ ObjectMeta: metav1.ObjectMeta{
+ Namespace: v1.DefaultNamespace,
+ Name: "backup-1",
+ Labels: map[string]string{
+ "ark.heptio.com/storage-location": "loc-1",
+ },
+ },
+ Spec: v1.BackupSpec{
+ TTL: metav1.Duration{Duration: 10 * time.Minute},
+ StorageLocation: defaultBackupLocation.Name,
+ },
+ Status: v1.BackupStatus{
+ Phase: v1.BackupPhaseCompleted,
+ Version: 1,
+ Expiration: metav1.NewTime(now.Add(10 * time.Minute)),
+ StartTimestamp: metav1.NewTime(now),
+ CompletionTimestamp: metav1.NewTime(now),
+ },
+ },
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
var (
- client = fake.NewSimpleClientset()
- backupper = &fakeBackupper{}
- sharedInformers = informers.NewSharedInformerFactory(client, 0)
+ clientset = fake.NewSimpleClientset(test.backup)
+ sharedInformers = informers.NewSharedInformerFactory(clientset, 0)
logger = logging.DefaultLogger(logrus.DebugLevel)
- clockTime, _ = time.Parse("Mon Jan 2 15:04:05 2006", "Mon Jan 2 15:04:05 2006")
- pluginManager = &pluginmocks.Manager{}
- backupStore = &persistencemocks.BackupStore{}
+ pluginManager = new(pluginmocks.Manager)
+ backupStore = new(persistencemocks.BackupStore)
+ backupper = new(fakeBackupper)
)
- defer backupper.AssertExpectations(t)
- defer pluginManager.AssertExpectations(t)
- defer backupStore.AssertExpectations(t)
-
- c := NewBackupController(
- sharedInformers.Ark().V1().Backups(),
- client.ArkV1(),
- backupper,
- test.allowSnapshots,
- logger,
- logrus.InfoLevel,
- func(logrus.FieldLogger) plugin.Manager { return pluginManager },
- NewBackupTracker(),
- sharedInformers.Ark().V1().BackupStorageLocations(),
- "default",
- metrics.NewServerMetrics(),
- ).(*backupController)
-
- c.clock = clock.NewFakeClock(clockTime)
-
- c.newBackupStore = func(*v1.BackupStorageLocation, persistence.ObjectStoreGetter, logrus.FieldLogger) (persistence.BackupStore, error) {
- return backupStore, nil
- }
- var expiration, startTime time.Time
+ c := &backupController{
+ genericController: newGenericController("backup-test", logger),
+ client: clientset.ArkV1(),
+ lister: sharedInformers.Ark().V1().Backups().Lister(),
+ backupLocationLister: sharedInformers.Ark().V1().BackupStorageLocations().Lister(),
+ defaultBackupLocation: defaultBackupLocation.Name,
+ backupTracker: NewBackupTracker(),
+ metrics: metrics.NewServerMetrics(),
+ clock: clock.NewFakeClock(now),
+ newPluginManager: func(logrus.FieldLogger) plugin.Manager { return pluginManager },
+ newBackupStore: func(*v1.BackupStorageLocation, persistence.ObjectStoreGetter, logrus.FieldLogger) (persistence.BackupStore, error) {
+ return backupStore, nil
+ },
+ backupper: backupper,
+ }
- if test.backup != nil {
- // add directly to the informer's store so the lister can function and so we don't have to
- // start the shared informers.
- sharedInformers.Ark().V1().Backups().Informer().GetStore().Add(test.backup.Backup)
+ pluginManager.On("GetBackupItemActions").Return(nil, nil)
+ pluginManager.On("CleanupClients").Return(nil)
- startTime = c.clock.Now()
+ backupper.On("Backup", mock.Anything, mock.Anything, mock.Anything, []pkgbackup.ItemAction(nil), pluginManager).Return(nil)
- if test.backup.Spec.TTL.Duration > 0 {
- expiration = c.clock.Now().Add(test.backup.Spec.TTL.Duration)
- }
+ // Ensure we have a CompletionTimestamp when uploading.
+ // Failures will display the bytes in buf.
+ completionTimestampIsPresent := func(buf *bytes.Buffer) bool {
+ return strings.Contains(buf.String(), `"completionTimestamp": "2006-01-02T22:04:05Z"`)
}
+ backupStore.On("PutBackup", test.backup.Name, mock.MatchedBy(completionTimestampIsPresent), mock.Anything, mock.Anything, mock.Anything).Return(nil)
- if test.expectBackup {
- // set up a Backup object to represent what we expect to be passed to backupper.Backup()
- backup := test.backup.DeepCopy()
- backup.Spec.IncludedResources = test.expectedIncludes
- backup.Spec.ExcludedResources = test.expectedExcludes
- backup.Spec.IncludedNamespaces = test.backup.Spec.IncludedNamespaces
- backup.Spec.SnapshotVolumes = test.backup.Spec.SnapshotVolumes
- backup.Status.Phase = v1.BackupPhaseInProgress
- backup.Status.Expiration.Time = expiration
- backup.Status.StartTimestamp.Time = startTime
- backup.Status.Version = 1
- backupper.On("Backup",
- mock.Anything, // logger
- backup,
- mock.Anything, // backup file
- mock.Anything, // actions
- ).Return(nil)
-
- defaultLocation := &v1.BackupStorageLocation{
- ObjectMeta: metav1.ObjectMeta{
- Namespace: backup.Namespace,
- Name: "default",
- },
- Spec: v1.BackupStorageLocationSpec{
- Provider: "myCloud",
- StorageType: v1.StorageType{
- ObjectStorage: &v1.ObjectStorageLocation{
- Bucket: "bucket",
- },
- },
- },
- }
- loc1 := &v1.BackupStorageLocation{
- ObjectMeta: metav1.ObjectMeta{
- Namespace: backup.Namespace,
- Name: "loc1",
- },
- Spec: v1.BackupStorageLocationSpec{
- Provider: "myCloud",
- StorageType: v1.StorageType{
- ObjectStorage: &v1.ObjectStorageLocation{
- Bucket: "bucket",
- },
- },
- },
- }
- require.NoError(t, sharedInformers.Ark().V1().BackupStorageLocations().Informer().GetStore().Add(defaultLocation))
- require.NoError(t, sharedInformers.Ark().V1().BackupStorageLocations().Informer().GetStore().Add(loc1))
+ // add the test's backup to the informer/lister store
+ require.NotNil(t, test.backup)
+ require.NoError(t, sharedInformers.Ark().V1().Backups().Informer().GetStore().Add(test.backup))
- pluginManager.On("GetBackupItemActions").Return(nil, nil)
+ // add the default backup storage location to the clientset and the informer/lister store
+ _, err := clientset.ArkV1().BackupStorageLocations(defaultBackupLocation.Namespace).Create(defaultBackupLocation)
+ require.NoError(t, err)
- // Ensure we have a CompletionTimestamp when uploading.
- // Failures will display the bytes in buf.
- completionTimestampIsPresent := func(buf *bytes.Buffer) bool {
- json := buf.String()
- timeString := `"completionTimestamp": "2006-01-02T15:04:05Z"`
+ require.NoError(t, sharedInformers.Ark().V1().BackupStorageLocations().Informer().GetStore().Add(defaultBackupLocation))
- return strings.Contains(json, timeString)
- }
- backupStore.On("PutBackup", test.backup.Name, mock.MatchedBy(completionTimestampIsPresent), mock.Anything, mock.Anything).Return(nil)
- pluginManager.On("CleanupClients").Return()
+ // add the test's backup storage location to the clientset and the informer/lister store
+ // if it's different than the default
+ if test.backupLocation != nil && test.backupLocation != defaultBackupLocation {
+ _, err := clientset.ArkV1().BackupStorageLocations(test.backupLocation.Namespace).Create(test.backupLocation)
+ require.NoError(t, err)
+
+ require.NoError(t, sharedInformers.Ark().V1().BackupStorageLocations().Informer().GetStore().Add(test.backupLocation))
}
- // this is necessary so the Patch() call returns the appropriate object
- client.PrependReactor("patch", "backups", func(action core.Action) (bool, runtime.Object, error) {
- if test.backup == nil {
- return true, nil, nil
- }
+ require.NoError(t, c.processBackup(fmt.Sprintf("%s/%s", test.backup.Namespace, test.backup.Name)))
- patch := action.(core.PatchAction).GetPatch()
- patchMap := make(map[string]interface{})
+ res, err := clientset.ArkV1().Backups(test.backup.Namespace).Get(test.backup.Name, metav1.GetOptions{})
+ require.NoError(t, err)
- if err := json.Unmarshal(patch, &patchMap); err != nil {
- t.Logf("error unmarshalling patch: %s\n", err)
- return false, nil, err
- }
+ assert.Equal(t, test.expectedResult, res)
+ })
+ }
+}
- phase, err := collections.GetString(patchMap, "status.phase")
- if err != nil {
- t.Logf("error getting status.phase: %s\n", err)
- return false, nil, err
- }
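+// TestValidateAndGetSnapshotLocations covers location lookup failures, duplicate and
+// conflicting location names per provider, and defaulting behavior.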
+func TestValidateAndGetSnapshotLocations(t *testing.T) {
+ defaultLocationsAWS := map[string]*v1.VolumeSnapshotLocation{
+ "aws": arktest.NewTestVolumeSnapshotLocation().WithName("aws-us-east-2").VolumeSnapshotLocation,
+ }
+ defaultLocationsFake := map[string]*v1.VolumeSnapshotLocation{
+ "fake-provider": arktest.NewTestVolumeSnapshotLocation().WithName("some-name").VolumeSnapshotLocation,
+ }
- res := test.backup.DeepCopy()
-
- // these are the fields that we expect to be set by
- // the controller
- res.Status.Version = 1
- res.Status.Expiration.Time = expiration
- res.Status.Phase = v1.BackupPhase(phase)
-
- // If there's an error, it's mostly likely that the key wasn't found
- // which is fine since not all patches will have them.
- completionString, err := collections.GetString(patchMap, "status.completionTimestamp")
- if err == nil {
- completionTime, err := time.Parse(time.RFC3339Nano, completionString)
- require.NoError(t, err, "unexpected completionTimestamp parsing error %v", err)
- res.Status.CompletionTimestamp.Time = completionTime
- }
- startString, err := collections.GetString(patchMap, "status.startTimestamp")
- if err == nil {
- startTime, err := time.Parse(time.RFC3339Nano, startString)
- require.NoError(t, err, "unexpected startTimestamp parsing error %v", err)
- res.Status.StartTimestamp.Time = startTime
- }
+ multipleLocationNames := []string{"aws-us-west-1", "aws-us-east-1"}
- return true, res, nil
- })
+ multipleLocation1 := arktest.LocationInfo{
+ Name: multipleLocationNames[0],
+ Provider: "aws",
+ Config: map[string]string{"region": "us-west-1"},
+ }
+ multipleLocation2 := arktest.LocationInfo{
+ Name: multipleLocationNames[1],
+ Provider: "aws",
+		Config:   map[string]string{"region": "us-east-1"},
+ }
- // method under test
- err := c.processBackup(test.key)
+ multipleLocationList := []arktest.LocationInfo{multipleLocation1, multipleLocation2}
- if test.expectError {
- require.Error(t, err, "processBackup should error")
- return
- }
- require.NoError(t, err, "processBackup unexpected error: %v", err)
+ dupLocationNames := []string{"aws-us-west-1", "aws-us-west-1"}
+ dupLocation1 := arktest.LocationInfo{
+ Name: dupLocationNames[0],
+ Provider: "aws",
+ Config: map[string]string{"region": "us-west-1"},
+ }
+ dupLocation2 := arktest.LocationInfo{
+ Name: dupLocationNames[0],
+ Provider: "aws",
+ Config: map[string]string{"region": "us-west-1"},
+ }
+ dupLocationList := []arktest.LocationInfo{dupLocation1, dupLocation2}
- if !test.expectBackup {
- // the AssertExpectations calls above make sure we aren't running anything we shouldn't be
- return
- }
+ tests := []struct {
+ name string
+ backup *arktest.TestBackup
+ locations []*arktest.TestVolumeSnapshotLocation
+ defaultLocations map[string]*v1.VolumeSnapshotLocation
+		expectedVolumeSnapshotLocationNames []string // list these in the expected order for clearer messages on test failure
+ expectedErrors string
+ expectedSuccess bool
+ }{
+ {
+ name: "location name does not correspond to any existing location",
+ backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew).WithVolumeSnapshotLocations([]string{"random-name"}),
+ locations: arktest.NewTestVolumeSnapshotLocation().WithName(dupLocationNames[0]).WithProviderConfig(dupLocationList),
+ expectedErrors: "error getting volume snapshot location named random-name: volumesnapshotlocation.ark.heptio.com \"random-name\" not found",
+ expectedSuccess: false,
+ },
+ {
+ name: "duplicate locationName per provider: should filter out dups",
+ backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew).WithVolumeSnapshotLocations(dupLocationNames),
+ locations: arktest.NewTestVolumeSnapshotLocation().WithName(dupLocationNames[0]).WithProviderConfig(dupLocationList),
+ expectedVolumeSnapshotLocationNames: []string{dupLocationNames[0]},
+ expectedSuccess: true,
+ },
+ {
+ name: "multiple location names per provider",
+ backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew).WithVolumeSnapshotLocations(multipleLocationNames),
+ locations: arktest.NewTestVolumeSnapshotLocation().WithName(multipleLocationNames[0]).WithProviderConfig(multipleLocationList),
+ expectedErrors: "more than one VolumeSnapshotLocation name specified for provider aws: aws-us-east-1; unexpected name was aws-us-west-1",
+ expectedSuccess: false,
+ },
+ {
+ name: "no location name for the provider exists: the provider's default should be added",
+ backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew),
+ defaultLocations: defaultLocationsAWS,
+ expectedVolumeSnapshotLocationNames: []string{defaultLocationsAWS["aws"].Name},
+ expectedSuccess: true,
+ },
+ {
+ name: "no existing location name and no default location name given",
+ backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew),
+ expectedSuccess: true,
+ },
+ {
+ name: "multiple location names for a provider, default location name for another provider",
+ backup: arktest.NewTestBackup().WithName("backup1").WithPhase(v1.BackupPhaseNew).WithVolumeSnapshotLocations(dupLocationNames),
+ locations: arktest.NewTestVolumeSnapshotLocation().WithName(dupLocationNames[0]).WithProviderConfig(dupLocationList),
+ defaultLocations: defaultLocationsFake,
+ expectedVolumeSnapshotLocationNames: []string{dupLocationNames[0], defaultLocationsFake["fake-provider"].Name},
+ expectedSuccess: true,
+ },
+ }
- actions := client.Actions()
- require.Equal(t, 2, len(actions))
+ for _, test := range tests {
+ t.Run(test.name, func(t *testing.T) {
+ var (
+ client = fake.NewSimpleClientset()
+ sharedInformers = informers.NewSharedInformerFactory(client, 0)
+ )
- // structs and func for decoding patch content
- type StatusPatch struct {
- Expiration time.Time `json:"expiration"`
- Version int `json:"version"`
- Phase v1.BackupPhase `json:"phase"`
- StartTimestamp metav1.Time `json:"startTimestamp"`
- CompletionTimestamp metav1.Time `json:"completionTimestamp"`
- }
- type SpecPatch struct {
- StorageLocation string `json:"storageLocation"`
- }
- type ObjectMetaPatch struct {
- Labels map[string]string `json:"labels"`
+ c := &backupController{
+ snapshotLocationLister: sharedInformers.Ark().V1().VolumeSnapshotLocations().Lister(),
+ defaultSnapshotLocations: test.defaultLocations,
}
- type Patch struct {
- Status StatusPatch `json:"status"`
- Spec SpecPatch `json:"spec,omitempty"`
- ObjectMeta ObjectMetaPatch `json:"metadata,omitempty"`
+			// work on a deep copy so the test's Backup fixture is not mutated
+			backup := test.backup.DeepCopy()
+ for _, location := range test.locations {
+ require.NoError(t, sharedInformers.Ark().V1().VolumeSnapshotLocations().Informer().GetStore().Add(location.VolumeSnapshotLocation))
}
- decode := func(decoder *json.Decoder) (interface{}, error) {
- actual := new(Patch)
- err := decoder.Decode(actual)
-
- return *actual, err
- }
+ providerLocations, errs := c.validateAndGetSnapshotLocations(backup)
+ if test.expectedSuccess {
+ for _, err := range errs {
+ require.NoError(t, errors.New(err), "validateAndGetSnapshotLocations unexpected error: %v", err)
+ }
- // validate Patch call 1 (setting version, expiration, phase, and storage location)
- var expected Patch
- if test.backup.Spec.StorageLocation == "" {
- expected = Patch{
- Status: StatusPatch{
- Version: 1,
- Phase: v1.BackupPhaseInProgress,
- Expiration: expiration,
- },
- Spec: SpecPatch{
- StorageLocation: "default",
- },
- ObjectMeta: ObjectMetaPatch{
- Labels: map[string]string{
- v1.StorageLocationLabel: "default",
- },
- },
+ var locations []string
+ for _, loc := range providerLocations {
+ locations = append(locations, loc.Name)
}
+
+ sort.Strings(test.expectedVolumeSnapshotLocationNames)
+ sort.Strings(locations)
+ require.Equal(t, test.expectedVolumeSnapshotLocationNames, locations)
} else {
- expected = Patch{
- Status: StatusPatch{
- Version: 1,
- Phase: v1.BackupPhaseInProgress,
- Expiration: expiration,
- },
- ObjectMeta: ObjectMetaPatch{
- Labels: map[string]string{
- v1.StorageLocationLabel: test.backup.Spec.StorageLocation,
- },
- },
+				require.NotEmpty(t, errs, "validateAndGetSnapshotLocations expected errors")
+ require.Contains(t, errs, test.expectedErrors)
}
-
- arktest.ValidatePatch(t, actions[0], expected, decode)
-
- // validate Patch call 2 (setting phase, startTimestamp, completionTimestamp)
- expected = Patch{
- Status: StatusPatch{
- Phase: v1.BackupPhaseCompleted,
- StartTimestamp: metav1.Time{Time: c.clock.Now()},
- CompletionTimestamp: metav1.Time{Time: c.clock.Now()},
- },
- }
- arktest.ValidatePatch(t, actions[1], expected, decode)
})
}
}
diff --git a/pkg/controller/backup_deletion_controller.go b/pkg/controller/backup_deletion_controller.go
index 9a6813161b..2abe76c9ad 100644
--- a/pkg/controller/backup_deletion_controller.go
+++ b/pkg/controller/backup_deletion_controller.go
@@ -51,13 +51,13 @@ type backupDeletionController struct {
deleteBackupRequestClient arkv1client.DeleteBackupRequestsGetter
deleteBackupRequestLister listers.DeleteBackupRequestLister
backupClient arkv1client.BackupsGetter
- blockStore cloudprovider.BlockStore
restoreLister listers.RestoreLister
restoreClient arkv1client.RestoresGetter
backupTracker BackupTracker
resticMgr restic.RepositoryManager
podvolumeBackupLister listers.PodVolumeBackupLister
backupLocationLister listers.BackupStorageLocationLister
+ snapshotLocationLister listers.VolumeSnapshotLocationLister
processRequestFunc func(*v1.DeleteBackupRequest) error
clock clock.Clock
newPluginManager func(logrus.FieldLogger) plugin.Manager
@@ -70,13 +70,13 @@ func NewBackupDeletionController(
deleteBackupRequestInformer informers.DeleteBackupRequestInformer,
deleteBackupRequestClient arkv1client.DeleteBackupRequestsGetter,
backupClient arkv1client.BackupsGetter,
- blockStore cloudprovider.BlockStore,
restoreInformer informers.RestoreInformer,
restoreClient arkv1client.RestoresGetter,
backupTracker BackupTracker,
resticMgr restic.RepositoryManager,
podvolumeBackupInformer informers.PodVolumeBackupInformer,
backupLocationInformer informers.BackupStorageLocationInformer,
+ snapshotLocationInformer informers.VolumeSnapshotLocationInformer,
newPluginManager func(logrus.FieldLogger) plugin.Manager,
) Interface {
c := &backupDeletionController{
@@ -84,13 +84,13 @@ func NewBackupDeletionController(
deleteBackupRequestClient: deleteBackupRequestClient,
deleteBackupRequestLister: deleteBackupRequestInformer.Lister(),
backupClient: backupClient,
- blockStore: blockStore,
restoreLister: restoreInformer.Lister(),
restoreClient: restoreClient,
backupTracker: backupTracker,
resticMgr: resticMgr,
podvolumeBackupLister: podvolumeBackupInformer.Lister(),
backupLocationLister: backupLocationInformer.Lister(),
+ snapshotLocationLister: snapshotLocationInformer.Lister(),
// use variables to refer to these functions so they can be
// replaced with fakes for testing.
@@ -107,6 +107,7 @@ func NewBackupDeletionController(
restoreInformer.Informer().HasSynced,
podvolumeBackupInformer.Informer().HasSynced,
backupLocationInformer.Informer().HasSynced,
+ snapshotLocationInformer.Informer().HasSynced,
)
c.processRequestFunc = c.processRequest
@@ -223,17 +224,6 @@ func (c *backupDeletionController) processRequest(req *v1.DeleteBackupRequest) e
}
}
- // If the backup includes snapshots but we don't currently have a PVProvider, we don't
- // want to orphan the snapshots so skip deletion.
- if c.blockStore == nil && len(backup.Status.VolumeBackups) > 0 {
- req, err = c.patchDeleteBackupRequest(req, func(r *v1.DeleteBackupRequest) {
- r.Status.Phase = v1.DeleteBackupRequestPhaseProcessed
- r.Status.Errors = []string{"unable to delete backup because it includes PV snapshots and Ark is not configured with a PersistentVolumeProvider"}
- })
-
- return err
- }
-
// Set backup status to Deleting
backup, err = c.patchBackup(backup, func(b *v1.Backup) {
b.Status.Phase = v1.BackupPhaseDeleting
@@ -245,11 +235,37 @@ func (c *backupDeletionController) processRequest(req *v1.DeleteBackupRequest) e
var errs []string
- log.Info("Removing PV snapshots")
- for _, volumeBackup := range backup.Status.VolumeBackups {
- log.WithField("snapshotID", volumeBackup.SnapshotID).Info("Removing snapshot associated with backup")
- if err := c.blockStore.DeleteSnapshot(volumeBackup.SnapshotID); err != nil {
- errs = append(errs, errors.Wrapf(err, "error deleting snapshot %s", volumeBackup.SnapshotID).Error())
+ pluginManager := c.newPluginManager(log)
+ defer pluginManager.CleanupClients()
+
+ backupStore, backupStoreErr := c.backupStoreForBackup(backup, pluginManager, log)
+ if backupStoreErr != nil {
+ errs = append(errs, backupStoreErr.Error())
+ }
+
+ if backupStore != nil {
+ log.Info("Removing PV snapshots")
+ if snapshots, err := backupStore.GetBackupVolumeSnapshots(backup.Name); err != nil {
+ errs = append(errs, errors.Wrap(err, "error getting backup's volume snapshots").Error())
+ } else {
+ blockStores := make(map[string]cloudprovider.BlockStore)
+
+ for _, snapshot := range snapshots {
+ log.WithField("providerSnapshotID", snapshot.Status.ProviderSnapshotID).Info("Removing snapshot associated with backup")
+
+ blockStore, ok := blockStores[snapshot.Spec.Location]
+ if !ok {
+ if blockStore, err = blockStoreForSnapshotLocation(backup.Namespace, snapshot.Spec.Location, c.snapshotLocationLister, pluginManager); err != nil {
+ errs = append(errs, err.Error())
+ continue
+ }
+ blockStores[snapshot.Spec.Location] = blockStore
+ }
+
+ if err := blockStore.DeleteSnapshot(snapshot.Status.ProviderSnapshotID); err != nil {
+ errs = append(errs, errors.Wrapf(err, "error deleting snapshot %s", snapshot.Status.ProviderSnapshotID).Error())
+ }
+ }
}
}
@@ -260,17 +276,11 @@ func (c *backupDeletionController) processRequest(req *v1.DeleteBackupRequest) e
}
}
- log.Info("Removing backup from backup storage")
- pluginManager := c.newPluginManager(log)
- defer pluginManager.CleanupClients()
-
- backupStore, backupStoreErr := c.backupStoreForBackup(backup, pluginManager, log)
- if backupStoreErr != nil {
- errs = append(errs, backupStoreErr.Error())
- }
-
- if err := backupStore.DeleteBackup(backup.Name); err != nil {
- errs = append(errs, err.Error())
+ if backupStore != nil {
+ log.Info("Removing backup from backup storage")
+ if err := backupStore.DeleteBackup(backup.Name); err != nil {
+ errs = append(errs, err.Error())
+ }
}
log.Info("Removing restores")
@@ -328,6 +338,28 @@ func (c *backupDeletionController) processRequest(req *v1.DeleteBackupRequest) e
return nil
}
+func blockStoreForSnapshotLocation(
+ namespace, snapshotLocationName string,
+ snapshotLocationLister listers.VolumeSnapshotLocationLister,
+ pluginManager plugin.Manager,
+) (cloudprovider.BlockStore, error) {
+ snapshotLocation, err := snapshotLocationLister.VolumeSnapshotLocations(namespace).Get(snapshotLocationName)
+ if err != nil {
+ return nil, errors.Wrapf(err, "error getting volume snapshot location %s", snapshotLocationName)
+ }
+
+ blockStore, err := pluginManager.GetBlockStore(snapshotLocation.Spec.Provider)
+ if err != nil {
+ return nil, errors.Wrapf(err, "error getting block store for provider %s", snapshotLocation.Spec.Provider)
+ }
+
+ if err = blockStore.Init(snapshotLocation.Spec.Config); err != nil {
+ return nil, errors.Wrapf(err, "error initializing block store for volume snapshot location %s", snapshotLocationName)
+ }
+
+ return blockStore, nil
+}
+
func (c *backupDeletionController) backupStoreForBackup(backup *v1.Backup, pluginManager plugin.Manager, log logrus.FieldLogger) (persistence.BackupStore, error) {
backupLocation, err := c.backupLocationLister.BackupStorageLocations(backup.Namespace).Get(backup.Spec.StorageLocation)
if err != nil {
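The rewritten deletion path above resolves a block store lazily, at most once per `VolumeSnapshotLocation`, and reuses it for every snapshot stored in that location. A condensed sketch of that caching pattern, using the types from this patch (the helper name `deleteSnapshots` is hypothetical, and the controller package's imports are assumed):

```go
// Sketch only: delete snapshots with one block store per location,
// resolved via the blockStoreForSnapshotLocation helper added above.
func deleteSnapshots(
	namespace string,
	snapshots []*volume.Snapshot,
	lister listers.VolumeSnapshotLocationLister,
	pluginManager plugin.Manager,
) []error {
	var errs []error
	blockStores := make(map[string]cloudprovider.BlockStore)

	for _, snapshot := range snapshots {
		blockStore, ok := blockStores[snapshot.Spec.Location]
		if !ok {
			var err error
			// Resolve and Init the block store once, then cache it by location.
			blockStore, err = blockStoreForSnapshotLocation(namespace, snapshot.Spec.Location, lister, pluginManager)
			if err != nil {
				errs = append(errs, err)
				continue
			}
			blockStores[snapshot.Spec.Location] = blockStore
		}

		if err := blockStore.DeleteSnapshot(snapshot.Status.ProviderSnapshotID); err != nil {
			errs = append(errs, err)
		}
	}

	return errs
}
```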
diff --git a/pkg/controller/backup_deletion_controller_test.go b/pkg/controller/backup_deletion_controller_test.go
index 28301056f1..425e735b4e 100644
--- a/pkg/controller/backup_deletion_controller_test.go
+++ b/pkg/controller/backup_deletion_controller_test.go
@@ -42,6 +42,7 @@ import (
"github.com/heptio/ark/pkg/plugin"
pluginmocks "github.com/heptio/ark/pkg/plugin/mocks"
arktest "github.com/heptio/ark/pkg/util/test"
+ "github.com/heptio/ark/pkg/volume"
)
func TestBackupDeletionControllerProcessQueueItem(t *testing.T) {
@@ -53,13 +54,13 @@ func TestBackupDeletionControllerProcessQueueItem(t *testing.T) {
sharedInformers.Ark().V1().DeleteBackupRequests(),
client.ArkV1(), // deleteBackupRequestClient
client.ArkV1(), // backupClient
- nil, // blockStore
sharedInformers.Ark().V1().Restores(),
client.ArkV1(), // restoreClient
NewBackupTracker(),
nil, // restic repository manager
sharedInformers.Ark().V1().PodVolumeBackups(),
sharedInformers.Ark().V1().BackupStorageLocations(),
+ sharedInformers.Ark().V1().VolumeSnapshotLocations(),
nil, // new plugin manager func
).(*backupDeletionController)
@@ -139,13 +140,13 @@ func setupBackupDeletionControllerTest(objects ...runtime.Object) *backupDeletio
sharedInformers.Ark().V1().DeleteBackupRequests(),
client.ArkV1(), // deleteBackupRequestClient
client.ArkV1(), // backupClient
- blockStore,
sharedInformers.Ark().V1().Restores(),
client.ArkV1(), // restoreClient
NewBackupTracker(),
nil, // restic repository manager
sharedInformers.Ark().V1().PodVolumeBackups(),
sharedInformers.Ark().V1().BackupStorageLocations(),
+ sharedInformers.Ark().V1().VolumeSnapshotLocations(),
func(logrus.FieldLogger) plugin.Manager { return pluginManager },
).(*backupDeletionController),
@@ -323,47 +324,8 @@ func TestBackupDeletionControllerProcessRequest(t *testing.T) {
assert.Equal(t, expectedActions, td.client.Actions())
})
- t.Run("no block store, backup has snapshots", func(t *testing.T) {
- td := setupBackupDeletionControllerTest()
- td.controller.blockStore = nil
-
- td.client.PrependReactor("get", "backups", func(action core.Action) (bool, runtime.Object, error) {
- backup := arktest.NewTestBackup().WithName("backup-1").WithSnapshot("pv-1", "snap-1").Backup
- return true, backup, nil
- })
-
- td.client.PrependReactor("patch", "deletebackuprequests", func(action core.Action) (bool, runtime.Object, error) {
- return true, td.req, nil
- })
-
- err := td.controller.processRequest(td.req)
- require.NoError(t, err)
-
- expectedActions := []core.Action{
- core.NewPatchAction(
- v1.SchemeGroupVersion.WithResource("deletebackuprequests"),
- td.req.Namespace,
- td.req.Name,
- []byte(`{"status":{"phase":"InProgress"}}`),
- ),
- core.NewGetAction(
- v1.SchemeGroupVersion.WithResource("backups"),
- td.req.Namespace,
- td.req.Spec.BackupName,
- ),
- core.NewPatchAction(
- v1.SchemeGroupVersion.WithResource("deletebackuprequests"),
- td.req.Namespace,
- td.req.Name,
- []byte(`{"status":{"errors":["unable to delete backup because it includes PV snapshots and Ark is not configured with a PersistentVolumeProvider"],"phase":"Processed"}}`),
- ),
- }
-
- assert.Equal(t, expectedActions, td.client.Actions())
- })
-
t.Run("full delete, no errors", func(t *testing.T) {
- backup := arktest.NewTestBackup().WithName("foo").WithSnapshot("pv-1", "snap-1").Backup
+ backup := arktest.NewTestBackup().WithName("foo").Backup
backup.UID = "uid"
backup.Spec.StorageLocation = "primary"
@@ -393,6 +355,17 @@ func TestBackupDeletionControllerProcessRequest(t *testing.T) {
}
require.NoError(t, td.sharedInformers.Ark().V1().BackupStorageLocations().Informer().GetStore().Add(location))
+ snapshotLocation := &v1.VolumeSnapshotLocation{
+ ObjectMeta: metav1.ObjectMeta{
+ Namespace: backup.Namespace,
+ Name: "vsl-1",
+ },
+ Spec: v1.VolumeSnapshotLocationSpec{
+ Provider: "provider-1",
+ },
+ }
+ require.NoError(t, td.sharedInformers.Ark().V1().VolumeSnapshotLocations().Informer().GetStore().Add(snapshotLocation))
+
// Clear out req labels to make sure the controller adds them
td.req.Labels = make(map[string]string)
@@ -409,6 +382,23 @@ func TestBackupDeletionControllerProcessRequest(t *testing.T) {
return true, backup, nil
})
+ snapshots := []*volume.Snapshot{
+ {
+ Spec: volume.SnapshotSpec{
+ Location: "vsl-1",
+ },
+ Status: volume.SnapshotStatus{
+ ProviderSnapshotID: "snap-1",
+ },
+ },
+ }
+
+ pluginManager := &pluginmocks.Manager{}
+ pluginManager.On("GetBlockStore", "provider-1").Return(td.blockStore, nil)
+ pluginManager.On("CleanupClients")
+ td.controller.newPluginManager = func(logrus.FieldLogger) plugin.Manager { return pluginManager }
+
+ td.backupStore.On("GetBackupVolumeSnapshots", td.req.Spec.BackupName).Return(snapshots, nil)
td.backupStore.On("DeleteBackup", td.req.Spec.BackupName).Return(nil)
td.backupStore.On("DeleteRestore", "restore-1").Return(nil)
td.backupStore.On("DeleteRestore", "restore-2").Return(nil)
@@ -593,13 +583,13 @@ func TestBackupDeletionControllerDeleteExpiredRequests(t *testing.T) {
sharedInformers.Ark().V1().DeleteBackupRequests(),
client.ArkV1(), // deleteBackupRequestClient
client.ArkV1(), // backupClient
- nil, // blockStore
sharedInformers.Ark().V1().Restores(),
client.ArkV1(), // restoreClient
NewBackupTracker(),
nil,
sharedInformers.Ark().V1().PodVolumeBackups(),
sharedInformers.Ark().V1().BackupStorageLocations(),
+ sharedInformers.Ark().V1().VolumeSnapshotLocations(),
nil, // new plugin manager func
).(*backupDeletionController)
diff --git a/pkg/controller/backup_sync_controller.go b/pkg/controller/backup_sync_controller.go
index 75f6d6bc7a..ee85bff285 100644
--- a/pkg/controller/backup_sync_controller.go
+++ b/pkg/controller/backup_sync_controller.go
@@ -198,8 +198,10 @@ func (c *backupSyncController) run() {
switch {
case err != nil && kuberrs.IsAlreadyExists(err):
log.Debug("Backup already exists in cluster")
+ continue
case err != nil && !kuberrs.IsAlreadyExists(err):
log.WithError(errors.WithStack(err)).Error("Error syncing backup into cluster")
+ continue
default:
log.Debug("Synced backup into cluster")
}
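The two added `continue` statements matter because this `switch` sits inside the sync loop: in Go, `continue` in a `switch` case applies to the enclosing `for`, so the already-exists and error cases now skip any per-backup work that follows in the loop body instead of falling through to it. A schematic illustration (the helper names are hypothetical stand-ins):

```go
for _, backup := range backupsToSync {
	_, err := createBackup(backup) // stand-in for the client create call
	switch {
	case err != nil && kuberrs.IsAlreadyExists(err):
		continue // continues the for loop, skipping the steps below
	case err != nil && !kuberrs.IsAlreadyExists(err):
		continue
	}
	// Reached only when the backup was newly synced into the cluster.
	postProcess(backup) // hypothetical follow-on work
}
```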
diff --git a/pkg/controller/restore_controller.go b/pkg/controller/restore_controller.go
index 9162cee065..e153e1fa0f 100644
--- a/pkg/controller/restore_controller.go
+++ b/pkg/controller/restore_controller.go
@@ -68,17 +68,18 @@ var nonRestorableResources = []string{
type restoreController struct {
*genericController
- namespace string
- restoreClient arkv1client.RestoresGetter
- backupClient arkv1client.BackupsGetter
- restorer restore.Restorer
- pvProviderExists bool
- backupLister listers.BackupLister
- restoreLister listers.RestoreLister
- backupLocationLister listers.BackupStorageLocationLister
- restoreLogLevel logrus.Level
- defaultBackupLocation string
- metrics *metrics.ServerMetrics
+ namespace string
+ restoreClient arkv1client.RestoresGetter
+ backupClient arkv1client.BackupsGetter
+ restorer restore.Restorer
+ pvProviderExists bool
+ backupLister listers.BackupLister
+ restoreLister listers.RestoreLister
+ backupLocationLister listers.BackupStorageLocationLister
+ snapshotLocationLister listers.VolumeSnapshotLocationLister
+ restoreLogLevel logrus.Level
+ defaultBackupLocation string
+ metrics *metrics.ServerMetrics
newPluginManager func(logger logrus.FieldLogger) plugin.Manager
newBackupStore func(*api.BackupStorageLocation, persistence.ObjectStoreGetter, logrus.FieldLogger) (persistence.BackupStore, error)
@@ -92,6 +93,7 @@ func NewRestoreController(
restorer restore.Restorer,
backupInformer informers.BackupInformer,
backupLocationInformer informers.BackupStorageLocationInformer,
+ snapshotLocationInformer informers.VolumeSnapshotLocationInformer,
pvProviderExists bool,
logger logrus.FieldLogger,
restoreLogLevel logrus.Level,
@@ -100,18 +102,19 @@ func NewRestoreController(
metrics *metrics.ServerMetrics,
) Interface {
c := &restoreController{
- genericController: newGenericController("restore", logger),
- namespace: namespace,
- restoreClient: restoreClient,
- backupClient: backupClient,
- restorer: restorer,
- pvProviderExists: pvProviderExists,
- backupLister: backupInformer.Lister(),
- restoreLister: restoreInformer.Lister(),
- backupLocationLister: backupLocationInformer.Lister(),
- restoreLogLevel: restoreLogLevel,
- defaultBackupLocation: defaultBackupLocation,
- metrics: metrics,
+ genericController: newGenericController("restore", logger),
+ namespace: namespace,
+ restoreClient: restoreClient,
+ backupClient: backupClient,
+ restorer: restorer,
+ pvProviderExists: pvProviderExists,
+ backupLister: backupInformer.Lister(),
+ restoreLister: restoreInformer.Lister(),
+ backupLocationLister: backupLocationInformer.Lister(),
+ snapshotLocationLister: snapshotLocationInformer.Lister(),
+ restoreLogLevel: restoreLogLevel,
+ defaultBackupLocation: defaultBackupLocation,
+ metrics: metrics,
// use variables to refer to these functions so they can be
// replaced with fakes for testing.
@@ -124,6 +127,7 @@ func NewRestoreController(
backupInformer.Informer().HasSynced,
restoreInformer.Informer().HasSynced,
backupLocationInformer.Informer().HasSynced,
+ snapshotLocationInformer.Informer().HasSynced,
)
restoreInformer.Informer().AddEventHandler(
@@ -233,6 +237,7 @@ func (c *restoreController) processRestore(key string) error {
restore,
actions,
info,
+ pluginManager,
)
restore.Status.Warnings = len(restoreWarnings.Ark) + len(restoreWarnings.Cluster)
@@ -482,6 +487,7 @@ func (c *restoreController) runRestore(
restore *api.Restore,
actions []restore.ItemAction,
info backupInfo,
+ pluginManager plugin.Manager,
) (restoreWarnings, restoreErrors api.RestoreResult, restoreFailure error) {
logFile, err := ioutil.TempFile("", "")
if err != nil {
@@ -531,10 +537,18 @@ func (c *restoreController) runRestore(
}
defer closeAndRemoveFile(resultsFile, c.logger)
+ volumeSnapshots, err := info.backupStore.GetBackupVolumeSnapshots(restore.Spec.BackupName)
+ if err != nil {
+ log.WithError(errors.WithStack(err)).Error("Error fetching volume snapshots")
+ restoreErrors.Ark = append(restoreErrors.Ark, err.Error())
+ restoreFailure = err
+ return
+ }
+
// Any return statement above this line means a total restore failure
// Some failures after this line *may* be a total restore failure
log.Info("starting restore")
- restoreWarnings, restoreErrors = c.restorer.Restore(log, restore, info.backup, backupFile, actions)
+ restoreWarnings, restoreErrors = c.restorer.Restore(log, restore, info.backup, volumeSnapshots, backupFile, actions, c.snapshotLocationLister, pluginManager)
log.Info("restore completed")
// Try to upload the log file. This is best-effort. If we fail, we'll add to the ark errors.
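`pluginManager` is threaded through `runRestore` and handed to `Restore` as its final argument, where the new signature (see the `pkg/restore/restore.go` hunks below) expects a `restore.BlockStoreGetter`. That works structurally because `plugin.Manager` already exposes a `GetBlockStore` method of the matching shape; an illustrative compile-time assertion, assuming both declare `GetBlockStore(name string) (cloudprovider.BlockStore, error)`:

```go
// Not part of the patch: documents the structural relationship only.
var _ restore.BlockStoreGetter = plugin.Manager(nil)
```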
diff --git a/pkg/controller/restore_controller_test.go b/pkg/controller/restore_controller_test.go
index 257645cfd5..3f307214a7 100644
--- a/pkg/controller/restore_controller_test.go
+++ b/pkg/controller/restore_controller_test.go
@@ -24,20 +24,10 @@ import (
"testing"
"time"
- "github.com/pkg/errors"
- "github.com/sirupsen/logrus"
- "github.com/stretchr/testify/assert"
- "github.com/stretchr/testify/mock"
- "github.com/stretchr/testify/require"
-
- metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/runtime"
- core "k8s.io/client-go/testing"
- "k8s.io/client-go/tools/cache"
-
api "github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/generated/clientset/versioned/fake"
informers "github.com/heptio/ark/pkg/generated/informers/externalversions"
+ listers "github.com/heptio/ark/pkg/generated/listers/ark/v1"
"github.com/heptio/ark/pkg/metrics"
"github.com/heptio/ark/pkg/persistence"
persistencemocks "github.com/heptio/ark/pkg/persistence/mocks"
@@ -46,6 +36,16 @@ import (
"github.com/heptio/ark/pkg/restore"
"github.com/heptio/ark/pkg/util/collections"
arktest "github.com/heptio/ark/pkg/util/test"
+ "github.com/heptio/ark/pkg/volume"
+ "github.com/pkg/errors"
+ "github.com/sirupsen/logrus"
+ "github.com/stretchr/testify/assert"
+ "github.com/stretchr/testify/mock"
+ "github.com/stretchr/testify/require"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/runtime"
+ core "k8s.io/client-go/testing"
+ "k8s.io/client-go/tools/cache"
)
func TestFetchBackupInfo(t *testing.T) {
@@ -104,6 +104,7 @@ func TestFetchBackupInfo(t *testing.T) {
restorer,
sharedInformers.Ark().V1().Backups(),
sharedInformers.Ark().V1().BackupStorageLocations(),
+ sharedInformers.Ark().V1().VolumeSnapshotLocations(),
false,
logger,
logrus.InfoLevel,
@@ -197,6 +198,7 @@ func TestProcessRestoreSkips(t *testing.T) {
restorer,
sharedInformers.Ark().V1().Backups(),
sharedInformers.Ark().V1().BackupStorageLocations(),
+ sharedInformers.Ark().V1().VolumeSnapshotLocations(),
false, // pvProviderExists
logger,
logrus.InfoLevel,
@@ -422,6 +424,7 @@ func TestProcessRestore(t *testing.T) {
restorer,
sharedInformers.Ark().V1().Backups(),
sharedInformers.Ark().V1().BackupStorageLocations(),
+ sharedInformers.Ark().V1().VolumeSnapshotLocations(),
test.allowRestoreSnapshots,
logger,
logrus.InfoLevel,
@@ -498,6 +501,16 @@ func TestProcessRestore(t *testing.T) {
backupStore.On("PutRestoreLog", test.backup.Name, test.restore.Name, mock.Anything).Return(test.putRestoreLogErr)
backupStore.On("PutRestoreResults", test.backup.Name, test.restore.Name, mock.Anything).Return(nil)
+
+ volumeSnapshots := []*volume.Snapshot{
+ {
+ Spec: volume.SnapshotSpec{
+ PersistentVolumeName: "test-pv",
+ BackupName: test.backup.Name,
+ },
+ },
+ }
+ backupStore.On("GetBackupVolumeSnapshots", test.backup.Name).Return(volumeSnapshots, nil)
}
var (
@@ -629,6 +642,7 @@ func TestvalidateAndCompleteWhenScheduleNameSpecified(t *testing.T) {
nil,
sharedInformers.Ark().V1().Backups(),
sharedInformers.Ark().V1().BackupStorageLocations(),
+ sharedInformers.Ark().V1().VolumeSnapshotLocations(),
false,
logger,
logrus.DebugLevel,
@@ -815,8 +829,11 @@ func (r *fakeRestorer) Restore(
log logrus.FieldLogger,
restore *api.Restore,
backup *api.Backup,
+ volumeSnapshots []*volume.Snapshot,
backupReader io.Reader,
actions []restore.ItemAction,
+ snapshotLocationLister listers.VolumeSnapshotLocationLister,
+ blockStoreGetter restore.BlockStoreGetter,
) (api.RestoreResult, api.RestoreResult) {
res := r.Called(log, restore, backup, backupReader, actions)
diff --git a/pkg/generated/clientset/versioned/typed/ark/v1/ark_client.go b/pkg/generated/clientset/versioned/typed/ark/v1/ark_client.go
index 8abe24f1e3..a835394066 100644
--- a/pkg/generated/clientset/versioned/typed/ark/v1/ark_client.go
+++ b/pkg/generated/clientset/versioned/typed/ark/v1/ark_client.go
@@ -37,6 +37,7 @@ type ArkV1Interface interface {
ResticRepositoriesGetter
RestoresGetter
SchedulesGetter
+ VolumeSnapshotLocationsGetter
}
// ArkV1Client is used to interact with features provided by the ark.heptio.com group.
@@ -84,6 +85,10 @@ func (c *ArkV1Client) Schedules(namespace string) ScheduleInterface {
return newSchedules(c, namespace)
}
+func (c *ArkV1Client) VolumeSnapshotLocations(namespace string) VolumeSnapshotLocationInterface {
+ return newVolumeSnapshotLocations(c, namespace)
+}
+
// NewForConfig creates a new ArkV1Client for the given config.
func NewForConfig(c *rest.Config) (*ArkV1Client, error) {
config := *c
diff --git a/pkg/generated/clientset/versioned/typed/ark/v1/fake/fake_ark_client.go b/pkg/generated/clientset/versioned/typed/ark/v1/fake/fake_ark_client.go
index 656fc0e57d..988261b667 100644
--- a/pkg/generated/clientset/versioned/typed/ark/v1/fake/fake_ark_client.go
+++ b/pkg/generated/clientset/versioned/typed/ark/v1/fake/fake_ark_client.go
@@ -68,6 +68,10 @@ func (c *FakeArkV1) Schedules(namespace string) v1.ScheduleInterface {
return &FakeSchedules{c, namespace}
}
+func (c *FakeArkV1) VolumeSnapshotLocations(namespace string) v1.VolumeSnapshotLocationInterface {
+ return &FakeVolumeSnapshotLocations{c, namespace}
+}
+
// RESTClient returns a RESTClient that is used to communicate
// with API server by this client implementation.
func (c *FakeArkV1) RESTClient() rest.Interface {
diff --git a/pkg/generated/clientset/versioned/typed/ark/v1/fake/fake_volumesnapshotlocation.go b/pkg/generated/clientset/versioned/typed/ark/v1/fake/fake_volumesnapshotlocation.go
new file mode 100644
index 0000000000..2574e53a37
--- /dev/null
+++ b/pkg/generated/clientset/versioned/typed/ark/v1/fake/fake_volumesnapshotlocation.go
@@ -0,0 +1,140 @@
+/*
+Copyright 2018 the Heptio Ark contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Code generated by client-gen. DO NOT EDIT.
+
+package fake
+
+import (
+ ark_v1 "github.com/heptio/ark/pkg/apis/ark/v1"
+ v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ labels "k8s.io/apimachinery/pkg/labels"
+ schema "k8s.io/apimachinery/pkg/runtime/schema"
+ types "k8s.io/apimachinery/pkg/types"
+ watch "k8s.io/apimachinery/pkg/watch"
+ testing "k8s.io/client-go/testing"
+)
+
+// FakeVolumeSnapshotLocations implements VolumeSnapshotLocationInterface
+type FakeVolumeSnapshotLocations struct {
+ Fake *FakeArkV1
+ ns string
+}
+
+var volumesnapshotlocationsResource = schema.GroupVersionResource{Group: "ark.heptio.com", Version: "v1", Resource: "volumesnapshotlocations"}
+
+var volumesnapshotlocationsKind = schema.GroupVersionKind{Group: "ark.heptio.com", Version: "v1", Kind: "VolumeSnapshotLocation"}
+
+// Get takes name of the volumeSnapshotLocation, and returns the corresponding volumeSnapshotLocation object, and an error if there is any.
+func (c *FakeVolumeSnapshotLocations) Get(name string, options v1.GetOptions) (result *ark_v1.VolumeSnapshotLocation, err error) {
+ obj, err := c.Fake.
+ Invokes(testing.NewGetAction(volumesnapshotlocationsResource, c.ns, name), &ark_v1.VolumeSnapshotLocation{})
+
+ if obj == nil {
+ return nil, err
+ }
+ return obj.(*ark_v1.VolumeSnapshotLocation), err
+}
+
+// List takes label and field selectors, and returns the list of VolumeSnapshotLocations that match those selectors.
+func (c *FakeVolumeSnapshotLocations) List(opts v1.ListOptions) (result *ark_v1.VolumeSnapshotLocationList, err error) {
+ obj, err := c.Fake.
+ Invokes(testing.NewListAction(volumesnapshotlocationsResource, volumesnapshotlocationsKind, c.ns, opts), &ark_v1.VolumeSnapshotLocationList{})
+
+ if obj == nil {
+ return nil, err
+ }
+
+ label, _, _ := testing.ExtractFromListOptions(opts)
+ if label == nil {
+ label = labels.Everything()
+ }
+ list := &ark_v1.VolumeSnapshotLocationList{ListMeta: obj.(*ark_v1.VolumeSnapshotLocationList).ListMeta}
+ for _, item := range obj.(*ark_v1.VolumeSnapshotLocationList).Items {
+ if label.Matches(labels.Set(item.Labels)) {
+ list.Items = append(list.Items, item)
+ }
+ }
+ return list, err
+}
+
+// Watch returns a watch.Interface that watches the requested volumeSnapshotLocations.
+func (c *FakeVolumeSnapshotLocations) Watch(opts v1.ListOptions) (watch.Interface, error) {
+ return c.Fake.
+ InvokesWatch(testing.NewWatchAction(volumesnapshotlocationsResource, c.ns, opts))
+
+}
+
+// Create takes the representation of a volumeSnapshotLocation and creates it. Returns the server's representation of the volumeSnapshotLocation, and an error, if there is any.
+func (c *FakeVolumeSnapshotLocations) Create(volumeSnapshotLocation *ark_v1.VolumeSnapshotLocation) (result *ark_v1.VolumeSnapshotLocation, err error) {
+ obj, err := c.Fake.
+ Invokes(testing.NewCreateAction(volumesnapshotlocationsResource, c.ns, volumeSnapshotLocation), &ark_v1.VolumeSnapshotLocation{})
+
+ if obj == nil {
+ return nil, err
+ }
+ return obj.(*ark_v1.VolumeSnapshotLocation), err
+}
+
+// Update takes the representation of a volumeSnapshotLocation and updates it. Returns the server's representation of the volumeSnapshotLocation, and an error, if there is any.
+func (c *FakeVolumeSnapshotLocations) Update(volumeSnapshotLocation *ark_v1.VolumeSnapshotLocation) (result *ark_v1.VolumeSnapshotLocation, err error) {
+ obj, err := c.Fake.
+ Invokes(testing.NewUpdateAction(volumesnapshotlocationsResource, c.ns, volumeSnapshotLocation), &ark_v1.VolumeSnapshotLocation{})
+
+ if obj == nil {
+ return nil, err
+ }
+ return obj.(*ark_v1.VolumeSnapshotLocation), err
+}
+
+// UpdateStatus was generated because the type contains a Status member.
+// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus().
+func (c *FakeVolumeSnapshotLocations) UpdateStatus(volumeSnapshotLocation *ark_v1.VolumeSnapshotLocation) (*ark_v1.VolumeSnapshotLocation, error) {
+ obj, err := c.Fake.
+ Invokes(testing.NewUpdateSubresourceAction(volumesnapshotlocationsResource, "status", c.ns, volumeSnapshotLocation), &ark_v1.VolumeSnapshotLocation{})
+
+ if obj == nil {
+ return nil, err
+ }
+ return obj.(*ark_v1.VolumeSnapshotLocation), err
+}
+
+// Delete takes name of the volumeSnapshotLocation and deletes it. Returns an error if one occurs.
+func (c *FakeVolumeSnapshotLocations) Delete(name string, options *v1.DeleteOptions) error {
+ _, err := c.Fake.
+ Invokes(testing.NewDeleteAction(volumesnapshotlocationsResource, c.ns, name), &ark_v1.VolumeSnapshotLocation{})
+
+ return err
+}
+
+// DeleteCollection deletes a collection of objects.
+func (c *FakeVolumeSnapshotLocations) DeleteCollection(options *v1.DeleteOptions, listOptions v1.ListOptions) error {
+ action := testing.NewDeleteCollectionAction(volumesnapshotlocationsResource, c.ns, listOptions)
+
+ _, err := c.Fake.Invokes(action, &ark_v1.VolumeSnapshotLocationList{})
+ return err
+}
+
+// Patch applies the patch and returns the patched volumeSnapshotLocation.
+func (c *FakeVolumeSnapshotLocations) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *ark_v1.VolumeSnapshotLocation, err error) {
+ obj, err := c.Fake.
+ Invokes(testing.NewPatchSubresourceAction(volumesnapshotlocationsResource, c.ns, name, data, subresources...), &ark_v1.VolumeSnapshotLocation{})
+
+ if obj == nil {
+ return nil, err
+ }
+ return obj.(*ark_v1.VolumeSnapshotLocation), err
+}
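The fake implementation above plugs into the generated fake clientset, which tests drive with reactors exactly as the controller tests earlier in this patch do. A brief hedged sketch (namespace and name are illustrative; imports mirror the test files above, e.g. `fake`, `core "k8s.io/client-go/testing"`, and `arkv1 "github.com/heptio/ark/pkg/apis/ark/v1"`):

```go
client := fake.NewSimpleClientset()

// Intercept gets for volumesnapshotlocations and return a canned object.
client.PrependReactor("get", "volumesnapshotlocations",
	func(action core.Action) (bool, runtime.Object, error) {
		loc := &arkv1.VolumeSnapshotLocation{
			ObjectMeta: metav1.ObjectMeta{Namespace: "heptio-ark", Name: "vsl-1"},
		}
		return true, loc, nil
	})

loc, err := client.ArkV1().VolumeSnapshotLocations("heptio-ark").Get("vsl-1", metav1.GetOptions{})
```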
diff --git a/pkg/generated/clientset/versioned/typed/ark/v1/generated_expansion.go b/pkg/generated/clientset/versioned/typed/ark/v1/generated_expansion.go
index 4181aa73a6..47768e6b02 100644
--- a/pkg/generated/clientset/versioned/typed/ark/v1/generated_expansion.go
+++ b/pkg/generated/clientset/versioned/typed/ark/v1/generated_expansion.go
@@ -37,3 +37,5 @@ type ResticRepositoryExpansion interface{}
type RestoreExpansion interface{}
type ScheduleExpansion interface{}
+
+type VolumeSnapshotLocationExpansion interface{}
diff --git a/pkg/generated/clientset/versioned/typed/ark/v1/volumesnapshotlocation.go b/pkg/generated/clientset/versioned/typed/ark/v1/volumesnapshotlocation.go
new file mode 100644
index 0000000000..7a285ecaa7
--- /dev/null
+++ b/pkg/generated/clientset/versioned/typed/ark/v1/volumesnapshotlocation.go
@@ -0,0 +1,174 @@
+/*
+Copyright 2018 the Heptio Ark contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Code generated by client-gen. DO NOT EDIT.
+
+package v1
+
+import (
+ v1 "github.com/heptio/ark/pkg/apis/ark/v1"
+ scheme "github.com/heptio/ark/pkg/generated/clientset/versioned/scheme"
+ meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ types "k8s.io/apimachinery/pkg/types"
+ watch "k8s.io/apimachinery/pkg/watch"
+ rest "k8s.io/client-go/rest"
+)
+
+// VolumeSnapshotLocationsGetter has a method to return a VolumeSnapshotLocationInterface.
+// A group's client should implement this interface.
+type VolumeSnapshotLocationsGetter interface {
+ VolumeSnapshotLocations(namespace string) VolumeSnapshotLocationInterface
+}
+
+// VolumeSnapshotLocationInterface has methods to work with VolumeSnapshotLocation resources.
+type VolumeSnapshotLocationInterface interface {
+ Create(*v1.VolumeSnapshotLocation) (*v1.VolumeSnapshotLocation, error)
+ Update(*v1.VolumeSnapshotLocation) (*v1.VolumeSnapshotLocation, error)
+ UpdateStatus(*v1.VolumeSnapshotLocation) (*v1.VolumeSnapshotLocation, error)
+ Delete(name string, options *meta_v1.DeleteOptions) error
+ DeleteCollection(options *meta_v1.DeleteOptions, listOptions meta_v1.ListOptions) error
+ Get(name string, options meta_v1.GetOptions) (*v1.VolumeSnapshotLocation, error)
+ List(opts meta_v1.ListOptions) (*v1.VolumeSnapshotLocationList, error)
+ Watch(opts meta_v1.ListOptions) (watch.Interface, error)
+ Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1.VolumeSnapshotLocation, err error)
+ VolumeSnapshotLocationExpansion
+}
+
+// volumeSnapshotLocations implements VolumeSnapshotLocationInterface
+type volumeSnapshotLocations struct {
+ client rest.Interface
+ ns string
+}
+
+// newVolumeSnapshotLocations returns a VolumeSnapshotLocations
+func newVolumeSnapshotLocations(c *ArkV1Client, namespace string) *volumeSnapshotLocations {
+ return &volumeSnapshotLocations{
+ client: c.RESTClient(),
+ ns: namespace,
+ }
+}
+
+// Get takes name of the volumeSnapshotLocation, and returns the corresponding volumeSnapshotLocation object, and an error if there is any.
+func (c *volumeSnapshotLocations) Get(name string, options meta_v1.GetOptions) (result *v1.VolumeSnapshotLocation, err error) {
+ result = &v1.VolumeSnapshotLocation{}
+ err = c.client.Get().
+ Namespace(c.ns).
+ Resource("volumesnapshotlocations").
+ Name(name).
+ VersionedParams(&options, scheme.ParameterCodec).
+ Do().
+ Into(result)
+ return
+}
+
+// List takes label and field selectors, and returns the list of VolumeSnapshotLocations that match those selectors.
+func (c *volumeSnapshotLocations) List(opts meta_v1.ListOptions) (result *v1.VolumeSnapshotLocationList, err error) {
+ result = &v1.VolumeSnapshotLocationList{}
+ err = c.client.Get().
+ Namespace(c.ns).
+ Resource("volumesnapshotlocations").
+ VersionedParams(&opts, scheme.ParameterCodec).
+ Do().
+ Into(result)
+ return
+}
+
+// Watch returns a watch.Interface that watches the requested volumeSnapshotLocations.
+func (c *volumeSnapshotLocations) Watch(opts meta_v1.ListOptions) (watch.Interface, error) {
+ opts.Watch = true
+ return c.client.Get().
+ Namespace(c.ns).
+ Resource("volumesnapshotlocations").
+ VersionedParams(&opts, scheme.ParameterCodec).
+ Watch()
+}
+
+// Create takes the representation of a volumeSnapshotLocation and creates it. Returns the server's representation of the volumeSnapshotLocation, and an error, if there is any.
+func (c *volumeSnapshotLocations) Create(volumeSnapshotLocation *v1.VolumeSnapshotLocation) (result *v1.VolumeSnapshotLocation, err error) {
+ result = &v1.VolumeSnapshotLocation{}
+ err = c.client.Post().
+ Namespace(c.ns).
+ Resource("volumesnapshotlocations").
+ Body(volumeSnapshotLocation).
+ Do().
+ Into(result)
+ return
+}
+
+// Update takes the representation of a volumeSnapshotLocation and updates it. Returns the server's representation of the volumeSnapshotLocation, and an error, if there is any.
+func (c *volumeSnapshotLocations) Update(volumeSnapshotLocation *v1.VolumeSnapshotLocation) (result *v1.VolumeSnapshotLocation, err error) {
+ result = &v1.VolumeSnapshotLocation{}
+ err = c.client.Put().
+ Namespace(c.ns).
+ Resource("volumesnapshotlocations").
+ Name(volumeSnapshotLocation.Name).
+ Body(volumeSnapshotLocation).
+ Do().
+ Into(result)
+ return
+}
+
+// UpdateStatus was generated because the type contains a Status member.
+// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus().
+
+func (c *volumeSnapshotLocations) UpdateStatus(volumeSnapshotLocation *v1.VolumeSnapshotLocation) (result *v1.VolumeSnapshotLocation, err error) {
+ result = &v1.VolumeSnapshotLocation{}
+ err = c.client.Put().
+ Namespace(c.ns).
+ Resource("volumesnapshotlocations").
+ Name(volumeSnapshotLocation.Name).
+ SubResource("status").
+ Body(volumeSnapshotLocation).
+ Do().
+ Into(result)
+ return
+}
+
+// Delete takes name of the volumeSnapshotLocation and deletes it. Returns an error if one occurs.
+func (c *volumeSnapshotLocations) Delete(name string, options *meta_v1.DeleteOptions) error {
+ return c.client.Delete().
+ Namespace(c.ns).
+ Resource("volumesnapshotlocations").
+ Name(name).
+ Body(options).
+ Do().
+ Error()
+}
+
+// DeleteCollection deletes a collection of objects.
+func (c *volumeSnapshotLocations) DeleteCollection(options *meta_v1.DeleteOptions, listOptions meta_v1.ListOptions) error {
+ return c.client.Delete().
+ Namespace(c.ns).
+ Resource("volumesnapshotlocations").
+ VersionedParams(&listOptions, scheme.ParameterCodec).
+ Body(options).
+ Do().
+ Error()
+}
+
+// Patch applies the patch and returns the patched volumeSnapshotLocation.
+func (c *volumeSnapshotLocations) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1.VolumeSnapshotLocation, err error) {
+ result = &v1.VolumeSnapshotLocation{}
+ err = c.client.Patch(pt).
+ Namespace(c.ns).
+ Resource("volumesnapshotlocations").
+ SubResource(subresources...).
+ Name(name).
+ Body(data).
+ Do().
+ Into(result)
+ return
+}
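For orientation, a hedged end-to-end sketch of creating a `VolumeSnapshotLocation` through this typed client (the kubeconfig path, names, and error handling are illustrative, not prescribed by the patch):

```go
package main

import (
	arkv1 "github.com/heptio/ark/pkg/apis/ark/v1"
	versioned "github.com/heptio/ark/pkg/generated/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := versioned.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	vsl := &arkv1.VolumeSnapshotLocation{
		ObjectMeta: metav1.ObjectMeta{Namespace: "heptio-ark", Name: "vsl-1"},
		Spec:       arkv1.VolumeSnapshotLocationSpec{Provider: "provider-1"},
	}
	if _, err := client.ArkV1().VolumeSnapshotLocations(vsl.Namespace).Create(vsl); err != nil {
		panic(err)
	}
}
```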
diff --git a/pkg/generated/informers/externalversions/ark/v1/interface.go b/pkg/generated/informers/externalversions/ark/v1/interface.go
index 3b253965c9..bdeb84abcc 100644
--- a/pkg/generated/informers/externalversions/ark/v1/interface.go
+++ b/pkg/generated/informers/externalversions/ark/v1/interface.go
@@ -44,6 +44,8 @@ type Interface interface {
Restores() RestoreInformer
// Schedules returns a ScheduleInformer.
Schedules() ScheduleInformer
+ // VolumeSnapshotLocations returns a VolumeSnapshotLocationInformer.
+ VolumeSnapshotLocations() VolumeSnapshotLocationInformer
}
type version struct {
@@ -106,3 +108,8 @@ func (v *version) Restores() RestoreInformer {
func (v *version) Schedules() ScheduleInformer {
return &scheduleInformer{factory: v.factory, namespace: v.namespace, tweakListOptions: v.tweakListOptions}
}
+
+// VolumeSnapshotLocations returns a VolumeSnapshotLocationInformer.
+func (v *version) VolumeSnapshotLocations() VolumeSnapshotLocationInformer {
+ return &volumeSnapshotLocationInformer{factory: v.factory, namespace: v.namespace, tweakListOptions: v.tweakListOptions}
+}
diff --git a/pkg/generated/informers/externalversions/ark/v1/volumesnapshotlocation.go b/pkg/generated/informers/externalversions/ark/v1/volumesnapshotlocation.go
new file mode 100644
index 0000000000..71d88b766d
--- /dev/null
+++ b/pkg/generated/informers/externalversions/ark/v1/volumesnapshotlocation.go
@@ -0,0 +1,89 @@
+/*
+Copyright 2018 the Heptio Ark contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Code generated by informer-gen. DO NOT EDIT.
+
+package v1
+
+import (
+ time "time"
+
+ ark_v1 "github.com/heptio/ark/pkg/apis/ark/v1"
+ versioned "github.com/heptio/ark/pkg/generated/clientset/versioned"
+ internalinterfaces "github.com/heptio/ark/pkg/generated/informers/externalversions/internalinterfaces"
+ v1 "github.com/heptio/ark/pkg/generated/listers/ark/v1"
+ meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ runtime "k8s.io/apimachinery/pkg/runtime"
+ watch "k8s.io/apimachinery/pkg/watch"
+ cache "k8s.io/client-go/tools/cache"
+)
+
+// VolumeSnapshotLocationInformer provides access to a shared informer and lister for
+// VolumeSnapshotLocations.
+type VolumeSnapshotLocationInformer interface {
+ Informer() cache.SharedIndexInformer
+ Lister() v1.VolumeSnapshotLocationLister
+}
+
+type volumeSnapshotLocationInformer struct {
+ factory internalinterfaces.SharedInformerFactory
+ tweakListOptions internalinterfaces.TweakListOptionsFunc
+ namespace string
+}
+
+// NewVolumeSnapshotLocationInformer constructs a new informer for VolumeSnapshotLocation type.
+// Always prefer using an informer factory to get a shared informer instead of getting an independent
+// one. This reduces memory footprint and number of connections to the server.
+func NewVolumeSnapshotLocationInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers) cache.SharedIndexInformer {
+ return NewFilteredVolumeSnapshotLocationInformer(client, namespace, resyncPeriod, indexers, nil)
+}
+
+// NewFilteredVolumeSnapshotLocationInformer constructs a new informer for VolumeSnapshotLocation type.
+// Always prefer using an informer factory to get a shared informer instead of getting an independent
+// one. This reduces memory footprint and number of connections to the server.
+func NewFilteredVolumeSnapshotLocationInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer {
+ return cache.NewSharedIndexInformer(
+ &cache.ListWatch{
+ ListFunc: func(options meta_v1.ListOptions) (runtime.Object, error) {
+ if tweakListOptions != nil {
+ tweakListOptions(&options)
+ }
+ return client.ArkV1().VolumeSnapshotLocations(namespace).List(options)
+ },
+ WatchFunc: func(options meta_v1.ListOptions) (watch.Interface, error) {
+ if tweakListOptions != nil {
+ tweakListOptions(&options)
+ }
+ return client.ArkV1().VolumeSnapshotLocations(namespace).Watch(options)
+ },
+ },
+ &ark_v1.VolumeSnapshotLocation{},
+ resyncPeriod,
+ indexers,
+ )
+}
+
+func (f *volumeSnapshotLocationInformer) defaultInformer(client versioned.Interface, resyncPeriod time.Duration) cache.SharedIndexInformer {
+ return NewFilteredVolumeSnapshotLocationInformer(client, f.namespace, resyncPeriod, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, f.tweakListOptions)
+}
+
+func (f *volumeSnapshotLocationInformer) Informer() cache.SharedIndexInformer {
+ return f.factory.InformerFor(&ark_v1.VolumeSnapshotLocation{}, f.defaultInformer)
+}
+
+func (f *volumeSnapshotLocationInformer) Lister() v1.VolumeSnapshotLocationLister {
+ return v1.NewVolumeSnapshotLocationLister(f.Informer().GetIndexer())
+}
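As the generated comments recommend, consumers should obtain this informer from the shared factory rather than constructing one directly. A hedged sketch of the factory path (the wrapper function is illustrative; note the informer must be registered via `Informer()` before `Start` so the factory actually runs it):

```go
package example

import (
	versioned "github.com/heptio/ark/pkg/generated/clientset/versioned"
	informers "github.com/heptio/ark/pkg/generated/informers/externalversions"
	arkv1listers "github.com/heptio/ark/pkg/generated/listers/ark/v1"
	"k8s.io/client-go/tools/cache"
)

// newVSLLister starts the shared factory and returns a synced lister.
func newVSLLister(client versioned.Interface, stopCh <-chan struct{}) arkv1listers.VolumeSnapshotLocationLister {
	factory := informers.NewSharedInformerFactory(client, 0)
	vsl := factory.Ark().V1().VolumeSnapshotLocations()
	informer := vsl.Informer() // register with the factory before Start

	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, informer.HasSynced)

	return vsl.Lister()
}
```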
diff --git a/pkg/generated/informers/externalversions/generic.go b/pkg/generated/informers/externalversions/generic.go
index bea5129160..3973bf1d80 100644
--- a/pkg/generated/informers/externalversions/generic.go
+++ b/pkg/generated/informers/externalversions/generic.go
@@ -73,6 +73,8 @@ func (f *sharedInformerFactory) ForResource(resource schema.GroupVersionResource
return &genericInformer{resource: resource.GroupResource(), informer: f.Ark().V1().Restores().Informer()}, nil
case v1.SchemeGroupVersion.WithResource("schedules"):
return &genericInformer{resource: resource.GroupResource(), informer: f.Ark().V1().Schedules().Informer()}, nil
+ case v1.SchemeGroupVersion.WithResource("volumesnapshotlocations"):
+ return &genericInformer{resource: resource.GroupResource(), informer: f.Ark().V1().VolumeSnapshotLocations().Informer()}, nil
}
diff --git a/pkg/generated/listers/ark/v1/expansion_generated.go b/pkg/generated/listers/ark/v1/expansion_generated.go
index 2a1d153e7d..7daa6b8e96 100644
--- a/pkg/generated/listers/ark/v1/expansion_generated.go
+++ b/pkg/generated/listers/ark/v1/expansion_generated.go
@@ -97,3 +97,11 @@ type ScheduleListerExpansion interface{}
// ScheduleNamespaceListerExpansion allows custom methods to be added to
// ScheduleNamespaceLister.
type ScheduleNamespaceListerExpansion interface{}
+
+// VolumeSnapshotLocationListerExpansion allows custom methods to be added to
+// VolumeSnapshotLocationLister.
+type VolumeSnapshotLocationListerExpansion interface{}
+
+// VolumeSnapshotLocationNamespaceListerExpansion allows custom methods to be added to
+// VolumeSnapshotLocationNamespaceLister.
+type VolumeSnapshotLocationNamespaceListerExpansion interface{}
diff --git a/pkg/generated/listers/ark/v1/volumesnapshotlocation.go b/pkg/generated/listers/ark/v1/volumesnapshotlocation.go
new file mode 100644
index 0000000000..95084c8d92
--- /dev/null
+++ b/pkg/generated/listers/ark/v1/volumesnapshotlocation.go
@@ -0,0 +1,94 @@
+/*
+Copyright 2018 the Heptio Ark contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Code generated by lister-gen. DO NOT EDIT.
+
+package v1
+
+import (
+ v1 "github.com/heptio/ark/pkg/apis/ark/v1"
+ "k8s.io/apimachinery/pkg/api/errors"
+ "k8s.io/apimachinery/pkg/labels"
+ "k8s.io/client-go/tools/cache"
+)
+
+// VolumeSnapshotLocationLister helps list VolumeSnapshotLocations.
+type VolumeSnapshotLocationLister interface {
+ // List lists all VolumeSnapshotLocations in the indexer.
+ List(selector labels.Selector) (ret []*v1.VolumeSnapshotLocation, err error)
+ // VolumeSnapshotLocations returns an object that can list and get VolumeSnapshotLocations.
+ VolumeSnapshotLocations(namespace string) VolumeSnapshotLocationNamespaceLister
+ VolumeSnapshotLocationListerExpansion
+}
+
+// volumeSnapshotLocationLister implements the VolumeSnapshotLocationLister interface.
+type volumeSnapshotLocationLister struct {
+ indexer cache.Indexer
+}
+
+// NewVolumeSnapshotLocationLister returns a new VolumeSnapshotLocationLister.
+func NewVolumeSnapshotLocationLister(indexer cache.Indexer) VolumeSnapshotLocationLister {
+ return &volumeSnapshotLocationLister{indexer: indexer}
+}
+
+// List lists all VolumeSnapshotLocations in the indexer.
+func (s *volumeSnapshotLocationLister) List(selector labels.Selector) (ret []*v1.VolumeSnapshotLocation, err error) {
+ err = cache.ListAll(s.indexer, selector, func(m interface{}) {
+ ret = append(ret, m.(*v1.VolumeSnapshotLocation))
+ })
+ return ret, err
+}
+
+// VolumeSnapshotLocations returns an object that can list and get VolumeSnapshotLocations.
+func (s *volumeSnapshotLocationLister) VolumeSnapshotLocations(namespace string) VolumeSnapshotLocationNamespaceLister {
+ return volumeSnapshotLocationNamespaceLister{indexer: s.indexer, namespace: namespace}
+}
+
+// VolumeSnapshotLocationNamespaceLister helps list and get VolumeSnapshotLocations.
+type VolumeSnapshotLocationNamespaceLister interface {
+ // List lists all VolumeSnapshotLocations in the indexer for a given namespace.
+ List(selector labels.Selector) (ret []*v1.VolumeSnapshotLocation, err error)
+ // Get retrieves the VolumeSnapshotLocation from the indexer for a given namespace and name.
+ Get(name string) (*v1.VolumeSnapshotLocation, error)
+ VolumeSnapshotLocationNamespaceListerExpansion
+}
+
+// volumeSnapshotLocationNamespaceLister implements the VolumeSnapshotLocationNamespaceLister
+// interface.
+type volumeSnapshotLocationNamespaceLister struct {
+ indexer cache.Indexer
+ namespace string
+}
+
+// List lists all VolumeSnapshotLocations in the indexer for a given namespace.
+func (s volumeSnapshotLocationNamespaceLister) List(selector labels.Selector) (ret []*v1.VolumeSnapshotLocation, err error) {
+ err = cache.ListAllByNamespace(s.indexer, s.namespace, selector, func(m interface{}) {
+ ret = append(ret, m.(*v1.VolumeSnapshotLocation))
+ })
+ return ret, err
+}
+
+// Get retrieves the VolumeSnapshotLocation from the indexer for a given namespace and name.
+func (s volumeSnapshotLocationNamespaceLister) Get(name string) (*v1.VolumeSnapshotLocation, error) {
+ obj, exists, err := s.indexer.GetByKey(s.namespace + "/" + name)
+ if err != nil {
+ return nil, err
+ }
+ if !exists {
+ return nil, errors.NewNotFound(v1.Resource("volumesnapshotlocation"), name)
+ }
+ return obj.(*v1.VolumeSnapshotLocation), nil
+}
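One behavioral detail: the namespaced `Get` above returns a standard Kubernetes `NotFound` API error when the cache has no such object, so callers can branch with `apierrors.IsNotFound` (from `k8s.io/apimachinery/pkg/api/errors`, the same package the lister uses to construct the error). A brief sketch with illustrative names:

```go
loc, err := lister.VolumeSnapshotLocations("heptio-ark").Get("vsl-1")
switch {
case apierrors.IsNotFound(err):
	// no such VolumeSnapshotLocation in the informer cache
case err != nil:
	// some other indexer failure
default:
	_ = loc.Spec.Provider
}
```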
diff --git a/pkg/persistence/mocks/backup_store.go b/pkg/persistence/mocks/backup_store.go
index 5407f97416..315bbe5190 100644
--- a/pkg/persistence/mocks/backup_store.go
+++ b/pkg/persistence/mocks/backup_store.go
@@ -5,6 +5,7 @@ import io "io"
import mock "github.com/stretchr/testify/mock"
import v1 "github.com/heptio/ark/pkg/apis/ark/v1"
+import volume "github.com/heptio/ark/pkg/volume"
// BackupStore is an autogenerated mock type for the BackupStore type
type BackupStore struct {
@@ -85,6 +86,29 @@ func (_m *BackupStore) GetBackupMetadata(name string) (*v1.Backup, error) {
return r0, r1
}
+// GetBackupVolumeSnapshots provides a mock function with given fields: name
+func (_m *BackupStore) GetBackupVolumeSnapshots(name string) ([]*volume.Snapshot, error) {
+ ret := _m.Called(name)
+
+ var r0 []*volume.Snapshot
+ if rf, ok := ret.Get(0).(func(string) []*volume.Snapshot); ok {
+ r0 = rf(name)
+ } else {
+ if ret.Get(0) != nil {
+ r0 = ret.Get(0).([]*volume.Snapshot)
+ }
+ }
+
+ var r1 error
+ if rf, ok := ret.Get(1).(func(string) error); ok {
+ r1 = rf(name)
+ } else {
+ r1 = ret.Error(1)
+ }
+
+ return r0, r1
+}
+
// GetDownloadURL provides a mock function with given fields: target
func (_m *BackupStore) GetDownloadURL(target v1.DownloadTarget) (string, error) {
ret := _m.Called(target)
@@ -164,13 +188,13 @@ func (_m *BackupStore) ListBackups() ([]string, error) {
return r0, r1
}
-// PutBackup provides a mock function with given fields: name, metadata, contents, log
-func (_m *BackupStore) PutBackup(name string, metadata io.Reader, contents io.Reader, log io.Reader) error {
- ret := _m.Called(name, metadata, contents, log)
+// PutBackup provides a mock function with given fields: name, metadata, contents, log, volumeSnapshots
+func (_m *BackupStore) PutBackup(name string, metadata io.Reader, contents io.Reader, log io.Reader, volumeSnapshots io.Reader) error {
+ ret := _m.Called(name, metadata, contents, log, volumeSnapshots)
var r0 error
- if rf, ok := ret.Get(0).(func(string, io.Reader, io.Reader, io.Reader) error); ok {
- r0 = rf(name, metadata, contents, log)
+ if rf, ok := ret.Get(0).(func(string, io.Reader, io.Reader, io.Reader, io.Reader) error); ok {
+ r0 = rf(name, metadata, contents, log, volumeSnapshots)
} else {
r0 = ret.Error(0)
}
diff --git a/pkg/persistence/object_store.go b/pkg/persistence/object_store.go
index b2793cc460..050b498f18 100644
--- a/pkg/persistence/object_store.go
+++ b/pkg/persistence/object_store.go
@@ -17,6 +17,7 @@ limitations under the License.
package persistence
import (
+ "encoding/json"
"io"
"io/ioutil"
"strings"
@@ -31,6 +32,7 @@ import (
arkv1api "github.com/heptio/ark/pkg/apis/ark/v1"
"github.com/heptio/ark/pkg/cloudprovider"
"github.com/heptio/ark/pkg/generated/clientset/versioned/scheme"
+ "github.com/heptio/ark/pkg/volume"
)
// BackupStore defines operations for creating, retrieving, and deleting
@@ -41,8 +43,9 @@ type BackupStore interface {
ListBackups() ([]string, error)
- PutBackup(name string, metadata, contents, log io.Reader) error
+ PutBackup(name string, metadata, contents, log, volumeSnapshots io.Reader) error
GetBackupMetadata(name string) (*arkv1api.Backup, error)
+ GetBackupVolumeSnapshots(name string) ([]*volume.Snapshot, error)
GetBackupContents(name string) (io.ReadCloser, error)
DeleteBackup(name string) error
@@ -159,7 +162,7 @@ func (s *objectBackupStore) ListBackups() ([]string, error) {
return output, nil
}
-func (s *objectBackupStore) PutBackup(name string, metadata io.Reader, contents io.Reader, log io.Reader) error {
+func (s *objectBackupStore) PutBackup(name string, metadata, contents, log, volumeSnapshots io.Reader) error {
if err := seekAndPutObject(s.objectStore, s.bucket, s.layout.getBackupLogKey(name), log); err != nil {
// Uploading the log file is best-effort; if it fails, we log the error but it doesn't impact the
// backup's status.
@@ -183,6 +186,18 @@ func (s *objectBackupStore) PutBackup(name string, metadata io.Reader, contents
return kerrors.NewAggregate([]error{err, deleteErr})
}
+ if err := seekAndPutObject(s.objectStore, s.bucket, s.layout.getBackupVolumeSnapshotsKey(name), volumeSnapshots); err != nil {
+ errs := []error{err}
+
+ deleteErr := s.objectStore.DeleteObject(s.bucket, s.layout.getBackupContentsKey(name))
+ errs = append(errs, deleteErr)
+
+ deleteErr = s.objectStore.DeleteObject(s.bucket, s.layout.getBackupMetadataKey(name))
+ errs = append(errs, deleteErr)
+
+ return kerrors.NewAggregate(errs)
+ }
+
if err := s.putRevision(); err != nil {
s.logger.WithField("backup", name).WithError(err).Warn("Error updating backup store revision")
}
@@ -216,7 +231,23 @@ func (s *objectBackupStore) GetBackupMetadata(name string) (*arkv1api.Backup, er
}
return backupObj, nil
+}
+
+func (s *objectBackupStore) GetBackupVolumeSnapshots(name string) ([]*volume.Snapshot, error) {
+ key := s.layout.getBackupVolumeSnapshotsKey(name)
+
+ res, err := s.objectStore.GetObject(s.bucket, key)
+ if err != nil {
+ return nil, err
+ }
+ defer res.Close()
+
+ var volumeSnapshots []*volume.Snapshot
+ if err := json.NewDecoder(res).Decode(&volumeSnapshots); err != nil {
+ return nil, errors.Wrap(err, "error decoding object data")
+ }
+ return volumeSnapshots, nil
}
func (s *objectBackupStore) GetBackupContents(name string) (io.ReadCloser, error) {
diff --git a/pkg/persistence/object_store_layout.go b/pkg/persistence/object_store_layout.go
index 0454b4333a..6d95d7c29a 100644
--- a/pkg/persistence/object_store_layout.go
+++ b/pkg/persistence/object_store_layout.go
@@ -83,6 +83,10 @@ func (l *ObjectStoreLayout) getBackupLogKey(backup string) string {
return path.Join(l.subdirs["backups"], backup, fmt.Sprintf("%s-logs.gz", backup))
}
+func (l *ObjectStoreLayout) getBackupVolumeSnapshotsKey(backup string) string {
+ return path.Join(l.subdirs["backups"], backup, fmt.Sprintf("%s-volumesnapshots.json.gz", backup))
+}
+
func (l *ObjectStoreLayout) getRestoreLogKey(restore string) string {
return path.Join(l.subdirs["restores"], restore, fmt.Sprintf("restore-%s-logs.gz", restore))
}
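Concretely, for a backup named `backup-1` stored under a location prefix `prefix-1/`, the per-backup objects now land at the following keys (these exact values are asserted by `TestPutBackup` below):

```
prefix-1/backups/backup-1/ark-backup.json
prefix-1/backups/backup-1/backup-1.tar.gz
prefix-1/backups/backup-1/backup-1-logs.gz
prefix-1/backups/backup-1/backup-1-volumesnapshots.json.gz
prefix-1/metadata/revision
```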
diff --git a/pkg/persistence/object_store_test.go b/pkg/persistence/object_store_test.go
index 24489ea02f..bd32f12fd1 100644
--- a/pkg/persistence/object_store_test.go
+++ b/pkg/persistence/object_store_test.go
@@ -211,31 +211,47 @@ func TestPutBackup(t *testing.T) {
metadata io.Reader
contents io.Reader
log io.Reader
+ snapshots io.Reader
expectedErr string
expectedKeys []string
}{
{
- name: "normal case",
- metadata: newStringReadSeeker("metadata"),
- contents: newStringReadSeeker("contents"),
- log: newStringReadSeeker("log"),
- expectedErr: "",
- expectedKeys: []string{"backups/backup-1/ark-backup.json", "backups/backup-1/backup-1.tar.gz", "backups/backup-1/backup-1-logs.gz", "metadata/revision"},
+ name: "normal case",
+ metadata: newStringReadSeeker("metadata"),
+ contents: newStringReadSeeker("contents"),
+ log: newStringReadSeeker("log"),
+ snapshots: newStringReadSeeker("snapshots"),
+ expectedErr: "",
+ expectedKeys: []string{
+ "backups/backup-1/ark-backup.json",
+ "backups/backup-1/backup-1.tar.gz",
+ "backups/backup-1/backup-1-logs.gz",
+ "backups/backup-1/backup-1-volumesnapshots.json.gz",
+ "metadata/revision",
+ },
},
{
- name: "normal case with backup store prefix",
- prefix: "prefix-1/",
- metadata: newStringReadSeeker("metadata"),
- contents: newStringReadSeeker("contents"),
- log: newStringReadSeeker("log"),
- expectedErr: "",
- expectedKeys: []string{"prefix-1/backups/backup-1/ark-backup.json", "prefix-1/backups/backup-1/backup-1.tar.gz", "prefix-1/backups/backup-1/backup-1-logs.gz", "prefix-1/metadata/revision"},
+ name: "normal case with backup store prefix",
+ prefix: "prefix-1/",
+ metadata: newStringReadSeeker("metadata"),
+ contents: newStringReadSeeker("contents"),
+ log: newStringReadSeeker("log"),
+ snapshots: newStringReadSeeker("snapshots"),
+ expectedErr: "",
+ expectedKeys: []string{
+ "prefix-1/backups/backup-1/ark-backup.json",
+ "prefix-1/backups/backup-1/backup-1.tar.gz",
+ "prefix-1/backups/backup-1/backup-1-logs.gz",
+ "prefix-1/backups/backup-1/backup-1-volumesnapshots.json.gz",
+ "prefix-1/metadata/revision",
+ },
},
{
name: "error on metadata upload does not upload data",
metadata: new(errorReader),
contents: newStringReadSeeker("contents"),
log: newStringReadSeeker("log"),
+ snapshots: newStringReadSeeker("snapshots"),
expectedErr: "error readers return errors",
expectedKeys: []string{"backups/backup-1/backup-1-logs.gz"},
},
@@ -244,22 +260,30 @@ func TestPutBackup(t *testing.T) {
metadata: newStringReadSeeker("metadata"),
contents: new(errorReader),
log: newStringReadSeeker("log"),
+ snapshots: newStringReadSeeker("snapshots"),
expectedErr: "error readers return errors",
expectedKeys: []string{"backups/backup-1/backup-1-logs.gz"},
},
{
- name: "error on log upload is ok",
- metadata: newStringReadSeeker("foo"),
- contents: newStringReadSeeker("bar"),
- log: new(errorReader),
- expectedErr: "",
- expectedKeys: []string{"backups/backup-1/ark-backup.json", "backups/backup-1/backup-1.tar.gz", "metadata/revision"},
+ name: "error on log upload is ok",
+ metadata: newStringReadSeeker("foo"),
+ contents: newStringReadSeeker("bar"),
+ log: new(errorReader),
+ snapshots: newStringReadSeeker("snapshots"),
+ expectedErr: "",
+ expectedKeys: []string{
+ "backups/backup-1/ark-backup.json",
+ "backups/backup-1/backup-1.tar.gz",
+ "backups/backup-1/backup-1-volumesnapshots.json.gz",
+ "metadata/revision",
+ },
},
{
name: "don't upload data when metadata is nil",
metadata: nil,
contents: newStringReadSeeker("contents"),
log: newStringReadSeeker("log"),
+ snapshots: newStringReadSeeker("snapshots"),
expectedErr: "",
expectedKeys: []string{"backups/backup-1/backup-1-logs.gz"},
},
@@ -269,7 +293,7 @@ func TestPutBackup(t *testing.T) {
t.Run(tc.name, func(t *testing.T) {
harness := newObjectBackupStoreTestHarness("foo", tc.prefix)
- err := harness.PutBackup("backup-1", tc.metadata, tc.contents, tc.log)
+ err := harness.PutBackup("backup-1", tc.metadata, tc.contents, tc.log, tc.snapshots)
arktest.AssertErrorMatches(t, tc.expectedErr, err)
assert.Len(t, harness.objectStore.Data[harness.bucket], len(tc.expectedKeys))
diff --git a/pkg/restore/restore.go b/pkg/restore/restore.go
index 39e452ef9b..0e39b51947 100644
--- a/pkg/restore/restore.go
+++ b/pkg/restore/restore.go
@@ -50,7 +50,7 @@ import (
"github.com/heptio/ark/pkg/client"
"github.com/heptio/ark/pkg/cloudprovider"
"github.com/heptio/ark/pkg/discovery"
- arkv1client "github.com/heptio/ark/pkg/generated/clientset/versioned/typed/ark/v1"
+ listers "github.com/heptio/ark/pkg/generated/listers/ark/v1"
"github.com/heptio/ark/pkg/kuberesource"
"github.com/heptio/ark/pkg/restic"
"github.com/heptio/ark/pkg/util/boolptr"
@@ -58,12 +58,25 @@ import (
"github.com/heptio/ark/pkg/util/filesystem"
"github.com/heptio/ark/pkg/util/kube"
arksync "github.com/heptio/ark/pkg/util/sync"
+ "github.com/heptio/ark/pkg/volume"
)
+type BlockStoreGetter interface {
+ GetBlockStore(name string) (cloudprovider.BlockStore, error)
+}
+
// Restorer knows how to restore a backup.
type Restorer interface {
// Restore restores the backup data from backupReader, returning warnings and errors.
- Restore(log logrus.FieldLogger, restore *api.Restore, backup *api.Backup, backupReader io.Reader, actions []ItemAction) (api.RestoreResult, api.RestoreResult)
+ Restore(log logrus.FieldLogger,
+ restore *api.Restore,
+ backup *api.Backup,
+ volumeSnapshots []*volume.Snapshot,
+ backupReader io.Reader,
+ actions []ItemAction,
+ snapshotLocationLister listers.VolumeSnapshotLocationLister,
+ blockStoreGetter BlockStoreGetter,
+ ) (api.RestoreResult, api.RestoreResult)
}
type gvString string
@@ -73,8 +86,6 @@ type kindString string
type kubernetesRestorer struct {
discoveryHelper discovery.Helper
dynamicFactory client.DynamicFactory
- blockStore cloudprovider.BlockStore
- backupClient arkv1client.BackupsGetter
namespaceClient corev1.NamespaceInterface
resticRestorerFactory restic.RestorerFactory
resticTimeout time.Duration
@@ -145,9 +156,7 @@ func prioritizeResources(helper discovery.Helper, priorities []string, includedR
func NewKubernetesRestorer(
discoveryHelper discovery.Helper,
dynamicFactory client.DynamicFactory,
- blockStore cloudprovider.BlockStore,
resourcePriorities []string,
- backupClient arkv1client.BackupsGetter,
namespaceClient corev1.NamespaceInterface,
resticRestorerFactory restic.RestorerFactory,
resticTimeout time.Duration,
@@ -156,22 +165,29 @@ func NewKubernetesRestorer(
return &kubernetesRestorer{
discoveryHelper: discoveryHelper,
dynamicFactory: dynamicFactory,
- blockStore: blockStore,
- backupClient: backupClient,
namespaceClient: namespaceClient,
resticRestorerFactory: resticRestorerFactory,
resticTimeout: resticTimeout,
resourcePriorities: resourcePriorities,
logger: logger,
-
- fileSystem: filesystem.NewFileSystem(),
+ fileSystem: filesystem.NewFileSystem(),
}, nil
}
// Restore executes a restore into the target Kubernetes cluster according to the restore spec
// and using data from the provided backup/backup reader. Returns a warnings and errors RestoreResult,
// respectively, summarizing info about the restore.
-func (kr *kubernetesRestorer) Restore(log logrus.FieldLogger, restore *api.Restore, backup *api.Backup, backupReader io.Reader, actions []ItemAction) (api.RestoreResult, api.RestoreResult) {
+func (kr *kubernetesRestorer) Restore(
+ log logrus.FieldLogger,
+ restore *api.Restore,
+ backup *api.Backup,
+ volumeSnapshots []*volume.Snapshot,
+ backupReader io.Reader,
+ actions []ItemAction,
+ snapshotLocationLister listers.VolumeSnapshotLocationLister,
+ blockStoreGetter BlockStoreGetter,
+) (api.RestoreResult, api.RestoreResult) {
+
// metav1.LabelSelectorAsSelector converts a nil LabelSelector to a
// Nothing Selector, i.e. a selector that matches nothing. We want
// a selector that matches everything. This can be accomplished by
@@ -220,11 +236,14 @@ func (kr *kubernetesRestorer) Restore(log logrus.FieldLogger, restore *api.Resto
}
pvRestorer := &pvRestorer{
- logger: log,
- snapshotVolumes: backup.Spec.SnapshotVolumes,
- restorePVs: restore.Spec.RestorePVs,
- volumeBackups: backup.Status.VolumeBackups,
- blockStore: kr.blockStore,
+ logger: log,
+ backupName: restore.Spec.BackupName,
+ backupNamespace: backup.Namespace,
+ snapshotVolumes: backup.Spec.SnapshotVolumes,
+ restorePVs: restore.Spec.RestorePVs,
+ volumeSnapshots: volumeSnapshots,
+ blockStoreGetter: blockStoreGetter,
+ snapshotLocationLister: snapshotLocationLister,
}
restoreCtx := &context{
@@ -238,7 +257,7 @@ func (kr *kubernetesRestorer) Restore(log logrus.FieldLogger, restore *api.Resto
fileSystem: kr.fileSystem,
namespaceClient: kr.namespaceClient,
actions: resolvedActions,
- blockStore: kr.blockStore,
+ blockStoreGetter: blockStoreGetter,
resticRestorer: resticRestorer,
pvsToProvision: sets.NewString(),
pvRestorer: pvRestorer,
@@ -319,7 +338,7 @@ type context struct {
fileSystem filesystem.Interface
namespaceClient corev1.NamespaceInterface
actions []resolvedAction
- blockStore cloudprovider.BlockStore
+ blockStoreGetter BlockStoreGetter
resticRestorer restic.Restorer
globalWaitGroup arksync.ErrorGroup
resourceWaitGroup sync.WaitGroup
@@ -887,11 +906,14 @@ type PVRestorer interface {
}
type pvRestorer struct {
- logger logrus.FieldLogger
- snapshotVolumes *bool
- restorePVs *bool
- volumeBackups map[string]*api.VolumeBackupInfo
- blockStore cloudprovider.BlockStore
+ logger logrus.FieldLogger
+ backupName string
+ backupNamespace string
+ snapshotVolumes *bool
+ restorePVs *bool
+ volumeSnapshots []*volume.Snapshot
+ blockStoreGetter BlockStoreGetter
+ snapshotLocationLister listers.VolumeSnapshotLocationLister
}
func (r *pvRestorer) executePVAction(obj *unstructured.Unstructured) (*unstructured.Unstructured, error) {
@@ -902,7 +924,7 @@ func (r *pvRestorer) executePVAction(obj *unstructured.Unstructured) (*unstructu
spec, err := collections.GetMap(obj.UnstructuredContent(), "spec")
if err != nil {
- return nil, err
+ return nil, errors.WithStack(err)
}
delete(spec, "claimRef")
@@ -918,43 +940,50 @@ func (r *pvRestorer) executePVAction(obj *unstructured.Unstructured) (*unstructu
return obj, nil
}
- // If we can't find a snapshot record for this particular PV, it most likely wasn't a PV that Ark
- // could snapshot, so return early instead of trying to restore from a snapshot.
- backupInfo, found := r.volumeBackups[pvName]
- if !found {
+ log := r.logger.WithFields(logrus.Fields{"persistentVolume": pvName})
+
+ var foundSnapshot *volume.Snapshot
+ for _, snapshot := range r.volumeSnapshots {
+ if snapshot.Spec.PersistentVolumeName == pvName {
+ foundSnapshot = snapshot
+ break
+ }
+ }
+ if foundSnapshot == nil {
+ log.Info("skipping no snapshot found")
return obj, nil
}
- // Past this point, we expect to be doing a restore
+ location, err := r.snapshotLocationLister.VolumeSnapshotLocations(r.backupNamespace).Get(foundSnapshot.Spec.Location)
+ if err != nil {
+ return nil, errors.WithStack(err)
+ }
- if r.blockStore == nil {
- return nil, errors.New("you must configure a persistentVolumeProvider to restore PersistentVolumes from snapshots")
+ blockStore, err := r.blockStoreGetter.GetBlockStore(location.Spec.Provider)
+ if err != nil {
+ return nil, errors.WithStack(err)
}
- log := r.logger.WithFields(
- logrus.Fields{
- "persistentVolume": pvName,
- "snapshot": backupInfo.SnapshotID,
- },
- )
+ if err := blockStore.Init(location.Spec.Config); err != nil {
+ return nil, errors.WithStack(err)
+ }
- log.Info("restoring persistent volume from snapshot")
- volumeID, err := r.blockStore.CreateVolumeFromSnapshot(backupInfo.SnapshotID, backupInfo.Type, backupInfo.AvailabilityZone, backupInfo.Iops)
+ volumeID, err := blockStore.CreateVolumeFromSnapshot(foundSnapshot.Status.ProviderSnapshotID, foundSnapshot.Spec.VolumeType, foundSnapshot.Spec.VolumeAZ, foundSnapshot.Spec.VolumeIOPS)
if err != nil {
- return nil, err
+ return nil, errors.WithStack(err)
}
- log.Info("successfully restored persistent volume from snapshot")
- updated1, err := r.blockStore.SetVolumeID(obj, volumeID)
+ log = log.WithFields(logrus.Fields{"snapshot": foundSnapshot.Status.ProviderSnapshotID})
+ log.Info("successfully restored persistent volume from snapshot")
+ // SetVolumeID returns the updated PV as runtime.Unstructured; it is cast back to *unstructured.Unstructured below.
+ updated1, err := blockStore.SetVolumeID(obj, volumeID)
if err != nil {
- return nil, err
+ return nil, errors.WithStack(err)
}
-
updated2, ok := updated1.(*unstructured.Unstructured)
if !ok {
return nil, errors.Errorf("unexpected type %T", updated1)
}
-
return updated2, nil
}
diff --git a/pkg/restore/restore_test.go b/pkg/restore/restore_test.go
index 3f98c703e0..48069807c4 100644
--- a/pkg/restore/restore_test.go
+++ b/pkg/restore/restore_test.go
@@ -21,13 +21,20 @@ import (
"testing"
"time"
- "k8s.io/client-go/kubernetes/scheme"
-
+ api "github.com/heptio/ark/pkg/apis/ark/v1"
+ "github.com/heptio/ark/pkg/cloudprovider"
+ "github.com/heptio/ark/pkg/generated/clientset/versioned/fake"
+ informers "github.com/heptio/ark/pkg/generated/informers/externalversions"
+ "github.com/heptio/ark/pkg/kuberesource"
+ "github.com/heptio/ark/pkg/util/collections"
+ "github.com/heptio/ark/pkg/util/logging"
+ arktest "github.com/heptio/ark/pkg/util/test"
+ "github.com/heptio/ark/pkg/volume"
"github.com/pkg/errors"
+ "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
-
"k8s.io/api/core/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -37,14 +44,8 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/watch"
+ "k8s.io/client-go/kubernetes/scheme"
corev1 "k8s.io/client-go/kubernetes/typed/core/v1"
-
- api "github.com/heptio/ark/pkg/apis/ark/v1"
- "github.com/heptio/ark/pkg/cloudprovider"
- "github.com/heptio/ark/pkg/kuberesource"
- "github.com/heptio/ark/pkg/util/boolptr"
- "github.com/heptio/ark/pkg/util/collections"
- arktest "github.com/heptio/ark/pkg/util/test"
)
func TestPrioritizeResources(t *testing.T) {
@@ -580,6 +581,12 @@ func TestRestoreResourceForNamespace(t *testing.T) {
},
}
+ var (
+ client = fake.NewSimpleClientset()
+ sharedInformers = informers.NewSharedInformerFactory(client, 0)
+ snapshotLocationLister = sharedInformers.Ark().V1().VolumeSnapshotLocations().Lister()
+ )
+
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
resourceClient := &arktest.FakeDynamicClient{}
@@ -619,9 +626,16 @@ func TestRestoreResourceForNamespace(t *testing.T) {
BackupName: "my-backup",
},
},
- backup: &api.Backup{},
- log: arktest.NewLogger(),
- pvRestorer: &pvRestorer{},
+ backup: &api.Backup{},
+ log: arktest.NewLogger(),
+ pvRestorer: &pvRestorer{
+ logger: logging.DefaultLogger(logrus.DebugLevel),
+ blockStoreGetter: &fakeBlockStoreGetter{
+ volumeMap: map[api.VolumeBackupInfo]string{{SnapshotID: "snap-1"}: "volume-1"},
+ volumeID: "volume-1",
+ },
+ snapshotLocationLister: snapshotLocationLister,
+ },
}
warnings, errors := ctx.restoreResource(test.resourcePath, test.namespace, test.resourcePath)
@@ -1179,11 +1193,22 @@ func TestIsCompleted(t *testing.T) {
func TestExecutePVAction(t *testing.T) {
iops := int64(1000)
+ locationsFake := map[string]*api.VolumeSnapshotLocation{
+ "default": arktest.NewTestVolumeSnapshotLocation().WithName("default-name").VolumeSnapshotLocation,
+ }
+
+ var locations []string
+ for key := range locationsFake {
+ locations = append(locations, key)
+ }
+
tests := []struct {
name string
obj *unstructured.Unstructured
restore *api.Restore
- backup *api.Backup
+ backup *arktest.TestBackup
+ volumeSnapshots []*volume.Snapshot
+ locations map[string]*api.VolumeSnapshotLocation
volumeMap map[api.VolumeBackupInfo]string
noBlockStore bool
expectedErr bool
@@ -1207,21 +1232,28 @@ func TestExecutePVAction(t *testing.T) {
name: "ensure spec.claimRef, spec.storageClassName are deleted",
obj: NewTestUnstructured().WithName("pv-1").WithAnnotations("a", "b").WithSpec("claimRef", "storageClassName", "someOtherField").Unstructured,
restore: arktest.NewDefaultTestRestore().WithRestorePVs(false).Restore,
- backup: &api.Backup{},
+ backup: arktest.NewTestBackup().WithName("backup1").WithPhase(api.BackupPhaseInProgress).WithVolumeSnapshotLocations(locations),
expectedRes: NewTestUnstructured().WithAnnotations("a", "b").WithName("pv-1").WithSpec("someOtherField").Unstructured,
},
{
name: "if backup.spec.snapshotVolumes is false, ignore restore.spec.restorePVs and return early",
obj: NewTestUnstructured().WithName("pv-1").WithAnnotations("a", "b").WithSpec("claimRef", "storageClassName", "someOtherField").Unstructured,
restore: arktest.NewDefaultTestRestore().WithRestorePVs(true).Restore,
- backup: &api.Backup{Spec: api.BackupSpec{SnapshotVolumes: boolptr.False()}},
+ backup: arktest.NewTestBackup().WithName("backup1").WithPhase(api.BackupPhaseInProgress).WithSnapshotVolumes(false),
expectedRes: NewTestUnstructured().WithName("pv-1").WithAnnotations("a", "b").WithSpec("someOtherField").Unstructured,
},
{
- name: "not restoring, return early",
- obj: NewTestUnstructured().WithName("pv-1").WithSpec().Unstructured,
- restore: arktest.NewDefaultTestRestore().WithRestorePVs(false).Restore,
- backup: &api.Backup{Status: api.BackupStatus{VolumeBackups: map[string]*api.VolumeBackupInfo{"pv-1": {SnapshotID: "snap-1"}}}},
+ name: "not restoring, return early",
+ obj: NewTestUnstructured().WithName("pv-1").WithSpec().Unstructured,
+ restore: arktest.NewDefaultTestRestore().WithRestorePVs(false).Restore,
+ backup: arktest.NewTestBackup().WithName("backup1").WithPhase(api.BackupPhaseInProgress).WithVolumeSnapshotLocations(locations),
+ volumeSnapshots: []*volume.Snapshot{
+ {
+ Spec: volume.SnapshotSpec{BackupName: "backup1", Location: "default-name", ProviderVolumeID: "volume-1", PersistentVolumeName: "pv-1", VolumeType: "gp", VolumeIOPS: &iops},
+ Status: volume.SnapshotStatus{ProviderSnapshotID: "snap-1"},
+ },
+ },
+ locations: locationsFake,
expectedErr: false,
expectedRes: NewTestUnstructured().WithName("pv-1").WithSpec().Unstructured,
},
@@ -1229,23 +1261,38 @@ func TestExecutePVAction(t *testing.T) {
name: "restoring, return without error if there is no PV->BackupInfo map",
obj: NewTestUnstructured().WithName("pv-1").WithSpec("xyz").Unstructured,
restore: arktest.NewDefaultTestRestore().WithRestorePVs(true).Restore,
- backup: &api.Backup{Status: api.BackupStatus{}},
+ backup: arktest.NewTestBackup().WithName("backup1").WithPhase(api.BackupPhaseInProgress).WithVolumeSnapshotLocations(locations),
+ locations: locationsFake,
expectedErr: false,
expectedRes: NewTestUnstructured().WithName("pv-1").WithSpec("xyz").Unstructured,
},
{
- name: "restoring, return early if there is PV->BackupInfo map but no entry for this PV",
- obj: NewTestUnstructured().WithName("pv-1").WithSpec("xyz").Unstructured,
- restore: arktest.NewDefaultTestRestore().WithRestorePVs(true).Restore,
- backup: &api.Backup{Status: api.BackupStatus{VolumeBackups: map[string]*api.VolumeBackupInfo{"another-pv": {}}}},
+ name: "restoring, return early if there is PV->BackupInfo map but no entry for this PV",
+ obj: NewTestUnstructured().WithName("pv-1").WithSpec("xyz").Unstructured,
+ restore: arktest.NewDefaultTestRestore().WithRestorePVs(true).Restore,
+ backup: arktest.NewTestBackup().WithName("backup1").WithPhase(api.BackupPhaseInProgress).WithVolumeSnapshotLocations(locations),
+ volumeSnapshots: []*volume.Snapshot{
+ {
+ Spec: volume.SnapshotSpec{BackupName: "backup1", Location: "default-name", ProviderVolumeID: "volume-1", PersistentVolumeName: "another-pv", VolumeType: "gp", VolumeIOPS: &iops},
+ Status: volume.SnapshotStatus{ProviderSnapshotID: "another-snap-1"},
+ },
+ },
+ locations: locationsFake,
expectedErr: false,
expectedRes: NewTestUnstructured().WithName("pv-1").WithSpec("xyz").Unstructured,
},
{
- name: "volume type and IOPS are correctly passed to CreateVolume",
- obj: NewTestUnstructured().WithName("pv-1").WithSpec("xyz").Unstructured,
- restore: arktest.NewDefaultTestRestore().WithRestorePVs(true).Restore,
- backup: &api.Backup{Status: api.BackupStatus{VolumeBackups: map[string]*api.VolumeBackupInfo{"pv-1": {SnapshotID: "snap-1", Type: "gp", Iops: &iops}}}},
+ name: "volume type and IOPS are correctly passed to CreateVolume",
+ obj: NewTestUnstructured().WithName("pv-1").WithSpec("xyz").Unstructured,
+ restore: arktest.NewDefaultTestRestore().WithRestorePVs(true).Restore,
+ backup: arktest.NewTestBackup().WithName("backup1").WithPhase(api.BackupPhaseInProgress).WithVolumeSnapshotLocations(locations),
+ locations: locationsFake,
+ volumeSnapshots: []*volume.Snapshot{
+ {
+ Spec: volume.SnapshotSpec{BackupName: "backup1", Location: "default-name", ProviderVolumeID: "volume-1", PersistentVolumeName: "pv-1", VolumeType: "gp", VolumeIOPS: &iops},
+ Status: volume.SnapshotStatus{ProviderSnapshotID: "snap-1"},
+ },
+ },
volumeMap: map[api.VolumeBackupInfo]string{{SnapshotID: "snap-1", Type: "gp", Iops: &iops}: "volume-1"},
volumeID: "volume-1",
expectedErr: false,
@@ -1253,12 +1300,16 @@ func TestExecutePVAction(t *testing.T) {
expectedRes: NewTestUnstructured().WithName("pv-1").WithSpec("xyz").Unstructured,
},
{
- name: "restoring, blockStore=nil, backup has at least 1 snapshot -> error",
- obj: NewTestUnstructured().WithName("pv-1").WithSpecField("awsElasticBlockStore", make(map[string]interface{})).Unstructured,
- restore: arktest.NewDefaultTestRestore().Restore,
- backup: &api.Backup{Status: api.BackupStatus{VolumeBackups: map[string]*api.VolumeBackupInfo{"pv-1": {SnapshotID: "snap-1"}}}},
- volumeMap: map[api.VolumeBackupInfo]string{{SnapshotID: "snap-1"}: "volume-1"},
- volumeID: "volume-1",
+ name: "restoring, blockStore=nil, backup has at least 1 snapshot -> error",
+ obj: NewTestUnstructured().WithName("pv-1").WithSpecField("awsElasticBlockStore", make(map[string]interface{})).Unstructured,
+ restore: arktest.NewDefaultTestRestore().Restore,
+ backup: arktest.NewTestBackup().WithName("backup1").WithPhase(api.BackupPhaseInProgress).WithVolumeSnapshotLocations(locations),
+ volumeSnapshots: []*volume.Snapshot{
+ {
+ Spec: volume.SnapshotSpec{BackupName: "backup1", Location: "default-name", ProviderVolumeID: "volume-1", PersistentVolumeName: "pv-1", VolumeType: "gp", VolumeIOPS: &iops},
+ Status: volume.SnapshotStatus{ProviderSnapshotID: "snap-1"},
+ },
+ },
noBlockStore: true,
expectedErr: true,
expectedRes: NewTestUnstructured().WithName("pv-1").WithSpecField("awsElasticBlockStore", make(map[string]interface{})).Unstructured,
@@ -1268,25 +1319,42 @@ func TestExecutePVAction(t *testing.T) {
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
var (
- blockStore cloudprovider.BlockStore
- fakeBlockStore *arktest.FakeBlockStore
+ blockStoreGetter BlockStoreGetter
+ testBlockStoreGetter *fakeBlockStoreGetter
+
+ client = fake.NewSimpleClientset()
+ sharedInformers = informers.NewSharedInformerFactory(client, 0)
+ snapshotLocationLister = sharedInformers.Ark().V1().VolumeSnapshotLocations().Lister()
)
+
if !test.noBlockStore {
- fakeBlockStore = &arktest.FakeBlockStore{
- RestorableVolumes: test.volumeMap,
- VolumeID: test.volumeID,
+ testBlockStoreGetter = &fakeBlockStoreGetter{
+ volumeMap: test.volumeMap,
+ volumeID: test.volumeID,
}
- blockStore = fakeBlockStore
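+ // Prime the fake: GetBlockStore lazily creates fakeBlockStore, which the assertions below dereference.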
+ testBlockStoreGetter.GetBlockStore("default")
+ blockStoreGetter = testBlockStoreGetter
+ for _, location := range test.locations {
+ require.NoError(t, sharedInformers.Ark().V1().VolumeSnapshotLocations().Informer().GetStore().Add(location))
+ }
+ } else {
+ assert.Nil(t, blockStoreGetter)
}
r := &pvRestorer{
- logger: arktest.NewLogger(),
- restorePVs: test.restore.Spec.RestorePVs,
- blockStore: blockStore,
+ logger: logging.DefaultLogger(logrus.DebugLevel),
+ backupName: "backup1",
+ backupNamespace: api.DefaultNamespace,
+ restorePVs: test.restore.Spec.RestorePVs,
+ snapshotLocationLister: snapshotLocationLister,
+ blockStoreGetter: blockStoreGetter,
}
if test.backup != nil {
r.snapshotVolumes = test.backup.Spec.SnapshotVolumes
- r.volumeBackups = test.backup.Status.VolumeBackups
+ r.volumeSnapshots = test.volumeSnapshots
}
res, err := r.executePVAction(test.obj)
@@ -1298,9 +1366,9 @@ func TestExecutePVAction(t *testing.T) {
require.NoError(t, err)
if test.expectSetVolumeID {
- assert.Equal(t, test.volumeID, fakeBlockStore.VolumeIDSet)
+ assert.Equal(t, test.volumeID, testBlockStoreGetter.fakeBlockStore.VolumeIDSet)
} else {
- assert.Equal(t, "", fakeBlockStore.VolumeIDSet)
+ assert.Equal(t, "", testBlockStoreGetter.fakeBlockStore.VolumeIDSet)
}
assert.Equal(t, test.expectedRes, res)
})
@@ -1622,6 +1690,22 @@ type fakeAction struct {
resource string
}
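+// fakeBlockStoreGetter implements BlockStoreGetter for tests, lazily creating
+// a single FakeBlockStore that is shared across calls.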
+type fakeBlockStoreGetter struct {
+ fakeBlockStore *arktest.FakeBlockStore
+ volumeMap map[api.VolumeBackupInfo]string
+ volumeID string
+}
+
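+// GetBlockStore returns the shared FakeBlockStore, creating it on first use. The provider name is ignored.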
+func (r *fakeBlockStoreGetter) GetBlockStore(provider string) (cloudprovider.BlockStore, error) {
+ if r.fakeBlockStore == nil {
+ r.fakeBlockStore = &arktest.FakeBlockStore{
+ RestorableVolumes: r.volumeMap,
+ VolumeID: r.volumeID,
+ }
+ }
+ return r.fakeBlockStore, nil
+}
+
func newFakeAction(resource string) *fakeAction {
return &fakeAction{resource}
}
diff --git a/pkg/util/test/fake_block_store.go b/pkg/util/test/fake_block_store.go
index 9e9f727837..b512bf3b41 100644
--- a/pkg/util/test/fake_block_store.go
+++ b/pkg/util/test/fake_block_store.go
@@ -104,10 +104,6 @@ func (bs *FakeBlockStore) GetVolumeInfo(volumeID, volumeAZ string) (string, *int
}
func (bs *FakeBlockStore) GetVolumeID(pv runtime.Unstructured) (string, error) {
- if bs.Error != nil {
- return "", bs.Error
- }
-
return bs.VolumeID, nil
}
diff --git a/pkg/util/test/test_backup.go b/pkg/util/test/test_backup.go
index 041dd9e848..e3a764daf9 100644
--- a/pkg/util/test/test_backup.go
+++ b/pkg/util/test/test_backup.go
@@ -140,3 +140,8 @@ func (b *TestBackup) WithStorageLocation(location string) *TestBackup {
b.Spec.StorageLocation = location
return b
}
+
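+// WithVolumeSnapshotLocations sets the backup's volume snapshot locations and returns the backup for chaining.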
+func (b *TestBackup) WithVolumeSnapshotLocations(locations []string) *TestBackup {
+ b.Spec.VolumeSnapshotLocations = locations
+ return b
+}
diff --git a/pkg/util/test/test_volume_snapshot_location.go b/pkg/util/test/test_volume_snapshot_location.go
new file mode 100644
index 0000000000..5bfb7e496e
--- /dev/null
+++ b/pkg/util/test/test_volume_snapshot_location.go
@@ -0,0 +1,72 @@
+/*
+Copyright 2018 the Heptio Ark contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package test
+
+import (
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+
+ "github.com/heptio/ark/pkg/apis/ark/v1"
+)
+
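+// TestVolumeSnapshotLocation wraps a VolumeSnapshotLocation with builder-style helpers for tests.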
+type TestVolumeSnapshotLocation struct {
+ *v1.VolumeSnapshotLocation
+}
+
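+// NewTestVolumeSnapshotLocation returns a location in the Ark default namespace using the aws provider with a us-west-1 region config.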
+func NewTestVolumeSnapshotLocation() *TestVolumeSnapshotLocation {
+ return &TestVolumeSnapshotLocation{
+ VolumeSnapshotLocation: &v1.VolumeSnapshotLocation{
+ ObjectMeta: metav1.ObjectMeta{
+ Namespace: v1.DefaultNamespace,
+ },
+ Spec: v1.VolumeSnapshotLocationSpec{
+ Provider: "aws",
+ Config: map[string]string{"region": "us-west-1"},
+ },
+ },
+ }
+}
+
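+// WithName sets the location's name and returns the location for chaining.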
+func (location *TestVolumeSnapshotLocation) WithName(name string) *TestVolumeSnapshotLocation {
+ location.Name = name
+ return location
+}
+
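+// WithProviderConfig builds one TestVolumeSnapshotLocation per LocationInfo entry, each in the Ark default namespace.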
+func (location *TestVolumeSnapshotLocation) WithProviderConfig(info []LocationInfo) []*TestVolumeSnapshotLocation {
+ var locations []*TestVolumeSnapshotLocation
+
+ for _, v := range info {
+ location := &TestVolumeSnapshotLocation{
+ VolumeSnapshotLocation: &v1.VolumeSnapshotLocation{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: v.Name,
+ Namespace: v1.DefaultNamespace,
+ },
+ Spec: v1.VolumeSnapshotLocationSpec{
+ Provider: v.Provider,
+ Config: v.Config,
+ },
+ },
+ }
+ locations = append(locations, location)
+ }
+ return locations
+}
+
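+// LocationInfo describes a volume snapshot location to build: its name, provider, and provider-specific config.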
+type LocationInfo struct {
+ Name, Provider string
+ Config map[string]string
+}
diff --git a/pkg/volume/snapshot.go b/pkg/volume/snapshot.go
new file mode 100644
index 0000000000..1aed9f9e5c
--- /dev/null
+++ b/pkg/volume/snapshot.go
@@ -0,0 +1,80 @@
+/*
+Copyright 2018 the Heptio Ark contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package volume
+
+// Snapshot stores information about a persistent volume snapshot taken as
+// part of an Ark backup.
+type Snapshot struct {
+ Spec SnapshotSpec `json:"spec"`
+
+ Status SnapshotStatus `json:"status"`
+}
+
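+// SnapshotSpec is the specification for an Ark volume snapshot.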
+type SnapshotSpec struct {
+ // BackupName is the name of the Ark backup this snapshot
+ // is associated with.
+ BackupName string `json:"backupName"`
+
+ // BackupUID is the UID of the Ark backup this snapshot
+ // is associated with.
+ BackupUID string `json:"backupUID"`
+
+ // Location is the name of the VolumeSnapshotLocation where this snapshot is stored.
+ Location string `json:"location"`
+
+ // PersistentVolumeName is the Kubernetes name for the volume.
+ PersistentVolumeName string `json:"persistentVolumeName"`
+
+ // ProviderVolumeID is the provider's ID for the volume.
+ ProviderVolumeID string `json:"providerVolumeID"`
+
+ // VolumeType is the type of the disk/volume in the cloud provider
+ // API.
+ VolumeType string `json:"volumeType"`
+
+ // VolumeAZ is the availability zone where the volume is provisioned
+ // in the cloud provider.
+ VolumeAZ string `json:"volumeAZ,omitempty"`
+
+ // VolumeIOPS is the optional value of provisioned IOPS for the
+ // disk/volume in the cloud provider API.
+ VolumeIOPS *int64 `json:"volumeIOPS,omitempty"`
+}
+
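+// SnapshotStatus is the current status of an Ark volume snapshot.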
+type SnapshotStatus struct {
+ // ProviderSnapshotID is the ID of this volume's snapshot in the cloud
+ // provider's API.
+ ProviderSnapshotID string `json:"providerSnapshotID,omitempty"`
+
+ // Phase is the current state of the volume snapshot.
+ Phase SnapshotPhase `json:"phase,omitempty"`
+}
+
+// SnapshotPhase is the lifecycle phase of an Ark volume snapshot.
+type SnapshotPhase string
+
+const (
+ // SnapshotPhaseNew means the volume snapshot has been created but not
+ // yet processed by the VolumeSnapshotController.
+ SnapshotPhaseNew SnapshotPhase = "New"
+
+ // SnapshotPhaseCompleted means the volume snapshot was successfully created and can be restored from.
+ SnapshotPhaseCompleted SnapshotPhase = "Completed"
+
+ // SnapshotPhaseFailed means the volume snapshot operation failed.
+ SnapshotPhaseFailed SnapshotPhase = "Failed"
+)