Prepare v0.3.11 #903

Merged
merged 2 commits on Nov 10, 2021
10 changes: 10 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,16 @@
* Added `databricks_sql_global_config` resource to provide global configuration for SQL Endpoints ([#855](https://github.com/databrickslabs/terraform-provider-databricks/issues/855))
* Added `databricks_mount` resource to mount arbitrary cloud storage ([#497](https://github.com/databrickslabs/terraform-provider-databricks/issues/497))
* Improved implementation of `databricks_repo` by creating the parent folder structure ([#895](https://github.com/databrickslabs/terraform-provider-databricks/pull/895))
* Fixed `databricks_job` error related [to randomized job IDs](https://docs.databricks.com/release-notes/product/2021/august.html#jobs-service-stability-and-scalability-improvements) ([#901](https://github.com/databrickslabs/terraform-provider-databricks/issues/901))
* Replace `databricks_group` on name change ([#890](https://github.com/databrickslabs/terraform-provider-databricks/pull/890))
* Names of scopes in `databricks_secret_scope` can have `/` characters in them ([#892](https://github.com/databrickslabs/terraform-provider-databricks/pull/892))

**Deprecations**
* `databricks_aws_s3_mount`, `databricks_azure_adls_gen1_mount`, `databricks_azure_adls_gen2_mount`, and `databricks_azure_blob_mount` are deprecated in favor of `databricks_mount`.
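As an illustration only (the mount and bucket names are placeholders, and the exact arguments depend on the storage type), a deprecated S3 mount can typically be rewritten with the generic resource along these lines:

```hcl
# Before (deprecated) — hypothetical names:
resource "databricks_aws_s3_mount" "this" {
  mount_name     = "my-bucket"
  s3_bucket_name = "my-bucket"
  cluster_id     = databricks_cluster.shared.id
}

# After — assumes the cluster can already access the bucket:
resource "databricks_mount" "this" {
  name       = "my-bucket"
  uri        = "s3a://my-bucket"
  cluster_id = databricks_cluster.shared.id
}
```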

Updated dependency versions:

* Bump google.golang.org/api from 0.59.0 to 0.60.0

## 0.3.10

15 changes: 6 additions & 9 deletions docs/resources/mount.md
@@ -3,8 +3,6 @@ subcategory: "Storage"
---
# databricks_mount Resource

-> **Note** This resource has an evolving API, which may change in future versions of the provider.

This resource will mount your cloud storage account on `dbfs:/mnt/yourname`. It currently supports mounting AWS S3, Azure Blob Storage, Azure Data Lake Storage (Gen1 and Gen2), and Google Cloud Storage. It is important to understand that this will start up the [cluster](cluster.md) if the cluster is terminated. The Terraform read and refresh commands require a cluster and may take some time to validate the mount. If `cluster_id` is not specified, the resource will create the smallest possible cluster, with a name equal to or starting with `terraform-mount`, for the shortest possible amount of time.

This resource provides two ways of mounting a storage account:
@@ -21,9 +19,9 @@ This resource provides two ways of mounting a storage account:

* `cluster_id` - (Optional, String) Cluster to use for mounting. If no cluster is specified, a new cluster will be created, and the bucket will be mounted for all clusters in this workspace. If the cluster is not running, it will be started, so be sure to set auto-termination rules on it.
* `name` - (Optional, String) Name under which the mount will be accessible in `dbfs:/mnt/<MOUNT_NAME>`. If not specified, the provider will try to infer it from the resource type:
* bucket name for AWS S3 and Google Cloud Storage
* container name for ADLS Gen2 and Azure Blob Storage
* storage resource name for ADLS Gen1
* `bucket_name` for AWS S3 and Google Cloud Storage
* `container_name` for ADLS Gen2 and Azure Blob Storage
* `storage_resource_name` for ADLS Gen1
* `uri` - (Optional, String) The URI for accessing the specific storage (`s3a://....`, `abfss://....`, `gs://....`, etc.)
* `extra_configs` - (Optional, String map) Configuration parameters that are necessary for mounting the specific storage
* `resource_id` - (Optional, String) Resource ID of the given storage account. Can be used to fill in defaults, such as the storage account and container names on Azure.
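Combining the arguments above, a minimal sketch for Google Cloud Storage (the bucket name and cluster reference are placeholders, and the cluster's service account is assumed to already have access to the bucket):

```hcl
resource "databricks_mount" "gcs" {
  # `name` is omitted, so the provider infers it from the bucket name
  uri        = "gs://my-bucket"
  cluster_id = databricks_cluster.this.id
}
```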
@@ -33,8 +31,8 @@ This resource provides two ways of mounting a storage account:

```hcl
locals {
tenant_id = "8f35a392-f2ae-4280-9796-f1864a10eeec"
client_id = "d1b2a25b-86c4-451a-a0eb-0808be121957"
tenant_id = "00000000-1111-2222-3333-444444444444"
client_id = "55555555-6666-7777-8888-999999999999"
secret_scope = "some-kv"
secret_key = "some-sp-secret"
container = "test"
@@ -82,10 +80,9 @@ data "azurerm_databricks_workspace" "this" {

# it works only with AAD token!
provider "databricks" {
azure_workspace_resource_id = data.azurerm_databricks_workspace.this.id
host = data.azurerm_databricks_workspace.this.workspace_url
}


data "databricks_node_type" "smallest" {
local_disk = true
}
6 changes: 3 additions & 3 deletions docs/resources/sql_global_config.md
@@ -5,7 +5,7 @@ subcategory: "Databricks SQL"

-> **Public Preview** This feature is in [Public Preview](https://docs.databricks.com/release-notes/release-types.html).

This resource configures the security policy, instance profile (AWS only), and data access properties for all SQL endpoints of workspace. *Please note that changing parameters of this resources will restart all running SQL endpoints.* To use this resource you need to be an administrator.
This resource configures the security policy, [databricks_instance_profile](instance_profile.md), and data access properties for all [databricks_sql_endpoint](sql_endpoint.md) resources of the workspace. *Please note that changing parameters of this resource will restart all running [databricks_sql_endpoint](sql_endpoint.md) resources.* To use this resource, you need to be an administrator.

## Example usage

@@ -24,8 +24,8 @@ resource "databricks_sql_global_config" "this" {
The following arguments are supported (see [documentation](https://docs.databricks.com/sql/api/sql-endpoints.html#global-edit) for more details):

* `security_policy` (Optional, String) - The policy for controlling access to datasets. Default value: `DATA_ACCESS_CONTROL`; consult the documentation for a list of possible values
* `data_access_config` (Optional, Map) - data access configuration for SQL Endpoints, such as configuration for an external Hive metastore, Hadoop Filesystem configuration, etc. Please note that the list of supported configuration properties is limited, so refer to the [documentation](https://docs.databricks.com/sql/admin/data-access-configuration.html#supported-properties) for a full list. Apply will fail if you're specifying not permitted configuration.
* `instance_profile_arn` (Optional, String) - Instance profile used to access storage from SQL endpoints. Please note that this parameter is only for AWS, and will generate an error if used on other clouds.
* `data_access_config` (Optional, Map) - Data access configuration for [databricks_sql_endpoint](sql_endpoint.md), such as configuration for an external Hive metastore, Hadoop Filesystem configuration, etc. Please note that the list of supported configuration properties is limited, so refer to the [documentation](https://docs.databricks.com/sql/admin/data-access-configuration.html#supported-properties) for a full list. Apply will fail if you specify a configuration property that is not permitted.
* `instance_profile_arn` (Optional, String) - [databricks_instance_profile](instance_profile.md) used to access storage from [databricks_sql_endpoint](sql_endpoint.md). Please note that this parameter is only for AWS, and will generate an error if used on other clouds.
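Combining these arguments, a minimal sketch (the ARN and metastore URL are placeholders; check the supported-properties list before applying):

```hcl
resource "databricks_sql_global_config" "this" {
  security_policy      = "DATA_ACCESS_CONTROL"
  instance_profile_arn = "arn:aws:iam::123456789012:instance-profile/sql-access" # AWS only
  data_access_config = {
    "spark.hadoop.javax.jdo.option.ConnectionURL" = "jdbc:mysql://example.com:3306/metastore"
  }
}
```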

## Import

16 changes: 5 additions & 11 deletions sqlanalytics/resource_sql_global_config.go
@@ -14,34 +14,31 @@ type ConfPair struct {
Value string `json:"value"`
}

// GlobalConfig ...
// GlobalConfig used to generate Terraform resource schema and bind to resource data
type GlobalConfig struct {
SecurityPolicy string `json:"security_policy,omitempty" tf:"default:DATA_ACCESS_CONTROL"`
DataAccessConfig map[string]string `json:"data_access_config,omitempty"`
InstanceProfileARN string `json:"instance_profile_arn,omitempty"`
EnableServerlessCompute bool `json:"enable_serverless_compute,omitempty" tf:"default:false"`
}

// GlobalConfigForRead ...
// GlobalConfigForRead used to talk to REST API
type GlobalConfigForRead struct {
SecurityPolicy string `json:"security_policy"`
DataAccessConfig []ConfPair `json:"data_access_config"`
InstanceProfileARN string `json:"instance_profile_arn,omitempty"`
EnableServerlessCompute bool `json:"enable_serverless_compute,omitempty"`
}

// NewSqlGlobalConfigAPI ...
func NewSqlGlobalConfigAPI(ctx context.Context, m interface{}) globalConfigAPI {
return globalConfigAPI{m.(*common.DatabricksClient), ctx}
}

// sAPI ...
type globalConfigAPI struct {
client *common.DatabricksClient
context context.Context
}

// Set ...
func (a globalConfigAPI) Set(gc GlobalConfig) error {
data := map[string]interface{}{
"security_policy": gc.SecurityPolicy,
@@ -84,15 +81,13 @@ func (a globalConfigAPI) Get() (GlobalConfig, error) {
return gc, nil
}

// ResourceSQLGlobalConfig ...
func ResourceSQLGlobalConfig() *schema.Resource {
s := common.StructToSchema(GlobalConfig{}, func(
m map[string]*schema.Schema) map[string]*schema.Schema {
m["instance_profile_arn"].Default = ""
return m
})

set_func := func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
setGlobalConfig := func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
var gc GlobalConfig
if err := common.DataToStructPointer(d, s, &gc); err != nil {
return err
@@ -103,9 +98,8 @@
d.SetId("global")
return nil
}

return common.Resource{
Create: set_func,
Create: setGlobalConfig,
Read: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
gc, err := NewSqlGlobalConfigAPI(ctx, c).Get()
if err != nil {
@@ -114,7 +108,7 @@
err = common.StructToData(gc, s, d)
return err
},
Update: set_func,
Update: setGlobalConfig,
Delete: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
return NewSqlGlobalConfigAPI(ctx, c).Set(GlobalConfig{SecurityPolicy: "DATA_ACCESS_CONTROL"})
},
4 changes: 2 additions & 2 deletions storage/adls_gen1_mount.go
@@ -46,7 +46,7 @@ func (m AzureADLSGen1Mount) Config(client *common.DatabricksClient) map[string]s

// ResourceAzureAdlsGen1Mount creates the resource
func ResourceAzureAdlsGen1Mount() *schema.Resource {
return commonMountResource(AzureADLSGen1Mount{}, map[string]*schema.Schema{
return deprecatedMountTesource(commonMountResource(AzureADLSGen1Mount{}, map[string]*schema.Schema{
"cluster_id": {
Type: schema.TypeString,
Optional: true,
@@ -106,5 +106,5 @@ func ResourceAzureAdlsGen1Mount() *schema.Resource {
Required: true,
ForceNew: true,
},
})
}))
}
4 changes: 2 additions & 2 deletions storage/adls_gen2_mount.go
@@ -48,7 +48,7 @@ func (m AzureADLSGen2Mount) Config(client *common.DatabricksClient) map[string]s

// ResourceAzureAdlsGen2Mount creates the resource
func ResourceAzureAdlsGen2Mount() *schema.Resource {
return commonMountResource(AzureADLSGen2Mount{}, map[string]*schema.Schema{
return deprecatedMountTesource(commonMountResource(AzureADLSGen2Mount{}, map[string]*schema.Schema{
"cluster_id": {
Type: schema.TypeString,
Optional: true,
@@ -106,5 +106,5 @@ func ResourceAzureAdlsGen2Mount() *schema.Resource {
Required: true,
ForceNew: true,
},
})
}))
}
4 changes: 4 additions & 0 deletions storage/aws_s3_mount.go
@@ -39,6 +39,10 @@ func (m AWSIamMount) Config(client *common.DatabricksClient) map[string]string {
func ResourceAWSS3Mount() *schema.Resource {
tpl := AWSIamMount{}
r := &schema.Resource{
DeprecationMessage: "Resource is deprecated and will be removed in further versions. " +
"Please rewrite configuration using `databricks_mount` resource. More info at " +
"https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/" +
"resources/mount#migration-from-other-mount-resources",
Schema: map[string]*schema.Schema{
"cluster_id": {
Type: schema.TypeString,
4 changes: 2 additions & 2 deletions storage/azure_blob_mount.go
@@ -47,7 +47,7 @@ func (m AzureBlobMount) Config(client *common.DatabricksClient) map[string]strin

// ResourceAzureBlobMount creates the resource
func ResourceAzureBlobMount() *schema.Resource {
return commonMountResource(AzureBlobMount{}, map[string]*schema.Schema{
return deprecatedMountTesource(commonMountResource(AzureBlobMount{}, map[string]*schema.Schema{
"cluster_id": {
Type: schema.TypeString,
Optional: true,
@@ -97,5 +97,5 @@ func ResourceAzureBlobMount() *schema.Resource {
Sensitive: true,
ForceNew: true,
},
})
}))
}
13 changes: 12 additions & 1 deletion storage/mounts.go
@@ -100,7 +100,10 @@ func (mp MountPoint) Mount(mo Mount, client *common.DatabricksClient) (source st
}

func commonMountResource(tpl Mount, s map[string]*schema.Schema) *schema.Resource {
resource := &schema.Resource{Schema: s, SchemaVersion: 2}
resource := &schema.Resource{
SchemaVersion: 2,
Schema: s,
}
// nolint should be a bigger context-aware refactor
resource.CreateContext = mountCreate(tpl, resource)
resource.ReadContext = mountRead(tpl, resource)
@@ -111,6 +114,14 @@ func commonMountResource(tpl Mount, s map[string]*schema.Schema) *schema.Resourc
return resource
}

func deprecatedMountTesource(r *schema.Resource) *schema.Resource {
r.DeprecationMessage = "Resource is deprecated and will be removed in further versions. " +
"Please rewrite configuration using `databricks_mount` resource. More info at " +
"https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/" +
"resources/mount#migration-from-other-mount-resources"
return r
}

// NewMountPoint returns new mount point config
func NewMountPoint(executor common.CommandExecutor, name, clusterID string) MountPoint {
return MountPoint{