Merge branch 'master' into sqla-queries
pietern authored Apr 21, 2021
2 parents f7c8b71 + 62ca936 commit 502319c
Showing 35 changed files with 1,532 additions and 141 deletions.
17 changes: 16 additions & 1 deletion CHANGELOG.md
@@ -2,7 +2,22 @@

## 0.3.2

* Fix incorrect escaping of notebook names ([#566](https://github.com/databrickslabs/terraform-provider-databricks/pull/566))
* Fixed minor issues to add support for GCP ([#558](https://github.com/databrickslabs/terraform-provider-databricks/pull/558))
* Fixed `databricks_permissions` for SQL Analytics Entities ([#535](https://github.com/databrickslabs/terraform-provider-databricks/issues/535))
* Fixed incorrect HTTP 404 handling on create ([#564](https://github.com/databrickslabs/terraform-provider-databricks/issues/564), [#576](https://github.com/databrickslabs/terraform-provider-databricks/issues/576))
* Fixed incorrect escaping of notebook names ([#566](https://github.com/databrickslabs/terraform-provider-databricks/pull/566))
* Fixed entitlements for databricks_group ([#549](https://github.com/databrickslabs/terraform-provider-databricks/pull/549))
* Fixed rate limiting to perform more than 1 request per second ([#577](https://github.com/databrickslabs/terraform-provider-databricks/pull/577))
* Added support for spot instances on Azure ([#571](https://github.com/databrickslabs/terraform-provider-databricks/pull/571))
* Added job schedules support for `pause_status` as an optional field ([#575](https://github.com/databrickslabs/terraform-provider-databricks/pull/575))
* Fixed minor documentation issues.

Updated dependency versions:

* Bump github.com/aws/aws-sdk-go from 1.37.20 to 1.38.10
* Bump github.com/hashicorp/hcl/v2 from 2.9.0 to 2.9.1
* Bump github.com/zclconf/go-cty from 1.8.0 to 1.8.1
* Bump github.com/google/go-querystring from 1.0.0 to 1.1.0

## 0.3.1

181 changes: 181 additions & 0 deletions CONTRIBUTING.md
@@ -98,6 +98,187 @@ $ docker run -it -v $(pwd):/workpace -w /workpace databricks-terraform plan
$ docker run -it -v $(pwd):/workpace -w /workpace databricks-terraform apply
```

## Adding a new resource

The general process for adding a new resource is:

*Define the resource models.* The models for a resource are `struct`s defining the schemas of the objects in the Databricks REST API. Define structures used for multiple resources in a common `models.go` file; otherwise, you can define these directly in your resource file. An example model:
```go
type Field struct {
A string `json:"a,omitempty"`
AMoreComplicatedName int `json:"a_more_complicated_name,omitempty"`
}

type Example struct {
ID string `json:"id"`
TheField *Field `json:"the_field"`
AnotherField bool `json:"another_field"`
Filters []string `json:"filters" tf:"optional"`
}
```

Some interesting points to note here:
* Use the `json` tag to determine the serde properties of the field. The allowed tags are defined here: https://go.googlesource.com/go/+/go1.16/src/encoding/json/encode.go#158
* Use the custom `tf` tag to indicate properties to be annotated on the Terraform schema for this struct (see the sketch after this list). Supported values are:
* `optional` for optional fields
* `computed` for computed fields
* `alias:X` to use a custom name in HCL for a field
* `default:X` to set a default value for a field
* `max_items:N` to set the maximum number of items for a multi-valued parameter
* `slice_set` to indicate that the parameter should accept a set instead of a list
* Do not use bare references to structs in the model; rather, use pointers to structs. Maps and slices are permitted, as well as the following primitive types: int, int32, int64, float64, bool, string.
See `typeToSchema` in `common/reflect_resource.go` for the up-to-date list of all supported field types and values for the `tf` tag.
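
To make these `tf` tag values concrete, here is a hypothetical struct (the field names are made up for this sketch, and the exact tag spellings should be verified against `typeToSchema`):
```go
type Widget struct {
	// Required: neither `optional` nor `computed` is set.
	Name string `json:"name"`
	// Optional, with a default value surfaced in the Terraform schema.
	Size string `json:"size,omitempty" tf:"optional,default:Medium"`
	// Populated by the backend; users cannot set it.
	State string `json:"state,omitempty" tf:"computed"`
	// Exposed in HCL as `labels` and accepted as an unordered set.
	TagsList []string `json:"tags_list,omitempty" tf:"optional,alias:labels,slice_set"`
	// Multi-valued, limited to at most ten entries.
	Filters []string `json:"filters,omitempty" tf:"optional,max_items:10"`
}
```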

*Define the Terraform schema.* This is made easy for you by the `StructToSchema` function in the `common` package, which automatically converts your struct to a Terraform schema and also accepts a function that lets you post-process the generated schema if needed.
```go
var exampleSchema = common.StructToSchema(Example{}, func(m map[string]*schema.Schema) map[string]*schema.Schema { return m })
```
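
If the generated schema needs adjustments, replace the identity callback above with one that edits the map before returning it. The customizations below are purely illustrative:
```go
var exampleSchema = common.StructToSchema(Example{}, func(m map[string]*schema.Schema) map[string]*schema.Schema {
	// Force re-creation of the resource when this field changes.
	m["another_field"].ForceNew = true
	// Allow at most one nested `the_field` block.
	m["the_field"].MaxItems = 1
	return m
})
```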

*Define the API client for the resource.* You will need to implement create, read, update, and delete functions.
```go
type ExampleApi struct {
client *common.DatabricksClient
ctx context.Context
}

func NewExampleApi(ctx context.Context, m interface{}) ExampleApi {
return ExampleApi{m.(*common.DatabricksClient), ctx}
}

func (a ExampleApi) Create(e Example) (string, error) {
var id string
err := a.client.Post(a.ctx, "/example", e, &id)
return id, err
}

func (a ExampleApi) Read(id string) (e Example, err error) {
err = a.client.Get(a.ctx, "/example/"+id, nil, &e)
return
}

func (a ExampleApi) Update(id string, e Example) error {
	return a.client.Put(a.ctx, "/example/"+id, e)
}

func (a ExampleApi) Delete(id string) error {
	return a.client.Delete(a.ctx, "/example/"+id, nil)
}
```

*Define the Resource object itself.* This is made quite simple by using the `ToResource` method defined on the `Resource` type in the `common` package. A simple example:
```go
func ResourceExample() *schema.Resource {
return common.Resource{
Schema: exampleSchema,
SchemaVersion: 2,
Create: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
var e Example
err := common.DataToStructPointer(d, exampleSchema, &e)
if err != nil {
return err
}
id, err := NewExampleApi(ctx, c).Create(e)
if err != nil {
return err
}
			d.SetId(id)
return nil
},
Read: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
i, err := NewExampleApi(ctx, c).Read(d.Id())
if err != nil {
return err
}
			return common.StructToData(i, exampleSchema, d)
},
Update: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
var e Example
err := common.DataToStructPointer(d, exampleSchema, &e)
if err != nil {
return err
}
return NewExampleApi(ctx, c).Update(d.Id(), e)
},
Delete: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
return NewExampleApi(ctx, c).Delete(d.Id())
},
}.ToResource()
}
```

*Add the resource to the top-level provider.* Add the resource to the provider definition in `provider/provider.go`.
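
For instance, the new entry could look roughly like this (a sketch; the exact shape of the resource map in `provider/provider.go` may differ):
```go
// provider/provider.go (abbreviated)
ResourcesMap: map[string]*schema.Resource{
	// ... existing resources ...
	"databricks_example": ResourceExample(),
},
```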

*Write unit tests for your resource.* To write your unit tests, you can make use of the `ResourceFixture` and `HTTPFixture` structs defined in the `qa` package. These start a fake HTTP server and assert that your resource provider generates the correct requests for a given HCL template body. An example:

```go
func TestExampleResourceCreate(t *testing.T) {
d, err := qa.ResourceFixture{
Fixtures: []qa.HTTPFixture{
{
Method: "POST",
Resource: "/api/2.0/example",
ExpectedRequest: Example{
					TheField: &Field{
A: "test",
},
},
Response: map[string]interface{} {
"id": "abcd",
"the_field": map[string]interface{} {
"a": "test",
},
},
},
{
Method: "GET",
Resource: "/api/2.0/example/abcd",
Response: map[string]interface{}{
"id": "abcd",
"the_field": map[string]interface{} {
"a": "test",
},
},
},
},
Create: true,
Resource: ResourceExample(),
HCL: `the_field {
a = "test"
}`,
}.Apply(t)
assert.NoError(t, err, err)
assert.Equal(t, "abcd", d.Id())
}
```

*Write acceptance tests.* These are E2E tests that run Terraform against the live cloud and Databricks APIs. For these, you can use the `Test` and `Step` structs defined in the `acceptance` package. An example:

```go
func TestPreviewAccExampleResource_CreateExample(t *testing.T) {
acceptance.Test(t, []acceptance.Step{
{
Template: `
resource "databricks_example" "this" {
the_field {
a = "test"
a_more_complicated_name = 3
}
another_field = true
filters = [
"a",
"b"
]
}
`,
},
})
}
```

## Debugging

**TF_LOG=DEBUG terraform apply** allows you to see the internal logs from `terraform apply`.
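
For example, to keep the verbose output in a file for later searching while still seeing it on screen:
```
$ TF_LOG=DEBUG terraform apply 2>&1 | tee apply.log
```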

## Testing

* [Integration tests](scripts/README.md) should be run at a client level against both Azure and AWS to maintain SDK parity against both APIs.
6 changes: 4 additions & 2 deletions Makefile
@@ -18,8 +18,6 @@ coverage: test
@echo "✓ Opening coverage for unit tests ..."
@go tool cover -html=coverage.txt

VERSION = 0.3.1

build: vendor
@echo "✓ Building source code with go build ..."
@go build -mod vendor -v -o terraform-provider-databricks
@@ -68,6 +66,10 @@ test-awsmt: install
@echo "✓ Running Terraform Acceptance Tests for AWS MT..."
@/bin/bash scripts/run.sh awsmt '^(TestAcc|TestAwsAcc)' --debug --tee

test-preview: install
@echo "✓ Running acceptance Tests for Preview features..."
@/bin/bash scripts/run.sh preview '^TestPreviewAcc' --debug --tee

snapshot:
@echo "✓ Making Snapshot ..."
@goreleaser release --rm-dist --snapshot
2 changes: 1 addition & 1 deletion README.md
@@ -61,7 +61,7 @@ terraform {
required_providers {
databricks = {
source = "databrickslabs/databricks"
version = "0.3.1"
version = "0.3.2"
}
}
}
6 changes: 5 additions & 1 deletion common/http.go
@@ -390,7 +390,11 @@ func (c *DatabricksClient) redactedDump(body []byte) (res string) {
// error in this case is not much relevant
return
}
return onlyNBytes(string(rePacked), 1024)
maxBytes := 1024
if c.DebugTruncateBytes > maxBytes {
maxBytes = c.DebugTruncateBytes
}
return onlyNBytes(string(rePacked), maxBytes)
}

func (c *DatabricksClient) userAgent(ctx context.Context) string {
6 changes: 4 additions & 2 deletions common/reflect_resource_test.go
@@ -358,14 +358,16 @@ func TestStructToData(t *testing.T) {

// Empty optional string should not be set.
{
// nolint: marked as deprecated, without viable alternative.
//lint:ignore SA1019
// nolint
_, ok := d.GetOkExists("addresses.0.optional_string")
assert.Falsef(t, ok, "Empty optional string should not be set in ResourceData")
}

// Empty required string should be set.
{
// nolint: marked as deprecated, without viable alternative.
//lint:ignore SA1019
// nolint
_, ok := d.GetOkExists("addresses.0.required_string")
assert.Truef(t, ok, "Empty required string should be set in ResourceData")
}
6 changes: 0 additions & 6 deletions common/resource.go
@@ -63,12 +63,6 @@ func (r Resource) ToResource() *schema.Resource {
CreateContext: func(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
c := m.(*DatabricksClient)
err := r.Create(ctx, d, c)
if e, ok := err.(APIError); ok && e.IsMissing() {
log.Printf("[INFO] %s[id=%s] is removed on backend",
ResourceName.GetOrUnknown(ctx), d.Id())
d.SetId("")
return nil
}
if err != nil {
return diag.FromErr(err)
}
2 changes: 1 addition & 1 deletion common/version.go
@@ -3,7 +3,7 @@ package common
import "context"

var (
version = "0.3.2"
version = "0.3.3"
// ResourceName is resource name without databricks_ prefix
ResourceName contextKey = 1
// Provider is the current instance of provider
3 changes: 3 additions & 0 deletions compute/acceptance/cluster_test.go
@@ -70,6 +70,9 @@ func TestAccClusterResource_CreateSingleNodeCluster(t *testing.T) {
"spark.databricks.cluster.profile" = "singleNode"
"spark.master" = "local[*]"
}
custom_tags = {
"ResourceClass" = "SingleNode"
}
{var.AWS_ATTRIBUTES}
}`,
},
53 changes: 53 additions & 0 deletions compute/acceptance/pipeline_test.go
@@ -0,0 +1,53 @@
package acceptance

import (
"testing"

"github.com/databrickslabs/terraform-provider-databricks/internal/acceptance"
)

func TestPreviewAccPipelineResource_CreatePipeline(t *testing.T) {
acceptance.Test(t, []acceptance.Step{
{
Template: `
locals {
name = "pipeline-acceptance-{var.RANDOM}"
}
resource "databricks_pipeline" "this" {
				name = local.name
				storage = "/test/${local.name}"
configuration = {
key1 = "value1"
key2 = "value2"
}
				cluster {
label = "default"
num_workers = 2
custom_tags = {
cluster_type = "default"
}
}
cluster {
label = "maintenance"
num_workers = 1
custom_tags = {
cluster_type = "maintenance
}
}
library {
maven {
coordinates = "com.microsoft.azure:azure-eventhubs-spark_2.11:2.3.7"
}
}
filters {
include = ["com.databricks.include"]
exclude = ["com.databricks.exclude"]
}
continuous = false
}
`,
},
})
}
10 changes: 8 additions & 2 deletions compute/clusters.go
@@ -181,9 +181,15 @@ func (a ClustersAPI) waitForClusterStatus(clusterID string, desired ClusterState
}
if !clusterInfo.State.CanReach(desired) {
docLink := "https://docs.databricks.com/dev-tools/api/latest/clusters.html#clusterclusterstate"
details := ""
if clusterInfo.TerminationReason != nil {
details = fmt.Sprintf(", Termination info: code: %s, type: %s, parameters: %v",
clusterInfo.TerminationReason.Code, clusterInfo.TerminationReason.Type,
clusterInfo.TerminationReason.Parameters)
}
return resource.NonRetryableError(fmt.Errorf(
"%s is not able to transition from %s to %s: %s. Please see %s for more details",
clusterID, clusterInfo.State, desired, clusterInfo.StateMessage, docLink))
"%s is not able to transition from %s to %s: %s%s. Please see %s for more details",
clusterID, clusterInfo.State, desired, clusterInfo.StateMessage, details, docLink))
}
return resource.RetryableError(
fmt.Errorf("%s is %s, but has to be %s",