provider/aws: Don't always update DynamoDB read/write capacity #5617

Closed
oli-g opened this issue Mar 14, 2016 · 4 comments
oli-g commented Mar 14, 2016

Hi guys!

First of all, thank you for your massive amount of work: Terraform is improving every day.

I'm using Terraform to provision DynamoDB tables. Currently, read_capacity and write_capacity are required arguments, so you have to specify default values for the initial read and write capacity:

resource "aws_dynamodb_table" "accounts" {
    name = "foo-staging-accounts"
    read_capacity = 1
    write_capacity = 1
    hash_key = "ACC"

    attribute {
        name = "ACC"
        type = "S"
    }

    attribute {
        name = "FBID"
        type = "S"
    }

    global_secondary_index {
        name = "fbid-index"
        read_capacity = 1
        write_capacity = 1
        hash_key = "FBID"
        projection_type = "ALL"
    }
}

The problem is that I'm using a tool called Dynamic DynamoDB to automatically adjust the provisioned capacities based on the actual consumed capacities. But when I plan or apply changes with Terraform, it always tries to update the capacities to the default values I have in my .tf files. With the example above, it will always try to set the read and write capacities to 1 (for the global secondary index too), even if Dynamic DynamoDB changed them because of a traffic increase.

I would love to solve this issue by adding a new argument to the aws_dynamodb_table resource: something like update_capacities (or maybe better, two new ones, update_read_capacity and update_write_capacity). If set to false, Terraform will not try to update the capacities if the table has already been created. If the table is not present yet, Terraform will create it as usual, setting the default capacities accordingly.
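Just to illustrate the idea (update_capacities is purely hypothetical here, not an existing argument), it could look something like this:

resource "aws_dynamodb_table" "accounts" {
    name = "foo-staging-accounts"
    read_capacity = 1
    write_capacity = 1
    update_capacities = false    # hypothetical: capacities only used at creation time, never for updates
    hash_key = "ACC"

    # attribute and global_secondary_index blocks as above
}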

What do you think, guys? Do you have a better idea? How would you solve this issue without touching Terraform's code?

@sd-charris

I am in a similar scenario. The ability to ignore capacity changes is much needed!

oli-g commented Apr 4, 2017

In the end I solved the issue using ignore_changes, as described here. I only found that option after I opened this issue, so I think this issue can be closed: Terraform already provides a means to achieve this.
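For reference, a minimal sketch of what that looks like on the table from the first example (assuming the pre-0.12 syntax, where ignore_changes takes a list of attribute name strings):

resource "aws_dynamodb_table" "accounts" {
    name = "foo-staging-accounts"
    read_capacity = 1
    write_capacity = 1
    hash_key = "ACC"

    # attribute and global_secondary_index blocks omitted for brevity

    lifecycle {
        # Terraform will no longer plan changes when these drift from the .tf values
        ignore_changes = ["read_capacity", "write_capacity"]
    }
}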

oli-g closed this as completed Apr 4, 2017
@sd-charris

Maybe I am doing it wrong, but ignore_changes appears to only work for top-level resource settings. It does not appear that I can do the same for a GSI setting, which looks like "global_secondary_index.4003134.write_capacity", where "4003134" appears to be a dynamically generated ID.
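One coarse workaround that might work (untested here, and it would also suppress legitimate GSI changes) is to ignore the entire global_secondary_index attribute instead of the per-index capacities:

resource "aws_dynamodb_table" "accounts" {
    # ... table definition as above ...

    lifecycle {
        # Assumption: ignoring the whole attribute also covers the dynamically
        # keyed per-index settings such as global_secondary_index.4003134.write_capacity
        ignore_changes = ["global_secondary_index"]
    }
}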

ghost commented Apr 14, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Apr 14, 2020