Triton locality attribute - allow self referencing #32

Closed
hashibot opened this issue Sep 9, 2017 · 7 comments


hashibot commented Sep 9, 2017

This issue was originally opened by @pannon as hashicorp/terraform#16060. It was migrated here as a result of the provider split. The original body of the issue is below.


Hi,

I am trying to provision instances in Triton with locality rules that keep them far away from each other.
For instance, in the example below, each machine created through a module called affinity-test should land far away from the others (spread across different compute nodes):
locality = { far_from = ["${triton_machine.affinity-test.id}"] }

(I believe this is a common use case: instances of the same machine type (service) spread far away from each other.)

Trying to do this results in the following error:
* module.affinity-test.triton_machine.affinity-test[0]: module.affinity-test.triton_machine.affinity-test[0]: self reference not allowed: "triton_machine.affinity-test.id"
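
For reference, a minimal sketch of the kind of setup that triggers this; the count usage, package, and image values below are illustrative placeholders, not the exact configuration:

# Inside the affinity-test module: machines that should avoid each other's hosts
resource "triton_machine" "affinity-test" {
    count    = 3
    name     = "affinity-test-${count.index}"
    package  = "sample-256M"
    image    = "${data.triton_image.base64.id}"

    # Rejected by Terraform as a self reference
    locality = { far_from = ["${triton_machine.affinity-test.id}"] }
}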

In this case it would be quite beneficial to allow self-referencing. The locality flag expects machine UUIDs, which are only generated once a machine is provisioned, so the UUID of the machine currently being created can never appear in the list anyway.

Not sure if there are other ways to achieve this kind of provisioning through the Triton provider - if so, I would be interested to hear suggestions.

Thanks
P

@misterbisson

The problem is in CloudAPI, rather than Terraform or the Triton provider in Terraform. PUBAPI-1428 is tracking some internal API changes to better support locality specification in CloudAPI. The summary is that it will be implemented to work more like how it does in sdc-docker.


pannon commented Sep 12, 2017

@misterbisson where can I check some details on PUBAPI-1428?
I believe we are talking about two separate issues - current vs future locality options.

Currently Terraform (via CloudAPI) accepts a list of machines for locality/placement rules, described here.

close_to - (list of strings) - List of container UUIDs that a new instance should be placed alongside, on the same host.
far_from - (list of strings) - List of container UUIDs that a new instance should not share a host with.

The self-referencing check/error originates from Terraform, not CloudAPI. If Terraform accepted the locality list as described above (without throwing an error), we could pass in a generated list of machines automatically with the help of an output variable (${triton_machine.affinity-test.id}).

The UUID of the machine currently being provisioned will not be in the ${triton_machine.affinity-test.id} list and is not known until the machine is provisioned anyway, so there is no danger in allowing this.

@jwreagor
Contributor

@pannon While I believe there are holes in the usefulness of the existing locality feature of the Triton provider, I'm missing how the self reference validation of Terraform is the Triton provider's fault.

I might be wrong, and please correct me, but where else can you self reference in Terraform?

The work @misterbisson's comment refers to will definitely provide support for this, since our Docker API style of placement can use pattern-matching filters on instance names that have yet to be created.

# Run on a different node to all containers with names starting with "foo":
docker run -e 'affinity:container!=foo*' ...

# Same, using a regular expression:
docker run -e 'affinity:container!=/^foo/' ...

Let's hope it works in a similar fashion through CloudAPI.

Contributor

jwreagor commented Sep 13, 2017

Looks like we have early access to begin testing integration with the updated affinity rules coming to CloudAPI. The current proposal is to keep the existing locality block and add affinity alongside it, accepting either locality hints or affinity rules but not both, like so...

resource "triton_machine" "test-affinity" {
    name = "test-affinity"
    package = "sample-256M"
    image   = "${data.triton_image.base64.id}"
    affinity = ["instance==affinity"]
}

CloudAPI's CreateMachine endpoint will error out if it gets both 'locality' and 'affinity' params. It only accepts one or the other so Terraform should also respect this.
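
For the use case that opened this issue, the name-based rules should presumably allow something along these lines (a rough sketch; the glob-style rule mirrors the Docker examples above, and count, package, and image values are illustrative):

resource "triton_machine" "affinity-test" {
    count    = 3
    name     = "affinity-test-${count.index}"
    package  = "sample-256M"
    image    = "${data.triton_image.base64.id}"

    # Keep each machine off hosts running any instance named "affinity-test*",
    # without needing a UUID self reference
    affinity = ["instance!=affinity-test*"]
}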


pannon commented Sep 13, 2017

Thanks for your input @cheapRoc. Hopefully the new CloudAPI affinity feature will be out soon.
We need some kind of workaround for the interim, though; that is something I need to test in our environment. The triton CLI has worked fine with the affinity rules so far, but after switching to Terraform we have a few corner cases with machine placement.
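
(For reference, the kind of triton CLI invocation meant here - a rough sketch, with placeholder image/package arguments:)

# Create an instance placed away from any instance whose name starts with "foo"
triton instance create --affinity 'instance!=foo*' <image> <package>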

Right now the only option with Terraform seems to be tweaking the ALLOC_* variables described here.

This works OK in roughly 90% of provisioning cases with Terraform. The problem is more acute for smaller private Triton deployments, where only a small number of compute nodes are available for provisioning.

Contributor

jwreagor commented Oct 4, 2017

@pannon Ref: #42, the Triton team has released the new affinity feature into CloudAPI under the "release" channel of Triton, as well as to all DCs in the Joyent public cloud. The functionality in TF has been merged into master and is aimed at an official release with 0.3.0 later in the week.

Considering that this is moving toward deprecating locality altogether, how do you feel about closing this issue?


pannon commented Oct 4, 2017

Thanks for everyone's effort and hard work making this happen. Please close this issue.

jwreagor closed this as completed Oct 4, 2017