Ability to provision multiple instances of the same image/package #50

Closed
askfongjojo opened this issue Oct 19, 2015 · 8 comments

@askfongjojo
Contributor

This corresponds to the request for cloudapi PUBAPI-1117. I don't know where the change is best implemented - cloudapi, cloudapi v2, or at the CLI level. We'll have to think about how it works with the -w flag if we allow the user to specify some -n option to provision multiple instances.

@bahamas10
Contributor

$ for i in {1..3}; do triton create -n "machine-$i" minimal-64 t4-standard-128M; done; triton wait machine-{1..3}

is this not sufficient?

@dekobon

dekobon commented Oct 19, 2015

I've had two customers ask for a change in the API specifically so that they would not have to loop, because looping in our stack is problematic.

The problem is that we throttle based on the number of API requests per second. If a customer is already making other API requests rapidly (say, for cloud metrics) and then needs to auto-scale up 10 new instances, they have to do some serious guesswork about how many sleeps (and of what duration) to add between CreateMachine requests. Instead, they want to specify an integer and create N machines in a single call. The purpose of this feature request is to remove that layer of complexity for the customer.
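For illustration, the client-side workaround today looks roughly like the following sketch. The delay value is pure guesswork, which is exactly the problem: the client cannot see the server-side throttle, and any background API traffic eats into the same budget.

```shell
# Sketch of today's workaround: pace CreateMachine calls to stay under
# the per-second request throttle. DELAY is guesswork, not something
# the API exposes.
DELAY=2   # seconds between requests; chosen blindly
for i in $(seq 1 10); do
  triton create -n "web-$i" minimal-64 t4-standard-128M
  sleep "$DELAY"
done
```

(This fragment requires a configured `triton` profile and cloud credentials, so it is illustrative rather than runnable here.)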

Moreover, when folks try to port from AWS or OpenStack and we don't support an equivalent API, we add a lot of work for them.

As for where to implement it: I would say we should implement it in the HTTP API first (the v1 vs. v2 question is your call) and then integrate it into the CLI as time permits.
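To make the shape of the proposed request concrete, a single-call form could look something like the sketch below. This is purely hypothetical: CloudAPI has no `count` parameter today, and `triton cloudapi` is used here only as a raw-request convenience.

```shell
# Hypothetical: one CreateMachine call that provisions N instances.
# The "count" field does NOT exist in CloudAPI; this is a sketch of
# the requested feature, not current behavior.
triton cloudapi /my/machines -X POST -d '{
  "name": "web",
  "image": "minimal-64",
  "package": "t4-standard-128M",
  "count": 10
}'
```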

@misterbisson

This really is an API request rather than a CLI client issue, but the need is there.

I have trouble understanding how any user can make enough requests to get throttled, but that's orthogonal. There is a gap in our API relative to what people coming from AWS expect: the ability to create more than one instance in a single request.

@trentm
Contributor

trentm commented Oct 20, 2015

I had to look. You mean the -n instance_count option from http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-RunInstances.html, right?
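For reference, the EC2 CLI form from that page takes a count or a min-max range in one invocation (if memory of that page serves; the AMI ID and instance type below are placeholders):

```shell
# EC2 CLI: -n / --instance-count MIN[-MAX] asks for between MIN and MAX
# instances in a single RunInstances call. Placeholders throughout.
ec2-run-instances ami-1a2b3c4d -n 5-10 -t m1.small
```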

@dekobon

dekobon commented Oct 20, 2015

@trentm Exactly.

@dwlf
Contributor

dwlf commented Oct 20, 2015

Similarly, in OpenStack it is nova boot --min-count <number> --max-count <number>, where:

    --min-count <number>   Boot at least <number> servers (limited by quota).
    --max-count <number>   Boot up to <number> servers (limited by quota).

http://docs.openstack.org/cli-reference/content/novaclient_commands.html#novaclient_subcommand_boot

I was surprised to find that the unified OpenStack client is still peripheral; it similarly has openstack server create --min --max: http://docs.openstack.org/developer/python-openstackclient/command-objects/server.html#server-create
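Putting the two OpenStack forms side by side (a sketch; the flavor and image names are placeholders):

```shell
# Legacy nova client: ask for between 3 and 5 servers in one call.
nova boot --flavor m1.small --image cirros --min-count 3 --max-count 5 web

# Unified OpenStack client equivalent.
openstack server create --flavor m1.small --image cirros --min 3 --max 5 web
```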

@trentm
Contributor

trentm commented Jan 11, 2016

Really, this isn't something we propose to do client-side only, so there isn't a great reason to keep this ticket open. If/when we make the CloudAPI changes, we can open corresponding issues on node-triton.

@trentm trentm closed this as completed Jan 11, 2016