
Commit 354e10b: fix errors

ericl committed Sep 11, 2019
1 parent f6ac687 commit 354e10b
Showing 2 changed files with 5 additions and 5 deletions.
8 changes: 4 additions & 4 deletions doc/source/autoscaling.rst
@@ -14,7 +14,7 @@ as described in `the boto docs <http://boto3.readthedocs.io/en/latest/guide/conf
Then you're ready to go. The provided `ray/python/ray/autoscaler/aws/example-full.yaml <https://github.com/ray-project/ray/tree/master/python/ray/autoscaler/aws/example-full.yaml>`__ cluster config file will create a small cluster with a m5.large head node (on-demand) configured to autoscale up to two m5.large `spot workers <https://aws.amazon.com/ec2/spot/>`__.

Try it out by running these commands from your personal computer. Once the cluster is started, you can then
-SSH into the head node, ``source activate tensorflow_p36``, and then run Ray programs with ``ray.init(address="localhost:6379")``.
+SSH into the head node, ``source activate tensorflow_p36``, and then run Ray programs with ``ray.init(address="auto")``.

.. code-block:: bash
@@ -37,7 +37,7 @@ First, install the Google API client (``pip install google-api-python-client``),
Then you're ready to go. The provided `ray/python/ray/autoscaler/gcp/example-full.yaml <https://github.com/ray-project/ray/tree/master/python/ray/autoscaler/gcp/example-full.yaml>`__ cluster config file will create a small cluster with a n1-standard-2 head node (on-demand) configured to autoscale up to two n1-standard-2 `preemptible workers <https://cloud.google.com/preemptible-vms/>`__. Note that you'll need to fill in your project id in those templates.

Try it out by running these commands from your personal computer. Once the cluster is started, you can then
-SSH into the head node and then run Ray programs with ``ray.init(address="localhost:6379")``.
+SSH into the head node and then run Ray programs with ``ray.init(address="auto")``.

.. code-block:: bash
@@ -59,7 +59,7 @@ This is used when you have a list of machine IP addresses to connect in a Ray cluster.
Be sure to specify the proper ``head_ip``, list of ``worker_ips``, and the ``ssh_user`` field.

Try it out by running these commands from your personal computer. Once the cluster is started, you can then
-SSH into the head node and then run Ray programs with ``ray.init(address="localhost:6379")``.
+SSH into the head node and then run Ray programs with ``ray.init(address="auto")``.

.. code-block:: bash
@@ -77,7 +77,7 @@ SSH into the head node and then run Ray programs with ``ray.init(address="localh
Running commands on new and existing clusters
---------------------------------------------

-You can use ``ray exec`` to conveniently run commands on clusters. Note that scripts you run should connect to Ray via ``ray.init(address="localhost:6379")``.
+You can use ``ray exec`` to conveniently run commands on clusters. Note that scripts you run should connect to Ray via ``ray.init(address="auto")``.

.. code-block:: bash
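The doc hunks above all reference the autoscaler's ``example-full.yaml`` cluster config. As context for readers, here is an abbreviated sketch of what such an AWS config contains; field values are illustrative placeholders, not the authoritative template (see ``example-full.yaml`` in the repository for the real one):

```yaml
# Abbreviated autoscaler config sketch; values are placeholders.
cluster_name: minimal
min_workers: 0
max_workers: 2            # autoscale up to two workers, as in the docs above
provider:
    type: aws
    region: us-west-2
auth:
    ssh_user: ubuntu
head_node:
    InstanceType: m5.large
worker_nodes:
    InstanceType: m5.large
    InstanceMarketOptions:
        MarketType: spot  # request spot instances for workers
```

With such a file, ``ray up config.yaml`` starts the cluster and ``ray exec config.yaml 'python script.py'`` runs a command on it, where the script connects via ``ray.init(address="auto")`` as described above.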
2 changes: 1 addition & 1 deletion python/ray/monitor.py
@@ -123,7 +123,6 @@ def xray_heartbeat_batch_handler(self, unused_channel, data):
# Update the load metrics for this raylet.
client_id = ray.utils.binary_to_hex(heartbeat_message.client_id)
ip = self.raylet_id_to_ip_map.get(client_id)
-logger.error("RESOURCE LOAD {}".format(resource_load))
if ip:
self.load_metrics.update(ip, total_resources,
available_resources, resource_load)
@@ -361,6 +360,7 @@ def run(self):
try:
self._run()
except Exception:
+logger.exception("Error in monitor loop")
if self.autoscaler:
self.autoscaler.kill_workers()
raise
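The ``monitor.py`` hunk above replaces a stray debug ``logger.error`` with a ``logger.exception`` call that records the traceback before cleanup and re-raise. A minimal, self-contained sketch of that log-then-cleanup-then-reraise pattern (the class and names here are illustrative, not Ray's actual ``Monitor``):

```python
import logging

logger = logging.getLogger(__name__)


class Monitor:
    """Illustrative sketch of the pattern in the diff above."""

    def __init__(self, autoscaler=None):
        self.autoscaler = autoscaler

    def _run(self):
        # Stand-in for the real monitor loop; always fails here.
        raise RuntimeError("simulated failure in monitor loop")

    def run(self):
        try:
            self._run()
        except Exception:
            # Log the full traceback first, so the root cause survives
            # even if the cleanup below fails.
            logger.exception("Error in monitor loop")
            if self.autoscaler:
                self.autoscaler.kill_workers()
            raise
```

Logging before cleanup matters: if ``kill_workers()`` were called first and itself raised, the original error would be masked.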
