
Program crashes when running same program twice sequentially #1018

Closed
Joachimoe opened this issue Oct 18, 2022 · 22 comments · Fixed by #1023
Labels: bug (Something isn't working)

Comments

@Joachimoe

When running the below piece of code:

import cupy as cp
import numpy as np
import dask.array as da
from dask_cuda import LocalCUDACluster
from dask.distributed import Client, wait
import rmm

if __name__ == '__main__':

    cluster = LocalCUDACluster('0', rmm_managed_memory=True)
    client = Client(cluster)
    client.run(cp.cuda.set_allocator, rmm.rmm_cupy_allocator)

    # Here we set RMM/CuPy memory allocator on the "current" process,
    # i.e., the Dask client.
    rmm.reinitialize(managed_memory=True)
    cp.cuda.set_allocator(rmm.rmm_cupy_allocator)

    shape = (512, 512, 30000)
    chunks = (100, 100, 1000)

    huge_array_gpu = da.ones_like(cp.array(()), shape=shape, chunks=chunks)
    array_sum = da.multiply(huge_array_gpu, 17).persist()
    # `persist()` only does lazy evaluation, so we must `wait()` for the
    # actual compute to occur.
    wait(array_sum)

It runs perfectly the first time around. This of course creates a folder called dask-worker-space/storage. If I delete this folder, I can run the program again with no problem. If I do not, however, I get the following error:

2022-10-18 10:10:16,404 - distributed.preloading - INFO - Creating preload: dask_cuda.initialize
2022-10-18 10:10:16,404 - distributed.preloading - INFO - Import preload module: dask_cuda.initialize
2022-10-18 10:10:16,626 - distributed.nanny - ERROR - Failed to start worker
Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 480, in start
    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
    return await fut
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1406, in start_unsafe
    await self._register_with_scheduler()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in _register_with_scheduler
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in <dictcomp>
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/_collections_abc.py", line 850, in __iter__
    for key in self._mapping:
RuntimeError: Set changed size during iteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 858, in run
    await worker
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 488, in start
    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
RuntimeError: Worker failed to start.
2022-10-18 10:10:16,676 - distributed.nanny - ERROR - Failed while trying to start worker process: Worker failed to start.
2022-10-18 10:10:16,677 - distributed.nanny - ERROR - Failed to connect to process
Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 480, in start
    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
    return await fut
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1406, in start_unsafe
    await self._register_with_scheduler()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in _register_with_scheduler
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in <dictcomp>
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/_collections_abc.py", line 850, in __iter__
    for key in self._mapping:
RuntimeError: Set changed size during iteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 669, in start
    msg = await self._wait_until_connected(uid)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 789, in _wait_until_connected
    raise msg["exception"]
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 858, in run
    await worker
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 488, in start
    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
RuntimeError: Worker failed to start.
2022-10-18 10:10:16,678 - distributed.nanny - ERROR - Failed to start process
Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 480, in start
    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
    return await fut
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1406, in start_unsafe
    await self._register_with_scheduler()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in _register_with_scheduler
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in <dictcomp>
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/_collections_abc.py", line 850, in __iter__
    for key in self._mapping:
RuntimeError: Set changed size during iteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 423, in instantiate
    result = await self.process.start()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 669, in start
    msg = await self._wait_until_connected(uid)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 789, in _wait_until_connected
    raise msg["exception"]
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 858, in run
    await worker
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 488, in start
    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
RuntimeError: Worker failed to start.
Task exception was never retrieved
future: <Task finished name='Task-22' coro=<_wrap_awaitable() done, defined at /home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py:681> exception=RuntimeError('Nanny failed to start.')>
Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 480, in start
    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
    return await fut
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1406, in start_unsafe
    await self._register_with_scheduler()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in _register_with_scheduler
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in <dictcomp>
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/_collections_abc.py", line 850, in __iter__
    for key in self._mapping:
RuntimeError: Set changed size during iteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 480, in start
    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
    return await fut
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 350, in start_unsafe
    response = await self.instantiate()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 423, in instantiate
    result = await self.process.start()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 669, in start
    msg = await self._wait_until_connected(uid)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 789, in _wait_until_connected
    raise msg["exception"]
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 858, in run
    await worker
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 488, in start
    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
RuntimeError: Worker failed to start.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 688, in _wrap_awaitable
    return (yield from awaitable.__await__())
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 488, in start
    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
RuntimeError: Nanny failed to start.
2022-10-18 10:10:16,683 - tornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOLoop object at 0x7fcf80946fd0>>, <Task finished name='Task-21' coro=<SpecCluster._correct_state_internal() done, defined at /home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/deploy/spec.py:319> exception=RuntimeError('Worker failed to start.')>)
Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 480, in start
    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
    return await fut
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1406, in start_unsafe
    await self._register_with_scheduler()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in _register_with_scheduler
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in <dictcomp>
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/_collections_abc.py", line 850, in __iter__
    for key in self._mapping:
RuntimeError: Set changed size during iteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/tornado/ioloop.py", line 741, in _run_callback
    ret = callback()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/tornado/ioloop.py", line 765, in _discard_future_result
    future.result()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/deploy/spec.py", line 358, in _correct_state_internal
    await w  # for tornado gen.coroutine support
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 469, in start
    raise self.__startup_exc
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 480, in start
    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
    return await fut
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 350, in start_unsafe
    response = await self.instantiate()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 423, in instantiate
    result = await self.process.start()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 669, in start
    msg = await self._wait_until_connected(uid)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 789, in _wait_until_connected
    raise msg["exception"]
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 858, in run
    await worker
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 488, in start
    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
RuntimeError: Worker failed to start.
2022-10-18 10:10:17,673 - distributed.preloading - INFO - Creating preload: dask_cuda.initialize
2022-10-18 10:10:17,673 - distributed.preloading - INFO - Import preload module: dask_cuda.initialize
2022-10-18 10:10:17,871 - distributed.nanny - ERROR - Failed to start worker
Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 480, in start
    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
    return await fut
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1406, in start_unsafe
    await self._register_with_scheduler()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in _register_with_scheduler
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in <dictcomp>
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/_collections_abc.py", line 850, in __iter__
    for key in self._mapping:
RuntimeError: Set changed size during iteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 858, in run
    await worker
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 488, in start
    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
RuntimeError: Worker failed to start.
2022-10-18 10:10:17,915 - distributed.nanny - ERROR - Failed while trying to start worker process: Worker failed to start.
2022-10-18 10:10:17,915 - distributed.nanny - ERROR - Failed to connect to process
Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 480, in start
    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
    return await fut
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1406, in start_unsafe
    await self._register_with_scheduler()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in _register_with_scheduler
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in <dictcomp>
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/_collections_abc.py", line 850, in __iter__
    for key in self._mapping:
RuntimeError: Set changed size during iteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 669, in start
    msg = await self._wait_until_connected(uid)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 789, in _wait_until_connected
    raise msg["exception"]
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 858, in run
    await worker
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 488, in start
    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
RuntimeError: Worker failed to start.
2022-10-18 10:10:17,916 - distributed.nanny - ERROR - Failed to start process
Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 480, in start
    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
    return await fut
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1406, in start_unsafe
    await self._register_with_scheduler()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in _register_with_scheduler
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in <dictcomp>
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/_collections_abc.py", line 850, in __iter__
    for key in self._mapping:
RuntimeError: Set changed size during iteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 423, in instantiate
    result = await self.process.start()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 669, in start
    msg = await self._wait_until_connected(uid)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 789, in _wait_until_connected
    raise msg["exception"]
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 858, in run
    await worker
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 488, in start
    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
RuntimeError: Worker failed to start.
Task exception was never retrieved
future: <Task finished name='Task-36' coro=<_wrap_awaitable() done, defined at /home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py:681> exception=RuntimeError('Nanny failed to start.')>
Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 480, in start
    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
    return await fut
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1406, in start_unsafe
    await self._register_with_scheduler()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in _register_with_scheduler
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in <dictcomp>
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/_collections_abc.py", line 850, in __iter__
    for key in self._mapping:
RuntimeError: Set changed size during iteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 480, in start
    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
    return await fut
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 350, in start_unsafe
    response = await self.instantiate()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 423, in instantiate
    result = await self.process.start()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 669, in start
    msg = await self._wait_until_connected(uid)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 789, in _wait_until_connected
    raise msg["exception"]
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 858, in run
    await worker
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 488, in start
    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
RuntimeError: Worker failed to start.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 688, in _wrap_awaitable
    return (yield from awaitable.__await__())
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 488, in start
    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
RuntimeError: Nanny failed to start.
Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 480, in start
    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
    return await fut
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1406, in start_unsafe
    await self._register_with_scheduler()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in _register_with_scheduler
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/worker.py", line 1093, in <dictcomp>
    types={k: typename(v) for k, v in self.data.items()},
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/_collections_abc.py", line 850, in __iter__
    for key in self._mapping:
RuntimeError: Set changed size during iteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/Desktop/src/pygpubatch/regex.py", line 10, in <module>
    cluster = LocalCUDACluster('0', rmm_managed_memory=True)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/dask_cuda/local_cuda_cluster.py", line 366, in __init__
    self.sync(self._correct_state)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/utils.py", line 338, in sync
    return sync(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/utils.py", line 405, in sync
    raise exc.with_traceback(tb)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/utils.py", line 378, in f
    result = yield future
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/tornado/gen.py", line 762, in run
    value = future.result()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/deploy/spec.py", line 358, in _correct_state_internal
    await w  # for tornado gen.coroutine support
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 469, in start
    raise self.__startup_exc
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 480, in start
    await asyncio.wait_for(self.start_unsafe(), timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 442, in wait_for
    return await fut
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 350, in start_unsafe
    response = await self.instantiate()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 423, in instantiate
    result = await self.process.start()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 669, in start
    msg = await self._wait_until_connected(uid)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 789, in _wait_until_connected
    raise msg["exception"]
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/nanny.py", line 858, in run
    await worker
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/core.py", line 488, in start
    raise RuntimeError(f"{type(self).__name__} failed to start.") from exc
RuntimeError: Worker failed to start.
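
The root error above, RuntimeError: Set changed size during iteration, is CPython's standard guard against mutating a set while iterating it. A minimal illustration of that failure mode in plain Python, independent of dask:

s = {1, 2, 3}
try:
    for x in s:
        s.add(x + 10)  # mutating the set mid-iteration triggers the guard
except RuntimeError as e:
    print(e)  # prints: Set changed size during iteration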
@Joachimoe
Author

EDIT: setup:

Python version: 3.9.3
NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                 conda_forge    conda-forge
_openmp_mutex             4.5                       2_gnu    conda-forge
arrow-cpp                 8.0.1           py39hd3ccb9b_2_cpu    conda-forge
aws-c-cal                 0.5.11               h95a6274_0    conda-forge
aws-c-common              0.6.2                h7f98852_0    conda-forge
aws-c-event-stream        0.2.7               h3541f99_13    conda-forge
aws-c-io                  0.10.5               hfb6a706_0    conda-forge
aws-checksums             0.1.11               ha31a3da_7    conda-forge
aws-sdk-cpp               1.8.186              hb4091e7_3    conda-forge
bokeh                     2.4.3              pyhd8ed1ab_3    conda-forge
brotlipy                  0.7.0           py39hb9d737c_1004    conda-forge
bzip2                     1.0.8                h7f98852_4    conda-forge
c-ares                    1.18.1               h7f98852_0    conda-forge
ca-certificates           2022.07.19           h06a4308_0  
cachetools                5.2.0              pyhd8ed1ab_0    conda-forge
carbontracker             1.1.6                    pypi_0    pypi
certifi                   2022.9.24        py39h06a4308_0  
cffi                      1.15.1           py39he91dace_0    conda-forge
charset-normalizer        2.1.1                    pypi_0    pypi
click                     8.1.3            py39hf3d152e_0    conda-forge
cloudpickle               2.2.0              pyhd8ed1ab_0    conda-forge
contourpy                 1.0.5                    pypi_0    pypi
cryptography              37.0.4           py39hd97740a_0    conda-forge
cuda-python               11.7.0           py39h3fd9d12_0    nvidia
cudatoolkit               11.5.1               hcf5317a_9    nvidia
cudf                      22.08.01        cuda_11_py39_g31337c9001_0    rapidsai
cuml                      22.08.00        cuda11_py39_g1e2f8a9aa_0    rapidsai
cupy                      10.6.0           py39hc3c280e_0    conda-forge
cycler                    0.11.0                   pypi_0    pypi
cytoolz                   0.12.0           py39hb9d737c_0    conda-forge
dask                      2022.7.1           pyhd8ed1ab_0    conda-forge
dask-core                 2022.7.1           pyhd8ed1ab_0    conda-forge
dask-cuda                 22.08.00        py39_g9a61ce5_0    rapidsai
dask-cudf                 22.08.01        cuda_11_py39_g31337c9001_0    rapidsai
decorator                 5.1.1                    pypi_0    pypi
distributed               2022.7.1           pyhd8ed1ab_0    conda-forge
dlpack                    0.5                  h9c3ff4c_0    conda-forge
faiss-proc                1.0.0                      cuda    rapidsai
fastavro                  1.6.1            py39hb9d737c_0    conda-forge
fastrlock                 0.8              py39h5a03fae_2    conda-forge
fonttools                 4.37.4                   pypi_0    pypi
freetype                  2.12.1               hca18f0e_0    conda-forge
fsspec                    2022.8.2           pyhd8ed1ab_0    conda-forge
future                    0.18.2                   pypi_0    pypi
geocoder                  1.38.1                   pypi_0    pypi
gflags                    2.2.2             he1b5a44_1004    conda-forge
glog                      0.6.0                h6f12383_0    conda-forge
grpc-cpp                  1.47.1               hbad87ad_6    conda-forge
heapdict                  1.0.1                      py_0    conda-forge
idna                      3.4                pyhd8ed1ab_0    conda-forge
jinja2                    3.1.2              pyhd8ed1ab_1    conda-forge
joblib                    1.2.0              pyhd8ed1ab_0    conda-forge
jpeg                      9e                   h166bdaf_2    conda-forge
keyutils                  1.6.1                h166bdaf_0    conda-forge
kiwisolver                1.4.4                    pypi_0    pypi
krb5                      1.19.3               h3790be6_0    conda-forge
lcms2                     2.12                 hddcbb42_0    conda-forge
ld_impl_linux-64          2.36.1               hea4e1c9_2    conda-forge
lerc                      4.0.0                h27087fc_0    conda-forge
libabseil                 20220623.0      cxx17_h48a1fff_4    conda-forge
libblas                   3.9.0           16_linux64_openblas    conda-forge
libbrotlicommon           1.0.9                h166bdaf_7    conda-forge
libbrotlidec              1.0.9                h166bdaf_7    conda-forge
libbrotlienc              1.0.9                h166bdaf_7    conda-forge
libcblas                  3.9.0           16_linux64_openblas    conda-forge
libcrc32c                 1.1.2                h9c3ff4c_0    conda-forge
libcudf                   22.08.01        cuda11_g31337c9001_0    rapidsai
libcuml                   22.08.00        cuda11_g1e2f8a9aa_0    rapidsai
libcumlprims              22.08.00        cuda11_g1770e60_0    nvidia
libcurl                   7.85.0               h7bff187_0    conda-forge
libcusolver               11.4.1.48                     0    nvidia
libcusparse               11.7.5.86                     0    nvidia
libdeflate                1.14                 h166bdaf_0    conda-forge
libedit                   3.1.20191231         he28a2e2_2    conda-forge
libev                     4.33                 h516909a_1    conda-forge
libevent                  2.1.10               h9b69904_4    conda-forge
libfaiss                  1.7.0           cuda112h5bea7ad_8_cuda    conda-forge
libffi                    3.4.2                h7f98852_5    conda-forge
libgcc-ng                 12.1.0              h8d9b700_16    conda-forge
libgfortran-ng            12.1.0              h69a702a_16    conda-forge
libgfortran5              12.1.0              hdcd56e2_16    conda-forge
libgomp                   12.1.0              h8d9b700_16    conda-forge
libgoogle-cloud           2.1.0                h9ebe8e8_2    conda-forge
liblapack                 3.9.0           16_linux64_openblas    conda-forge
libllvm11                 11.1.0               he0ac6c6_4    conda-forge
libnghttp2                1.47.0               hdcd2b5c_1    conda-forge
libnsl                    2.0.0                h7f98852_0    conda-forge
libopenblas               0.3.21          pthreads_h78a6416_3    conda-forge
libpng                    1.6.38               h753d276_0    conda-forge
libprotobuf               3.20.1               h6239696_4    conda-forge
libraft-distance          22.08.00        cuda11_g87a7d16c_0    rapidsai
libraft-headers           22.08.00        cuda11_g87a7d16c_0    rapidsai
libraft-nn                22.08.00        cuda11_g87a7d16c_0    rapidsai
librmm                    22.08.00        cuda11_gd212232c_0    rapidsai
libsqlite                 3.39.4               h753d276_0    conda-forge
libssh2                   1.10.0               haa6b8db_3    conda-forge
libstdcxx-ng              12.1.0              ha89aaad_16    conda-forge
libthrift                 0.16.0               h491838f_2    conda-forge
libtiff                   4.4.0                h55922b4_4    conda-forge
libutf8proc               2.7.0                h7f98852_0    conda-forge
libuuid                   2.32.1            h7f98852_1000    conda-forge
libwebp-base              1.2.4                h166bdaf_0    conda-forge
libxcb                    1.13              h7f98852_1004    conda-forge
libzlib                   1.2.12               h166bdaf_4    conda-forge
llvmlite                  0.39.1           py39h7d9a04d_0    conda-forge
locket                    1.0.0              pyhd8ed1ab_0    conda-forge
lz4                       4.0.0            py39h029007f_2    conda-forge
lz4-c                     1.9.3                h9c3ff4c_1    conda-forge
markupsafe                2.1.1            py39hb9d737c_1    conda-forge
matplotlib                3.6.0                    pypi_0    pypi
msgpack-python            1.0.4            py39hf939315_0    conda-forge
nccl                      2.14.3.1             h0800d71_0    conda-forge
ncurses                   6.3                  h27087fc_1    conda-forge
numba                     0.56.2           py39h61ddf18_1    conda-forge
numpy                     1.22.4           py39hc58783e_0    conda-forge
nvtx                      0.2.3            py39h3811e60_1    conda-forge
openjpeg                  2.5.0                h7d73246_1    conda-forge
openssl                   1.1.1q               h7f8727e_0  
orc                       1.7.6                h6c59b99_0    conda-forge
packaging                 21.3               pyhd8ed1ab_0    conda-forge
pandas                    1.4.4            py39h1832856_0    conda-forge
parquet-cpp               1.5.1                         2    conda-forge
partd                     1.3.0              pyhd8ed1ab_0    conda-forge
pillow                    9.2.0            py39hd5dbb17_2    conda-forge
pip                       22.2.2           py39h06a4308_0  
protobuf                  3.20.1           py39h5a03fae_0    conda-forge
psutil                    5.9.2            py39hb9d737c_0    conda-forge
pthread-stubs             0.4               h36c2ea0_1001    conda-forge
ptxcompiler               0.2.0            py39h107f55c_0    rapidsai
pyarrow                   8.0.1           py39hc0775d8_2_cpu    conda-forge
pycparser                 2.21               pyhd8ed1ab_0    conda-forge
pynvml                    11.4.1             pyhd8ed1ab_0    conda-forge
pyopenssl                 22.0.0             pyhd8ed1ab_1    conda-forge
pyparsing                 3.0.9              pyhd8ed1ab_0    conda-forge
pyraft                    22.08.00        cuda11_py39_g87a7d16c_0    rapidsai
pysocks                   1.7.1              pyha2e5f31_6    conda-forge
python                    3.9.13          h9a8a25e_0_cpython    conda-forge
python-dateutil           2.8.2              pyhd8ed1ab_0    conda-forge
python_abi                3.9                      2_cp39    conda-forge
pytz                      2022.4             pyhd8ed1ab_0    conda-forge
pyyaml                    6.0              py39hb9d737c_4    conda-forge
ratelim                   0.1.6                    pypi_0    pypi
re2                       2022.06.01           h27087fc_0    conda-forge
readline                  8.1.2                h0f457ee_0    conda-forge
requests                  2.28.1                   pypi_0    pypi
rmm                       22.08.00        cuda11_py39_gd212232c_0    rapidsai
s2n                       1.0.10               h9b69904_0    conda-forge
scikit-learn              1.1.2                    pypi_0    pypi
scipy                     1.9.1            py39h8ba3f38_0    conda-forge
seaborn                   0.12.0                   pypi_0    pypi
setuptools                65.4.1             pyhd8ed1ab_0    conda-forge
six                       1.16.0             pyh6c4a22f_0    conda-forge
sklearn                   0.0                      pypi_0    pypi
snappy                    1.1.9                hbd366e4_1    conda-forge
sortedcontainers          2.4.0              pyhd8ed1ab_0    conda-forge
spdlog                    1.8.5                h4bd325d_1    conda-forge
sqlite                    3.39.4               h4ff8645_0    conda-forge
tblib                     1.7.0              pyhd8ed1ab_0    conda-forge
threadpoolctl             3.1.0                    pypi_0    pypi
tk                        8.6.12               h27826a3_0    conda-forge
toolz                     0.12.0             pyhd8ed1ab_0    conda-forge
tornado                   6.1              py39hb9d737c_3    conda-forge
treelite                  2.4.0            py39h6b629c6_1    conda-forge
treelite-runtime          2.4.0                    pypi_0    pypi
typing_extensions         4.3.0              pyha770c72_0    conda-forge
tzdata                    2022d                h191b570_0    conda-forge
ucx                       1.13.1               h538f049_0    conda-forge
ucx-proc                  1.0.0                       gpu    rapidsai
ucx-py                    0.27.00         py39_g9abe3c1_0    rapidsai
urllib3                   1.26.11            pyhd8ed1ab_0    conda-forge
wheel                     0.37.1             pyhd8ed1ab_0    conda-forge
xorg-libxau               1.0.9                h7f98852_0    conda-forge
xorg-libxdmcp             1.1.3                h7f98852_0    conda-forge
xz                        5.2.6                h166bdaf_0    conda-forge
yaml                      0.2.5                h7f98852_2    conda-forge
zict                      2.2.0              pyhd8ed1ab_0    conda-forge
zlib                      1.2.12               h166bdaf_4    conda-forge
zstd                      1.5.2                h6239696_4    conda-forge

@pentschev
Member

Thanks @Joachimoe for the details. How does your first run complete: does it terminate cleanly, or do you see errors? This can happen if the cluster didn't shut down cleanly, but on my end it terminates cleanly, which allows me to run a second time without having to delete that directory.
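
For reference, one way to guarantee a clean shutdown is to close the client and cluster explicitly, for example with context managers. A minimal sketch of the reproducer restructured that way, assuming the leftover storage stems from an unclean shutdown:

import cupy as cp
import dask.array as da
from dask_cuda import LocalCUDACluster
from dask.distributed import Client, wait

if __name__ == '__main__':
    # Context managers close the client and cluster even if the body
    # raises, giving workers a chance to clean up their
    # dask-worker-space spill directory on exit.
    with LocalCUDACluster('0', rmm_managed_memory=True) as cluster:
        with Client(cluster) as client:
            huge_array_gpu = da.ones_like(
                cp.array(()), shape=(512, 512, 30000), chunks=(100, 100, 1000)
            )
            wait(da.multiply(huge_array_gpu, 17).persist())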

@Joachimoe
Author

The first run completes cleanly. No errors or casualties. The only output generated is the following:

(rps) joachim@Moe:~/Desktop/src/pygpubatch$ python3 regex.py 1000 1000
2022-10-18 10:43:48,930 - distributed.preloading - INFO - Creating preload: dask_cuda.initialize
2022-10-18 10:43:48,931 - distributed.preloading - INFO - Import preload module: dask_cuda.initialize
(rps) joachim@Moe:~/Desktop/src/pygpubatch$ 

@pentschev
Member

Could you also paste the contents of dask-worker-space/storage after the first run?

@Joachimoe
Author

Joachimoe commented Oct 18, 2022

These files are all binary-encoded. I'll try to read the contents of a single file. The folder contains files with the following percent-encoded names (I include the file sizes, as they differ; see the decode sketch after the listing):

joachim@Moe:~/Desktop/src/pygpubatch/dask-worker-space/storage$ ls -sh *
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%200%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2010%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2011%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2012%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%201%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2013%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2014%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2015%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2016%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2017%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2018%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2019%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2020%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2021%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2022%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%202%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2023%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2024%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2025%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2026%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2027%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2028%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%2029%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%203%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%204%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%205%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%206%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%207%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%208%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%200%2C%209%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%200%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2010%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2011%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2012%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%201%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2013%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2014%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2015%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2016%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2017%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2018%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2019%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2020%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2021%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2022%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%202%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2023%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2024%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2025%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2026%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2027%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2028%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%2029%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%203%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%204%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%205%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%206%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%207%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%208%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%201%2C%209%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%200%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2010%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2011%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2012%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%201%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2013%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2014%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2015%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2016%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2017%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2018%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2019%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2020%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2021%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2022%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%202%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2023%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2024%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2025%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2026%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2027%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2028%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%2029%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%203%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%204%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%205%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%206%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%207%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%208%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%202%2C%209%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%200%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2010%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2011%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2012%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%201%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2013%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2014%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2015%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2016%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2017%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2018%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2019%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2020%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2021%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2022%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%202%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2023%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2024%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2025%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2026%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2027%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2028%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%2029%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%203%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%204%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%205%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%206%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%207%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%208%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%203%2C%209%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%200%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2010%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2011%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2012%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%201%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2013%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2014%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2015%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2016%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2017%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2018%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2019%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2020%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2021%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2022%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%202%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2023%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2024%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2025%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2026%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2027%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2028%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%2029%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%203%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%204%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%205%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%206%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%207%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%208%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%204%2C%209%29
 40K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%205%2C%200%29
 40K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%205%2C%2010%29
 40K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%205%2C%2011%29
 40K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%200%2C%205%2C%2012%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%201%2C%200%2C%200%29
308K %28%27multiply-5c7abecb5638ed5bd521807e713f7867%27%2C%202%2C%204%2C%205%29
[listing truncated: several hundred more spilled chunk files of 40K-308K each, all percent-encoded task keys of the form ('multiply-5c7abecb5638ed5bd521807e713f7867', i, j, k)]

@Joachimoe
Author

After trying to add client.shutdown() to the end of the program above, a new error occurred:

Python runtime state: finalizing (tstate=0x557c06a28b00)

@pentschev
Member

It's really strange that you're getting no errors but the files still aren't cleaned up. The client.shutdown() error is also something I haven't seen before. This is a long shot, but could you try closing the cluster before client.shutdown()? E.g.:

cluster.close()
client.shutdown()

And if that doesn't work, could you try a nasty hack of sleeping for a while, just to see if that has any effect? E.g.:

import time

...

cluster.close()
client.shutdown()

time.sleep(60)
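For what it's worth, here is a minimal sketch of the same teardown using context managers, which both LocalCUDACluster and Client support; leaving the with blocks closes the client first and then the cluster, which often avoids explicit shutdown races. This is just an illustration of the idea (the RMM allocator setup from the original reproducer is omitted for brevity):

import cupy as cp
import dask.array as da
from dask_cuda import LocalCUDACluster
from dask.distributed import Client, wait

if __name__ == '__main__':
    # Entering the `with` blocks starts the cluster and client; leaving
    # them closes the client first and then the cluster, in that order.
    with LocalCUDACluster('0', rmm_managed_memory=True) as cluster:
        with Client(cluster) as client:
            huge_array_gpu = da.ones_like(
                cp.array(()), shape=(512, 512, 30000), chunks=(100, 100, 1000)
            )
            array_sum = da.multiply(huge_array_gpu, 17).persist()
            wait(array_sum)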

@Joachimoe
Author

These are the error messages when running the programs ABOVE in the same order AND deleting the storage folder before each run (IGNORE the time-stamps; I ran them in reverse order locally).

WITHOUT time.sleep(60):

2022-10-18 11:57:22,538 - distributed.preloading - INFO - Import preload module: dask_cuda.initialize
2022-10-18 11:57:44,874 - distributed.client - ERROR - cannot schedule new futures after shutdown
Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 223, in read
    frames_nbytes = await stream.read_bytes(fmt_size)
tornado.iostream.StreamClosedError: Stream is closed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/client.py", line 1392, in _handle_report
    msgs = await self.scheduler_comm.comm.read()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 239, in read
    convert_stream_closed_error(self, e)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
    raise CommClosedError(f"in {obj}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) Client->Scheduler local=tcp://127.0.0.1:34000 remote=tcp://127.0.0.1:38271>: Stream is closed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/utils.py", line 778, in wrapper
    return await func(*args, **kwargs)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/client.py", line 1211, in _reconnect
    await self._ensure_connected(timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/client.py", line 1241, in _ensure_connected
    comm = await connect(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/core.py", line 291, in connect
    comm = await asyncio.wait_for(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 479, in wait_for
    return fut.result()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 449, in connect
    stream = await self.client.connect(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/tornado/tcpclient.py", line 265, in connect
    addrinfo = await self.resolver.resolve(host, port, af)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 434, in resolve
    for fam, _, _, _, address in await asyncio.get_running_loop().getaddrinfo(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/base_events.py", line 861, in getaddrinfo
    return await self.run_in_executor(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/base_events.py", line 819, in run_in_executor
    executor.submit(func, *args), loop=self)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/concurrent/futures/thread.py", line 167, in submit
    raise RuntimeError('cannot schedule new futures after shutdown')
RuntimeError: cannot schedule new futures after shutdown
cannot schedule new futures after shutdown
Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 223, in read
    frames_nbytes = await stream.read_bytes(fmt_size)
tornado.iostream.StreamClosedError: Stream is closed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/client.py", line 1392, in _handle_report
    msgs = await self.scheduler_comm.comm.read()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 239, in read
    convert_stream_closed_error(self, e)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
    raise CommClosedError(f"in {obj}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) Client->Scheduler local=tcp://127.0.0.1:34000 remote=tcp://127.0.0.1:38271>: Stream is closed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/utils.py", line 778, in wrapper
    return await func(*args, **kwargs)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/client.py", line 1400, in _handle_report
    await self._reconnect()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/utils.py", line 778, in wrapper
    return await func(*args, **kwargs)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/client.py", line 1211, in _reconnect
    await self._ensure_connected(timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/client.py", line 1241, in _ensure_connected
    comm = await connect(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/core.py", line 291, in connect
    comm = await asyncio.wait_for(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 479, in wait_for
    return fut.result()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 449, in connect
    stream = await self.client.connect(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/tornado/tcpclient.py", line 265, in connect
    addrinfo = await self.resolver.resolve(host, port, af)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 434, in resolve
    for fam, _, _, _, address in await asyncio.get_running_loop().getaddrinfo(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/base_events.py", line 861, in getaddrinfo
    return await self.run_in_executor(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/base_events.py", line 819, in run_in_executor
    executor.submit(func, *args), loop=self)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/concurrent/futures/thread.py", line 167, in submit
    raise RuntimeError('cannot schedule new futures after shutdown')
RuntimeError: cannot schedule new futures after shutdown
2022-10-18 11:57:44,875 - distributed.client - ERROR - cannot schedule new futures after shutdown
Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 223, in read
    frames_nbytes = await stream.read_bytes(fmt_size)
tornado.iostream.StreamClosedError: Stream is closed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/client.py", line 1392, in _handle_report
    msgs = await self.scheduler_comm.comm.read()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 239, in read
    convert_stream_closed_error(self, e)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 144, in convert_stream_closed_error
    raise CommClosedError(f"in {obj}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) Client->Scheduler local=tcp://127.0.0.1:34000 remote=tcp://127.0.0.1:38271>: Stream is closed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/client.py", line 1521, in _close
    await asyncio.wait_for(asyncio.shield(handle_report_task), 0.1)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 479, in wait_for
    return fut.result()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/utils.py", line 778, in wrapper
    return await func(*args, **kwargs)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/client.py", line 1400, in _handle_report
    await self._reconnect()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/utils.py", line 778, in wrapper
    return await func(*args, **kwargs)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/client.py", line 1211, in _reconnect
    await self._ensure_connected(timeout=timeout)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/client.py", line 1241, in _ensure_connected
    comm = await connect(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/core.py", line 291, in connect
    comm = await asyncio.wait_for(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/tasks.py", line 479, in wait_for
    return fut.result()
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 449, in connect
    stream = await self.client.connect(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/tornado/tcpclient.py", line 265, in connect
    addrinfo = await self.resolver.resolve(host, port, af)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/site-packages/distributed/comm/tcp.py", line 434, in resolve
    for fam, _, _, _, address in await asyncio.get_running_loop().getaddrinfo(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/base_events.py", line 861, in getaddrinfo
    return await self.run_in_executor(
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/asyncio/base_events.py", line 819, in run_in_executor
    executor.submit(func, *args), loop=self)
  File "/home/joachim/anaconda3/envs/rps/lib/python3.9/concurrent/futures/thread.py", line 167, in submit
    raise RuntimeError('cannot schedule new futures after shutdown')
RuntimeError: cannot schedule new futures after shutdown

This is with time.sleep(60):

2022-10-18 11:54:29,805 - distributed.preloading - INFO - Creating preload: dask_cuda.initialize
2022-10-18 11:54:29,805 - distributed.preloading - INFO - Import preload module: dask_cuda.initialize
2022-10-18 11:55:23,272 - distributed.client - ERROR - Failed to reconnect to scheduler after 30.00 seconds, closing client

@Joachimoe
Author

EDIT:

After tinkering around, I tried creating another program, one that uses the dask-ml library. That program runs, also sequentially, with no errors at all. The problem seems specific to the code above. However, if I do execute the originally pasted code, all subsequent runs of anything importing Dask will crash. One such example is the program below:

import cupy as cp
import numpy as np
import dask.array as da
from dask_cuda import LocalCUDACluster
from dask.distributed import Client, wait
import rmm
import sys 
import dask_ml.preprocessing as pre
import time 



def minmax(size, work_units):
    # Use a CuPy-backed random state so chunks are generated on the GPU.
    # (Note: cp.random.seed() returns None, so the original
    # `RandomState=seed` silently fell back to the default RNG.)
    rs = da.random.RandomState(RandomState=cp.random.RandomState)
    rs = rs.randint(low=0, high=100_000, size=(size, work_units), chunks='auto')
    size_gb = rs.nbytes / 1e9
    scaler = pre.MinMaxScaler(copy=False)

    start = time.time()
    scaler.fit(rs)
    print(da.sum(rs).compute())
    print(da.sum(scaler.transform(rs)).compute())
    end = time.time()

    t = (end - start) * 1000
    return size_gb, t


if __name__ == '__main__':
    cp.random.seed(seed=42)
    cluster = LocalCUDACluster('0', rmm_managed_memory=True)   # GPU device 0, with RMM managed memory
    client = Client(cluster)                                   # create a local cluster
    client.run(cp.cuda.set_allocator, rmm.rmm_cupy_allocator)  # combine CuPy and RMM to allocate GPU memory on the workers
    rmm.reinitialize(managed_memory=True)
    cp.cuda.set_allocator(rmm.rmm_cupy_allocator)

    size = int(sys.argv[1])
    wk = int(sys.argv[2])

    print(minmax(size, wk))
    

@wence-
Contributor

wence- commented Oct 18, 2022

I can't recreate the initial problem locally (with admittedly a slightly different environment). Can you post the output of conda list --export (this produces a conda environment file we can use to replicate your environment)?

@pentschev
Member

I am a bit confused by your statements:

Just to re-iterate, here's the program that is now being run:

I did NOT delete the storage folder before running either.

and then:

These are the error-messages when running the programs ABOVE in the same order AND deleting the storage-folder before each run (IGNORE the time-stamps I did it in reverse locally).

Are you saying that you delete the directory before each run, or that you don't?

You will need to delete the files before running with the changes I suggested. This is the order I'm proposing (a small sketch automating these steps follows the list):

  1. Delete dask-worker-space;
  2. Run the code for the first time (first time relative to the deletion of dask-worker-space);
  3. Check the contents of dask-worker-space/storage -- should be empty when everything completes successfully.
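A minimal Python sketch of those three steps, assuming the reproducer has been saved as a script (the name repro.py is just a placeholder):

import os
import shutil
import subprocess

# 1. Delete any stale worker space from a previous run.
shutil.rmtree("dask-worker-space", ignore_errors=True)

# 2. Run the reproducer once ("repro.py" is a placeholder name).
subprocess.run(["python", "repro.py"], check=True)

# 3. After a clean shutdown, the spill directory should be empty or absent.
storage = os.path.join("dask-worker-space", "storage")
leftovers = os.listdir(storage) if os.path.isdir(storage) else []
print(f"{len(leftovers)} leftover spill file(s) in {storage}")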

After having tinkered around, I tried to create another program which utilizes DASK_ML libraries. These programs run, also sequentially, with no errors at all. It seems to be a specific problem to the code above. If I however do execute the originally pasted code, all other sequential runs of anything importing DASK will crash.

I'm really puzzled as to what happens in your case; I've also been running the same code as you, but I do not experience errors even when I run it multiple times. What is more confusing is that your cluster apparently completes successfully, which is normally an indication that the cleanup of the files should also have occurred.

@Joachimoe
Author

@pentschev Sorry for the confusion. I deleted the comment which caused the confusion, as it did not contribute to the discussion.

I delete the directory before each run of the program. Adding either of your suggestions results in an error when running the programs the first time around.

When adding:

cluster.close()
client.shutdown()

The error is the following:

RuntimeError: cannot schedule new futures after shutdown

When adding:

cluster.close()
client.shutdown()
time.sleep(60)

The error is:

2022-10-18 11:54:29,805 - distributed.preloading - INFO - Creating preload: dask_cuda.initialize
2022-10-18 11:54:29,805 - distributed.preloading - INFO - Import preload module: dask_cuda.initialize
2022-10-18 11:55:23,272 - distributed.client - ERROR - Failed to reconnect to scheduler after 30.00 seconds, closing client

@Joachimoe
Author

@wence- The output of the export command is the following:

# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: linux-64
_libgcc_mutex=0.1=conda_forge
_openmp_mutex=4.5=2_gnu
aom=3.5.0=h27087fc_0
arrow-cpp=8.0.1=py39hd3ccb9b_2_cpu
aws-c-cal=0.5.11=h95a6274_0
aws-c-common=0.6.2=h7f98852_0
aws-c-event-stream=0.2.7=h3541f99_13
aws-c-io=0.10.5=hfb6a706_0
aws-checksums=0.1.11=ha31a3da_7
aws-sdk-cpp=1.8.186=hb4091e7_3
blosc=1.21.1=h83bc5f7_3
bokeh=2.4.3=pyhd8ed1ab_3
brotli=1.0.9=h166bdaf_7
brotli-bin=1.0.9=h166bdaf_7
brotlipy=0.7.0=py39hb9d737c_1004
brunsli=0.1=h9c3ff4c_0
bzip2=1.0.8=h7f98852_4
c-ares=1.18.1=h7f98852_0
c-blosc2=2.4.2=h7a311fb_0
ca-certificates=2022.9.24=ha878542_0
cachetools=5.2.0=pyhd8ed1ab_0
carbontracker=1.1.6=pypi_0
certifi=2022.9.24=pyhd8ed1ab_0
cffi=1.15.1=py39he91dace_0
cfitsio=4.1.0=hd9d235c_0
charls=2.3.4=h9c3ff4c_0
charset-normalizer=2.1.1=pypi_0
click=8.1.3=py39hf3d152e_0
cloudpickle=2.2.0=pyhd8ed1ab_0
contourpy=1.0.5=pypi_0
cryptography=37.0.4=py39hd97740a_0
cuda-python=11.7.0=py39h3fd9d12_0
cudatoolkit=11.5.1=hcf5317a_9
cudf=22.08.01=cuda_11_py39_g31337c9001_0
cuml=22.08.00=cuda11_py39_g1e2f8a9aa_0
cupy=10.6.0=py39hc3c280e_0
cycler=0.11.0=pypi_0
cytoolz=0.12.0=py39hb9d737c_0
dask=2022.7.1=pyhd8ed1ab_0
dask-core=2022.7.1=pyhd8ed1ab_0
dask-cuda=22.08.00=py39_g9a61ce5_0
dask-cudf=22.08.01=cuda_11_py39_g31337c9001_0
dask-glm=0.2.0=py_1
dask-image=2022.9.0=pyhd8ed1ab_0
dask-ml=2022.5.27=pyhd8ed1ab_0
dav1d=1.0.0=h166bdaf_1
decorator=5.1.1=pypi_0
distributed=2022.7.1=pyhd8ed1ab_0
dlpack=0.5=h9c3ff4c_0
faiss-proc=1.0.0=cuda
fastavro=1.6.1=py39hb9d737c_0
fastrlock=0.8=py39h5a03fae_2
fonttools=4.37.4=pypi_0
freetype=2.12.1=hca18f0e_0
fsspec=2022.8.2=pyhd8ed1ab_0
future=0.18.2=pypi_0
geocoder=1.38.1=pypi_0
gflags=2.2.2=he1b5a44_1004
giflib=5.2.1=h36c2ea0_2
glog=0.6.0=h6f12383_0
grpc-cpp=1.47.1=hbad87ad_6
heapdict=1.0.1=py_0
idna=3.4=pyhd8ed1ab_0
imagecodecs=2022.9.26=py39hf586f7a_0
imageio=2.22.0=pyhfa7a67d_0
jinja2=3.1.2=pyhd8ed1ab_1
joblib=1.2.0=pyhd8ed1ab_0
jpeg=9e=h166bdaf_2
jxrlib=1.1=h7f98852_2
keyutils=1.6.1=h166bdaf_0
kiwisolver=1.4.4=pypi_0
krb5=1.19.3=h3790be6_0
lcms2=2.12=hddcbb42_0
ld_impl_linux-64=2.36.1=hea4e1c9_2
lerc=4.0.0=h27087fc_0
libabseil=20220623.0=cxx17_h48a1fff_4
libaec=1.0.6=h9c3ff4c_0
libavif=0.10.1=h5cdd6b5_2
libblas=3.9.0=16_linux64_openblas
libbrotlicommon=1.0.9=h166bdaf_7
libbrotlidec=1.0.9=h166bdaf_7
libbrotlienc=1.0.9=h166bdaf_7
libcblas=3.9.0=16_linux64_openblas
libcrc32c=1.1.2=h9c3ff4c_0
libcudf=22.08.01=cuda11_g31337c9001_0
libcuml=22.08.00=cuda11_g1e2f8a9aa_0
libcumlprims=22.08.00=cuda11_g1770e60_0
libcurl=7.85.0=h7bff187_0
libcusolver=11.4.1.48=0
libcusparse=11.7.5.86=0
libdeflate=1.14=h166bdaf_0
libedit=3.1.20191231=he28a2e2_2
libev=4.33=h516909a_1
libevent=2.1.10=h9b69904_4
libfaiss=1.7.0=cuda112h5bea7ad_8_cuda
libffi=3.4.2=h7f98852_5
libgcc-ng=12.1.0=h8d9b700_16
libgfortran-ng=12.1.0=h69a702a_16
libgfortran5=12.1.0=hdcd56e2_16
libgomp=12.1.0=h8d9b700_16
libgoogle-cloud=2.1.0=h9ebe8e8_2
liblapack=3.9.0=16_linux64_openblas
libllvm11=11.1.0=he0ac6c6_4
libnghttp2=1.47.0=hdcd2b5c_1
libnsl=2.0.0=h7f98852_0
libopenblas=0.3.21=pthreads_h78a6416_3
libpng=1.6.38=h753d276_0
libprotobuf=3.20.1=h6239696_4
libraft-distance=22.08.00=cuda11_g87a7d16c_0
libraft-headers=22.08.00=cuda11_g87a7d16c_0
libraft-nn=22.08.00=cuda11_g87a7d16c_0
librmm=22.08.00=cuda11_gd212232c_0
libsqlite=3.39.4=h753d276_0
libssh2=1.10.0=haa6b8db_3
libstdcxx-ng=12.1.0=ha89aaad_16
libthrift=0.16.0=h491838f_2
libtiff=4.4.0=h55922b4_4
libutf8proc=2.7.0=h7f98852_0
libuuid=2.32.1=h7f98852_1000
libwebp-base=1.2.4=h166bdaf_0
libxcb=1.13=h7f98852_1004
libzlib=1.2.12=h166bdaf_4
libzopfli=1.0.3=h9c3ff4c_0
llvmlite=0.39.1=py39h7d9a04d_0
locket=1.0.0=pyhd8ed1ab_0
lz4=4.0.0=py39h029007f_2
lz4-c=1.9.3=h9c3ff4c_1
markupsafe=2.1.1=py39hb9d737c_1
matplotlib=3.6.0=pypi_0
msgpack-python=1.0.4=py39hf939315_0
multipledispatch=0.6.0=py_0
nccl=2.14.3.1=h0800d71_0
ncurses=6.3=h27087fc_1
numba=0.56.2=py39h61ddf18_1
numpy=1.22.4=py39hc58783e_0
nvtx=0.2.3=py39h3811e60_1
openjpeg=2.5.0=h7d73246_1
openssl=1.1.1q=h166bdaf_0
orc=1.7.6=h6c59b99_0
packaging=21.3=pyhd8ed1ab_0
pandas=1.4.4=py39h1832856_0
parquet-cpp=1.5.1=2
partd=1.3.0=pyhd8ed1ab_0
pillow=9.2.0=py39hd5dbb17_2
pims=0.6.1=pyhd8ed1ab_0
pip=22.2.2=py39h06a4308_0
protobuf=3.20.1=py39h5a03fae_0
psutil=5.9.2=py39hb9d737c_0
pthread-stubs=0.4=h36c2ea0_1001
ptxcompiler=0.2.0=py39h107f55c_0
pyarrow=8.0.1=py39hc0775d8_2_cpu
pycparser=2.21=pyhd8ed1ab_0
pynvml=11.4.1=pyhd8ed1ab_0
pyopenssl=22.0.0=pyhd8ed1ab_1
pyparsing=3.0.9=pyhd8ed1ab_0
pyraft=22.08.00=cuda11_py39_g87a7d16c_0
pysocks=1.7.1=pyha2e5f31_6
python=3.9.13=h9a8a25e_0_cpython
python-dateutil=2.8.2=pyhd8ed1ab_0
python_abi=3.9=2_cp39
pytz=2022.4=pyhd8ed1ab_0
pyyaml=6.0=py39hb9d737c_4
ratelim=0.1.6=pypi_0
re2=2022.06.01=h27087fc_0
readline=8.1.2=h0f457ee_0
requests=2.28.1=pypi_0
rmm=22.08.00=cuda11_py39_gd212232c_0
s2n=1.0.10=h9b69904_0
scikit-learn=1.1.2=py39he5e8d7e_0
scipy=1.9.1=py39h8ba3f38_0
seaborn=0.12.0=pypi_0
setuptools=65.4.1=pyhd8ed1ab_0
six=1.16.0=pyh6c4a22f_0
sklearn=0.0=pypi_0
slicerator=1.1.0=pyhd8ed1ab_0
snappy=1.1.9=hbd366e4_1
sortedcontainers=2.4.0=pyhd8ed1ab_0
spdlog=1.8.5=h4bd325d_1
sqlite=3.39.4=h4ff8645_0
tblib=1.7.0=pyhd8ed1ab_0
threadpoolctl=3.1.0=pyh8a188c0_0
tifffile=2022.10.10=pyhd8ed1ab_0
tk=8.6.12=h27826a3_0
toolz=0.12.0=pyhd8ed1ab_0
tornado=6.1=py39hb9d737c_3
treelite=2.4.0=py39h6b629c6_1
treelite-runtime=2.4.0=pypi_0
typing_extensions=4.3.0=pyha770c72_0
tzdata=2022d=h191b570_0
ucx=1.13.1=h538f049_0
ucx-proc=1.0.0=gpu
ucx-py=0.27.00=py39_g9abe3c1_0
urllib3=1.26.11=pyhd8ed1ab_0
wheel=0.37.1=pyhd8ed1ab_0
xorg-libxau=1.0.9=h7f98852_0
xorg-libxdmcp=1.1.3=h7f98852_0
xz=5.2.6=h166bdaf_0
yaml=0.2.5=h7f98852_2
zfp=1.0.0=h27087fc_1
zict=2.2.0=pyhd8ed1ab_0
zlib=1.2.12=h166bdaf_4
zlib-ng=2.0.6=h166bdaf_0
zstd=1.5.2=h6239696_4

@Joachimoe
Author

Joachimoe commented Oct 18, 2022

I'm not sure if this is allowed, but I have uploaded a GIF of the error happening live:

https://imgur.com/DEB6mSo

@Joachimoe
Author

I also cannot run the computation multiple times from within the same program, for example:

import cupy as cp
import numpy as np
import dask.array as da
from dask_cuda import LocalCUDACluster
from dask.distributed import Client, wait
import rmm
import sys
import time 
from helpers import *

def multiplication(size, work_units):
    rs = da.random.RandomState(RandomState=cp.random.RandomState)
    rs = rs.randint(low=0, high=100_000, size=(size, work_units), chunks='auto')
    rs = rs.map_blocks(cp.asarray)

    start = time.time()
    array_sum = da.multiply(rs, 42).persist()
    wait(array_sum)
    end = time.time()

    return (end - start) * 1000


if __name__ == '__main__':
    cluster = LocalCUDACluster('0', rmm_managed_memory=True)
    client = Client(cluster)
    client.run(cp.cuda.set_allocator, rmm.rmm_cupy_allocator)

    rmm.reinitialize(managed_memory=True)
    cp.cuda.set_allocator(rmm.rmm_cupy_allocator)

    size = int(sys.argv[1])
    wk = int(sys.argv[2])
    print([multiplication(size, wk) for _ in range(20)])

ERROR: 

distributed.comm.core.CommClosedError: in <TCP (closed) ConnectionPool.heartbeat_worker local=tcp://127.0.0.1:38900 remote=tcp://127.0.0.1:40177>: Stream is closed

@wence-
Contributor

wence- commented Oct 18, 2022

@wence- The output of the export command is the following:
[...]

Thanks, I can reproduce (also with up-to-date dask-cuda). On the system I am running on, I needed to artificially limit the available device and host memory, at which point I don't need GPU arrays in the loop at all; however, use of LocalCUDACluster is critical to reproduce the issue:

import numpy as np
import dask.array as da
from dask_cuda import LocalCUDACluster
from dask.distributed import Client, wait

if __name__ == '__main__':

    cluster = LocalCUDACluster('0', memory_limit="3GiB")
    client = Client(cluster)
    shape = (512, 512, 3000)
    chunks = (100, 100, 1000)
    huge_array = da.ones_like(np.array(()), shape=shape, chunks=chunks)
    array_sum = da.multiply(huge_array, 17).persist()
    # `persist()` only does lazy evaluation, so we must `wait()` for the
    # actual compute to occur.
    wait(array_sum)

The problem appears to be that the disk-spilling storage ends up in dask-worker-space/storage/ whereas it should be put in dask-worker-space/worker-XXXX/storage (so it is not cleaned up at the end of the run). Let me see if I can figure out how this is happening.
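To make that layout concrete, here is a small illustrative snippet for checking where spilled files actually land after a run (nothing dask-specific, just a directory walk):

import os

# Walk the worker space and report which directories contain files.
# On an affected setup, spill files pile up directly under
# dask-worker-space/storage/ instead of under a per-worker directory
# like dask-worker-space/worker-XXXX/storage, so they survive shutdown.
for root, dirs, files in os.walk("dask-worker-space"):
    if files:
        print(f"{root}: {len(files)} file(s)")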

@wence-
Contributor

wence- commented Oct 18, 2022

Hmm, this is a somewhat chicken-and-egg situation. We create a Worker, passing in the global temporary_directory, and use that to determine where to put the on-disk storage directory for the memory-spilling Worker.data object. But this directory really needs to be worker.local_directory (which we don't have a handle on, because we don't have a worker yet).
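A hedged sketch of that ordering problem (names simplified; this is not the actual dask-cuda source): the spill path has to be chosen before the Worker exists, so only the global temporary directory is available at that point:

import os
import tempfile

import dask


def spill_path_as_it_is_today():
    # Simplified: derived from the cluster-wide temporary-directory setting
    # *before* any Worker object exists, so it is shared between runs and
    # is not removed by any individual worker's shutdown cleanup.
    tmp = dask.config.get("temporary-directory") or tempfile.gettempdir()
    return os.path.join(tmp, "dask-worker-space", "storage")


def spill_path_as_it_should_be(worker_local_directory):
    # What is wanted: a path rooted in the worker's own local_directory,
    # which distributed already deletes on worker exit -- but that
    # directory is only known once the Worker has been created, hence
    # the chicken and egg.
    return os.path.join(worker_local_directory, "storage")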

@wence-
Contributor

wence- commented Oct 18, 2022

Working on enabling this via dask/distributed#7151

@wence- wence- self-assigned this Oct 18, 2022
@wence- wence- added the bug Something isn't working label Oct 18, 2022
@Joachimoe
Author

Thanks a lot for all of your work. Should I close this?

@wence-
Contributor

wence- commented Oct 20, 2022

Let's leave it open until we actually have the fixes in, thanks!

wence- added a commit to wence-/dask-cuda that referenced this issue Oct 25, 2022
For automated cleanup when the cluster exits, the on-disk spilling
directory needs to live inside the relevant worker's local_directory.
Since we do not have a handle on the worker when constructing the
keyword arguments to DeviceHostFile or ProxifyHostFile, instead take
advantage of dask/distributed#7153 and request that we are called with
the worker_local_directory as an argument. Closes rapidsai#1018.
@Joachimoe
Author

I've been following the work you guys have been doing, and thanks for making such rapid changes. I also see that the pull-request has been merged. Please tell me when to close this issue :-)

Thanks to you both for your help.

@wence-
Contributor

wence- commented Oct 25, 2022

I am just testing the branch that I hope fixes things; when that is merged (after review), it should close this issue automatically.

wence- added a commit to wence-/dask-cuda that referenced this issue Oct 25, 2022
@rapids-bot rapids-bot bot closed this as completed in #1023 Nov 8, 2022
rapids-bot bot pushed a commit that referenced this issue Nov 8, 2022
For automated cleanup when the cluster exits, the on-disk spilling directory needs to live inside the relevant worker's local_directory. Since we do not have a handle on the worker when constructing the keyword arguments to DeviceHostFile or ProxifyHostFile, instead take advantage of dask/distributed#7153 and request that we are called with the worker_local_directory as an argument. Closes #1018.
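In rough outline, the pattern the merged fix relies on looks something like the sketch below (an illustration of the idea only, not the merged code): the spill-store factory asks to be called with worker_local_directory, and distributed, after dask/distributed#7153, fills that argument in with the per-worker directory it already cleans up on exit:

import os


class SpillStoreSketch:
    # Illustrative stand-in for dask-cuda's DeviceHostFile/ProxifyHostFile;
    # the real classes manage device/host/disk tiers. The point here is
    # only the constructor argument.
    def __init__(self, *, worker_local_directory, **kwargs):
        # Because the factory requests worker_local_directory, the worker
        # machinery can supply its own local directory, so spilled files
        # land somewhere that is deleted automatically at shutdown.
        self.spill_directory = os.path.join(worker_local_directory, "storage")
        os.makedirs(self.spill_directory, exist_ok=True)


# A worker would then be constructed along the lines of (sketch only):
#   Worker(..., data=(SpillStoreSketch, {...}))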

Authors:
  - Lawrence Mitchell (https://github.com/wence-)

Approvers:
  - Peter Andreas Entschev (https://github.com/pentschev)

URL: #1023