
WIP: Simplify DeviceSerialized and usage thereof #268

Closed

Conversation

jakirkham
Member

As `"dask"` serialization already converts a CUDA object into headers and frames that Dask can work with, drop the code that tries to further serialize frames on the host (they are already as simple as they can be). This cuts a fair bit of boilerplate from the spilling path, which should simplify things a bit.

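For orientation, here is a minimal sketch of the shape this change aims for, assuming the `distributed.protocol` `serialize`/`deserialize` API. `DeviceSerialized` and `host_to_device` echo names in `dask_cuda/device_host_file.py`, but the bodies here are illustrative, not the actual diff:

```python
from distributed.protocol import deserialize, serialize


class DeviceSerialized:
    """Holds the header and frames produced by "dask" serialization.

    Since those frames are already as simple as they can be, no further
    per-frame serialization happens before spilling.
    """

    def __init__(self, header, frames):
        self.header = header
        self.frames = frames


def device_to_host(obj):
    # "dask" serialization already yields a header plus frames that
    # Dask can work with, so just capture them as-is.
    header, frames = serialize(obj, serializers=["dask", "pickle"])
    return DeviceSerialized(header, frames)


def host_to_device(s):
    # Rebuild the CUDA object directly from the stored header and
    # frames (this mirrors the call seen in the traceback below).
    return deserialize(s.header, s.frames)
```
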
@jakirkham jakirkham requested a review from a team as a code owner March 27, 2020 02:46
@jakirkham jakirkham changed the title Simplify DeviceSerialized and usage thereof WIP: Simplify DeviceSerialized and usage thereof Mar 27, 2020
@jakirkham
Member Author

jakirkham commented Mar 27, 2020

Just to be clear, this PR still has issues, so it should not be merged until they are addressed. In particular, we are seeing exceptions like the one below, which requires PR ( dask/distributed#3639 ), or at least some other solution here, to fix.

distributed.worker - ERROR - too many values to unpack (expected 1)
Traceback (most recent call last):
  File "/conda/envs/gdf/lib/python3.7/site-packages/distributed/worker.py", line 2169, in release_key
    if key in self.data and key not in self.dep_state:
  File "/conda/envs/gdf/lib/python3.7/_collections_abc.py", line 666, in __contains__
    self[key]
  File "/var/lib/jenkins/workspace/rapidsai/gpuci/dask-cuda/prb/dask-cuda-gpu-build/dask_cuda/device_host_file.py", line 125, in __getitem__
    return self.device_buffer[key]
  File "/conda/envs/gdf/lib/python3.7/site-packages/zict/buffer.py", line 78, in __getitem__
    return self.slow_to_fast(key)
  File "/conda/envs/gdf/lib/python3.7/site-packages/zict/buffer.py", line 65, in slow_to_fast
    value = self.slow[key]
  File "/conda/envs/gdf/lib/python3.7/site-packages/zict/func.py", line 38, in __getitem__
    return self.load(self.d[key])
  File "/var/lib/jenkins/workspace/rapidsai/gpuci/dask-cuda/prb/dask-cuda-gpu-build/dask_cuda/device_host_file.py", line 60, in host_to_device
    return deserialize(s.header, s.frames)
  File "/conda/envs/gdf/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 269, in deserialize
    return loads(header, frames)
  File "/conda/envs/gdf/lib/python3.7/site-packages/distributed/protocol/serialize.py", line 51, in dask_loads
    return loads(header, frames)
  File "/conda/envs/gdf/lib/python3.7/site-packages/distributed/protocol/cupy.py", line 83, in dask_deserialize_cupy_ndarray
    frames = [dask_deserialize_cuda_buffer(header, frames)]
  File "/conda/envs/gdf/lib/python3.7/site-packages/distributed/protocol/rmm.py", line 38, in dask_deserialize_rmm_device_buffer
    (frame,) = frames
ValueError: too many values to unpack (expected 1)

https://gpuci.gpuopenanalytics.com/job/rapidsai/job/gpuci/job/dask-cuda/job/prb/job/dask-cuda-gpu-build/902/console
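For context, a toy reproduction of the unpacking failure follows (a hypothetical stand-in, not the actual `distributed.protocol.rmm` code): the RMM deserializer unpacks exactly one frame per device buffer, so any spill path that splits a buffer into several host frames without merging them back hits this error.

```python
def deserialize_device_buffer(header, frames):
    # Mirrors the pattern in dask_deserialize_rmm_device_buffer:
    # exactly one frame is expected per buffer.
    (frame,) = frames
    return frame


# A single frame round-trips fine.
deserialize_device_buffer({}, [b"abcdef"])

# But if spilling split the buffer into chunks and handed back
# several frames, the one-element unpack fails:
try:
    deserialize_device_buffer({}, [b"abc", b"def"])
except ValueError as err:
    print(err)  # too many values to unpack (expected 1)
```
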

@jakirkham
Member Author

Closing for now to clean things up. May revisit in the future.

@jakirkham jakirkham closed this Jun 2, 2020
@jakirkham jakirkham deleted the simp_serialization_spill branch June 2, 2020 21:57
@jakirkham
Member Author

Captured the main features of this in PR ( #307 ).

@jakirkham
Member Author

The rest of the changes here are handled in PR ( #309 ).
