[Question] Saving and loading a ReplayBuffer to disk #1588
Comments
Hello!

```python
from torchrl.data import LazyMemmapStorage, TensorDictReplayBuffer
from tensordict import TensorDict

s = LazyMemmapStorage(100, scratch_dir="./_dump")
td = TensorDict({"a": 0}, [])
s._init(td)
print("saved", s._storage)

s2 = LazyMemmapStorage(100, scratch_dir="./_dump")
s2._storage = TensorDict.load_memmap("./_dump")
print("loaded", s2._storage)
```

which will print the same result. I will make a PR to have this functionality available at a high level!
Hey, I am also interested in this. @vmoens, in your example you do not use TensorDictReplayBuffer.
It's part of the roadmap for the next release! Here's what I'm envisioning:

For 2. we could do

```python
rb = TensorDictReplayBuffer(storage=storage)
...
rb.dumps(path)  # saves the storage (and some other stuff, see below)
rb.loads(path)  # loads the storage from the path
```

For 3. In my current view, I think we should simply have a state_dict for the storage and call it a day. It'll be a regular ...

I'm not 100% sure of what shape ...

Thoughts?
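(Not part of the thread) A hedged usage sketch of the dumps/loads API described above, assuming a TorchRL release that ships `ReplayBuffer.dumps` and `ReplayBuffer.loads` as implemented in #1733; the exact on-disk layout and which sampler/writer state is captured are not specified here.

```python
import torch
from tensordict import TensorDict
from torchrl.data import LazyMemmapStorage, TensorDictReplayBuffer

rb = TensorDictReplayBuffer(storage=LazyMemmapStorage(1000))
rb.extend(TensorDict({"obs": torch.randn(64, 4)}, batch_size=[64]))

rb.dumps("./rb_checkpoint")   # persist the storage (plus buffer metadata)

# Restore into a freshly constructed buffer with a matching storage type.
rb2 = TensorDictReplayBuffer(storage=LazyMemmapStorage(1000))
rb2.loads("./rb_checkpoint")
assert len(rb2) == len(rb)
```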
Have a look at #1733, which implements this feature!
Wow, thanks for a swift response! The proposed solution does exactly what I need: it will make restarting experiments a breeze and enable my team to use TorchRL buffers for our use case. When do you expect to merge?
Tonight or tomorrow; there are some loose ends with the max writer in the tests.
Hello,
I'm using a TensorDictReplayBuffer with a LazyTensorStorage for training a model. After training, I need to save the replay buffer to disk for future use. I expected torch.save to be able to pickle it, but the object holds an RLock and cannot be pickled.
What would be the correct approach with torchrl?
I'm using torchrl==0.1.1
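(Not part of the original issue) One hedged workaround for torchrl 0.1.x, building on the maintainer's snippet above: save the storage's underlying TensorDict and the current length instead of pickling the buffer object itself. This assumes LazyTensorStorage keeps its data in a private `_storage` TensorDict, as LazyMemmapStorage does in the example above; sampler and writer state are not preserved.

```python
import torch
from tensordict import TensorDict
from torchrl.data import LazyTensorStorage, TensorDictReplayBuffer

rb = TensorDictReplayBuffer(storage=LazyTensorStorage(100))
rb.extend(TensorDict({"a": torch.arange(10)}, batch_size=[10]))

# Save the data plus how much of the pre-allocated storage is actually filled.
torch.save({"data": rb._storage._storage, "len": len(rb)}, "buffer_ckpt.pt")

# Later: rebuild an empty buffer and re-insert the saved transitions.
ckpt = torch.load("buffer_ckpt.pt")
rb2 = TensorDictReplayBuffer(storage=LazyTensorStorage(100))
rb2.extend(ckpt["data"][: ckpt["len"]])
assert len(rb2) == ckpt["len"]
```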