TypeError: unsupported format string passed to MetaTensor.__format__ #6522
Comments
Maybe it's because `val_loss` is not a scalar?

```python
>>> "{:.2f}".format(torch.tensor([1]))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/torch/_tensor.py", line 873, in __format__
    return object.__format__(self, format_spec)
TypeError: unsupported format string passed to Tensor.__format__
>>> "{:.2f}".format(torch.tensor(1))
'1.00'
```
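To make the distinction above concrete, here is a minimal sketch using only standard PyTorch: a one-element (or multi-element) tensor rejects float format specs, but extracting the Python scalar first works.

```python
import torch

t = torch.tensor([1.0])  # one-element tensor: "{:.2f}".format(t) raises TypeError

# Extract the Python scalar first; both forms print "1.00".
print("{:.2f}".format(t.item()))
print(f"{float(t):.2f}")
```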
Thanks, now I can reproduce this issue:

```python
import torch
from torch.nn import MSELoss
from monai.data import MetaTensor

input = MetaTensor(torch.randn(10, 10, 20))
target = MetaTensor(torch.randn(10, 10, 20))
loss = MSELoss()
loss_value = loss(input, target)
print("{:.2f}".format(loss_value))
```
Good to know I'm not making a basic mistake! Thanks for confirming.
Signed-off-by: Wenqi Li <wenqil@nvidia.com>
Fixes #6522

### Types of changes

- [x] Non-breaking change (fix or new feature that would not break existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing functionality to change).
- [ ] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u --net --coverage`.
- [ ] Quick tests passed locally by running `./runtests.sh --quick --unittests --disttests`.
- [ ] In-line docstrings updated.
- [ ] Documentation updated, tested `make html` command in the `docs/` folder.

---------

Signed-off-by: Wenqi Li <wenqil@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
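One possible shape for such a fix, purely as an illustrative sketch and not necessarily what the linked PR actually implements (`MetaTensorSketch` is a hypothetical stand-in for `monai.data.MetaTensor`): delegate `__format__` to the underlying Python scalar when the tensor is 0-d.

```python
import torch


class MetaTensorSketch(torch.Tensor):
    """Hypothetical stand-in for MetaTensor, sketching a __format__ fix."""

    def __format__(self, format_spec: str) -> str:
        # For 0-d tensors, format the underlying Python scalar so that
        # specs like "{:.2f}" work; otherwise keep the default behavior.
        if self.ndim == 0 and format_spec:
            return format(self.item(), format_spec)
        return super().__format__(format_spec)


loss_value = torch.tensor(1.5).as_subclass(MetaTensorSketch)
print(f"{loss_value:.2f}")  # prints "1.50"
```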
In the meantime, you can convert all the values that should be logged to regular Python scalars, e.g. via `.item()`. These values are also used in the early stopping callback and thus need to support the formatting applied there.
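The suggestion above can be sketched as follows (a minimal example in which a plain PyTorch loss stands in for the MetaTensor value; `val_loss` is an assumed metric name):

```python
import torch

# A 0-d loss tensor, as typically returned by an nn loss module.
loss_value = torch.nn.MSELoss()(torch.randn(4, 4), torch.randn(4, 4))

# Convert to a plain Python float before logging / monitoring,
# e.g. before self.log("val_loss", ...) in a LightningModule.
val_loss = loss_value.item()
assert isinstance(val_loss, float)
print(f"val_loss={val_loss:.4f}")
```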
Describe the bug

Not sure if the bug lies on this side or on the side of PyTorch Lightning, but here it goes:
I'm using PyTorch Lightning to set up a simple training pipeline. When I use `pl.callbacks.EarlyStopping` with a `CacheDataset` and associated transforms, I get:

`TypeError: unsupported format string passed to MetaTensor.__format__`

Where I reckon this line is the issue:
To Reproduce
I've tried to extract a minimal example of the cause of the issue.
Expected behavior
The `EarlyStopping` callback should work.

Environment

Tried this on `v1.1.0` and `v1.2.0rc7`.