
[BUG] PT: passing >f4 array to DeepPot triggers ValueError #4099

Closed
njzjz opened this issue Sep 3, 2024 · 0 comments · Fixed by #4100
njzjz commented Sep 3, 2024

Bug summary

It seems to me that passing a `>f4` (big-endian float32) array to DeepPot triggers `ValueError: given numpy array has byte order different from the native byte order. Conversion between byte orders is currently not supported.`

`>f4` may be the original data type stored in NetCDF files.
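
For context, a minimal standalone sketch (not part of the original report) of the PyTorch behavior assumed to be the root cause, as indicated by the traceback below. It is written for a little-endian machine, and the astype conversion is only an illustration:

import numpy as np
import torch

# Big-endian float32, e.g. as loaded from a NetCDF file.
big_endian = np.array([1.0, 2.0, 3.0], dtype=">f4")

try:
    torch.tensor(big_endian)
except ValueError as err:
    # "given numpy array has byte order different from the native byte order..."
    print(err)

# Copying into a native-byte-order dtype avoids the error.
native = big_endian.astype(np.float32)
torch.tensor(native)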

DeePMD-kit Version

5111e9b

Backend and its version

PyTorch 2.3.1

How did you download the software?

Built from source

Input Files, Running Commands, Error Log, etc.

Traceback (most recent call last):
  File "/home/jz748/codes/deepmd-kit/examples/water/se_e2_a/1.py", line 8, in <module>
    e, f, v = dp.eval(coord, cell, atype)
  File "/home/jz748/codes/deepmd-kit/deepmd/infer/deep_pot.py", line 201, in eval
    results = self.deep_eval.eval(
  File "/home/jz748/codes/deepmd-kit/deepmd/pt/infer/deep_eval.py", line 268, in eval
    out = self._eval_func(self._eval_model, numb_test, natoms)(
  File "/home/jz748/codes/deepmd-kit/deepmd/pt/infer/deep_eval.py", line 340, in eval_func
    return self.auto_batch_size.execute_all(
  File "/home/jz748/codes/deepmd-kit/deepmd/utils/batch_size.py", line 203, in execute_all
    n_batch, result = self.execute(execute_with_batch_size, index, natoms)
  File "/home/jz748/codes/deepmd-kit/deepmd/utils/batch_size.py", line 117, in execute
    raise e
  File "/home/jz748/codes/deepmd-kit/deepmd/utils/batch_size.py", line 114, in execute
    n_batch, result = callable(max(batch_nframes, 1), start_index)
  File "/home/jz748/codes/deepmd-kit/deepmd/utils/batch_size.py", line 180, in execute_with_batch_size
    return (end_index - start_index), callable(
  File "/home/jz748/codes/deepmd-kit/deepmd/pt/infer/deep_eval.py", line 383, in _eval_model
    coord_input = torch.tensor(
ValueError: given numpy array has byte order different from the native byte order. Conversion between byte orders is currently not supported.

Steps to Reproduce

from deepmd.infer import DeepPot
import numpy as np

dp = DeepPot("frozen_model.pth")
# Big-endian float32 coordinates, e.g. as read directly from a NetCDF file;
# this is the array that triggers the ValueError.
coord = np.array([[1, 0, 0], [0, 0, 1.5], [1, 0, 3]], dtype=">f4").reshape([1, -1])
cell = np.diag(10 * np.ones(3)).reshape([1, -1])
atype = [1, 0, 1]
e, f, v = dp.eval(coord, cell, atype)
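
A possible user-side workaround (not part of the original report, and only a sketch) is to convert the array to a native-byte-order dtype before calling eval:

# Hypothetical workaround: astype copies the data into native byte order.
coord_native = coord.astype(np.float64)  # or np.float32, depending on the desired precision
e, f, v = dp.eval(coord_native, cell, atype)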

Further Information, Files, and Links

No response

@njzjz njzjz added the bug label Sep 3, 2024
@njzjz njzjz changed the title from "[BUG] PT: passing <f4 array to DeepPot triggers ValueError" to "[BUG] PT: passing >f4 array to DeepPot triggers ValueError" Sep 3, 2024
@njzjz njzjz linked a pull request Sep 3, 2024 that will close this issue
github-merge-queue bot pushed a commit that referenced this issue Sep 4, 2024
Fix #4099.

## Summary by CodeRabbit

- **New Features**
  - Enhanced tensor data type handling for improved numerical stability and performance in deep learning computations.
  - Introduced a precision dictionary to ensure input data is processed with the correct precision.

- **Bug Fixes**
  - Improved clarity and robustness in the handling of data types within the model evaluation process.

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
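
The release notes above mention a precision dictionary that normalizes input dtypes. A rough, purely illustrative sketch of that idea follows; this is not the actual code from #4100, and PRECISION_DICT and to_torch_tensor are hypothetical names:

import numpy as np
import torch

# Hypothetical table: precision name -> (native NumPy dtype, torch dtype).
PRECISION_DICT = {
    "float32": (np.float32, torch.float32),
    "float64": (np.float64, torch.float64),
}

def to_torch_tensor(arr, precision="float64"):
    np_dtype, torch_dtype = PRECISION_DICT[precision]
    # astype always returns a native-byte-order copy, so >f4 input is handled here.
    return torch.tensor(np.asarray(arr).astype(np_dtype), dtype=torch_dtype)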
@njzjz njzjz closed this as completed Sep 4, 2024
mtaillefumier pushed a commit to mtaillefumier/deepmd-kit that referenced this issue Sep 18, 2024