transformers.convert_graph_to_onnx.quantize fails in unit tests #18614

Closed
rachthree opened this issue Aug 13, 2022 · 3 comments

@rachthree
Contributor

rachthree commented Aug 13, 2022

System Info

  • transformers version: 4.22.0.dev0
  • Platform: Linux-5.10.60.1-microsoft-standard-WSL2-x86_64-with-glibc2.29
    • Used the tensorflow/tensorflow:latest Docker image for this environment, then ran pip install -e '.[dev,onnx]'
  • Python version: 3.8.10
  • Huggingface_hub version: 0.8.1
  • PyTorch version (GPU?): 1.12.1+cu116 (True)
  • Tensorflow version (GPU?): 2.9.1 (True)
  • Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
  • Jax version: 0.3.6
  • JaxLib version: 0.3.5
  • Using GPU in script?: Yes
  • Using distributed or parallel set-up in script?: No

Other:

  • onnxruntime version: 1.12.1

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

Run RUN_SLOW=true pytest tests/onnx/test_onnx.py

This produces the following failures:

FAILED tests/onnx/test_onnx.py::OnnxExportTestCase::test_quantize_pytorch - TypeError: 'module' object is not callable
FAILED tests/onnx/test_onnx.py::OnnxExportTestCase::test_quantize_tf - TypeError: 'module' object is not callable

Expected behavior

The unit tests should pass.

I believe this failure occurs because onnxruntime.quantization.quantize is now a module (containing the functions quantize_static and quantize_dynamic) rather than a callable; the API appears to have changed since the unit test was written. I'm not sure which of the two functions the unit tests should use. Even after fixing this, it's unclear how transformers should handle different versions of onnxruntime, or whether the required version should change in setup.py.

See https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/quantization/quantize.py
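
For illustration, here is a minimal sketch of what the quantization call could look like against the newer onnxruntime API, assuming quantize_dynamic is the appropriate replacement (the helper name quantize_onnx_model is hypothetical, and whether quantize_dynamic or quantize_static is the right choice is exactly the open question above):

from pathlib import Path

from onnxruntime.quantization import QuantType, quantize_dynamic


def quantize_onnx_model(onnx_model_path: Path) -> Path:
    # Hypothetical helper: quantize_dynamic replaces the old callable
    # onnxruntime.quantization.quantize, which is now a module.
    quantized_model_path = onnx_model_path.parent / f"{onnx_model_path.stem}-quantized.onnx"

    # Dynamic (weight-only) quantization: weights are stored as int8,
    # activations are quantized on the fly at inference time.
    quantize_dynamic(
        model_input=onnx_model_path,
        model_output=quantized_model_path,
        weight_type=QuantType.QInt8,
    )
    return quantized_model_path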

@LysandreJik
Member

cc @lewtun

@severinsimmler
Contributor

I have fixed this in #18336, but it is still waiting for a review.

@rachthree
Contributor Author

Looks like this has been fixed! Closing this.
