Support torch.int32 as a dtype for quantize and dequantize (#289)
Summary:
Pull Request resolved: #289

Ops like `quantized_decomposed.quantize_per_tensor.default` did not support an int32 quantized type. Add support for it to the portable and ATen runtimes. This matters for Turing, which uses int32 to represent uint16 (since the latter is not a valid PyTorch dtype).

Reviewed By: kimishpatel
Differential Revision: D49202048
fbshipit-source-id: 0faa89ce1d34b60ece443fb02fa14f02abf2d376
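The uint16-in-int32 scheme can be illustrated with a minimal sketch of per-tensor affine quantization (plain Python, hypothetical helper names; not the actual portable/ATen kernel code). The quantized values span uint16's range `[0, 65535]` but are stored as int32, mirroring how the ops here accept `torch.int32` as the quantized dtype:

```python
def quantize_per_tensor(values, scale, zero_point, qmin=0, qmax=65535):
    # q = clamp(round(x / scale) + zero_point, qmin, qmax)
    # qmin/qmax cover uint16's range; the result is held in a wider
    # int32, since uint16 is not a valid PyTorch dtype.
    out = []
    for x in values:
        q = round(x / scale) + zero_point
        out.append(max(qmin, min(qmax, q)))
    return out

def dequantize_per_tensor(qvalues, scale, zero_point):
    # x ≈ (q - zero_point) * scale
    return [(q - zero_point) * scale for q in qvalues]
```

For example, with `scale=0.5` and `zero_point=100`, the inputs `[0.0, 1.0]` quantize to `[100, 102]` and dequantize back to `[0.0, 1.0]`.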