
v1.2.0

@EikanWang released this on 25 Feb 14:25

Intel Extension For PyTorch 1.2.0 Release Notes

What's New

PyTorch 1.7.0 is now supported by Intel Extension for PyTorch.

  • We rebased Intel Extension for PyTorch from PyTorch 1.5-rc3 to the official PyTorch 1.7.0 release, bringing performance improvements along with the new PyTorch 1.7 support.
  • Device name was changed from DPCPP to XPU.
    We changed the device name from DPCPP to XPU to align with future Intel GPU products for heterogeneous computation (see the usage sketch after this list).
  • Enabled the launcher for end users.
    We enabled the launch script, which helps users launch programs for training and inference and automatically sets up the strategy for multi-threading, multi-instance execution, and the memory allocator. Please refer to the launch script comments for more details.
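
The snippet below is a minimal sketch of moving a model and its input to the new XPU device. The module name `intel_pytorch_extension` and the `ipex.DEVICE` constant are assumptions based on this release line; check the project README for the exact import path.

```python
import torch
import intel_pytorch_extension as ipex  # module name assumed for this release line

model = torch.nn.Linear(128, 64)
data = torch.rand(32, 128)

# The device string is now "xpu" (formerly "dpcpp"); ipex.DEVICE is assumed
# to resolve to it once the extension is imported.
model = model.to(ipex.DEVICE)
data = data.to(ipex.DEVICE)

with torch.no_grad():
    output = model(data)
```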

Performance Improvement

  • This upgrade provides better INT8 optimization with a refined auto mixed-precision API (see the sketch after this list).
  • More operators are optimized for INT8 inference and BFloat16 training of key workloads such as MaskRCNN, SSD-ResNet34, DLRM, and RNNT.
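
The sketch below illustrates an INT8 inference flow with the auto mixed-precision API. The `AmpConf` and `AutoMixPrecision` names are recalled from the project documentation of this era and should be treated as assumptions, not a definitive API reference; the calibration file name is hypothetical.

```python
import torch
import intel_pytorch_extension as ipex  # module name assumed for this release line

model = torch.nn.Linear(128, 64).to(ipex.DEVICE).eval()
data = torch.rand(32, 128).to(ipex.DEVICE)

# Assumed API names: AmpConf holds the reduced-precision configuration and
# AutoMixPrecision applies it to the enclosed region. Verify the exact
# entry points against the extension's documentation for this release.
conf = ipex.AmpConf(torch.int8, 'configure.json')  # calibration file name is hypothetical

with torch.no_grad():
    with ipex.AutoMixPrecision(conf, running_mode='inference'):
        output = model(data)
```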

Others

  • Bug fixes
    • This upgrade fixes the issue that saving a model trained with Intel Extension for PyTorch caused errors.
    • This upgrade fixes the issue that Intel Extension for PyTorch was slower than PyTorch proper for Tacotron2.
  • New custom operators
    This upgrade adds several custom operators: ROIAlign, RNN, FrozenBatchNorm, nms.
  • Optimized operators/fusion
    This upgrade optimizes several operators (tanh, log_softmax, upsample, embedding_bag) and enables INT8 Linear fusion.
  • Performance
    The release has daily automated testing for the supported models: ResNet50, ResNext101, Huggingface Bert, DLRM, ResNext3D, MaskRCNN, SSD-ResNet34. With the extension imported, these workloads see up to 2x INT8 inference performance improvement over FP32 on 3rd Gen Intel Xeon Scalable processors (formerly codenamed Cooper Lake).

Known issues

Multi-node training still encounters hang issues after several iterations. The fix will be included in the next official release.