
Intel® Extension for TensorFlow* 2.14.0.1

@Dboyqiao released this 24 Nov 02:03
· 8 commits to r2.14 since this release

Major Features and Improvements

Intel® Extension for TensorFlow* extends official TensorFlow capabilities to run TensorFlow workloads on Intel® Data Center GPU Max Series, Intel® Data Center GPU Flex Series, and Intel® Xeon® Scalable Processors. This release contains the following major features and improvements:

  • Upgrades the supported TensorFlow version to the Google-released TensorFlow 2.14, which is the required TensorFlow version for this release.

  • Supports Intel® oneAPI Base Toolkit 2024.0.

  • Provides experimental support for selecting the CPU thread pool: either the OpenMP thread pool (default) or the Eigen thread pool. You can select the more efficient thread pool based on your workload and hardware configuration. Refer to Selecting Thread Pool in Intel® Extension for TensorFlow* CPU for more details.

  • Enables FP8 functionality support for Transformer-like training models. Refer to FP8 BERT-Large Fine-tuning for Classifying Text on Intel GPU for more details.

  • Provides experimental support for a quantization front-end Python API based on Intel® Neural Compressor.

  • Adds operator performance optimizations:

    • Optimizes GroupNorm/Unique operators.
    • Optimizes Einsum/ScaledDotProductAttention with XeTLA enabled.
  • Supports new OPs to cover the majority of TensorFlow 2.14 OPs.

  • Continues to provide experimental support for Intel® Arc™ A-Series GPUs on Windows Subsystem for Linux 2 with Ubuntu Linux installed and native Ubuntu Linux.

  • Moves the experimental support for Intel GPU backend for OpenXLA from the Intel® Extension for TensorFlow repository to the Intel® Extension for OpenXLA* repository. Refer to Intel® Extension for OpenXLA* for more details.
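The thread-pool selection mentioned above is typically done through an environment variable set before TensorFlow is imported. A minimal sketch, assuming the variable name `ITEX_OMP_THREADPOOL` (verify the exact name and values against the Selecting Thread Pool guide linked above):

```python
import os

# Sketch only: choose the CPU thread pool BEFORE TensorFlow is imported,
# otherwise the setting has no effect.
# Assumption: the variable name and values below come from the
# "Selecting Thread Pool in Intel® Extension for TensorFlow* CPU" guide;
# "1" keeps the default OpenMP thread pool, "0" switches to the Eigen pool.
os.environ["ITEX_OMP_THREADPOOL"] = "0"

# import tensorflow as tf  # import only after the variable is set
```

Which pool is faster depends on the workload; benchmarking both settings on the target hardware is the intended workflow.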

Known Issues

  • FP64 is not natively supported by the Intel® Data Center GPU Flex Series platform. If you run any AI workload with an FP64 kernel on that platform, the workload will exit with an exception such as `'XXX' Op uses fp64 data type, while fp64 instructions are not supported on the platform`.
  • A GLIBC++ version mismatch may cause a workload to exit with the exception `Can not find any devices`. To check the runtime environment on your host, run the `itex/tools/env_check.sh` script.
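One way to work around the FP64 limitation above is to downcast host data to FP32 before it reaches the device, so no fp64 kernels are requested. A minimal sketch; the helper name below is hypothetical:

```python
import numpy as np

def to_supported_dtype(array: np.ndarray) -> np.ndarray:
    """Hypothetical helper: downcast FP64 host data to FP32 so that no
    fp64 kernels are requested on the Flex Series platform. This loses
    precision and is only appropriate when FP32 accuracy suffices."""
    if array.dtype == np.float64:
        return array.astype(np.float32)
    return array

x = np.linspace(0.0, 1.0, 4)   # NumPy produces float64 by default
x32 = to_supported_dtype(x)    # float32 copy, safe to feed to the model
```

Non-float inputs (e.g. integer label arrays) pass through unchanged, so the helper can be applied uniformly to a feed dict.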

Documents