Intel® Optimizations for TensorFlow* 2.3.0

@rsketine released this 31 Aug 05:02 · 1 commit to r2.3 since this release · commit f3fbb16

This release of Intel® Optimized TensorFlow is based on the TensorFlow v2.3.0 tag and is built with support for oneDNN (oneAPI Deep Neural Network Library). For the features and fixes introduced in TensorFlow 2.3.0, see the TensorFlow 2.3.0 release notes.

New functionality and usability improvements:

  • BFloat16 support for Intel CPUs.
  • BFloat16 training optimizations are available for many popular models in the Intel Model Zoo.
  • BFloat16 inference optimizations are available for a limited number of models.
  • The AutoMixedPrecisionMkl feature is supported; see the getting-started-with-automixedprecisionmkl guide.
  • oneDNN moved from version 0.x to version 1.4.
      • Support for Intel® MKL-DNN version 0.x is still available; build with the DNNL0 option by specifying --define=build_with_mkl_dnn_v1_only=false.
      • The default build with --config=mkl enables DNNL1 with the BFloat16 data type.
  • Released AVX512 binary packages and containers to showcase out-of-the-box BFloat16 data type support and performance for our customers.
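As background on the BFloat16 items above: bfloat16 is float32 with the low 16 mantissa bits dropped, so it keeps float32's full exponent range but only about 3 decimal digits of precision. A minimal sketch of the conversion in plain Python (for illustration only; this is not the TensorFlow/oneDNN implementation):

```python
import struct

def float32_to_bfloat16_bits(x):
    """Truncate a float32 to bfloat16 (sign, 8-bit exponent, top 7 mantissa bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return (bits >> 16) & 0xFFFF  # drop the low 16 mantissa bits

def bfloat16_bits_to_float32(b):
    """Widen a bfloat16 bit pattern back to float32 by zero-filling the low bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# Round-tripping shows the precision loss: 3.14159 becomes 3.140625.
rounded = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159))
```

Because the exponent field is identical to float32's, converting between the two formats is a cheap bit shift, which is part of why bfloat16 is attractive for training on Intel CPUs.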

Bug fixes:

Versions and components

Known issues

  • A variable number of OMP threads is created when compiling with the XLA flag ON (#40836).
  • Incorrect result from _MKLMaxPoolGrad (#40122).
  • The test_conv_bn_dropout and test_conv_pool tests of //tensorflow/python:auto_mixed_precision_test fail with the MKL backend on AVX.
  • //tensorflow/core/grappler/optimizers:remapper_test fails on the 2.3 branch.

How to:
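For reference, a source build using the flags mentioned above might look like the following. This is a sketch: it assumes Bazel and the usual TensorFlow build prerequisites are installed, and uses the standard pip-package target from the TensorFlow tree.

```shell
# Check out the v2.3.0 source tree and configure the build.
git clone --branch v2.3.0 https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure

# Default MKL build: DNNL1 (oneDNN 1.x) with BFloat16 support.
bazel build --config=mkl //tensorflow/tools/pip_package:build_pip_package

# Alternative: fall back to the legacy MKL-DNN 0.x (DNNL0) backend.
bazel build --config=mkl --define=build_with_mkl_dnn_v1_only=false \
    //tensorflow/tools/pip_package:build_pip_package

# Package the wheel.
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
```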