Intel® Optimizations for TensorFlow* 2.3.0
This release of Intel® Optimized TensorFlow is built from the TensorFlow v2.3.0 tag with support for oneDNN (oneAPI Deep Neural Network Library). For features and fixes introduced upstream in TensorFlow 2.3.0, please see the TensorFlow 2.3.0 release notes.
New functionality and usability improvements:
- BFloat16 support for Intel CPUs.
- BFloat16 training optimizations are available for many popular models in the Intel Model Zoo.
- BFloat16 inference optimizations are available for a limited number of models.
- The AutoMixedPrecisionMkl feature is supported (see the getting-started-with-automixedprecisionmkl guide).
- oneDNN upgraded from version 0.x to version 1.4.
- Support for Intel® MKL-DNN version 0.x is still available.
- Building with the legacy DNNL0 path is available by specifying --define=build_with_mkl_dnn_v1_only=false.
- The default build with --config=mkl enables DNNL1 with the BFloat16 data type.
- Released AVX512 binary packages and containers to showcase out-of-the-box BFloat16 data type support and performance.
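To make the BFloat16 items above concrete, here is a minimal plain-Python sketch of what the format does (an illustration, not Intel's implementation): BFloat16 keeps a float32's sign bit and full 8-bit exponent but only the top 7 mantissa bits, i.e. the upper 16 bits of the float32 encoding, so dynamic range is preserved while precision drops.

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to a bfloat16 bit pattern (top 16 bits).

    Truncation (round-toward-zero) is used here for simplicity;
    hardware typically rounds to nearest even.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]  # raw float32 bits
    return bits >> 16  # upper 16 bits are the bfloat16 representation

def bfloat16_bits_to_float(b: int) -> float:
    """Expand a bfloat16 bit pattern back to a Python float."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# 1.0 survives exactly; 1.1 loses its low mantissa bits.
print(bfloat16_bits_to_float(float32_to_bfloat16_bits(1.0)))  # 1.0
print(bfloat16_bits_to_float(float32_to_bfloat16_bits(1.1)))  # 1.09375
```

Because the exponent width matches float32, values that overflow or underflow in float16 remain representable in BFloat16, which is why it suits training without loss scaling.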
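The build flags listed above combine as sketched below. The flags themselves come from these notes; the Bazel target is an assumption based on standard TensorFlow source builds.

```shell
# Default build: oneDNN v1.x (DNNL1) with BFloat16 enabled.
bazel build --config=mkl //tensorflow/tools/pip_package:build_pip_package

# Optional: build against the legacy Intel MKL-DNN 0.x (DNNL0) path instead.
bazel build --config=mkl --define=build_with_mkl_dnn_v1_only=false \
  //tensorflow/tools/pip_package:build_pip_package
```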
Bug fixes:
- Issues resolved in TensorFlow 2.3
- oneDNN resolved issues
Versions and components
- Intel® Optimized TensorFlow, based on TensorFlow v2.3.0: https://github.com/Intel-tensorflow/tensorflow/tree/v2.3.0
- TensorFlow v2.3.0: https://github.com/tensorflow/tensorflow/tree/v2.3.0
- oneDNN: https://github.com/oneapi-src/oneDNN/releases/tag/v1.4
- ModelZoo: https://github.com/IntelAI/models
Known issues
- A variable number of OMP threads is created when compiling with the XLA flag ON (issue 40836).
- Incorrect result from _MKLMaxPoolGrad (issue 40122).
- The test_conv_bn_dropout and test_conv_pool tests of //tensorflow/python:auto_mixed_precision_test fail with the MKL backend on AVX.
- //tensorflow/core/grappler/optimizers:remapper_test fails on the 2.3 branch.
How to:
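A common way to pick up the prebuilt binaries, assuming the intel-tensorflow PyPI package name used for earlier Intel-optimized releases:

```shell
# Package name and version pin are assumptions based on prior releases.
pip install intel-tensorflow==2.3.0
```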