From 6356c8dafc0627ee1f47b1ad447b372a45cc0f82 Mon Sep 17 00:00:00 2001
From: Jing Xu
Date: Wed, 10 Jan 2024 15:26:59 +0900
Subject: [PATCH] update 2.1.100 example page to remove the out-of-date note
 for ddp example (#2448)

---
 cpu/2.1.100+cpu/tutorials/examples.html | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/cpu/2.1.100+cpu/tutorials/examples.html b/cpu/2.1.100+cpu/tutorials/examples.html
index 82a47e17a..55fc7a16e 100644
--- a/cpu/2.1.100+cpu/tutorials/examples.html
+++ b/cpu/2.1.100+cpu/tutorials/examples.html
@@ -335,7 +335,6 @@
BFloat16

Distributed Training

Distributed training with PyTorch DDP is accelerated by oneAPI Collective Communications Library Bindings for Pytorch* (oneCCL Bindings for Pytorch*). The extension supports FP32 and BF16 data types. More detailed information and examples are available at the GitHub repo.

-Note: When performing distributed training with BF16 data type, use oneCCL Bindings for Pytorch*. Due to a PyTorch limitation, distributed training with BF16 data type with Intel® Extension for PyTorch* is not supported.

 import os
 import torch
 import torch.distributed as dist
@@ -1041,4 +1040,4 @@ 

Intel® AI Reference Models
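For context, the page text above describes DDP training accelerated by oneCCL Bindings for Pytorch*, and the diff's code context begins with the usual `import os` / `import torch` / `import torch.distributed as dist` lines. A minimal single-process sketch of that pattern follows; it assumes the `oneccl_bindings_for_pytorch` package (importing it registers the "ccl" backend) and falls back to the stock `gloo` backend when the package is not installed, so the script stays runnable. The rendezvous variables are set inline only for illustration; real runs launch via `mpirun` or `torchrun`.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn

# Prefer the oneCCL backend; fall back to gloo so this sketch stays runnable
# without the oneccl_bindings_for_pytorch package installed.
try:
    import oneccl_bindings_for_pytorch  # noqa: F401  registers the "ccl" backend
    backend = "ccl"
except ImportError:
    backend = "gloo"

# Single-process rendezvous for illustration only.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend=backend, rank=0, world_size=1)

model = nn.Linear(8, 8)
ddp_model = nn.parallel.DistributedDataParallel(model)

# One forward/backward step; DDP all-reduces gradients across ranks
# during backward (a no-op with world_size=1).
x = torch.randn(2, 8)
loss = ddp_model(x).sum()
loss.backward()

dist.destroy_process_group()
```

With oneCCL Bindings installed, the only change from a plain CPU DDP script is the extra import and `backend="ccl"`; the training loop itself is unchanged.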