From e32193705fe61ebd8f29a2c1e0c11e943df6f26b Mon Sep 17 00:00:00 2001
From: Jing Xu
Date: Wed, 7 Feb 2024 16:39:40 +0900
Subject: [PATCH] update llm_optimize page for URL update (#2606)

---
 cpu/2.2.0+cpu/_sources/tutorials/llm/llm_optimize.md.txt | 2 +-
 cpu/2.2.0+cpu/tutorials/llm/llm_optimize.html            | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/cpu/2.2.0+cpu/_sources/tutorials/llm/llm_optimize.md.txt b/cpu/2.2.0+cpu/_sources/tutorials/llm/llm_optimize.md.txt
index 7974d0dd9..f9113897a 100644
--- a/cpu/2.2.0+cpu/_sources/tutorials/llm/llm_optimize.md.txt
+++ b/cpu/2.2.0+cpu/_sources/tutorials/llm/llm_optimize.md.txt
@@ -9,7 +9,7 @@ API documentation is available at [API Docs page](../api_doc.html#ipex.llm.optim

 ## Pseudocode of Common Usage Scenarios

-The following sections show pseudocode snippets to invoke Intel® Extension for PyTorch\* APIs to work with LLM models. Complete examples can be found at [the Example directory](https://github.com/intel/intel-extension-for-pytorch/tree/v2.1.0%2Bcpu/examples/cpu/inference/python/llm).
+The following sections show pseudocode snippets to invoke Intel® Extension for PyTorch\* APIs to work with LLM models. Complete examples can be found at [the Example directory](https://github.com/intel/intel-extension-for-pytorch/tree/v2.2.0%2Bcpu/examples/cpu/inference/python/llm).

 ### FP32/BF16

diff --git a/cpu/2.2.0+cpu/tutorials/llm/llm_optimize.html b/cpu/2.2.0+cpu/tutorials/llm/llm_optimize.html
index b7504d654..cfcecebad 100644
--- a/cpu/2.2.0+cpu/tutorials/llm/llm_optimize.html
+++ b/cpu/2.2.0+cpu/tutorials/llm/llm_optimize.html
@@ -131,7 +131,7 @@

 Transformers Optimization Frontend API

 API documentation is available at API Docs page.

 Pseudocode of Common Usage Scenarios

-The following sections show pseudocode snippets to invoke Intel® Extension for PyTorch* APIs to work with LLM models. Complete examples can be found at the Example directory (link target: the v2.1.0+cpu tree).
+The following sections show pseudocode snippets to invoke Intel® Extension for PyTorch* APIs to work with LLM models. Complete examples can be found at the Example directory (link target: the v2.2.0+cpu tree).

 FP32/BF16

 import torch
@@ -281,4 +281,4 @@ 

Distributed Inference with DeepSpeed
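
The page this patch retargets documents `ipex.llm.optimize`, the single entry point covering the FP32/BF16 scenario referenced in the first hunk. A minimal sketch of that scenario follows; the model id, prompt, and generation arguments are illustrative assumptions, not anything pinned down by this diff:

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model id; any Hugging Face causal LM supported by
# ipex.llm.optimize can be substituted.
model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

# dtype=torch.bfloat16 takes the BF16 path; torch.float keeps FP32.
model = ipex.llm.optimize(model, dtype=torch.bfloat16, inplace=True)

with torch.inference_mode():
    inputs = tokenizer("An example prompt", return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=32)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```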
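The second, truncated hunk sits in the same page's "Distributed Inference with DeepSpeed" section. A hedged sketch of that scenario, assuming DeepSpeed's `init_inference` API for tensor parallelism (the `mp_size` keyword, world size handling, and dtype are assumptions; none of these details appear in this diff):

```python
import os

import deepspeed
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM

# Assumes launch under the deepspeed launcher, which sets WORLD_SIZE.
world_size = int(os.getenv("WORLD_SIZE", "1"))

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf").eval()

# Shard the model across ranks (DeepSpeed AutoTP); kwargs are a sketch.
model = deepspeed.init_inference(model, mp_size=world_size, dtype=torch.bfloat16)

# Hand the sharded module to the same ipex.llm.optimize entry point.
model = ipex.llm.optimize(model.module, dtype=torch.bfloat16, inplace=True)

# Generation then proceeds as in the single-instance sketch above.
```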