
Commit

update llm_optimize page for URL update (#2606)
jingxu10 authored Feb 7, 2024
1 parent 86e09d6 commit e321937
Showing 2 changed files with 3 additions and 3 deletions.
2 changes: 1 addition & 1 deletion cpu/2.2.0+cpu/_sources/tutorials/llm/llm_optimize.md.txt
@@ -9,7 +9,7 @@ API documentation is available at [API Docs page](../api_doc.html#ipex.llm.optim

## Pseudocode of Common Usage Scenarios

- The following sections show pseudocode snippets to invoke Intel® Extension for PyTorch\* APIs to work with LLM models. Complete examples can be found at [the Example directory](https://github.com/intel/intel-extension-for-pytorch/tree/v2.1.0%2Bcpu/examples/cpu/inference/python/llm).
+ The following sections show pseudocode snippets to invoke Intel® Extension for PyTorch\* APIs to work with LLM models. Complete examples can be found at [the Example directory](https://github.com/intel/intel-extension-for-pytorch/tree/v2.2.0%2Bcpu/examples/cpu/inference/python/llm).

### FP32/BF16

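For context, the FP32/BF16 section of the page whose link this commit updates follows the general `ipex.llm.optimize` pattern below. This is a pseudocode sketch in the doc's own style, not the file's exact content; `MODEL_ID`, the prompt, and the `generate` arguments are illustrative placeholders.

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a causal LM in the desired precision (torch.float32 or torch.bfloat16).
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
model.eval()

# Apply the ipex.llm.optimize frontend; dtype selects the compute precision.
model = ipex.llm.optimize(model, dtype=torch.bfloat16)

# Run generation as usual.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
inputs = tokenizer("An example prompt", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)
```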
4 changes: 2 additions & 2 deletions cpu/2.2.0+cpu/tutorials/llm/llm_optimize.html
@@ -131,7 +131,7 @@ <h1>Transformers Optimization Frontend API<a class="headerlink" href="#transform
<p>API documentation is available at <a class="reference external" href="../api_doc.html#ipex.llm.optimize">API Docs page</a>.</p>
<section id="pseudocode-of-common-usage-scenarios">
<h2>Pseudocode of Common Usage Scenarios<a class="headerlink" href="#pseudocode-of-common-usage-scenarios" title="Permalink to this heading"></a></h2>
- <p>The following sections show pseudocode snippets to invoke Intel® Extension for PyTorch* APIs to work with LLM models. Complete examples can be found at <a class="reference external" href="https://github.com/intel/intel-extension-for-pytorch/tree/v2.1.0%2Bcpu/examples/cpu/inference/python/llm">the Example directory</a>.</p>
+ <p>The following sections show pseudocode snippets to invoke Intel® Extension for PyTorch* APIs to work with LLM models. Complete examples can be found at <a class="reference external" href="https://github.com/intel/intel-extension-for-pytorch/tree/v2.2.0%2Bcpu/examples/cpu/inference/python/llm">the Example directory</a>.</p>
<section id="fp32-bf16">
<h3>FP32/BF16<a class="headerlink" href="#fp32-bf16" title="Permalink to this heading"></a></h3>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torch</span>
@@ -281,4 +281,4 @@ <h3>Distributed Inference with DeepSpeed<a class="headerlink" href="#distributed
</script>

</body>
</html>

