Commit 4aa3fa1
update index contents (#2183)
jingxu10 authored Oct 20, 2023
1 parent 3f77ae4 commit 4aa3fa1
Showing 2 changed files with 44 additions and 18 deletions.
Binary file modified _images/intel_extension_for_pytorch_structure.png
62 changes: 44 additions & 18 deletions index.html
@@ -62,24 +62,50 @@
</div>
<h1>Welcome to Intel® Extension for PyTorch* Documentation!<a class="headerlink" href="#welcome-to-intel-extension-for-pytorch-documentation" title="Permalink to this heading"></a></h1>
<div id="div-introduction">
<p>Intel® Extension for PyTorch* extends PyTorch* with up-to-date feature optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel X<sup>e</sup> Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch* <cite>xpu</cite> device, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs.</p>
<p>Intel® Extension for PyTorch* provides optimizations for both eager mode and graph mode. Compared to eager mode, graph mode in PyTorch* normally yields better performance from optimization techniques such as operation fusion, and Intel® Extension for PyTorch* amplifies them with more comprehensive graph optimizations. We therefore recommend taking advantage of Intel® Extension for PyTorch* with <a class="reference external" href="https://pytorch.org/docs/stable/jit.html">TorchScript</a> whenever your workload supports it. You can run with either the <cite>torch.jit.trace()</cite> or the <cite>torch.jit.script()</cite> function, but based on our evaluation, <cite>torch.jit.trace()</cite> supports more workloads, so we recommend it as your first choice.</p>
<p>The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts users can enable it dynamically by importing <cite>intel_extension_for_pytorch</cite>.</p>
<p>Intel® Extension for PyTorch* is structured as shown in the following figure:</p>
<figure class="align-center">
<a class="reference internal image-reference" href="_images/intel_extension_for_pytorch_structure.png"><img alt="Architecture of Intel® Extension for PyTorch*" src="_images/intel_extension_for_pytorch_structure.png" style="width: 800px;" /></a>
</figure>
<div class="line-block">
<div class="line"><br /></div>
</div>
<p>Optimizations for both eager mode and graph mode contribute to extra performance acceleration with the extension. In eager mode, the PyTorch frontend is extended with custom Python modules (such as fusion modules), optimal optimizers, and INT8 quantization APIs. A further performance boost is available by converting the eager-mode model into graph mode via extended graph fusion passes. In graph mode, the fusions reduce operator/kernel invocation overhead and thus increase performance. On CPU, Intel® Extension for PyTorch* automatically dispatches operators to their underlying kernels based on the ISA it detects, and leverages the vectorization and matrix acceleration units available on Intel hardware. The Intel® Extension for PyTorch* runtime extension brings better efficiency with finer-grained thread runtime control and weight sharing. On GPU, optimized operators and kernels are implemented and registered through the PyTorch dispatching mechanism. These operators and kernels are accelerated by the native vectorization and matrix calculation features of Intel GPU hardware. Intel® Extension for PyTorch* for GPU utilizes the <a class="reference external" href="https://github.com/intel/llvm#oneapi-dpc-compiler">DPC++</a> compiler that supports the latest <a class="reference external" href="https://registry.khronos.org/SYCL/specs/sycl-2020/html/sycl-2020.html">SYCL*</a> standard and also a number of extensions to the SYCL* standard, which can be found in the <a class="reference external" href="https://github.com/intel/llvm/tree/sycl/sycl/doc/extensions">sycl/doc/extensions</a> directory.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>GPU features are not included in CPU only packages.</p>
</div>
<p>Intel® Extension for PyTorch* has been released as an open-source project on <a class="reference external" href="https://github.com/intel/intel-extension-for-pytorch">GitHub</a>. Source code is available at the <a class="reference external" href="https://github.com/intel/intel-extension-for-pytorch/tree/xpu-master">xpu-master branch</a>. Check <a class="reference internal" href="./xpu/latest/">the tutorial</a> for detailed information. Due to different development schedules, the CPU-only optimizations might have a newer code base. Source code is available at the <a class="reference external" href="https://github.com/intel/intel-extension-for-pytorch/tree/master">master branch</a>. Check <a class="reference internal" href="./cpu/latest/">the CPU tutorial</a> for detailed information on the CPU side.</p>
<div class="toctree-wrapper compound">
</div>
<p>Intel® Extension for PyTorch* extends PyTorch* with the latest performance optimizations for Intel hardware.
Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel X<sup>e</sup> Matrix Extensions (XMX) AI engines on Intel discrete GPUs.
Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* <code class="docutils literal notranslate"><span class="pre">xpu</span></code> device.</p>
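<p>As a minimal illustration (assuming an XPU build of the extension and an Intel discrete GPU with drivers installed; the model here is just a stand-in), the <code class="docutils literal notranslate"><span class="pre">xpu</span></code> device is used like any other PyTorch device string:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre>import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device with PyTorch

# Any model works here; a single linear layer keeps the sketch self-contained.
model = torch.nn.Linear(4, 4).to("xpu")
data = torch.rand(1, 4).to("xpu")

with torch.no_grad():
    output = model(data)
</pre></div></div>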
<p>The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts, users can enable it dynamically by importing <code class="docutils literal notranslate"><span class="pre">intel_extension_for_pytorch</span></code>.</p>
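<p>A minimal sketch of dynamic enabling on CPU, assuming the extension is installed: importing the module registers the extension, and <code class="docutils literal notranslate"><span class="pre">ipex.optimize()</span></code> returns an optimized copy of an inference model.</p>
<div class="highlight-python notranslate"><div class="highlight"><pre>import torch
import intel_extension_for_pytorch as ipex  # enables the extension for this process

model = torch.nn.Linear(4, 4)
model.eval()

# Apply the extension's optimizations for inference.
model = ipex.optimize(model)

with torch.no_grad():
    output = model(torch.rand(1, 4))
</pre></div></div>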
<div class="admonition note">
<p class="admonition-title">Note</p>
<ul class="simple">
<li><p>GPU features are not included in CPU-only packages.</p></li>
<li><p>CPU-only optimizations may have a newer code base due to different development schedules.</p></li>
</ul>
</div>
<p>In the current technological landscape, Generative AI (GenAI) workloads and models have gained widespread attention and popularity. Large Language Models (LLMs) have emerged as the dominant models driving these GenAI applications. Starting from 2.1.0, specific optimizations for certain LLM models are introduced in Intel® Extension for PyTorch*.</p>
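<p>As a hedged sketch of the LLM path (assuming the 2.1-series API and the Hugging Face <code class="docutils literal notranslate"><span class="pre">transformers</span></code> package; the entry-point name may differ in other releases, so check the release documentation):</p>
<div class="highlight-python notranslate"><div class="highlight"><pre>import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.bfloat16)
model.eval()

# optimize_transformers is the 2.1-era entry point for LLM-specific
# optimizations; later releases may expose a different API name.
model = ipex.optimize_transformers(model, dtype=torch.bfloat16)
</pre></div></div>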
<p>Intel® Extension for PyTorch* has been released as an open-source project on <a class="reference external" href="https://github.com/intel/intel-extension-for-pytorch">GitHub</a>. You can find the source code and instructions on how to get started at:</p>
<ul class="simple">
<li><p><strong>CPU</strong>: <a class="reference external" href="https://github.com/intel/intel-extension-for-pytorch/tree/master">CPU master branch</a> | <a class="reference external" href="./cpu/latest/tutorials/getting_started.html">Get Started</a></p></li>
<li><p><strong>XPU</strong>: <a class="reference external" href="https://github.com/intel/intel-extension-for-pytorch/tree/xpu-master">XPU master branch</a> | <a class="reference external" href="./xpu/latest/tutorials/getting_started.html">Get Started</a></p></li>
</ul>
<section id="architecture">
<h2>Architecture<a class="headerlink" href="#architecture" title="Permalink to this heading"></a></h2>
<p>Intel® Extension for PyTorch* is structured as shown in the following figure:</p>
<figure class="align-center">
<a class="reference internal image-reference" href="_images/intel_extension_for_pytorch_structure.png"><img alt="Architecture of Intel® Extension for PyTorch*" src="_images/intel_extension_for_pytorch_structure.png" style="width: 800px;" /></a>
</figure>
<ul class="simple">
<li><p><strong>Eager Mode</strong>: In the eager mode, the PyTorch frontend is extended with custom Python modules (such as fusion modules), optimal optimizers, and INT8 quantization APIs. Further performance improvement is achieved by converting eager-mode models into graph mode using extended graph fusion passes.</p></li>
<li><p><strong>Graph Mode</strong>: In the graph mode, fusions reduce operator/kernel invocation overhead, resulting in improved performance. Compared to the eager mode, the graph mode in PyTorch* normally yields better performance from optimization techniques such as operation fusion. Intel® Extension for PyTorch* amplifies them with more comprehensive graph optimizations. Both PyTorch <code class="docutils literal notranslate"><span class="pre">TorchScript</span></code> and <code class="docutils literal notranslate"><span class="pre">TorchDynamo</span></code> graph modes are supported. With <code class="docutils literal notranslate"><span class="pre">TorchScript</span></code>, we recommend using <code class="docutils literal notranslate"><span class="pre">torch.jit.trace()</span></code> as your preferred option, as it generally supports a wider range of workloads than <code class="docutils literal notranslate"><span class="pre">torch.jit.script()</span></code>. With <code class="docutils literal notranslate"><span class="pre">TorchDynamo</span></code>, the ipex backend is available to provide good performance (see the sketch after this list).</p></li>
<li><p><strong>CPU Optimization</strong>: On CPU, Intel® Extension for PyTorch* automatically dispatches operators to underlying kernels based on detected ISA. The extension leverages vectorization and matrix acceleration units available on Intel hardware. The runtime extension offers finer-grained thread runtime control and weight sharing for increased efficiency.</p></li>
<li><p><strong>GPU Optimization</strong>: On GPU, optimized operators and kernels are implemented and registered through the PyTorch dispatching mechanism. These operators and kernels are accelerated by the native vectorization and matrix calculation features of Intel GPU hardware. Intel® Extension for PyTorch* for GPU utilizes the <a class="reference external" href="https://github.com/intel/llvm#oneapi-dpc-compiler">DPC++</a> compiler that supports the latest <a class="reference external" href="https://registry.khronos.org/SYCL/specs/sycl-2020/html/sycl-2020.html">SYCL*</a> standard and also a number of extensions to the SYCL* standard, which can be found in the <a class="reference external" href="https://github.com/intel/llvm/tree/sycl/sycl/doc/extensions">sycl/doc/extensions</a> directory.</p></li>
</ul>
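<p>A short sketch of the two graph-mode paths described above, assuming a CPU installation of the extension (the model is a stand-in):</p>
<div class="highlight-python notranslate"><div class="highlight"><pre>import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU())
model.eval()
model = ipex.optimize(model)
example_input = torch.rand(1, 4)

# TorchScript path: trace, then freeze so the extension's fusion passes apply.
with torch.no_grad():
    traced = torch.jit.trace(model, example_input)
    traced = torch.jit.freeze(traced)
    output = traced(example_input)

# TorchDynamo path: compile with the ipex backend registered by the import.
compiled = torch.compile(model, backend="ipex")
with torch.no_grad():
    output = compiled(example_input)
</pre></div></div>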
</section>
<section id="support">
<h2>Support<a class="headerlink" href="#support" title="Permalink to this heading"></a></h2>
<p>The team tracks bugs and enhancement requests using <a class="reference external" href="https://github.com/intel/intel-extension-for-pytorch/issues/">GitHub issues</a>. Before submitting a suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.</p>
<div class="toctree-wrapper compound">
</div>
<div class="toctree-wrapper compound">
</div>
<div class="toctree-wrapper compound">
</div>
<div class="toctree-wrapper compound">
</div>
<div class="toctree-wrapper compound">
</div>
</section>
</div>
<div id="div-installation">
<div class="row">