docs: [Automated] Regenerating documentation for 38771e6
Signed-off-by: Torch-TensorRT Github Bot <torch-tensorrt.github.bot@nvidia.com>
Torch-TensorRT Github Bot committed Oct 11, 2024
1 parent 38771e6 commit 6a3ef30
Showing 156 changed files with 2,457 additions and 460 deletions.
10 changes: 8 additions & 2 deletions docs/_cpp_api/classtorch__tensorrt_1_1DataType.html
@@ -10,7 +10,7 @@

 <meta name="viewport" content="width=device-width, initial-scale=1.0">

-<title>Class DataType &mdash; Torch-TensorRT v2.6.0.dev0+ce38387 documentation</title>
+<title>Class DataType &mdash; Torch-TensorRT v2.6.0.dev0+38771e6 documentation</title>
@@ -275,7 +275,7 @@

 <div class="version">
-v2.6.0.dev0+ce38387
+v2.6.0.dev0+38771e6
 </div>
@@ -315,6 +315,7 @@
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/saving_models.html">Saving models compiled with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/runtime.html">Deploying Torch-TensorRT Programs</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/using_dla.html">DLA</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../user_guide/mixed_precision.html">Compile Mixed Precision models with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_compile_advanced_usage.html">Torch Compile Advanced Usage</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/vgg16_ptq.html">Deploy Quantized Models using Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/engine_caching_example.html">Engine Caching</a></li>
@@ -349,6 +350,11 @@
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/converter_overloading.html">Overloading Torch-TensorRT Converters with Custom Converters</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/custom_kernel_plugins.html">Using Custom Kernels within TensorRT Engines with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/mutable_torchtrt_module_example.html">Mutable Torch TensorRT Module</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_gpt2.html">Compiling GPT2 using the Torch-TensorRT with dynamo backend</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_gpt2.html#the-output-sentences-should-look-like">The output sentences should look like</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html">Compiling Llama2 using the Torch-TensorRT with dynamo backend</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html#the-output-sentences-should-look-like">The output sentences should look like</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html#pytorch-model-generated-text-dynamic-programming-is-an-algorithmic-technique-used-to-solve-complex-problems-by-breaking-them-down-into-smaller-subproblems-solving-each-subproblem-only-once-and">Pytorch model generated text: Dynamic programming is an algorithmic technique used to solve complex problems by breaking them down into smaller subproblems, solving each subproblem only once, and</a></li>
 </ul>
 <p class="caption" role="heading"><span class="caption-text">Python API Documentation</span></p>
 <ul>
10 changes: 8 additions & 2 deletions docs/_cpp_api/classtorch__tensorrt_1_1Device_1_1DeviceType.html
@@ -10,7 +10,7 @@

 <meta name="viewport" content="width=device-width, initial-scale=1.0">

-<title>Class Device::DeviceType &mdash; Torch-TensorRT v2.6.0.dev0+ce38387 documentation</title>
+<title>Class Device::DeviceType &mdash; Torch-TensorRT v2.6.0.dev0+38771e6 documentation</title>
@@ -275,7 +275,7 @@

 <div class="version">
-v2.6.0.dev0+ce38387
+v2.6.0.dev0+38771e6
 </div>
@@ -315,6 +315,7 @@
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/saving_models.html">Saving models compiled with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/runtime.html">Deploying Torch-TensorRT Programs</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/using_dla.html">DLA</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../user_guide/mixed_precision.html">Compile Mixed Precision models with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_compile_advanced_usage.html">Torch Compile Advanced Usage</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/vgg16_ptq.html">Deploy Quantized Models using Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/engine_caching_example.html">Engine Caching</a></li>
@@ -349,6 +350,11 @@
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/converter_overloading.html">Overloading Torch-TensorRT Converters with Custom Converters</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/custom_kernel_plugins.html">Using Custom Kernels within TensorRT Engines with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/mutable_torchtrt_module_example.html">Mutable Torch TensorRT Module</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_gpt2.html">Compiling GPT2 using the Torch-TensorRT with dynamo backend</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_gpt2.html#the-output-sentences-should-look-like">The output sentences should look like</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html">Compiling Llama2 using the Torch-TensorRT with dynamo backend</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html#the-output-sentences-should-look-like">The output sentences should look like</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html#pytorch-model-generated-text-dynamic-programming-is-an-algorithmic-technique-used-to-solve-complex-problems-by-breaking-them-down-into-smaller-subproblems-solving-each-subproblem-only-once-and">Pytorch model generated text: Dynamic programming is an algorithmic technique used to solve complex problems by breaking them down into smaller subproblems, solving each subproblem only once, and</a></li>
 </ul>
 <p class="caption" role="heading"><span class="caption-text">Python API Documentation</span></p>
 <ul>
10 changes: 8 additions & 2 deletions docs/_cpp_api/classtorch__tensorrt_1_1TensorFormat.html
@@ -10,7 +10,7 @@

 <meta name="viewport" content="width=device-width, initial-scale=1.0">

-<title>Class TensorFormat &mdash; Torch-TensorRT v2.6.0.dev0+ce38387 documentation</title>
+<title>Class TensorFormat &mdash; Torch-TensorRT v2.6.0.dev0+38771e6 documentation</title>
@@ -275,7 +275,7 @@

 <div class="version">
-v2.6.0.dev0+ce38387
+v2.6.0.dev0+38771e6
 </div>
@@ -315,6 +315,7 @@
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/saving_models.html">Saving models compiled with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/runtime.html">Deploying Torch-TensorRT Programs</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/using_dla.html">DLA</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../user_guide/mixed_precision.html">Compile Mixed Precision models with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_compile_advanced_usage.html">Torch Compile Advanced Usage</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/vgg16_ptq.html">Deploy Quantized Models using Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/engine_caching_example.html">Engine Caching</a></li>
@@ -349,6 +350,11 @@
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/converter_overloading.html">Overloading Torch-TensorRT Converters with Custom Converters</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/custom_kernel_plugins.html">Using Custom Kernels within TensorRT Engines with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/mutable_torchtrt_module_example.html">Mutable Torch TensorRT Module</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_gpt2.html">Compiling GPT2 using the Torch-TensorRT with dynamo backend</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_gpt2.html#the-output-sentences-should-look-like">The output sentences should look like</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html">Compiling Llama2 using the Torch-TensorRT with dynamo backend</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html#the-output-sentences-should-look-like">The output sentences should look like</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html#pytorch-model-generated-text-dynamic-programming-is-an-algorithmic-technique-used-to-solve-complex-problems-by-breaking-them-down-into-smaller-subproblems-solving-each-subproblem-only-once-and">Pytorch model generated text: Dynamic programming is an algorithmic technique used to solve complex problems by breaking them down into smaller subproblems, solving each subproblem only once, and</a></li>
 </ul>
 <p class="caption" role="heading"><span class="caption-text">Python API Documentation</span></p>
 <ul>
@@ -10,7 +10,7 @@

 <meta name="viewport" content="width=device-width, initial-scale=1.0">

-<title>Template Class Int8CacheCalibrator &mdash; Torch-TensorRT v2.6.0.dev0+ce38387 documentation</title>
+<title>Template Class Int8CacheCalibrator &mdash; Torch-TensorRT v2.6.0.dev0+38771e6 documentation</title>
@@ -275,7 +275,7 @@

 <div class="version">
-v2.6.0.dev0+ce38387
+v2.6.0.dev0+38771e6
 </div>
@@ -315,6 +315,7 @@
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/saving_models.html">Saving models compiled with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/runtime.html">Deploying Torch-TensorRT Programs</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/using_dla.html">DLA</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../user_guide/mixed_precision.html">Compile Mixed Precision models with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_compile_advanced_usage.html">Torch Compile Advanced Usage</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/vgg16_ptq.html">Deploy Quantized Models using Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/engine_caching_example.html">Engine Caching</a></li>
@@ -349,6 +350,11 @@
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/converter_overloading.html">Overloading Torch-TensorRT Converters with Custom Converters</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/custom_kernel_plugins.html">Using Custom Kernels within TensorRT Engines with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/mutable_torchtrt_module_example.html">Mutable Torch TensorRT Module</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_gpt2.html">Compiling GPT2 using the Torch-TensorRT with dynamo backend</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_gpt2.html#the-output-sentences-should-look-like">The output sentences should look like</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html">Compiling Llama2 using the Torch-TensorRT with dynamo backend</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html#the-output-sentences-should-look-like">The output sentences should look like</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html#pytorch-model-generated-text-dynamic-programming-is-an-algorithmic-technique-used-to-solve-complex-problems-by-breaking-them-down-into-smaller-subproblems-solving-each-subproblem-only-once-and">Pytorch model generated text: Dynamic programming is an algorithmic technique used to solve complex problems by breaking them down into smaller subproblems, solving each subproblem only once, and</a></li>
 </ul>
 <p class="caption" role="heading"><span class="caption-text">Python API Documentation</span></p>
 <ul>
10 changes: 8 additions & 2 deletions docs/_cpp_api/classtorch__tensorrt_1_1ptq_1_1Int8Calibrator.html
@@ -10,7 +10,7 @@

 <meta name="viewport" content="width=device-width, initial-scale=1.0">

-<title>Template Class Int8Calibrator &mdash; Torch-TensorRT v2.6.0.dev0+ce38387 documentation</title>
+<title>Template Class Int8Calibrator &mdash; Torch-TensorRT v2.6.0.dev0+38771e6 documentation</title>
@@ -275,7 +275,7 @@

 <div class="version">
-v2.6.0.dev0+ce38387
+v2.6.0.dev0+38771e6
 </div>
@@ -315,6 +315,7 @@
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/saving_models.html">Saving models compiled with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/runtime.html">Deploying Torch-TensorRT Programs</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../user_guide/using_dla.html">DLA</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../user_guide/mixed_precision.html">Compile Mixed Precision models with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_compile_advanced_usage.html">Torch Compile Advanced Usage</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/vgg16_ptq.html">Deploy Quantized Models using Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/engine_caching_example.html">Engine Caching</a></li>
@@ -349,6 +350,11 @@
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/converter_overloading.html">Overloading Torch-TensorRT Converters with Custom Converters</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/custom_kernel_plugins.html">Using Custom Kernels within TensorRT Engines with Torch-TensorRT</a></li>
 <li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/mutable_torchtrt_module_example.html">Mutable Torch TensorRT Module</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_gpt2.html">Compiling GPT2 using the Torch-TensorRT with dynamo backend</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_gpt2.html#the-output-sentences-should-look-like">The output sentences should look like</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html">Compiling Llama2 using the Torch-TensorRT with dynamo backend</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html#the-output-sentences-should-look-like">The output sentences should look like</a></li>
+<li class="toctree-l1"><a class="reference internal" href="../tutorials/_rendered_examples/dynamo/torch_export_llama2.html#pytorch-model-generated-text-dynamic-programming-is-an-algorithmic-technique-used-to-solve-complex-problems-by-breaking-them-down-into-smaller-subproblems-solving-each-subproblem-only-once-and">Pytorch model generated text: Dynamic programming is an algorithmic technique used to solve complex problems by breaking them down into smaller subproblems, solving each subproblem only once, and</a></li>
 </ul>
 <p class="caption" role="heading"><span class="caption-text">Python API Documentation</span></p>
 <ul>