Update OnnxRuntime to 1.5.2 (#5439)
* Added prerelease feed and updated to 1.5.2

* Remove prerelease feed

* Updated docs

* Update doc
antoniovs1029 authored Oct 20, 2020
1 parent d95fe38 commit 212b5df
Showing 2 changed files with 3 additions and 3 deletions.
2 changes: 1 addition & 1 deletion build/Dependencies.props
@@ -16,7 +16,7 @@
 <GoogleProtobufPackageVersion>3.10.1</GoogleProtobufPackageVersion>
 <LightGBMPackageVersion>2.2.3</LightGBMPackageVersion>
 <MicrosoftExtensionsPackageVersion>2.1.0</MicrosoftExtensionsPackageVersion>
-<MicrosoftMLOnnxRuntimePackageVersion>1.5.1</MicrosoftMLOnnxRuntimePackageVersion>
+<MicrosoftMLOnnxRuntimePackageVersion>1.5.2</MicrosoftMLOnnxRuntimePackageVersion>
 <MlNetMklDepsPackageVersion>0.0.0.9</MlNetMklDepsPackageVersion>
 <ParquetDotNetPackageVersion>2.1.3</ParquetDotNetPackageVersion>
 <SystemDrawingCommonPackageVersion>4.5.0</SystemDrawingCommonPackageVersion>
4 changes: 2 additions & 2 deletions src/Microsoft.ML.OnnxTransformer/OnnxTransform.cs
@@ -727,7 +727,7 @@ public NamedOnnxValue GetNamedOnnxValue()
 /// | Does this estimator need to look at the data to train its parameters? | No |
 /// | Input column data type | Known-sized vector of <xref:System.Single> or <xref:System.Double> types |
 /// | Output column data type | As specified by the ONNX model |
-/// | Required NuGet in addition to Microsoft.ML | Microsoft.ML.OnnxTransformer (always), either Microsoft.ML.OnnxRuntime 1.3.0 (for CPU processing) or Microsoft.ML.OnnxRuntime.Gpu 1.3.0 (for GPU processing if GPU is available) |
+/// | Required NuGet in addition to Microsoft.ML | Microsoft.ML.OnnxTransformer (always), either Microsoft.ML.OnnxRuntime 1.5.2 (for CPU processing) or Microsoft.ML.OnnxRuntime.Gpu 1.5.2 (for GPU processing if GPU is available) |
 /// | Exportable to ONNX | No |
 ///
 /// To create this estimator use the following APIs:
@@ -739,7 +739,7 @@ public NamedOnnxValue GetNamedOnnxValue()
 ///
 /// To run on a GPU, use the
 /// NuGet package [Microsoft.ML.OnnxRuntime.Gpu](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Gpu/) instead of the Microsoft.ML.OnnxRuntime nuget (which is for CPU processing). Microsoft.ML.OnnxRuntime.Gpu
-/// requires a [CUDA supported GPU](https://developer.nvidia.com/cuda-gpus#compute), the [CUDA 10.1 Toolkit](https://developer.nvidia.com/cuda-downloads), and [cuDNN 7.6.5](https://developer.nvidia.com/cudnn) (as indicated on [Onnxruntime's documentation](https://github.com/Microsoft/onnxruntime#default-gpu-cuda)).
+/// requires a [CUDA supported GPU](https://developer.nvidia.com/cuda-gpus#compute), the [CUDA 10.2 Toolkit](https://developer.nvidia.com/cuda-downloads), and [cuDNN 8.0.3](https://developer.nvidia.com/cudnn) (as indicated on [Onnxruntime's documentation](https://github.com/Microsoft/onnxruntime#system-requirements)).
 /// When creating the estimator through [ApplyOnnxModel](xref:Microsoft.ML.OnnxCatalog.ApplyOnnxModel*), set the parameter 'gpuDeviceId' to a valid non-negative integer. Typical device ID values are 0 or 1. If the GPU device isn't found but 'fallbackToCpu = true' then the estimator will run on the CPU. If the GPU device isn't found but 'fallbackToCpu = false' then the estimator will throw an exception
 ///
 /// The inputs and outputs of the ONNX models must be Tensor type. Sequence and Maps are not yet supported.
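The updated doc comment above describes how to select CPU or GPU execution through ApplyOnnxModel. As a minimal sketch of that usage (the model path, column wiring, and data view are illustrative assumptions, not part of this commit):

```csharp
// Minimal sketch of the usage described in the doc comment above.
// Assumes the Microsoft.ML and Microsoft.ML.OnnxTransformer packages, plus either
// Microsoft.ML.OnnxRuntime (CPU) or Microsoft.ML.OnnxRuntime.Gpu (GPU) at 1.5.2.
// "model.onnx" is a placeholder path for illustration.
using Microsoft.ML;

var mlContext = new MLContext();

// CPU-only: the Microsoft.ML.OnnxRuntime package is sufficient.
var cpuPipeline = mlContext.Transforms.ApplyOnnxModel(modelFile: "model.onnx");

// GPU: use Microsoft.ML.OnnxRuntime.Gpu plus CUDA 10.2 and cuDNN 8.0.3.
// gpuDeviceId is a non-negative device index (typically 0 or 1);
// with fallbackToCpu = true the transform runs on CPU when no GPU is found,
// with false it throws instead.
var gpuPipeline = mlContext.Transforms.ApplyOnnxModel(
    modelFile: "model.onnx",
    gpuDeviceId: 0,
    fallbackToCpu: true);

// Typical use: fit on an IDataView whose columns match the ONNX model's inputs.
// var transformer = gpuPipeline.Fit(data);
// var scored = transformer.Transform(data);
```

As the doc comment notes, the model's inputs and outputs must be tensor types; sequences and maps are not yet supported.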
