This collection of Jupyter Notebook files provides runnable examples for various steps in the ML workflow to help you master pipeline creation.
Since the GPURuntime step of AI Inference Server GPU-accelerated runs only ONNX models, a model created with any other framework must be converted to ONNX first. Please find the most recent and appropriate converter on the official https://onnx.ai pages.
- Our examples (see the conversion sketches below):
  - Keras to ONNX
  - PyTorch to ONNX
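
A minimal sketch of the Keras route, assuming `tensorflow` and `tf2onnx` are installed; the model path, input shape, and output file name are placeholders, not files shipped with these notebooks:

```python
# Hypothetical sketch: convert a trained Keras model to ONNX with tf2onnx.
import tensorflow as tf
import tf2onnx

# Load a trained Keras model (placeholder path).
model = tf.keras.models.load_model("my_keras_model.h5")

# Describe the expected input shape and dtype for the exported ONNX graph.
spec = (tf.TensorSpec((None, 224, 224, 3), tf.float32, name="input"),)

# Convert and write the ONNX file; opset 13 is a commonly supported choice.
tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="my_keras_model.onnx"
)
```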
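
A similar sketch of the PyTorch route using `torch.onnx.export`; the ResNet-18 model, dummy input shape, and file name below are illustrative assumptions, substitute your own trained module:

```python
# Hypothetical sketch: export a PyTorch model to ONNX via torch.onnx.export.
import torch
import torchvision

# Any torch.nn.Module works here; an untrained ResNet-18 serves as an example.
model = torchvision.models.resnet18(weights=None)
model.eval()

# A dummy input with the expected shape drives the tracing-based export.
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
    # Mark the batch dimension as dynamic so batch size can vary at inference time.
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```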