Why does onnxruntime register custom opsets when custom operators are declared? #868

Johansmm opened this issue Dec 24, 2024 · 0 comments

Context: I am using ONNXRuntime Extensions to declare a custom operator and compare it with an equivalent implementation built from standard operators:

import onnx
import numpy as np
import onnxruntime
from onnxruntime_extensions import get_library_path, onnx_op


@onnx_op(op_type="custom_domain::my_op")
def custom_op(x):
    return 5 * x


model1 = onnx.parser.parse_model("""
    < ir_version: 7, opset_import: ["" : 15, "custom_domain": 1] >
    graph (float[N, 10] X) => (float[N, 10] Y){
        Y = custom_domain.my_op(X)
    }""")

# Register onnxruntime library to recognize 'custom_op' in 'model'
so = onnxruntime.SessionOptions()
so.register_custom_ops_library(get_library_path())
sess1 = onnxruntime.InferenceSession(model1.SerializeToString(), so,
                                     providers=["CPUExecutionProvider"])

feed = {'X': np.random.rand(1, 10).astype('float32')}
y1 = sess1.run(None, feed)
print(y1)

model2 = onnx.parser.parse_model("""
    < ir_version: 7, opset_import: ["" : 15] >
    graph (float[N, 10] X) => (float[N, 10] Y){
        const = Constant<value_float = 5.0>()
        Y = Mul(const, X)
    }""")

another_so = onnxruntime.SessionOptions()
another_so.optimized_model_filepath = 'opt_model_path.onnx'
another_so.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_BASIC
# Optimize model2
onnxruntime.InferenceSession(model2.SerializeToString(), another_so,
                             providers=["CPUExecutionProvider"])

opt_model = onnx.load(another_so.optimized_model_filepath)
opt_sess = onnxruntime.InferenceSession(opt_model.SerializeToString(),
                                        providers=["CPUExecutionProvider"])
y2 = opt_sess.run(None, feed)
print(y2)
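For reference, the two graphs are meant to be numerically equivalent. A minimal NumPy-only sanity check of that expectation (this only mirrors the computation; it does not exercise onnxruntime itself):

```python
import numpy as np

x = np.random.rand(1, 10).astype("float32")

# custom_domain::my_op computes 5 * x; the reference graph computes
# Mul(Constant(5.0), x). Both should match exactly in float32.
y_custom = 5 * x
y_reference = np.float32(5.0) * x

np.testing.assert_allclose(y_custom, y_reference)
```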

However, I am surprised to see that the ONNXRuntime optimizer has added custom opsets to the second model, which does not contain my custom operator:

print(opt_model.opset_import)
# [domain: ""
# version: 15
# , domain: "com.microsoft.nchwc"
# version: 1
# , domain: "ai.onnx.ml"
# version: 5
# , domain: "ai.onnx.training"
# version: 1
# , domain: "ai.onnx.preview.training"
# version: 1
# , domain: "com.microsoft"
# version: 1
# , domain: "com.microsoft.experimental"
# version: 1
# , domain: "org.pytorch.aten"
# version: 1
# , domain: "custom_domain" <- Added when I register the library in the first model
# version: 1000
# , domain: "ai.onnx.contrib"  <- Added when I register the library in the first model
# version: 1000
# , domain: "com.microsoft.extensions"  <- Added when I register the library in the first model
# version: 1000
# ]
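As a possible post-processing workaround (this is not an official onnxruntime feature, just a sketch), the spurious entries could be pruned by keeping only the domains that the graph's nodes actually reference. The filtering logic is modeled here on plain (domain, version) pairs; with a real model you would collect `node.domain` over `opt_model.graph.node` and rewrite `opt_model.opset_import` accordingly:

```python
def prune_opset_imports(opset_imports, node_domains):
    """Keep only opset entries whose domain is used by some graph node.

    opset_imports: list of (domain, version) pairs, as in model.opset_import.
    node_domains: set of domains referenced by the graph's nodes. The empty
    string '' is the default ONNX domain and is always kept.
    """
    used = set(node_domains) | {""}
    return [(d, v) for d, v in opset_imports if d in used]


# Example mirroring the optimized model above: only Mul (default domain)
# is used, so every registered custom/extension domain is dropped.
opsets = [("", 15), ("com.microsoft.nchwc", 1), ("custom_domain", 1000),
          ("ai.onnx.contrib", 1000), ("com.microsoft.extensions", 1000)]
print(prune_opset_imports(opsets, {""}))  # [('', 15)]
```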

Therefore, I have the following questions:

  • Why has the onnxruntime optimizer added these opsets, given that opt_model has nothing to do with the custom operator definition?
  • Is there any way to restrict which opset versions are added during optimization? That is, when declaring the custom op through @onnx_op, can I specify the opset version I want (e.g. a value different from 1000)?