Support torch.compile #89
I'm on the PyTorch team working on custom ops. `int` in the schema string corresponds to a C++ function that accepts `int64_t`.
We've never seen this error before. Do you have a branch that demonstrates what you've tried?
Thank you for your feedback @zou3519. If I recall correctly, registering would also fail. I'll try to start a new branch today and replicate the issue.

Unrelated question: since our last update, NATTEN also ships with an "auto-tuner" that can optionally be enabled by the user. Some of our kernels have multiple possible configurations that differ in performance; the autotuner benchmarks each new problem size (keyed by tensor shape and dtype) with all of the different configs, then picks and caches the fastest one. Right now this happens through torch and inside the graph, so I'm guessing this particular feature might not play nicely with graph mode and torch.compile, but I'm not sure about that. If it will be an issue, would it be better for all of this to happen at the C++ level, beyond the point where we use the torch API, so that torch is completely unaware of it?
`int[]` arguments map to
It's hard to say; torch.compile has gotten better at handling Python code. You should try torch.compile over that first, and if it doesn't work, then move it to the C++ level.
Oh right, I just opened up the GDoc, and I'm seeing it's all in Python now. That's fantastic!
@zou3519 I think I might have mislabeled this issue.
The real issue is that the graph breaks on custom ops, and I seem to have misunderstood that pytorch/pytorch#97295 solves that issue (at least according to pytorch/pytorch#101191). I am confused by one thing, though: if I compile with fullgraph=True and register the op natively, dynamo fails with this error:
Is there some other way we're supposed to pass int/bool tuples/arrays to autograd functions at the Python level? By the way, I'm sorry to have left this issue behind for 3 weeks; I had already changed so much in the API that it was difficult to branch out my changes and push them, so I waited until everything was merged.
Sorry I was out of town for a bit.
Do you have a repro for this?
This one should be fixed on the latest nightlies.
No worries, I appreciate the help.
Yes, here you go: https://gist.github.com/alihassanijr/0b1b0277771a72b5eedad4db9e552876 Please let me know if you need anything else.
Fantastic. I'll start registering the ops and push out a new release once it makes it into the next torch release.
NATTEN ops break graphs because they are not registered as native torch ops.
The correct way to do this since PyTorch 2.0 is to register C++ ops through the torch library API, not as torch extensions.
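As a rough illustration of what such a registration looks like at the Python level (a sketch only: the library name `mylib`, the op name `myop`, and the Python kernel below are all made up, and NATTEN's real implementations live in C++/CUDA):

```python
import torch

# Hypothetical library and op names, for illustration only.
lib = torch.library.Library("mylib", "DEF")
lib.define("myop(Tensor x, int k) -> Tensor")  # "int" in the schema binds to C++ int64_t

def myop_cpu(x, k):
    # Stand-in kernel; a real op would dispatch to a C++/CUDA implementation.
    return x * k

lib.impl("myop", myop_cpu, "CPU")

y = torch.ops.mylib.myop(torch.ones(3), 2)
```

Once the op has a registered schema like this, tracing can treat `torch.ops.mylib.myop` as a single opaque call rather than breaking the graph when it hits it.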
Notes
It's still unclear whether or not the different op registration will avoid splitting graphs, and on my first attempt I'm finding that the torch API for registering custom ops is probably incomplete.
For example, defining signatures with `int` arguments works fine, as it should, and native torch ops definitely use `int` args as well. But when we try to bind a C++ symbol to that signature, we hit a static assertion failure that explicitly prevents `int`/`int32_t`, but allows `int8_t` and `int64_t`. If we force our custom ops to take `int64_t` instead, then we hit an error when we try to call those ops from Python saying there's no conversion between `int` and `int64_t`.
References