Replace OMEinsum with TensorOperations #208
base: master
Conversation
I'm obviously also curious about the benchmarks, and would gladly help out a bit. That said, I think one of the main differences between the OMEinsum and TensorOperations implementations is that TensorOperations requires strict Einstein summation: every index must appear exactly twice, either once in an input and once in the output, or twice in the inputs such that it is contracted. From a quick first glance, are these the errors you are having? I am not sure this is easily resolved; both the contraction-order determination and the elementary operations rely on this assumption. For instance, I think the hyperindices would require a major rewrite, as we really assume that contractions happen pairwise. I'm not familiar enough with OMEinsum, but maybe there is a way to dispatch only the implementation of certain pairwise contractions to TensorOperations?
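For concreteness, here is a small sketch of that distinction using TensorOperations' `@tensor` macro (index names are illustrative, and the hyperindex expression in the comment is a made-up example of the kind of input that falls outside strict Einstein summation):

```julia
using TensorOperations

A = rand(2, 3)
B = rand(3, 4)

# Strict Einstein summation: every index appears exactly twice.
# Here k appears once in each input (so it is contracted), while
# i and j each appear once in an input and once in the output.
@tensor C[i, j] := A[i, k] * B[k, j]

# By contrast, a hyperindex expression such as
#   D[i] := A[i, k] * B[i, k]   # i appears three times
# is not strict Einstein summation, and is the kind of case
# @tensor is not designed to accept.
```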
The edge cases that are currently failing are the following:
Cases (1) and (2) are managed by
We look for the contraction path with EinExprs and then call pairwise contractions with
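The pairwise strategy can be sketched in plain Julia (a hypothetical toy, not the actual EinExprs API; real contraction paths are cost-optimized trees rather than a left-to-right fold, and the pairwise step here is just matrix multiplication):

```julia
# One pairwise contraction step: sum over the single shared index
# of two matrices (i, k) × (k, j) → (i, j).
contract(A::Matrix, B::Matrix) = A * B

# Reduce a whole network to a sequence of strictly pairwise
# contractions, folding left-to-right for simplicity.
function contract_path(tensors::Vector{<:Matrix})
    acc = tensors[1]
    for t in tensors[2:end]
        acc = contract(acc, t)  # each step involves exactly two tensors
    end
    return acc
end

A = [1.0 2.0; 3.0 4.0]
B = [1.0 0.0; 0.0 1.0]   # identity
C = [2.0 0.0; 0.0 2.0]   # 2 × identity
contract_path([A, B, C])  # == 2 .* A
```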
I'm trying to remove OMEinsum completely since I try to keep Tenet as lightweight as possible: OMEinsum has terrible performance and compile times, and loads some unneeded packages like ChainRules (which could be loaded as package extensions).
I don't have anything against supporting that, but note that none of these cases are really true Einstein summation things, hence they don't currently work.
I'm thinking that we should have something like an "EinsumInterface.jl", similar to what "DifferentiationInterface.jl" does for AD engines.
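A hypothetical sketch of what such an "EinsumInterface.jl" could look like (none of these names exist; backends subtype an abstract type and implement a single `einsum` entry point, so call sites stay engine-agnostic):

```julia
# Hypothetical interface: engines plug in by subtyping and
# implementing `einsum` for the expressions they support.
abstract type AbstractEinsumBackend end

struct LoopBackend <: AbstractEinsumBackend end  # naive reference backend

# einsum(backend, output indices, indices per input tensor, tensors...)
function einsum(::LoopBackend, out::Tuple, ixs::Tuple, A::Matrix, B::Matrix)
    # Hard-coded matrix-multiply case (i, k) × (k, j) → (i, j) to keep
    # the sketch short; a real interface would cover general expressions.
    @assert ixs == ((:i, :k), (:k, :j)) && out == (:i, :j)
    return A * B
end

A = [1.0 2.0; 3.0 4.0]
B = [5.0 6.0; 7.0 8.0]
einsum(LoopBackend(), (:i, :j), ((:i, :k), (:k, :j)), A, B)  # == A * B
```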
Maybe, but it's hard to cleanly define what you want to include. TensorOperations does have an interface and backend-selection mechanism, so you could plug in whatever implementation you want quite easily, but it limits itself to simple einsum expressions. There is definitely a case to be made for hyperindices and summed indices, but at that point you might also want to consider fixed indices; and maybe you don't want just contractions with conjugations, but also 'abs' or 'max', and different reduction operations, etc. It's not obvious where to draw the line, and any line seems as arbitrary as any other, making it hard to establish a good interface.
After some benchmarking, I've found that OMEinsum is terribly slow and would like to see how TensorOperations performs compared to it. This PR replaces OMEinsum with TensorOperations. If it improves runtime, I will replace our backend with it.
@lkdvos @Jutho I'm running into some edge cases that are managed by OMEinsum but not TensorOperations. Would you mind lending me a hand?