
Thoughts/discussion: Interop with glow compiler #20

Open
saulshanabrook opened this issue May 22, 2018 · 1 comment

Comments

@saulshanabrook (Member)

The Glow compiler makes matrix math fast by optimizing for cache locality: https://gist.github.com/nadavrot/5b35d44e8ba3dd718e595e40184d03f0
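As a toy illustration of the loop-tiling (blocking) trick that the linked gist discusses, here is a pure-Python sketch (purely expository; function names are made up, and the cache benefit only materializes in a compiled language):

```python
# Loop tiling for cache locality: multiply matrices in small square
# blocks so the working set of each inner loop fits in cache.
# Pure Python for exposition only.

def matmul_naive(a, b):
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                c[i][j] += a[i][p] * b[p][j]
    return c

def matmul_blocked(a, b, block=2):
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, block):
        for j0 in range(0, m, block):
            for p0 in range(0, k, block):
                # Inner loops touch only one small tile of each matrix,
                # so (in compiled code) the tile stays cache-resident.
                for i in range(i0, min(i0 + block, n)):
                    for j in range(j0, min(j0 + block, m)):
                        for p in range(p0, min(p0 + block, k)):
                            c[i][j] += a[i][p] * b[p][j]
    return c
```

Both orderings compute the same result; the blocked version just reorders the iteration space, which is exactly the kind of schedule transformation a compiler like Glow applies automatically.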

We could use compiled Glow code as gumath kernels. Glow can compile ahead of time (AOT): https://github.com/pytorch/glow/blob/master/docs/AOT.md (this would be a lot like our story with Numba).

To create a Glow network you either have to write C++ or compile from an ONNX model: https://github.com/pytorch/glow/blob/master/docs/Example.md https://github.com/pytorch/glow/blob/master/docs/IR.md#the-lifetime-of-a-glow-instruction

Could we have high-level Python APIs that compile to the ONNX spec? Like a lazy array/NumPy-style library that builds up an ONNX graph as you interact with Python objects? Then compiles that graph with Glow and exposes a gumath kernel for that operation?
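The lazy-array idea could look something like the sketch below: Python arithmetic on symbolic tensors records an operator graph instead of computing, and that graph could later be lowered to ONNX nodes (e.g. via `onnx.helper.make_node`) and handed to Glow. Everything here is hypothetical, not an existing API:

```python
# Hypothetical "lazy tensor": operators build a graph rather than
# computing values. A real implementation would export this graph
# as an ONNX ModelProto; that step is omitted here.

class LazyTensor:
    _counter = 0

    def __init__(self, op, inputs=(), name=None):
        LazyTensor._counter += 1
        self.op = op                # e.g. "Input", "Add", "MatMul"
        self.inputs = list(inputs)  # upstream LazyTensor nodes
        self.name = name or f"t{LazyTensor._counter}"

    def __add__(self, other):
        return LazyTensor("Add", (self, other))

    def __matmul__(self, other):
        return LazyTensor("MatMul", (self, other))

    def topo(self):
        """Return the recorded graph in topological order."""
        seen, order = set(), []

        def visit(node):
            if node.name in seen:
                return
            seen.add(node.name)
            for i in node.inputs:
                visit(i)
            order.append(node)

        visit(self)
        return order

# Building an expression records the graph; nothing is computed yet.
x = LazyTensor("Input", name="x")
w = LazyTensor("Input", name="w")
y = x @ w + x
print([n.op for n in y.topo()])  # ['Input', 'Input', 'MatMul', 'Add']
```

Because ONNX graphs are just serialized dataflow DAGs, a walk like `topo()` is all the structure a backend needs to emit one node per recorded op.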

If XND/gumath is the interop layer, then it could be used to combine TVM/Glow/Numba models. The underlying hypothesis is that the memory formats and computation could be expressed using xnd/gumath. I think the best way to test this is to write code that attempts the interop and see where we get stuck.

@teoliphant (Member) commented May 22, 2018 via email
