The Glow compiler makes matrix math fast by caring about cache locality: https://gist.github.com/nadavrot/5b35d44e8ba3dd718e595e40184d03f0
We could use compiled Glow code as gumath kernels; Glow can compile ahead of time (AOT): https://github.com/pytorch/glow/blob/master/docs/AOT.md (this would be a lot like our story with Numba).
To create a Glow network you either have to write C++ or compile from an ONNX model: https://github.com/pytorch/glow/blob/master/docs/Example.md and https://github.com/pytorch/glow/blob/master/docs/IR.md#the-lifetime-of-a-glow-instruction
Could we have high-level Python APIs that compile to the ONNX spec? Like a lazy array/NumPy-style library that builds up an ONNX graph as you interact with Python objects, then compiles that with Glow and exposes a gumath kernel for the operation?
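The lazy-graph-building idea could look something like the following sketch. Everything here (`LazyArray`, `Node`, `topo_ops`) is hypothetical scaffolding, not an existing library; a real version would lower the recorded graph to ONNX protos rather than just collecting op names:

```python
# Hypothetical sketch: overloaded Python operators record ONNX-style ops
# into a graph instead of computing eagerly.
import itertools

_ids = itertools.count()

class Node:
    def __init__(self, op, inputs):
        self.op = op              # ONNX op name, e.g. "Add", "MatMul"
        self.inputs = inputs      # upstream Node objects
        self.name = f"{op.lower()}_{next(_ids)}"

class LazyArray:
    def __init__(self, node):
        self.node = node

    @staticmethod
    def input(name):
        n = Node("Input", [])
        n.name = name             # graph inputs keep their user-given name
        return LazyArray(n)

    def __add__(self, other):
        return LazyArray(Node("Add", [self.node, other.node]))

    def __matmul__(self, other):
        return LazyArray(Node("MatMul", [self.node, other.node]))

def topo_ops(node, seen=None):
    """Return the graph's op names in topological order (inputs first)."""
    if seen is None:
        seen = set()
    out = []
    for inp in node.inputs:
        if inp.name not in seen:
            out.extend(topo_ops(inp, seen))
    if node.name not in seen:
        seen.add(node.name)
        out.append(node.op)
    return out

# Building a graph is just ordinary Python arithmetic:
a = LazyArray.input("a")
b = LazyArray.input("b")
c = (a @ b) + a
print(topo_ops(c.node))  # -> ['Input', 'Input', 'MatMul', 'Add']
```

The topological walk is the point where a real implementation would emit `onnx.helper.make_node` calls and hand the result to Glow.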
If XND/gumath is the interop layer, then it could be used to combine TVM/Glow/Numba models. The underlying hypothesis is that the memory formats and computation could be expressed using xnd/gumath. I think the best way to answer this is to write code that attempts the interop and see where we stop.
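To make that hypothesis concrete, here is a minimal sketch of the dispatch shape it implies: if every backend registers kernels over one shared memory format (plain Python lists below, standing in for xnd containers), kernels from different compilers compose freely. The `numba_add`/`glow_mul` functions are stand-ins, not real Numba or Glow output:

```python
# Hypothetical kernel registry: op name -> (backend label, kernel function).
KERNELS = {}

def register(op, backend):
    """Decorator that records a kernel under an op name."""
    def deco(fn):
        KERNELS[op] = (backend, fn)
        return fn
    return deco

@register("add", backend="numba")   # pretend this came from Numba
def numba_add(x, y):
    return [a + b for a, b in zip(x, y)]

@register("mul", backend="glow")    # pretend this came from Glow AOT
def glow_mul(x, y):
    return [a * b for a, b in zip(x, y)]

def apply(op, *args):
    """Dispatch an op through the registry, whatever backend owns it."""
    backend, fn = KERNELS[op]
    return fn(*args)

# Kernels from "different compilers" compose over the shared format:
print(apply("mul", apply("add", [1, 2], [3, 4]), [10, 10]))  # -> [40, 60]
```

The open question the experiment would answer is whether real Glow/TVM/Numba kernels can all consume the same xnd memory layout without copies, which this toy registry simply assumes.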