
Auto-fuse operations (alpha X + Y, alpha A * B + beta C ...) #31

Open
mratsim opened this issue Sep 10, 2017 · 3 comments

Comments


mratsim commented Sep 10, 2017

Arraymancer can leverage the Nim compiler and term-rewriting macros to automatically detect operations that can be fused.

This is probably similar to what TensorFlow is doing with its XLA compiler.
See: https://developers.googleblog.com/2017/03/xla-tensorflow-compiled.html
and the overview.

A term-rewriting example is already included, fusing the toTensor + reshape operations:

template rewriteToTensorReshape*{reshape(toTensor(oa, dummy_bugfix), shape)}(
    oa: openarray,
    shape: varargs[int],
    dummy_bugfix: static[int]): auto =
  ## Fuse ``sequence.toTensor(Backend).reshape(new_shape)`` into a single operation.
  toTensorReshape(oa, shape, dummy_bugfix)
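The same mechanism could cover the operations in the issue title. A minimal, self-contained sketch (the Tensor type, operators, and fusedAxpy below are hypothetical stand-ins, not Arraymancer's real API) of a term-rewriting template fusing alpha * X + Y into a single pass:

```nim
# Sketch only: fuse `alpha * X + Y` into one kernel call, avoiding the
# temporary tensor allocated by `alpha * X`. Names are illustrative.
type Tensor = object
  data: seq[float]

proc `*`(alpha: float, x: Tensor): Tensor =
  result.data = newSeq[float](x.data.len)
  for i in 0 ..< x.data.len:
    result.data[i] = alpha * x.data[i]

proc `+`(a, b: Tensor): Tensor =
  result.data = newSeq[float](a.data.len)
  for i in 0 ..< a.data.len:
    result.data[i] = a.data[i] + b.data[i]

proc fusedAxpy(alpha: float, x, y: Tensor): Tensor =
  ## Single loop, no intermediate allocation for `alpha * x`.
  result.data = newSeq[float](x.data.len)
  for i in 0 ..< x.data.len:
    result.data[i] = alpha * x.data[i] + y.data[i]

template rewriteAxpy*{alpha * X + Y}(alpha: float, X, Y: Tensor): Tensor =
  fusedAxpy(alpha, X, Y)
```

With this pattern in scope, an expression like 2.0 * a + b would be rewritten by the compiler into fusedAxpy(2.0, a, b).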


mratsim commented Sep 17, 2017

There is a Julia library called Devectorize that can perform more complex fusion operations:

@devec r = a + b + c
@devec r = sin(a) + exp(a + 1.0) .* log(c)

(.* being the element-wise product in Julia.)


mratsim commented Sep 19, 2017

One option is to implement an elementwise template that would be used like this:

elementwise:
  let out_of_place = alpha * A + exp(B) + beta * sin(C)
  in_place += sin(A) + exp(A + 1) + C

Operations and scalars would be applied element-wise to Tensors automatically.
This would allow a nicer syntax than explicit broadcasting for functions like sigmoid.

Tentative broadcasting version:

proc sigmoid[T](x: Tensor[T]): Tensor[T] =
  result = 1.0f.bc |/| (1.0f.bc + exp(-x))

Tentative element-wise version:

proc sigmoid[T](x: Tensor[T]): Tensor[T] =
  result = newTensor(x.shape)
  elementwise:
    result = 1.0 / (1.0 + exp(-x))
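For reference, a sketch of what such an elementwise block could expand to: a single fused loop over the elements, with no intermediate tensor per sub-expression. The flat-seq Tensor type here is a hypothetical stand-in; a real macro would walk the AST and rewrite tensor symbols into per-element accesses.

```nim
# Hypothetical expansion of the elementwise sigmoid above.
import math

type Tensor[T] = object
  data: seq[T]

proc newTensor[T](len: int): Tensor[T] =
  result.data = newSeq[T](len)

proc sigmoidFused(x: Tensor[float]): Tensor[float] =
  result = newTensor[float](x.data.len)
  for i in 0 ..< x.data.len:  # one pass, no temporaries for exp(-x) etc.
    result.data[i] = 1.0 / (1.0 + exp(-x.data[i]))
```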


mratsim commented Nov 9, 2017

Implemented in https://github.com/mratsim/Arraymancer/blob/89a72b348f09b992006ebdb35d991ba80bd4c671/src/tensor/optim_ops_fusion.nim

It needs tests similar to #152 and more fusion rules for the minus operator.
