This repository has been archived by the owner on Mar 12, 2021. It is now read-only.

Wrapper functions for NNlib #615

Merged
maleadt merged 3 commits into JuliaGPU:master from matsueushi:nnlib_wrapper on Apr 13, 2020

Conversation

matsueushi
Contributor

Related to #614. I found that some of the activation functions recently added to NNlib are incompatible with GPU, so I defined @cufunc wrappers for them. I also updated the existing definitions to make them consistent with NNlib (https://github.com/FluxML/NNlib.jl/blob/master/src/activation.jl).

I skipped rrelu because rand is used within its definition (https://github.com/FluxML/NNlib.jl/blob/ac5101b2f4b4afc8cc01968e5c8dadaa0eaa862a/src/activation.jl#L92-L95) and I couldn't figure out how that could be handled.
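For illustration, the wrappers take roughly the following shape; the function choices and bodies below are simplified assumptions for the sake of example, not the final contents of nnlib.jl:

```julia
# Illustrative sketch only, assuming the @cufunc macro already defined in this
# repository: it re-emits a scalar definition so that its math calls resolve to
# CUDAnative's device functions when broadcast over a CuArray.
@cufunc softplus(x::Real) = log1p(exp(x))        # simplified NNlib-style body
@cufunc mish(x::Real) = x * tanh(softplus(x))
```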

@maleadt
Member

maleadt commented Mar 9, 2020

Thanks!
bors try

@maleadt maleadt requested a review from MikeInnes March 9, 2020 16:26
bors bot added a commit that referenced this pull request Mar 9, 2020
@bors
Contributor

bors bot commented Mar 9, 2020

try

Build succeeded

@CarloLucibello

Are all of those definitions needed here? E.g., won't swish(x) = x * σ(x) work out of the box? (Assuming it is defined like that in NNlib; if not, it should be.)

@matsueushi
Contributor Author

Yes, if you remove the @cufunc wrapper for swish, you will see warnings:

┌ Warning: calls to Base intrinsics might be GPU incompatible
│   exception =
│    You called exp(x::T) where T<:Union{Float32, Float64} in Base.Math at special/exp.jl:75, maybe you intended to call exp(x::Float64) in CUDAnative at /root/.julia/packages/CUDAnative/hwB4d/src/device/cuda/math.jl:100 instead?
│    Stacktrace:
│     [1] exp at special/exp.jl:75
│     [2] #28 at /root/.julia/packages/GPUArrays/GLRnH/src/host/broadcast.jl:64
└ @ CUDAnative ~/.julia/packages/CUDAnative/hwB4d/src/compiler/irgen.jl:113
┌ Warning: calls to Base intrinsics might be GPU incompatible
│   exception =
│    You called exp(x::T) where T<:Union{Float32, Float64} in Base.Math at special/exp.jl:75, maybe you intended to call exp(x::Float64) in CUDAnative at /root/.julia/packages/CUDAnative/hwB4d/src/device/cuda/math.jl:100 instead?
│    Stacktrace:
│     [1] exp at special/exp.jl:75
│     [2] #28 at /root/.julia/packages/GPUArrays/GLRnH/src/host/broadcast.jl:64
└ @ CUDAnative ~/.julia/packages/CUDAnative/hwB4d/src/compiler/irgen.jl:113

I first added tests for all the activation functions in NNlib, then checked GPU compatibility by running them against a blank "nnlib.jl", so no unnecessary wrappers are included. My guess is that whenever exp or tanh appears in a definition, we need to define a wrapper for it.
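For reference, a minimal way to reproduce the warnings above (package names assumed from this era of the stack):

```julia
# Hypothetical reproduction sketch: broadcasting an unwrapped NNlib activation
# over a CuArray compiles Base.exp into the kernel, which triggers the
# "calls to Base intrinsics" warning quoted above.
using CuArrays, NNlib

x = CuArrays.rand(Float32, 1024)
y = swish.(x)   # swish(x) = x * σ(x); σ calls exp
```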

@maleadt
Member

maleadt commented Mar 14, 2020

These are just warnings; if the existing Base implementations work, we can whitelist them in CUDAnative.jl instead of adding an implementation here: https://github.com/JuliaGPU/CUDAnative.jl/blob/75313e555ef7383f5da4cd71840eed92f76ccc9a/src/compiler/irgen.jl#L48

@matsueushi
Contributor Author

Then I will remove the wrappers for σ, elu, swish, selu and celu. The Base implementation of exp seems to work fine, but if log1p or tanh is used, a wrapper is still needed.
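A sketch of what that split could look like, with simplified bodies purely for illustration (not the final file contents):

```julia
# Assumed examples: definitions built only on exp can fall back to NNlib once
# exp is whitelisted in CUDAnative, while log1p- and tanh-based definitions
# keep a @cufunc method for now.
@cufunc logσ(x::Real) = -softplus(-x)        # softplus uses log1p
@cufunc tanhshrink(x::Real) = x - tanh(x)    # tanh still needs a CUDA method
```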

@maleadt
Member

maleadt commented Apr 13, 2020

bors r+

bors bot added a commit that referenced this pull request Apr 13, 2020
615: Wrapper functions for NNlib r=maleadt a=matsueushi

Co-authored-by: matsueushi <matsueushi@gmail.com>
@maleadt
Member

maleadt commented Apr 13, 2020

@matsueushi so only exp needs whitelisting in CUDAnative?

@bors
Contributor

bors bot commented Apr 13, 2020

Build failed

@maleadt maleadt merged commit aeba8c9 into JuliaGPU:master Apr 13, 2020
@matsueushi
Contributor Author

@maleadt Yes, other functions showed errors, not warnings.

@matsueushi matsueushi deleted the nnlib_wrapper branch April 13, 2020 23:17
maleadt added a commit to JuliaGPU/GPUCompiler.jl that referenced this pull request Apr 15, 2020