implement Lambert W function #84
Conversation
This copies the implementation of the Lambert W function and related functions and data from jlapeyre/LambertW.jl to SpecialFunctions.jl.
Including the Lambert W function was first mentioned in #27.
Anyone home?
I've left some comments but I can't speak to the accuracy of the algorithms.
src/lambertw.jl
Outdated
#export lambertw, lambertwbp
using Compat

const euler =
You can just do `using Compat.MathConstants: e`, then use `e` instead of `euler`.
Thanks. I couldn't figure out the compat syntax from the docs, so I just did what I could.
src/lambertw.jl
Outdated
function __init__()
    omega_const_bf_[] =
        parse(BigFloat,"0.5671432904097838729999686622103555497538157871865125081351310792230457930866845666932194")
Why is it necessary to rerun this every time the module is loaded? Seems like this should just be
const omega_const_bf = parse(BigFloat, "...")
IIUC, some storage is malloc'd when the BigFloat is constructed and Julia keeps the pointer. The next time the module runs, the memory has been freed (of course), but the address Julia stored has not changed. BANG! If you don't precompile, then the heap allocation is done every time the module is loaded. But I want to precompile. AFAICT, this is the recommended way to a) allocate the storage every time the module is used and b) precompile the module.
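For readers following along, the idiom being described can be sketched as a self-contained module (a sketch of the pattern, not the PR's exact file; the module name is made up):

```julia
# Precompile-safe BigFloat constant: the Ref is created at precompile time,
# but the BigFloat value (whose storage is allocated by the GMP/MPFR runtime)
# is allocated anew each time the precompiled module is loaded.
module OmegaConstSketch

const omega_const_bf_ = Ref{BigFloat}()

function __init__()
    omega_const_bf_[] =
        parse(BigFloat, "0.5671432904097838729999686622103555497538157871865125081351310792230457930866845666932194")
end

end # module
```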
src/lambertw.jl
Outdated
two_t = convert(T,2)
lastx = x
lastdiff = zero(T)
for i in 1:100
Where does 100 come from? Seems like a rather arbitrary "magic number"; perhaps we could have that be configurable, e.g.
function _lambertw(z::T, x::T, maxiter::Int=100) where T<:Number
Two things. 1) Some of this code was taken from mpmath; maybe this number was too, although I don't think a reason was given there either. I should acknowledge the source somewhere; it has an MIT-compatible license. I will look at the Python source. 2) Yes, I think changing it is OK. How? Maybe pick a number far larger than the number of iterations required to converge in most or all cases, but small enough that the loop finishes in a short time. We could have it error or warn about lack of precision if the entire loop completes. I'm pretty sure I had something like this in there at one point.
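For reference, the Halley step under discussion, with the iteration cap exposed as a parameter as suggested (a standalone sketch with a hypothetical name, not the PR's exact code):

```julia
# Halley's method for solving w*exp(w) = z, starting from x, with a
# configurable iteration cap instead of the hard-coded 100.
function lambertw_halley(z::T, x::T, maxiter::Int = 100) where T <: Number
    lastx = x
    for _ in 1:maxiter
        ex = exp(x)
        xexz = x * ex - z
        x1 = x + 1
        x -= xexz / (ex * x1 - (x + 2) * xexz / (2 * x1))
        abs(x - lastx) <= 3 * eps(abs(lastx)) && return x
        lastx = x
    end
    return x  # a real implementation might warn about non-convergence here
end

w = lambertw_halley(1.0, 0.5)  # the omega constant: w * exp(w) ≈ 1
```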
src/lambertw.jl
Outdated
ex = exp(x)
xexz = x * ex - z
x1 = x + 1
x = x - xexz / (ex * x1 - (x + two_t) * xexz / (two_t * x1 ) )
`x -=` instead of `x = x -`.
Ok.
src/lambertw.jl
Outdated
# The fancy initial condition selection does not seem to help speed, but we leave it for now.
function lambertwk0(x::T)::T where T<:AbstractFloat
    x == Inf && return Inf
Perhaps `isfinite(x) || return x`? That will also handle the case where `x` is `NaN`, or if `T` is anything other than `Float64` (`Inf` is a `Float64` literal).
I don't recall my reasoning here. It is highly likely that I compared to python and Mathematica. But, maybe there is a Julia convention. Some clues for what to do:
julia> sin(NaN)
NaN
julia> sin(Inf)
ERROR: DomainError with Inf:
sin(x) is only defined for finite x.
But `sin` does not tend to a limit with increasing `x`, so `Inf` is in no sense the correct result. `lambertw` tends to infinity, so `Inf` in this sense is the correct answer.
In [5]: lambertw(inf) # python mpmath
Out[5]: mpf('+inf')
In [9]: lambertw(nan)
Out[9]: mpf('nan')
Another issue is:
julia> convert(BigFloat,Inf)
Inf
julia> typeof(convert(BigFloat,Inf))
BigFloat
Yikes, no visible difference. In any case, at a minimum, this should be `convert(T, Inf)`.
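The suggested guard, in a standalone sketch (hypothetical name; only the non-finite handling is shown):

```julia
# Sketch of the reviewer's suggestion: propagate non-finite inputs for any
# AbstractFloat type, without relying on the Float64 literal `Inf`.
function w_nonfinite_sketch(x::T) where T <: AbstractFloat
    isfinite(x) || return x  # Inf stays Inf, NaN stays NaN, in the input's precision
    return x                 # ... the real computation would go here
end

w_nonfinite_sketch(Inf)    # Inf
w_nonfinite_sketch(NaN32)  # NaN32, type preserved
```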
src/lambertw.jl
Outdated
# maybe compute higher precision. converges very quickly
function omega_const(::Type{BigFloat})
    @compat precision(BigFloat) <= 256 && return omega_const_bf_[]
Weird indentation here, and why is `@compat` necessary?
I think compatibility with v0.4. There is already a deprecation in v0.5:
julia> get_bigfloat_precision()
WARNING: get_bigfloat_precision() is deprecated
In other words, this can be removed. (Although I am interested in what you/the community think is the proper indentation, were `@compat` necessary here.)
The Julia standard for indentation is 4 spaces. That's what's used in the Julia repository as well as the majority of packages.
We don't need compatibility with 0.4 or even 0.5 here, since the package requires 0.6 at a minimum.
src/lambertw.jl
Outdated
const LAMWMU_FLOAT64 = lamwcoeff(Float64,500)

function horner(x, p::AbstractArray,n)
Why not use `Base.Math.@horner`?
I knew about `Base.Math.@horner` when I wrote this. So the answer is: I don't know. I'll look into it.
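For context on the trade-off: `Base.Math.@horner` expands at macro-expansion time, so it needs literal coefficients; with coefficients stored in an array one can either splice them into generated code (as the PR does) or, on Julia 1.4+, call `evalpoly` at run time. A minimal sketch:

```julia
using Base.Math: @horner

# Literal coefficients: @horner works directly and fuses into muladds.
p_literal(x) = @horner x 1.0 2.0 3.0  # 1 + 2x + 3x^2

# Runtime coefficients: @horner cannot see them, but evalpoly (Julia 1.4+) can.
const coeffs = [1.0, 2.0, 3.0]
p_runtime(x) = evalpoly(x, coeffs)    # same polynomial, array-driven

p_literal(2.0) == p_runtime(2.0) == 17.0  # 1 + 2*2 + 3*4
```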
src/lambertw.jl
Outdated
function lambertwbp(x::Number,k::Integer)
    k == 0 && return _lambertw0(x)
    k == -1 && return _lambertwm1(x)
    error("expansion about branch point only implemented for k = 0 and -1")
An error regarding the arguments should be `throw(ArgumentError("..."))` instead of `error("...")`.
Right. will do.
src/lambertw.jl
Outdated
#### Lambert W function ####

const LAMBERTW_USE_NAN = false
I must say, I don't really like this. I think we should pick a convention that matches the other functions here and stick to it.
Ok. IIRC, there was some debate around the time I wrote the module about how to handle this in general. I also wanted to see which I liked better. I think the issue is settled, so sure, I'll try to find the convention. It can only be changed by editing the source. In SpecialFunctions this certainly shouldn't be advertised as something to do.
@ararslan thanks for the review. I'll do these things.
How do you prefer updates to the PR? I could rebase and force push, but I think we lose the link between comments and code that way. @stevengj I suggest putting off reviewing until this round of changes is made, to avoid duplicating effort / wasting time.
You can just push more commits to your branch and they'll show up here. We can squash on merge.
The latest commit addresses all comments made by @ararslan. I did a bit of other refactoring and tidying.
I'd like to close this and issue a new PR after v1.0. In the upstream source
Just do a forced push to this branch to update the existing PR.
I hope this gets merged soon. :)
function __init__()
    # allocate storage for this BigFloat constant each time this module is loaded
    omega_const_bf_[] =
        parse(BigFloat,"0.5671432904097838729999686622103555497538157871865125081351310792230457930866845666932194")
Isn't `big"0.56..."` better than `parse(...)`?
If you check, for instance with `@less big"1.0"`, you see that the macro does more work.
You're right. `@code_llvm big"0.56..."` gives a LOT more code.
Also, `big"..."` doesn't respect the currently set precision:
julia> setprecision(1024) do
precision(big"1.0")
end
256
Is that the intended behavior of `big"..."`? Or should one raise an issue?
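The difference can be sketched as follows; this reflects the behavior reported in the thread, where `big"..."` constructs its value at macro-expansion time under the default 256-bit precision, while `parse` runs inside the `setprecision` block:

```julia
s = "0.5671432904097838729999686622103555497538157871865125081351310792230457930866845666932194"

setprecision(BigFloat, 1024) do
    # parse runs here, so it sees the 1024-bit precision,
    # whereas big"1.0" was converted under the 256-bit default
    # (per the REPL session quoted above).
    precision(parse(BigFloat, s)), precision(big"1.0")
end
```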
Bump 🙂 I'd also love to have this one merged
bump. need this one quite often!
bumping |
Thanks for this PR, it would be a very nice addition to SpecialFunctions.jl!
Also merging it here would ensure that the implementation would be maintained by the community.
two_t = convert(T,2)
lastx = x
lastdiff = zero(T)
converged::Bool = false
Suggested change:
- converged::Bool = false
+ converged = false
I guess the type assertion is not required here.
ex = exp(x)
xexz = x * ex - z
x1 = x + 1
x -= xexz / (ex * x1 - (x + two_t) * xexz / (two_t * x1 ) )
Suggested change:
- x -= xexz / (ex * x1 - (x + two_t) * xexz / (two_t * x1 ) )
+ x -= xexz / (ex * x1 - (x + two_t) * xexz / (two_t * x1))
Also, does it really help to define `two_t = convert(T, 2)`? Maybe the compiler can figure out the best way to calculate `x + 2` (like it's done for `x + 1` above)?
Suggested change:
- x -= xexz / (ex * x1 - (x + two_t) * xexz / (two_t * x1 ) )
+ x -= xexz / (ex * x1 - (x + 2) * xexz / (2 * x1))
x1 = x + 1
x -= xexz / (ex * x1 - (x + two_t) * xexz / (two_t * x1 ) )
xdiff = abs(lastx - x)
if xdiff <= 3*eps(abs(lastx)) || lastdiff == xdiff # second condition catches two-value cycle
Suggested change:
- if xdiff <= 3*eps(abs(lastx)) || lastdiff == xdiff # second condition catches two-value cycle
+ if xdiff <= 3*eps(lastx) || lastdiff == xdiff # second condition catches two-value cycle
lastx = x
lastdiff = xdiff
end
converged || warn("lambertw with z=", z, " did not converge in ", maxits, " iterations.")
Suggested change:
- converged || warn("lambertw with z=", z, " did not converge in ", maxits, " iterations.")
+ converged || warn("lambertw(", z, ") did not converge in ", maxits, " iterations.")
# Use Halley's root-finding method to find
# x = lambertw(z) with initial point x.
function _lambertw(z::T, x::T, maxits) where T <: Number
Suggested change:
- function _lambertw(z::T, x::T, maxits) where T <: Number
+ function _lambertw(z::T, x::T, maxits::Integer) where T <: Number
# solve for α₂. We get α₂ = 0.
# compute array of coefficients μ in (4.22).
# m[1] is μ₀
function lamwcoeff(T::DataType, n::Int)
Suggested change:
- function lamwcoeff(T::DataType, n::Int)
+ function lambertw_coefficients(T::DataType, n::Int)
return m
end

const LAMWMU_FLOAT64 = lamwcoeff(Float64,500)
Suggested change:
- const LAMWMU_FLOAT64 = lamwcoeff(Float64,500)
+ const LambertW_μ = lambertw_coefficients(Float64, 500)
x < 4e-11 && return wser3(p)
x < 1e-5 && return wser7(p)
x < 1e-3 && return wser12(p)
x < 1e-2 && return wser19(p)
x < 3e-2 && return wser26(p)
x < 5e-2 && return wser32(p)
x < 1e-1 && return wser50(p)
x < 1.9e-1 && return wser100(p)
x > 1/MathConstants.e && throw(DomainError(x)) # radius of convergence
return wser290(p) # good for x approx .32
Suggested change:
- x < 4e-11 && return wser3(p)
- x < 1e-5 && return wser7(p)
- x < 1e-3 && return wser12(p)
- x < 1e-2 && return wser19(p)
- x < 3e-2 && return wser26(p)
- x < 5e-2 && return wser32(p)
- x < 1e-1 && return wser50(p)
- x < 1.9e-1 && return wser100(p)
- x > 1/MathConstants.e && throw(DomainError(x)) # radius of convergence
- return wser290(p) # good for x approx .32
+ x < 4e-11 && return Base.Math.@horner p LambertW_μ[1:3]...
+ x < 1e-5 && return Base.Math.@horner p LambertW_μ[1:7]...
+ n = x < 1e-3 ? 12 :
+     x < 1e-2 ? 19 :
+     x < 3e-2 ? 26 :
+     x < 5e-2 ? 32 :
+     x < 1e-1 ? 50 :
+     x < 1.9e-1 ? 100 :
+     x <= 1/MathConstants.e ? 290 : # good for x approx .32
+     throw(DomainError(x)) # outside of convergence range
+ return evalpoly(p, view(LambertW_μ, 1:n))
# Converges to Float64 precision
# We could get finer tuning by separating k=0,-1 branches.
function wser(p,x)
Suggested change:
- function wser(p,x)
+ function lambertw_series(p, x)
# Base.Math.@horner requires literal coefficients
# But, we have an array `p` of computed coefficients
function horner(x, p::AbstractArray, n)
    n += 1
    ex = p[n]
    for i = n-1:-1:2
        ex = :(muladd(t, $ex, $(p[i])))
    end
    ex = :( t * $ex)
    return Expr(:block, :(t = $x), ex)
end

function mkwser(name, n)
    iex = horner(:x,LAMWMU_FLOAT64,n)
    return :(function ($name)(x) $iex end)
end

eval(mkwser(:wser3, 3))
eval(mkwser(:wser5, 5))
eval(mkwser(:wser7, 7))
eval(mkwser(:wser12, 12))
eval(mkwser(:wser19, 19))
eval(mkwser(:wser26, 26))
eval(mkwser(:wser32, 32))
eval(mkwser(:wser50, 50))
eval(mkwser(:wser100, 100))
eval(mkwser(:wser290, 290))
I think it's possible to use `@horner`/`evalpoly()` directly, so these functions are not needed.
Suggested change (removing the block):
- # Base.Math.@horner requires literal coefficients
- # But, we have an array `p` of computed coefficients
- function horner(x, p::AbstractArray, n)
-     n += 1
-     ex = p[n]
-     for i = n-1:-1:2
-         ex = :(muladd(t, $ex, $(p[i])))
-     end
-     ex = :( t * $ex)
-     return Expr(:block, :(t = $x), ex)
- end
- function mkwser(name, n)
-     iex = horner(:x,LAMWMU_FLOAT64,n)
-     return :(function ($name)(x) $iex end)
- end
- eval(mkwser(:wser3, 3))
- eval(mkwser(:wser5, 5))
- eval(mkwser(:wser7, 7))
- eval(mkwser(:wser12, 12))
- eval(mkwser(:wser19, 19))
- eval(mkwser(:wser26, 26))
- eval(mkwser(:wser32, 32))
- eval(mkwser(:wser50, 50))
- eval(mkwser(:wser100, 100))
- eval(mkwser(:wser290, 290))
### omega constant ###

const omega_const_ = 0.567143290409783872999968662210355
# The BigFloat `omega_const_bf_` is set via a literal in the function __init__ to prevent a segfault
I wonder if the issue (segfault) that required `omega_const_bf` initialization in `__init__()` is resolved, and it could be declared here:
Suggested change:
- ### omega constant ###
- const omega_const_ = 0.567143290409783872999968662210355
- # The BigFloat `omega_const_bf_` is set via a literal in the function __init__ to prevent a segfault
+ ### Omega constant: Ω exp(Ω) = 1 (Ω = lambertw(1)) ###
+ const Omega_constant = 0.567143290409783872999968662210355
+ const Omega_constant_BigFloat = BigFloat("0.5671432904097838729999686622103555497538157871865125081351310792230457930866845666932194")
Or maybe the Omega constant could be moved to `IrrationalConstants.jl`?
The "workaround" for the segfault was the standard way to deal with this problem. AFAIK, it has not changed. This is work that has to be done after the dynamic library (GMP, or something like that in this case) is linked.
I see. A potential alternative that does not involve `__init__()` could be:
const Omega_BigFloat256 = Ref{BigFloat}()
@irrational Omega 0.567143290409783872999968662210355 Omega_constant()
# compute BigFloat Omega constant at arbitrary precision
function Omega_constant()
    isassigned(Omega_BigFloat256) || (Omega_BigFloat256[] = BigFloat("0.5671432904097838729999686622103555497538157871865125081351310792230457930866845666932194"))
    oc = Omega_BigFloat256[]
    precision(BigFloat) <= 256 && return oc
    myeps = eps(BigFloat)
    for i in 1:100
        nextoc = (1 + oc) / (1 + exp(oc))
        abs(oc - nextoc) <= myeps && break
        oc = nextoc
    end
    return oc
end
@alyst, this PR is out of date.
Let me know how you want to do it.
Seems like lots of people want this, but maybe it is easiest to just refresh the package that @jlapeyre has. Is it registered? Should we move it to JuliaMath?
It's not clear to me the advantage of moving @ViralBShah : Yes, this package is registered. I very much support moving this to JuliaMath. It seems like a bit of work to get this moved to SpecialFunctions. One issue is the big, flat namespace in SpecialFunctions. What to do with I'm not completely against moving
How about using
Another option is
My experience so far has been the opposite: large packages like SpecialFunctions.jl and Distributions.jl struggle to keep maintainers. Smaller packages tend to have an easier time keeping maintainers (I presume due to a lower maintenance burden, and the code author feeling an ownership of the package).
However, lots of small packages are a burden to the user.
Perhaps, as a middle ground solution, the functions can be reexported then?
I agree with @simonbyrne, I don't think it's necessarily better maintained if it is part of SpecialFunctions. IMO separate packages are fine in this case. In my experience they are annoying mainly if one always has to load a specific set of packages to achieve some basic functionality, but I don't think this is the case here. One can use LambertW just fine without loading SpecialFunctions, and similarly SpecialFunctions without LambertW. One problem with reexporting LambertW is that any breaking release of LambertW requires a breaking release of SpecialFunctions. Due to the large number of dependencies (1625 packages), it takes quite a while until a breaking release of SpecialFunctions is propagated through the Julia ecosystem, and it requires work and time from many downstream developers, whereas a breaking release of LambertW (if SpecialFunctions does not reexport it) would have less severe effects since it only has a few (4 to be precise) dependents.
The problem is that Omega should be of type
"Feeling an ownership" is a double-edged sword. In my experience the code quality of community-maintained packages is better -- more eyes to look at the code, more chances to keep up with the evolving Julia standards, a more predictable PR review and acceptance policy.
It seems that just transferring LambertW.jl to JuliaMath would also achieve this? It could then be seen as a community package, no longer with a sole owner. Also more people will have commit access, code reviews, etc. ... Looked at it this way, it would seem that transferring LambertW.jl to JuliaMath is the happy solution for everybody.
Authorship aside, it's yet another package to make sure that e.g.
I thumbs-upped the comments supporting not putting LambertW in SpecialFunctions. I propose moving it to JuliaMath. This does not preclude, or make it any more difficult, to later move it to SpecialFunctions. I know there are arguments both ways. The problem is you can't put yourself in the shoes of every user and know what their workflow is like.
@simonbyrne Can I ask why this was closed? Is it still the plan to transfer lambertw to this package?
I assume it was closed in favor of #371.
That, and it seems there was a consensus in keeping it as a separate package?
I didn't interpret there to be a strong consensus one way or another |