
implement Lambert W function #84

Closed
wants to merge 2 commits

Conversation

jlapeyre

This copies the implementation of the Lambert W function and
related functions and data from jlapeyre/LambertW.jl to
SpecialFunctions.jl

@jlapeyre
Author

Including the Lambert W function was first mentioned in #27

@jlapeyre
Author

Anyone home?

Member

@ararslan ararslan left a comment

I've left some comments but I can't speak to the accuracy of the algorithms.

src/lambertw.jl Outdated
#export lambertw, lambertwbp
using Compat

const euler =
Member

You can just do

using Compat.MathConstants: e

then use e instead of euler.

Author

Thanks. I couldn't figure out the compat syntax from the docs, so I just did what I could.

src/lambertw.jl Outdated

function __init__()
omega_const_bf_[] =
parse(BigFloat,"0.5671432904097838729999686622103555497538157871865125081351310792230457930866845666932194")
Member

Why is it necessary to rerun this every time the module is loaded? Seems like this should just be

const omega_const_bf = parse(BigFloat, "...")

Author

@jlapeyre jlapeyre Apr 18, 2018

IIUC, some storage is malloced when the big float is constructed and Julia keeps the pointer. The next time the module runs, the memory has been freed (of course), but the address Julia stored has not changed. BANG!... If you don't precompile, then the heap allocation is done every time the module is loaded. But, I want to precompile. AFAICT, this is the recommended way to a) allocate the storage every time the module is used. b) precompile the module.
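As context for the comment above, a minimal sketch of the pattern being discussed (the constant and name are taken from this PR; the surrounding module is implied):

```julia
# The Ref is created empty at precompile time; the BigFloat itself is
# constructed in __init__, so its MPFR-backed storage is allocated fresh
# each time the precompiled module is loaded.
const omega_const_bf_ = Ref{BigFloat}()

function __init__()
    omega_const_bf_[] = parse(BigFloat,
        "0.5671432904097838729999686622103555497538157871865125081351310792230457930866845666932194")
end
```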

src/lambertw.jl Outdated
two_t = convert(T,2)
lastx = x
lastdiff = zero(T)
for i in 1:100
Member

Where does 100 come from? Seems like a rather arbitrary "magic number"; perhaps we could have that be configurable, e.g.

function _lambertw(z::T, x::T, maxiter::Int=100) where T<:Number

Author

Two things. 1) Some of this code was taken from mpmath. Maybe this number was too, although I don't think a reason was given there either. I should acknowledge the source somewhere; it has an MIT-compatible license. I will look at the Python source. 2) Yes, I think changing it is fine. How? Maybe pick a number far larger than the number of iterations required to converge in most, or all, cases, but small enough that completing the loop takes little time. We could have it error or warn about lack of precision if the entire loop completes without converging. I'm pretty sure I had something like this in there at one point.

src/lambertw.jl Outdated
ex = exp(x)
xexz = x * ex - z
x1 = x + 1
x = x - xexz / (ex * x1 - (x + two_t) * xexz / (two_t * x1 ) )
Member

x -= instead of x = x -

Author

Ok.
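For reference, the update line discussed above is Halley's iteration applied to f(x) = x eˣ − z, whose root is x = W(z); with f′(x) = eˣ(x+1) and f″(x) = eˣ(x+2), the step simplifies to exactly the expression in the code:

```latex
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n) - \dfrac{f(x_n)\, f''(x_n)}{2 f'(x_n)}}
        = x_n - \frac{x_n e^{x_n} - z}{e^{x_n}(x_n + 1) - \dfrac{(x_n + 2)\,(x_n e^{x_n} - z)}{2 (x_n + 1)}}
```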

src/lambertw.jl Outdated

# The fancy initial condition selection does not seem to help speed, but we leave it for now.
function lambertwk0(x::T)::T where T<:AbstractFloat
x == Inf && return Inf
Member

Perhaps isfinite(x) || return x? That will also handle the case where x is NaN, or if T is anything other than Float64 (Inf is a Float64 literal).

Author

I don't recall my reasoning here. It is highly likely that I compared to python and Mathematica. But, maybe there is a Julia convention. Some clues for what to do:

julia> sin(NaN)
NaN

julia> sin(Inf)
ERROR: DomainError with Inf:
sin(x) is only defined for finite x.

But sin does not tend to a limit with increasing x. So Inf is in no sense the correct result. lambertw tends to infinity, so Inf in this sense is the correct answer.

In [5]: lambertw(inf)  # python mpmath
Out[5]: mpf('+inf')
In [9]: lambertw(nan)
Out[9]: mpf('nan')

Another issue is:

julia> convert(BigFloat,Inf)
Inf

julia> typeof(convert(BigFloat,Inf))
BigFloat

Yikes, no visible difference. In any case, at a minimum, this should be convert(T,Inf).
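A sketch of the reviewer's suggestion (function body elided): returning the argument itself handles Inf and NaN in the input's own type, with no Float64 literal:

```julia
function lambertwk0(x::T)::T where T<:AbstractFloat
    # isfinite is false for Inf, -Inf, and NaN; returning x propagates
    # the non-finite value as a T instead of a Float64 literal.
    isfinite(x) || return x
    # ... iteration for the k = 0 branch goes here ...
end
```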

src/lambertw.jl Outdated

# maybe compute higher precision. converges very quickly
function omega_const(::Type{BigFloat})
@compat precision(BigFloat) <= 256 && return omega_const_bf_[]
Member

Weird indentation here, and why is @compat necessary?

Author

I think compatibility with v0.4. There is already a deprecation in v0.5:

julia> get_bigfloat_precision()
WARNING: get_bigfloat_precision() is deprecated

In other words, this can be removed. (Although I am interested in what you/the community consider the proper indentation, were @compat necessary here.)

Member

The Julia standard for indentation is 4 spaces. That's what's used in the Julia repository as well as the majority of packages.

We don't need compatibility with 0.4 or even 0.5 here, since the package requires 0.6 at a minimum.

src/lambertw.jl Outdated

const LAMWMU_FLOAT64 = lamwcoeff(Float64,500)

function horner(x, p::AbstractArray,n)
Member

Why not use Base.Math.@horner?

Author

I knew about Base.Math.@horner when I wrote this. So, the answer is I don't know. I'll look into it.

src/lambertw.jl Outdated
function lambertwbp(x::Number,k::Integer)
k == 0 && return _lambertw0(x)
k == -1 && return _lambertwm1(x)
error("expansion about branch point only implemented for k = 0 and -1")
Member

An error regarding the arguments should be throw(ArgumentError("...")) instead of error("...").

Author

Right. will do.

src/lambertw.jl Outdated

#### Lambert W function ####

const LAMBERTW_USE_NAN = false
Member

I must say, I don't really like this. I think we should pick a convention that matches the other functions here and stick to it.

Author

Ok. IIRC, there was some debate around the time I wrote the module about how to handle this in general. I also wanted to see which I liked better. I think the issue is settled, so sure, I'll try to find the convention. It can only be changed by editing the source. In SpecialFunctions this certainly shouldn't be advertised as something to do.

@ararslan ararslan requested a review from stevengj April 18, 2018 15:24
@jlapeyre
Author

@ararslan thanks for the review. I'll do these things.

@jlapeyre
Author

How do you prefer updates to the PR? I could rebase and force push, but I think we lose the link between comments and code that way.

@stevengj I suggest putting off reviewing until this round of changes is made, to avoid duplicating effort / wasting time.

@ararslan
Member

You can just push more commits to your branch and they'll show up here. We can squash on merge.

@jlapeyre
Author

The latest commit addresses all comments made by @ararslan. I did a bit of other refactoring and tidying.

@jlapeyre
Author

jlapeyre commented Jul 6, 2018

I'd like to close this and issue a new PR after v1.0. In the upstream source LambertW.jl, I dropped v0.6 support, and have improved clarity and compliance with style standards. The upstream source already incorporates changes made during the code review here. (I have not yet pushed all these changes to github.)

@stevengj
Member

Just do a forced push to this branch to update the existing PR.

@cossio
Contributor

cossio commented Sep 26, 2018

I hope this gets merged soon. :)

function __init__()
# allocate storage for this BigFloat constant each time this module is loaded
omega_const_bf_[] =
parse(BigFloat,"0.5671432904097838729999686622103555497538157871865125081351310792230457930866845666932194")
Contributor

Isn't big"0.56..." better than parse(...)?

Author

If you check, for instance with @less big"1.0", you see that the macro does more work.

Contributor

You're right. @code_llvm big"0.56..." gives a LOT more code.

Member

@giordano giordano May 20, 2019

Also, big"..." doesn't respect current set precision:

julia> setprecision(1024) do
           precision(big"1.0")
       end
256

Contributor

Is that intended behavior of big"..."? Or should one raise an issue?

@giordano
Member

Bump 🙂 I'd also love to have this one merged

@chriscoey

bump. need this one quite often!

@longemen3000

bumping

@alyst alyst left a comment

Thanks for this PR, it would be a very nice addition to SpecialFunctions.jl!
Also merging it here would ensure that the implementation would be maintained by the community.

two_t = convert(T,2)
lastx = x
lastdiff = zero(T)
converged::Bool = false

Suggested change
converged::Bool = false
converged = false

I guess type-assert is not required here

ex = exp(x)
xexz = x * ex - z
x1 = x + 1
x -= xexz / (ex * x1 - (x + two_t) * xexz / (two_t * x1 ) )

Suggested change
x -= xexz / (ex * x1 - (x + two_t) * xexz / (two_t * x1 ) )
x -= xexz / (ex * x1 - (x + two_t) * xexz / (two_t * x1))

Also, does it really help to define two_t = convert(T, 2)? Maybe the compiler can figure out the best way to calculate x + 2 (like it's done for x + 1 above)?

Suggested change
x -= xexz / (ex * x1 - (x + two_t) * xexz / (two_t * x1 ) )
x -= xexz / (ex * x1 - (x + 2) * xexz / (2 * x1))

x1 = x + 1
x -= xexz / (ex * x1 - (x + two_t) * xexz / (two_t * x1 ) )
xdiff = abs(lastx - x)
if xdiff <= 3*eps(abs(lastx)) || lastdiff == xdiff # second condition catches two-value cycle

Suggested change
if xdiff <= 3*eps(abs(lastx)) || lastdiff == xdiff # second condition catches two-value cycle
if xdiff <= 3*eps(lastx) || lastdiff == xdiff # second condition catches two-value cycle

lastx = x
lastdiff = xdiff
end
converged || warn("lambertw with z=", z, " did not converge in ", maxits, " iterations.")

Suggested change
converged || warn("lambertw with z=", z, " did not converge in ", maxits, " iterations.")
converged || warn("lambertw(", z, ") did not converge in ", maxits, " iterations.")


# Use Halley's root-finding method to find
# x = lambertw(z) with initial point x.
function _lambertw(z::T, x::T, maxits) where T <: Number

Suggested change
function _lambertw(z::T, x::T, maxits) where T <: Number
function _lambertw(z::T, x::T, maxits::Integer) where T <: Number

# solve for α₂. We get α₂ = 0.
# compute array of coefficients μ in (4.22).
# m[1] is μ₀
function lamwcoeff(T::DataType, n::Int)

Suggested change
function lamwcoeff(T::DataType, n::Int)
function lambertw_coefficients(T::DataType, n::Int)

return m
end

const LAMWMU_FLOAT64 = lamwcoeff(Float64,500)

Suggested change
const LAMWMU_FLOAT64 = lamwcoeff(Float64,500)
const LambertW_μ = lambertw_coefficients(Float64, 500)

Comment on lines +289 to +298
x < 4e-11 && return wser3(p)
x < 1e-5 && return wser7(p)
x < 1e-3 && return wser12(p)
x < 1e-2 && return wser19(p)
x < 3e-2 && return wser26(p)
x < 5e-2 && return wser32(p)
x < 1e-1 && return wser50(p)
x < 1.9e-1 && return wser100(p)
x > 1/MathConstants.e && throw(DomainError(x)) # radius of convergence
return wser290(p) # good for x approx .32

Suggested change
x < 4e-11 && return wser3(p)
x < 1e-5 && return wser7(p)
x < 1e-3 && return wser12(p)
x < 1e-2 && return wser19(p)
x < 3e-2 && return wser26(p)
x < 5e-2 && return wser32(p)
x < 1e-1 && return wser50(p)
x < 1.9e-1 && return wser100(p)
x > 1/MathConstants.e && throw(DomainError(x)) # radius of convergence
return wser290(p) # good for x approx .32
x < 4e-11 && return Base.Math.@horner p LambertW_μ[1:3]...
x < 1e-5 && return Base.Math.@horner p LambertW_μ[1:7]...
n = x < 1e-3   ? 12 :
    x < 1e-2   ? 19 :
    x < 3e-2   ? 26 :
    x < 5e-2   ? 32 :
    x < 1e-1   ? 50 :
    x < 1.9e-1 ? 100 :
    x <= 1/MathConstants.e ? 290 : # good for x approx .32
    throw(DomainError(x)) # outside of convergence range
return evalpoly(p, view(LambertW_μ, 1:n))


# Converges to Float64 precision
# We could get finer tuning by separating k=0,-1 branches.
function wser(p,x)

Suggested change
function wser(p,x)
function lambertw_series(p, x)

Comment on lines +258 to +284
# Base.Math.@horner requires literal coefficients
# But, we have an array `p` of computed coefficients
function horner(x, p::AbstractArray, n)
n += 1
ex = p[n]
for i = n-1:-1:2
ex = :(muladd(t, $ex, $(p[i])))
end
ex = :( t * $ex)
return Expr(:block, :(t = $x), ex)
end

function mkwser(name, n)
iex = horner(:x,LAMWMU_FLOAT64,n)
return :(function ($name)(x) $iex end)
end

eval(mkwser(:wser3, 3))
eval(mkwser(:wser5, 5))
eval(mkwser(:wser7, 7))
eval(mkwser(:wser12, 12))
eval(mkwser(:wser19, 19))
eval(mkwser(:wser26, 26))
eval(mkwser(:wser32, 32))
eval(mkwser(:wser50, 50))
eval(mkwser(:wser100, 100))
eval(mkwser(:wser290, 290))

I think it's possible to use @horner/evalpoly() directly, so these functions are not needed.

Suggested change
# Base.Math.@horner requires literal coefficients
# But, we have an array `p` of computed coefficients
function horner(x, p::AbstractArray, n)
n += 1
ex = p[n]
for i = n-1:-1:2
ex = :(muladd(t, $ex, $(p[i])))
end
ex = :( t * $ex)
return Expr(:block, :(t = $x), ex)
end
function mkwser(name, n)
iex = horner(:x,LAMWMU_FLOAT64,n)
return :(function ($name)(x) $iex end)
end
eval(mkwser(:wser3, 3))
eval(mkwser(:wser5, 5))
eval(mkwser(:wser7, 7))
eval(mkwser(:wser12, 12))
eval(mkwser(:wser19, 19))
eval(mkwser(:wser26, 26))
eval(mkwser(:wser32, 32))
eval(mkwser(:wser50, 50))
eval(mkwser(:wser100, 100))
eval(mkwser(:wser290, 290))
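For context on the suggestion above: Base.Math.@horner splices its coefficients in at macro-expansion time, which is why the generated wserN functions exist, whereas evalpoly (Julia ≥ 1.4) accepts a runtime coefficient container. A minimal illustration (coefficient values are made up, not the actual μ series):

```julia
coeffs = [-1.0, 1.0, -1/3]  # illustrative values, not the real μ coefficients
# evalpoly computes coeffs[1] + coeffs[2]*p + coeffs[3]*p^2 via Horner's rule
w = evalpoly(0.01, coeffs)
# a view avoids copying when truncating a long coefficient vector
w2 = evalpoly(0.01, view(coeffs, 1:2))
```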

Comment on lines +153 to +156
### omega constant ###

const omega_const_ = 0.567143290409783872999968662210355
# The BigFloat `omega_const_bf_` is set via a literal in the function __init__ to prevent a segfault

I wonder if the issue (segfault) that required omega_const_bf initialization in __init__() is resolved, and it could be declared here:

Suggested change
### omega constant ###
const omega_const_ = 0.567143290409783872999968662210355
# The BigFloat `omega_const_bf_` is set via a literal in the function __init__ to prevent a segfault
### Omega constant: Ω exp(Ω) = 1 (Ω = lambertw(1)) ###
const Omega_constant = 0.567143290409783872999968662210355
const Omega_constant_BigFloat = BigFloat("0.5671432904097838729999686622103555497538157871865125081351310792230457930866845666932194")

Or maybe Omega constant could be moved to IrrationalConstants.jl?

Author

The "workaround" for the segfault was the standard way to deal with this problem. AFAIK, it has not changed. This is work that has to be done after the dynamic library (GMP, or something like that in this case) is linked.


I see. The potential alternative that does not implicate __init__() could be

const Omega_BigFloat256 = Ref{BigFloat}()

@irrational Omega 0.567143290409783872999968662210355 Omega_constant()

# compute BigFloat Omega constant at arbitrary precision
function Omega_constant()
    isassigned(Omega_BigFloat256) || (Omega_BigFloat256[] = BigFloat("0.5671432904097838729999686622103555497538157871865125081351310792230457930866845666932194"))
    oc = Omega_BigFloat256[]
    precision(BigFloat) <= 256 && return oc
    myeps = eps(BigFloat)
    for i in 1:100
        nextoc = (1 + oc) / (1 + exp(oc))
        abs(oc - nextoc) <= myeps && break
        oc = nextoc
    end
    return oc
end
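For reference, the loop in the sketch above is Newton's method applied to f(Ω) = Ω − e^(−Ω), which is equivalent to the defining relation Ω e^Ω = 1:

```latex
\Omega_{n+1} = \Omega_n - \frac{\Omega_n - e^{-\Omega_n}}{1 + e^{-\Omega_n}}
             = \frac{1 + \Omega_n}{1 + e^{\Omega_n}}
```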

@jlapeyre
Author

@alyst, this PR is out of date. LambertW has changed in the meantime. I (or we) need to merge the changes into this PR branch.

@alyst

alyst commented Nov 15, 2021

@alyst, this PR is out of date. LambertW has changed in the meantime. I (or we) need to merge the changes into this PR branch.

Let me know how you want to do it.

@ViralBShah
Member

ViralBShah commented Dec 29, 2021

Seems like lots of people want this, but maybe it is easiest to just refresh the package that @jlapeyre has. Is it registered? Should we move it to JuliaMath?

jlapeyre/LambertW.jl

@jlapeyre
Author

It's not clear to me the advantage of moving lambertw to SpecialFunctions. A few people want it moved, but I didn't see an argument. Maybe they use it frequently and already import SpecialFunctions a lot, and don't want to be required to do import LambertW.

@ViralBShah : Yes, this package is registered. I very much support moving this to JuliaMath.

It seems like a bit of work to get this moved to SpecialFunctions. One issue is the big, flat, namespace in SpecialFunctions. What to do with LambertW.omega? I think SpecialFunctions.omega is not a good choice.

I'm not completely against moving LambertW.jl, but I'd like to see arguments for why.

@alyst

alyst commented Dec 31, 2021

I'm not completely against moving LambertW.jl, but I'd like to see arguments for why.

  • it's more likely that packages like SpecialFunctions.jl will have (co-)maintainers in the long run
  • even if (co-)maintainers of "fine-grained" packages are available, it would be an extra burden for them to do the tedious, but necessary work of keeping the API, documentation, unit tests and CI scripts up-to-date
  • Lambert W function does not depend on any other packages, and the code base is sufficiently lean and straightforward, so adding it to SpecialFunctions.jl would not make the latter heavier
  • it's a matter of taste, but I find it's easier to discover/evaluate methods/tools when they are organized into dedicated registries like SpecialFunctions.jl

@cossio
Contributor

cossio commented Dec 31, 2021

What to do with LambertW.omega? I think SpecialFunctions.omega is not a good choice.

How about using lambertw(1) as the public API to access the value of the Omega constant? No need for an additional name. Just guarantee that lambertw(1) has a fast evaluation path.

I think SpecialFunctions.omega is not a good choice.

Another option is SpecialFunctions.lambert_omega, and it doesn't have to be exported.

@simonbyrne
Member

  • it's more likely that packages like SpecialFunctions.jl will have (co-)maintainers in the long run

My experience so far has been the opposite: large packages like SpecialFunctions.jl and Distributions.jl struggle to keep maintainers. Smaller packages tend to have an easier time keeping maintainers (I presume due to a lower maintenance burden, and the code author feeling an ownership of the package).

@ViralBShah
Member

However, lots of small packages are a burden to the user.

@originalsouth

Perhaps, as a middle ground solution, the functions can be reexported then?

@devmotion
Member

I agree with @simonbyrne, I don't think it's necessarily better maintained if it is part of SpecialFunctions.

IMO separate packages are fine in this case. In my experience they are annoying mainly if one always has to load a specific set of packages to achieve some basic functionality but I don't think this is the case here. One can use LambertW just fine without loading SpecialFunctions, and similarly SpecialFunctions without LambertW.

One problem with reexporting LambertW is that any breaking release of LambertW requires a breaking release of SpecialFunctions. Due to the large number of dependencies (1625 packages), it takes quite a while until a breaking release of SpecialFunctions is propagated through the Julia ecosystem and it requires work and time from many downstream developers, whereas a breaking release of LambertW (if SpecialFunctions does not reexport it) would have less severe effects since it only has a few (4 to be precise) dependencies.

@alyst

alyst commented Jan 4, 2022

How about using lambertw(1) as the public API to access the value of the Omega constant? No need for an additional name. Just guarantee that lambertw(1) has a fast evaluation path.

The problem is that Omega should be of type Base.Irrational{:XXX} (as the other constants), which could then be explicitly converted to a numeric type the user specifies, so it cannot be returned by lambertw(1) directly for the sake of type stability.
But the fast path to calculate lambertw(1)/lambertw(big(1))/lambertw(complex(1,0)) is already there.
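For context, Base's math constants are singleton Irrational values that convert to a concrete numeric type on demand, which is the behavior described above:

```julia
typeof(MathConstants.e)   # Irrational{:ℯ}
Float64(MathConstants.e)  # 2.718281828459045
# BigFloat conversion respects the precision active at call time
setprecision(1024) do
    precision(BigFloat(MathConstants.e))
end
```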

Smaller packages tend to have an easier time keeping maintainers (I presume due to a lower maintenance burden, and the code author feeling an ownership of the package).

"Feeling an ownership" is a double-edged sword. In my experience the code quality of community-maintained packages is better -- more eyes to look at the code, more chances to keep up with the evolving Julia standards, more predictable PR review and acceptance policy.
Anyway, what I am saying about community maintenance only concerns the functionality that is "adjacent" to the core of the language. Clearly, SpecialFunctions.jl belongs to this category. I believe lambertw() belongs there too, given its numerous applications in different branches of science. But then again, it is up to the core Julia "stakeholders" to define and implement the strategy that would keep such functionality in good shape.
For my part I have just implemented in #371 what I thought was needed for this PR to be accepted.

@cossio
Contributor

cossio commented Jan 4, 2022

In my experience the code quality of community-maintained packages is better -- more eyes to look at the code, more chances to keep up with the evolving Julia standards, more predictable PR review and acceptance policy.

It seems that just transferring to LambertW.jl to JuliaMath would also achieve this? It could then be seen as a community package, no longer with a sole owner. Also more people will have commit access, code reviews, etc. ...
And that way it remains independent of SpecialFunctions.jl, which has a lot more dependents.

Looked at it this way, it would seem that transferring LambertW.jl to JuliaMath is the happy solution for everybody.

@alyst

alyst commented Jan 4, 2022

Looked at it this way, it would seem that transferring LambertW.jl to JuliaMath is the happy solution for everybody.

Authorship aside, it's yet another package to make sure that e.g. .github/workflows or docs/make.jl scripts are up-to-date.
So far lambertw() doesn't have any dependencies, so I don't expect that in the future SpecialFunctions.jl would need minor version updates because of lambertw()
(Well, for my fork I had to add Compat to support evalpoly() on Julia 1.3, but if that is critical, I can switch to @evalpoly macro that should work everywhere).

@jlapeyre
Author

jlapeyre commented Jan 9, 2022

I thumbs-upped the comments supporting not putting LambertW in SpecialFunctions. I propose moving it to JuliaMath. This does not preclude, or make it any more difficult, to later move it to SpecialFunctions. I know there are arguments both ways. The problem is you can't put yourself in the shoes of every user and know what their workflow is like.

@simonbyrne simonbyrne closed this Mar 7, 2023
@cossio
Contributor

cossio commented Mar 7, 2023

@simonbyrne Can I ask why was this closed? Is it still the plan to transfer lambertw to this package?

@ararslan
Member

ararslan commented Mar 8, 2023

I assume it was closed in favor of #371.

@simonbyrne
Member

That, and it seems there was a consensus in keeping it as a separate package?

@ararslan
Member

ararslan commented Mar 8, 2023

I didn't interpret there to be a strong consensus one way or another
