Move `Training` utilities out of `Experimental`. One major recent push has been to clean up the training API and ensure it is used everywhere in the tutorials. It is about time to guarantee semver for it.
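For reference, a minimal sketch of the training loop this would stabilize, assuming the post-`Experimental` spelling `Lux.Training` and the `TrainState`/`single_train_step!` API from recent releases:

```julia
using Lux, Optimisers, Random, Zygote

model = Dense(2 => 1)
rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

# TrainState bundles the model, parameters, states, and the optimizer.
tstate = Training.TrainState(model, ps, st, Adam(0.01f0))

x = randn(rng, Float32, 2, 32)
y = randn(rng, Float32, 1, 32)

# One optimization step: returns gradients, loss, stats, and the updated state.
grads, loss, stats, tstate = Training.single_train_step!(
    AutoZygote(), MSELoss(), (x, y), tstate)
```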
Another consideration is whether we should do a 0.6 release or move directly to a 1.0 release. I have read posts in the wild complaining about the rapid release cadence, which led to the 57 patch releases; that might give a (false) notion that we were fixing bugs, but most of them were features 😓
## Features
- Preliminary XLA support. Please have a look at Meta Issue for Reactant Compilation of Lux Models #733 for a detailed discussion (see the first sketch after this list).
- DispatchDoctor.jl #683. Adding this directly to LuxCore. While not strictly breaking, I want to move this to 0.6. Fixed in feat: easy mechanism to set preferences #798 (see the second sketch after this list).
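To make the XLA item concrete: a sketch of compiling a Lux forward pass through Reactant.jl, assuming its `to_rarray`/`@compile` entry points; see #733 for the actual plan and caveats.

```julia
using Lux, Reactant, Random

model = Chain(Dense(2 => 8, tanh), Dense(8 => 1))
ps, st = Lux.setup(Random.default_rng(), model)

# Move data, parameters, and states onto Reactant's XLA-backed arrays.
x_ra  = Reactant.to_rarray(randn(Float32, 2, 16))
ps_ra = Reactant.to_rarray(ps)
st_ra = Reactant.to_rarray(st)

# Trace the forward pass once, compile it with XLA, then call the result.
forward = @compile model(x_ra, ps_ra, st_ra)
y_ra, _ = forward(x_ra, ps_ra, st_ra)
```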
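And a sketch of what DispatchDoctor.jl buys us, via its `@stable` macro; the preference mechanism from #798 then controls (roughly) whether violations error, warn, or are ignored:

```julia
using DispatchDoctor: @stable

# `@stable` turns silent type instabilities into loud failures: if the
# return type cannot be concretely inferred, the call throws instead of
# quietly falling back to dynamic dispatch.
@stable function scale(x, a)
    return a .* x
end

scale(rand(Float32, 4), 2.0f0)  # inferrable, so this runs normally
```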
## Minor TODOs

- Remove `xlogy` and `xlogx` in favor of https://github.com/JuliaStats/LogExpFunctions.jl. See refactor: use `xlogx` and `xlogy` from LogExpFunctions #796 -- unsafe for GPU broadcasting (sketch after this list).
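For context on the GPU-broadcasting concern, a sketch (with hypothetical names `xlogx_naive`/`xlogx_safe`) of why `xlogx` needs a branch-free definition to broadcast safely:

```julia
xlogx_naive(x) = x * log(x)  # at x == 0: 0 * -Inf == NaN

# Branch-free: `ifelse` evaluates both arms, so the broadcasted kernel
# contains no data-dependent control flow and stays GPU-friendly.
xlogx_safe(x) = ifelse(iszero(x), zero(x), x * log(x))

xlogx_naive(0.0f0)             # NaN32
xlogx_safe(0.0f0)              # 0.0f0
xlogx_safe.(rand(Float32, 4))  # broadcasts the same way on GPU arrays
```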
## Deprecations

- `Base.keys`
- `rng`
- `cpu`/`gpu`
- `disable_stacktrace_truncation!`
- `Experimental.@compact` and `Experimental.StatefulLuxLayer`
- Not specifying `true`/`false` in `StatefulLuxLayer` (see the sketch after this list).
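A sketch of that last deprecation, assuming the explicit-flag spelling `StatefulLuxLayer{true}` replaces the implicit constructor:

```julia
using Lux, Random

model = Dense(2 => 3, tanh)
ps, st = Lux.setup(Random.default_rng(), model)

# Deprecated: leaving the state-type flag implicit.
# smodel = StatefulLuxLayer(model, ps, st)

# Explicit: `true` fixes the state type across calls.
smodel = StatefulLuxLayer{true}(model, ps, st)
y = smodel(randn(Float32, 2, 4))  # ps/st are carried inside the wrapper
```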
## Underlying packages that need to be updated to 1.0

- LuxCore -- the naming changes are probably too disruptive without any major benefit. We will only do the following 2 name changes (feat!: 1.0 release LuxCore.jl#43):
  - `AbstractLuxLayer`
  - `AbstractLuxContainerLayer`
- LuxDeviceUtils --> MLDataDevices (see the sketch after this list)
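The `cpu`/`gpu` deprecation above and this rename converge on the device-object API; a sketch assuming the `cpu_device`/`gpu_device` exports:

```julia
using MLDataDevices  # formerly LuxDeviceUtils

cdev = cpu_device()
gdev = gpu_device()  # selects a functional GPU backend, else falls back to CPU

x = randn(Float32, 2, 4)
x_dev  = gdev(x)      # replaces the old `gpu(x)`
x_back = cdev(x_dev)  # replaces the old `cpu(x)`
```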
## Documentation Updates