Releases: SciNim/numericalnim
Trilinear interpolator and new `sortDataset`
- Trilinear interpolator for uniformly gridded data. Use it by constructing a 3D Tensor with the function values and passing it to `newTrilinearSpline` to create the spline.
- Updated `sortDataset` and added `sortAndTrimDataset`, which sorts and removes duplicates. If non-matching duplicates (same x-value, different y-values) are found, it will raise `ValueError`.
- `newCubicSpline` doesn't require the inputs to be sorted anymore.
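A minimal sketch of what constructing a trilinear spline might look like, assuming a constructor analogous to the 2D splines in the release below (a 3D tensor of values plus a `(min, max)` tuple per axis); the exact argument order is an assumption.

```nim
import arraymancer, numericalnim

# Function values on a uniform 3x3x3 grid (just ones as a placeholder here)
let f = ones[float](3, 3, 3)
# Assumed signature: grid values followed by (min, max) limits for x, y and z
let spline = newTrilinearSpline(f, (0.0, 1.0), (0.0, 1.0), (0.0, 1.0))
echo spline.eval(0.5, 0.5, 0.5) # evaluate at (x, y, z) = (0.5, 0.5, 0.5)
```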
Gridded 2D interpolation
Implements 2D interpolation for gridded data using nearest neighbour, bilinear, and bicubic interpolation.
```nim
import arraymancer, numericalnim

let z = [[1.0, 2, 3], [2.0, 3, 4], [5.0, 6, 7]].toTensor
let xLim = (0.0, 10.0)
let yLim = (5.0, 20.0)

let nn = newNearestNeighbour2D(z, xLim, yLim)
echo nn.eval(5.0, 10.0) # evaluates at (x, y) = (5, 10)

let bl = newBilinearSpline(z, xLim, yLim)
echo bl.eval(5.0, 10.0) # evaluates at (x, y) = (5, 10)

let bc = newBicubicSpline(z, xLim, yLim)
echo bc.eval(5.0, 10.0) # evaluates at (x, y) = (5, 10)
```
Bump version number
Bump the version number so numericalnim.nimble and GitHub agree.
Proper contexts have arrived!
In this update, the ODE and integrate modules have gained support for a context, `NumContext`, which provides persistent storage between function calls. It can be used to pass in parameters or to save extra information during the processing. For example:
- If you have a matrix you want to reuse often in your function, you can save it in the `NumContext` once and then access it with a key. This way you only have to evaluate it once instead of creating it on every function call or making it a global variable.
- You want to modify a parameter during the processing and access the modified value during later calls.

Warning: `NumContext` is a ref type, so the context variable you pass in will be altered if you do anything to it in your function. The `ctx` in your function is the exact same object as the one you pass in, so if you change it, you will also change the original variable.
What does this mean for your old code? It means you have to change the proc signature for your functions:
- ODE:

```nim
# Old code
proc f[T](x: float, y: T): T =
  # do your calculations here

# New code
proc f[T](t: float, y: T, ctx: NumContext[T]): T =
  # do your calculations here
```
- integrate:

```nim
# Old code
proc f[T](x: float, optional: seq[T]): T =
  # do your calculations here

# New code
proc f[T](x: float, ctx: NumContext[T]): T =
  # do your calculations here
```
So how do we create a `NumContext` then? Here is an example:
```nim
import arraymancer, numericalnim

var ctx = newNumContext[Tensor[float]]()
# Values of type `T` (`Tensor[float]` in this case) are accessed using `[]`
# with either a string or an enum as the key.
ctx["A"] = @[@[1.0, 2.0], @[3.0, 4.0]].toTensor
# `NumContext` always has a float storage as well, accessed using `setF` and `getF`.
ctx.setF("k", 3.14)
# It can then be accessed using ctx.getF("k")
```
As for passing in the `NumContext`, you pass it in as the `ctx` parameter to the integration proc or `solveODE`:
```nim
import math, numericalnim

var ctx = newNumContext[float]()

proc f_integrate(x: float, ctx: NumContext[float]): float = sin(x)
let I = adaptiveGauss(f_integrate, 0.0, 2.0, ctx = ctx)

proc f_ode(t: float, y: float, ctx: NumContext[float]): float = sin(t)*y
let (ts, ys) = solveODE(f_ode, y0 = 1.0, tspan = [0.0, 2.0], ctx = ctx)
```
If you don't pass in a `ctx`, an empty one will be created for you, but you won't be able to access it after the proc has finished and returned the results.
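As a small illustration of the ref semantics mentioned in the warning above, here is a sketch (reusing the `setF`/`getF` and `adaptiveGauss` calls from these notes) where a counter stored in the context is mutated inside the integrand and read back afterwards:

```nim
import math, numericalnim

var ctx = newNumContext[float]()
ctx.setF("calls", 0.0)

proc f(x: float, ctx: NumContext[float]): float =
  # NumContext is a ref type, so this mutation is visible in the outer `ctx` too
  ctx.setF("calls", ctx.getF("calls") + 1.0)
  exp(-x)

discard adaptiveGauss(f, 0.0, 1.0, ctx = ctx)
echo ctx.getF("calls") # the context we passed in was mutated by the integrand
```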
Using enums as keys for NumContext
If you want to avoid `KeyError`s from mistyped keys, you can use enums as keys instead. The enum value is converted to a string internally, so there is no constraint that all keys must come from the same enum. Here is one example of how to use it:
```nim
import numericalnim

type
  MyKeys = enum
    key1
    key2
    key3

var ctx = newNumContext[float]()
ctx[key1] = 3.14
ctx[key2] = 6.28
```
Introduce HermiteSpline, improved adaptiveGauss and ODE error control
- This release features the new `HermiteSpline`, which works for generic datatypes.
- The error control for `odeSolver` has been improved to use both absolute and relative errors.
- The big thing is that `adaptiveGauss` has been improved and is the clear winner among all our quadratures. It can now handle infinite intervals and uses a much more robust global error scheme.
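A sketch of the infinite-interval support, using the context-based integrand signature from the newer releases above and leaving the tolerance and other parameters at their defaults:

```nim
import math, numericalnim

proc f(x: float, ctx: NumContext[float]): float = exp(-x)

# Infinite bounds are handled; the exact value of this integral is 1
echo adaptiveGauss(f, 0.0, Inf)
```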
More ODE integrators
Thanks to @BarrOff, NumericalNim now has three new ODE solvers in `Tsit54`, `Vern65` and `Vern76`.
Spline more!
The interpolation module has been introduced with natural cubic splines. The cubic spline can be converted both into a normal proc and into procs that work well with the other parts of NumericalNim. More information can be found in the README.
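A minimal sketch of constructing and evaluating a natural cubic spline; `newCubicSpline` is mentioned elsewhere in these notes, while the `eval` call mirrors the pattern used by the other interpolators (see the README for converting the spline into plain procs).

```nim
import numericalnim

let x = @[0.0, 1.0, 2.0, 3.0, 4.0]
let y = @[0.0, 1.0, 4.0, 9.0, 16.0]

let spline = newCubicSpline(x, y)
echo spline.eval(2.5) # interpolated value at x = 2.5
```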
There has also been a change of operators so that they follow common precedence rules:
- `.*` -> `*.`
- `.*=` -> `*.=`
- `./` -> `/.`
- `./=` -> `/.=`
Optimize, optimize, optimize!
After contributions from @JoegottabeGitenme, NumericalNim now features optimization methods!
The following 1D optimization methods are included:
- `steepest_descent` - standard method for finding a local minimum over a 2D plane
- `conjugate_gradient` - iterative implementation of solving Ax = b
- `newtons` - Newton-Raphson implementation for 1-dimensional functions
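A hedged sketch of calling `newtons`: as I read the release it is a Newton-Raphson iteration taking the function, its derivative and a starting guess, but the exact parameter layout and defaults are assumptions.

```nim
import numericalnim

proc f(x: float): float = x*x - 2.0  # we want the root of this
proc df(x: float): float = 2.0*x     # its derivative

# Assumed call: (function, derivative, starting guess); precision/max_iters left at defaults
echo newtons(f, df, 1.0) # should converge to sqrt(2) ≈ 1.4142
```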
More ODE integrators
The list of available ODE integrators has expanded. The new additions are mostly lower-order methods, some adaptive and some with fixed timesteps. This is the list of available integrators now:
- `rk21` - Heun's adaptive 2nd order method.
- `BS32` - Bogacki–Shampine 3rd order adaptive method.
- `DOPRI54` - Dormand & Prince's adaptive 5th order method.
- `Heun2` - Heun's 2nd order fixed timestep method.
- `Ralston2` - Ralston's 2nd order fixed timestep method.
- `Kutta3` - Kutta's 3rd order fixed timestep method.
- `Heun3` - Heun's 3rd order fixed timestep method.
- `Ralston3` - Ralston's 3rd order fixed timestep method.
- `SSPRK3` - Strong Stability Preserving Runge-Kutta 3rd order fixed timestep method.
- `Ralston4` - Ralston's 4th order fixed timestep method.
- `Kutta4` - Kutta's 4th order fixed timestep method.
- `RK4` - The standard 4th order, fixed timestep method we all know and love.
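A sketch of selecting one of these integrators by name, assuming `solveODE` takes the integrator as a lowercase string via its `integrator` parameter and using the context-based callback signature from the newer releases above:

```nim
import numericalnim

# Simple test problem: y' = -y, y(0) = 1
proc f(t: float, y: float, ctx: NumContext[float]): float = -y

# Pick the classic fixed-timestep RK4; swap the string for e.g. "dopri54" or "heun2"
let (ts, ys) = solveODE(f, y0 = 1.0, tspan = [0.0, 2.0], integrator = "rk4")
echo ys[^1] # should be close to exp(-2)
```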
Adaptive Gauss-Kronrod Quadrature
You can now use an adaptive Gauss-Kronrod method to do your 1D integration. From my testing, it is more efficient than `adaptiveSimpson`. You can use it with the proc `adaptiveGauss`.