Implementing Explicit Runge Kutta Methods #952

Merged · 16 commits · Nov 15, 2023
1 change: 1 addition & 0 deletions NEWS.md
@@ -10,6 +10,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Added

- Implemented real/imag for VectorValues
- Explicit Runge-Kutta ODE Solvers. Since PR [#952](https://github.com/gridap/Gridap.jl/pull/952)

### Fixed

57 changes: 57 additions & 0 deletions docs/ODEs.md
@@ -0,0 +1,57 @@
Let us consider the transient problem $A(u,\dot{u}) = 0$. We can regard it as an ODE, since the spatial part is not relevant for the discussion here. The most standard situation is the one in which
$$
A(u,\dot{u}) = M\dot{u} + K u,
$$
i.e., $A$ is linear with respect to $\dot{u}$, but we consider the more general case here.

# $θ$-method

Our motivation here is to split the time domain into time steps and, for each time step $[t^n,t^{n+1}]$, create an approximation of the map $u^n \mapsto u^{n+1}$ such that $A(R(u^{n+1},u^n),\Delta_t(u^{n+1},u^n)) = 0$, where the operator $\Delta_t$ is an approximation of the time derivative and the operator $R$ is some approximation in time of $u$ in $[t^n,t^{n+1}]$.

Let us consider the $θ$-method to fix ideas. At each time step, given the initial value $u^n$, we approximate the time derivative by finite differences as $\Delta_t(u^n,u^{n+1}) = \frac{u^{n+1}-u^{n}}{\delta t}$. For Backward Euler we take $R(u^n,u^{n+1}) = u^{n+1}$, for Forward Euler $R(u^n,u^{n+1}) = u^{n}$, and for Crank-Nicolson $R(u^n,u^{n+1}) = \frac{u^{n+1}+u^{n}}{2}$ (more precisely, the linear interpolant $R(u^n,u^{n+1}) = \frac{(t-t^n)u^{n+1} + (t^{n+1}-t)u^n}{\delta t}$ evaluated at the midpoint $t = t^{n+1/2}$).
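
All three are particular instances of the generic $θ$-method; for reference,
$$
R(u^n,u^{n+1}) = \theta u^{n+1} + (1-\theta)\, u^n, \qquad \Delta_t(u^n,u^{n+1}) = \frac{u^{n+1}-u^{n}}{\delta t},
$$
with $\theta = 0$ for Forward Euler, $\theta = 1$ for Backward Euler, and $\theta = 1/2$ for Crank-Nicolson.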

Now, we want to approximate the operator using any of these methods. We can readily check that the Newton linearisation of any of these problems can be written as:
$$
[\frac{∂A}{∂u}\frac{\partial R}{\partial u^{n+1}} + \frac{∂A}{∂\dot{u}}\frac{\partial \Delta}{\partial u^{n+1}} ] \delta u^{n+1} = - A(R(u^{n+1},u^n),\Delta_t(u^{n+1},u^n)).
$$
We denote $J_0 \doteq \frac{∂A}{∂u}$ and $J_1 \doteq \frac{∂A}{∂\dot{u}}$; e.g., for the linear case above, $J_0 = K$ and $J_1 = M$.

E.g., for BE we have
$$
\frac{\partial R}{\partial u^{n+1}} = 1, \qquad \frac{\partial \Delta}{\partial u^{n+1}} = 1/δt,
$$
for FE we have
$$
\frac{\partial R}{\partial u^{n+1}} = 0, \qquad \frac{\partial \Delta}{\partial u^{n+1}} = 1/δt,
$$
and for CN we have
$$
\frac{\partial R}{\partial u^{n+1}} = 1/2, \qquad \frac{\partial \Delta}{\partial u^{n+1}} = 1/δt.
$$
Analogously, we can define $\gamma_0 \doteq \frac{\partial R}{\partial u^{n+1}}$ and $\gamma_1 \doteq \frac{\partial \Delta}{\partial u^{n+1}}$. Note that the standard Jacobian $J_0$ enters pre-multiplied by $\gamma_0$, so it makes no sense to compute $J_0$ when $\gamma_0 = 0$. The same would happen for $J_1$ if $\gamma_1 = 0$, but this never occurs for these ODE solvers, since $\gamma_1 = 1/\delta t > 0$ and $J_1$ is the mass matrix.

We have decided to write the problem in terms of $u^{n+1}$, but we could proceed differently. E.g., for the $\theta$-method, we can write the problem in terms of $u^{n+\theta}$ for $\theta > 0$. In this case, we have to define the $R$ and $Δ_t$ operators in terms of $u^{n+\theta}$ and $u^n$ and proceed exactly as above. The only difference is a final update step to recover $u^{n+1}$.
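
For reference, with the unknown $x \doteq u^{n+\theta} = \theta u^{n+1} + (1-\theta)u^n$, the operators become
$$
R(u^n, x) = x, \qquad \Delta_t(u^n, x) = \frac{x - u^n}{\theta\,\delta t},
$$
and the final update step reads $u^{n+1} = u^n + \frac{1}{\theta}(x - u^n)$.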

From the discussion above, it seems quite clear that FE should work with the current machinery, since we would simply be assembling $\gamma_1 M + 0 \cdot J_0$. That is the reason why the current machinery should work for FE, but we need to avoid computing $J_0$ at all by using an if statement in the code, and only compute it when $\gamma_0 > 0$.
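
A minimal sketch of such a guarded assembly could look as follows (the helper name `stage_matrix!` is made up for illustration; `fillstored!` and the `jacobian!(A,odeop,t,(u,u̇),i,γᵢ,cache)` method are the ones already used by the ODE tools in this PR):

```julia
# Sketch: assemble the linearised matrix γ₀*J₀ + γ₁*J₁ of the time-discrete problem,
# skipping the J₀ = ∂A/∂u contribution when γ₀ = 0 (e.g. Forward Euler).
function stage_matrix!(J, odeop, t, u, u̇, γ₀, γ₁, ode_cache)
  fillstored!(J, zero(eltype(J)))
  if γ₀ > 0
    jacobian!(J, odeop, t, (u, u̇), 1, γ₀, ode_cache)  # adds γ₀ * ∂A/∂u
  end
  jacobian!(J, odeop, t, (u, u̇), 2, γ₁, ode_cache)    # adds γ₁ * ∂A/∂u̇ (the mass matrix)
  J
end
```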

When FE works, we can start the discussion about the general RK solver.

# RK methods

In DIRK methods, we make the following assumption:
$$
A(t,u,\dot{u}) \doteq M \dot{u} + K(t,u) = 0.
$$
Given a Butcher tableau $(a,b,c)$ with $n_s$ stages, the method reads as follows. Given $u^{n}$, for each stage $s = 1,\ldots,n_s$ solve
$$
M k_s = -K(t_n + c_s \delta t, u^{n} + \delta t \sum_{i=1}^{s} a_{s,i} k_i),
$$
or, in terms of an increment $\delta k_s$ of the stage unknown,
$$
M \delta k_s = - M k_s - K(t_n + c_s \delta t, u^{n} + \delta t \sum_{i=1}^{s} a_{s,i} k_i).
$$
Then, $u^{n+1} = u^n + \delta t \sum_{i=1}^{n_s} b_i k_i$. Note that we are only considering DIRK methods, since each stage only involves $k_i$ for $i = 1, \ldots, s$. The Jacobians to be computed are $J_0$ and $J_1$ as above; here $J_1 = M$ and $J_0 = \frac{\partial K}{\partial u}$. In any case, we could just define $A$ as above and compute its Jacobians as above.

At each stage, we have $\gamma_1^s = 1$ and $\gamma_0^s = \delta t a_{s,s}$. We can use exactly the same machinery as above with these coefficients.
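
Indeed, differentiating the stage residual with respect to $k_s$ gives
$$
\frac{\partial}{\partial k_s}\left[ M k_s + K\Big(t_n + c_s \delta t,\; u^{n} + \delta t \sum_{i=1}^{s} a_{s,i} k_i\Big) \right] = M + \delta t\, a_{s,s}\, \frac{\partial K}{\partial u} = \gamma_1^s J_1 + \gamma_0^s J_0.
$$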

In the case of explicit methods, $a_{s,s} = 0$, so we do not need to compute $J_0$, exactly as discussed above for FE.
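
For reference, the resulting explicit update can be summarised by the following plain-Julia sketch (a dense mass matrix `M` and a function `K(t,u)` are assumed; the `EXRungeKutta` solver added in this PR realises the same algorithm through the Gridap `ODEOperator` machinery):

```julia
# Sketch of one explicit RK step for M du/dt + K(t,u) = 0 with tableau (a, b, c).
# Every stage only requires a solve with the (constant) mass matrix M.
function explicit_rk_step(M, K, u0, t0, dt, a, b, c)
  s = length(b)
  k = [similar(u0) for _ in 1:s]
  for i in 1:s
    ui = copy(u0)
    for j in 1:i-1
      ui .+= dt * a[i, j] .* k[j]          # stage value uᵢ = uⁿ + δt Σ_{j<i} a_ij k_j
    end
    k[i] .= M \ (-K(t0 + c[i] * dt, ui))   # M kᵢ = -K(tᵢ, uᵢ)
  end
  uf = copy(u0)
  for i in 1:s
    uf .+= dt * b[i] .* k[i]               # uⁿ⁺¹ = uⁿ + δt Σᵢ bᵢ kᵢ
  end
  return uf
end
```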
2 changes: 2 additions & 0 deletions src/ODEs/Exports.jl
@@ -11,6 +11,7 @@ end
@publish_gridapodes ODETools ThetaMethod
@publish_gridapodes ODETools RungeKutta
@publish_gridapodes ODETools IMEXRungeKutta
@publish_gridapodes ODETools EXRungeKutta
@publish_gridapodes ODETools Newmark
@publish_gridapodes ODETools GeneralizedAlpha
@publish_gridapodes ODETools ∂t
@@ -26,3 +27,4 @@ end
@publish_gridapodes TransientFETools TransientConstantMatrixFEOperator
@publish_gridapodes TransientFETools TransientRungeKuttaFEOperator
@publish_gridapodes TransientFETools TransientIMEXRungeKuttaFEOperator
@publish_gridapodes TransientFETools TransientEXRungeKuttaFEOperator
154 changes: 154 additions & 0 deletions src/ODEs/ODETools/EXRungeKutta.jl
@@ -0,0 +1,154 @@
"""
Explicit Runge-Kutta ODE solver
"""
struct EXRungeKutta <: ODESolver
ls::LinearSolver
dt::Float64
tableau::EXButcherTableau
function EXRungeKutta(ls::LinearSolver, dt, type::Symbol)
bt = EXButcherTableau(type)
new(ls, dt, bt)
end
end
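
# Usage sketch (illustrative; assumes a transient FE operator `op`, an initial condition
# `u0` at time `t0`, a final time `tF`, and a linear solver such as `ls = LUSolver()`):
#
#   ode_solver = EXRungeKutta(ls, dt, :EX_SSP_3_0_3)
#   sol_t = solve(ode_solver, op, u0, t0, tF)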

"""
solve_step!(uf,solver,op,u0,t0,cache)
"""
function solve_step!(uf::AbstractVector,
solver::EXRungeKutta,
op::ODEOperator,
u0::AbstractVector,
t0::Real,
cache)

# Unpack variables
dt = solver.dt
s = solver.tableau.s
a = solver.tableau.a
b = solver.tableau.b
c = solver.tableau.c
d = solver.tableau.d

# Create cache if not there
if cache === nothing
ode_cache = allocate_cache(op)
vi = similar(u0)
ki = [similar(u0) for i in 1:s]
M = allocate_jacobian(op,t0,uf,ode_cache)
get_mass_matrix!(M,op,t0,uf,ode_cache)
l_cache = nothing
else
ode_cache, vi, ki, M, l_cache = cache
end

lop = EXRungeKuttaStageOperator(op,t0,dt,u0,ode_cache,vi,ki,0,a,M)

for i in 1:s

# solve at stage i
ti = t0 + c[i]*dt
ode_cache = update_cache!(ode_cache,op,ti)
update!(lop,ti,ki[i],i)  # set the stage index i and stage time ti (ki[i] is not yet computed)
l_cache = solve!(uf,solver.ls,lop,l_cache)

update!(lop,ti,uf,i)  # store the computed stage vector uf into ki[i]

end

# update final solution
tf = t0 + dt

@. uf = u0
for i in 1:s
@. uf = uf + dt*b[i]*lop.ki[i]
end

cache = (ode_cache, vi, ki, M, l_cache)

return (uf,tf,cache)


end




mutable struct EXRungeKuttaStageOperator <: RungeKuttaNonlinearOperator
odeop::ODEOperator
ti::Float64
dt::Float64
u0::AbstractVector
ode_cache
vi::AbstractVector
ki::AbstractVector
i::Int
a::Matrix
M::AbstractMatrix
end


"""
ODE: A(t,u,∂u) = M ∂u/∂t + K(t,u) = 0 -> solve for u
EX-RK: A(t,u,ki) = M ki + K(ti,u0 + dt ∑_{j<i} a_ij * kj) = 0
= M ki + K(ti,ui) = 0 -> solve for ki
where ui = u0 + dt ∑_{j<i} a_ij * kj for i = 1,…,s
The Jacobian is always M. Compute and store M in EXRungeKuttaStageOperator
"""
function residual!(b::AbstractVector,
op::EXRungeKuttaStageOperator,
x::AbstractVector)

ui = x
vi = op.vi

lhs!(b,op.odeop,op.ti,(ui,vi),op.ode_cache)

@. ui = op.u0
for j = 1:op.i-1
@. ui = ui + op.dt * op.a[op.i,j] * op.ki[j]
end

rhs = similar(op.u0)
rhs!(rhs,op.odeop,op.ti,(ui,vi),op.ode_cache)

@. b = b + rhs
@. b = -1.0 * b
b
end

function jacobian!(A::AbstractMatrix,
op::EXRungeKuttaStageOperator,
x::AbstractVector)
@. A = op.M
end


function allocate_residual(op::EXRungeKuttaStageOperator,x::AbstractVector)
allocate_residual(op.odeop,op.ti,x,op.ode_cache)
end

function allocate_jacobian(op::EXRungeKuttaStageOperator,x::AbstractVector)
allocate_jacobian(op.odeop,op.ti,x,op.ode_cache)
end


function update!(op::EXRungeKuttaStageOperator,
ti::Float64,
ki::AbstractVector,
i::Int)
op.ti = ti
@. op.ki[i] = ki
op.i = i
end


function get_mass_matrix!(A::AbstractMatrix,
odeop::ODEOperator,
t0::Float64,
u0::AbstractVector,
ode_cache)
z = zero(eltype(A))
fillstored!(A,z)
jacobian!(A,odeop,t0,(u0,u0),2,1.0,ode_cache)
A
end
2 changes: 2 additions & 0 deletions src/ODEs/ODETools/ODESolvers.jl
@@ -57,6 +57,8 @@ include("RungeKutta.jl")

include("IMEXRungeKutta.jl")

include("EXRungeKutta.jl")

include("Newmark.jl")

include("AffineNewmark.jl")
1 change: 1 addition & 0 deletions src/ODEs/ODETools/ODETools.jl
@@ -71,6 +71,7 @@ export MidPoint
export ThetaMethod
export RungeKutta
export IMEXRungeKutta
export EXRungeKutta
export Newmark
export GeneralizedAlpha

66 changes: 66 additions & 0 deletions src/ODEs/ODETools/Tableaus.jl
@@ -190,3 +190,69 @@ end
function IMEXButcherTableau(type::Symbol)
eval(:(IMEXButcherTableau($type())))
end

"""
Explicit Butcher tableaus
"""

abstract type EXButcherTableauType end

struct EX_FE_1_0_1 <: EXButcherTableauType end
struct EX_SSP_3_0_3 <: EXButcherTableauType end

"""
Explicit Butcher tableaus
"""
struct EXButcherTableau{T <: EXButcherTableauType}
s::Int # stages
p::Int # embedded order
q::Int # order
a::Matrix # A_ij explicit
b::Vector # b_j explicit
c::Vector # c_i explicit
d::Vector # d_j (embedded)
end

# EX Butcher Tableaus constructors

"""
EX Forward Euler

number of stages: 1
embedded method: no
order: 1
"""
function EXButcherTableau(::EX_FE_1_0_1)
s = 1
p = 0
q = 1
a = reshape([0.0],1,1)
b = [1.0]
c = [0.0]
d = [0.0]
EXButcherTableau{EX_FE_1_0_1}(s,p,q,a,b,c,d)
end

"""
EX SSPRK3

number of stages: 3
embedded method: no
order: 3
"""
function EXButcherTableau(::EX_SSP_3_0_3)
s = 3
p = 0
q = 3
a = [0.0 0.0 0.0; 1.0 0.0 0.0; 1/4 1/4 0.0]
b = [1/6, 1/6, 2/3]
c = [0.0, 1.0, 1/2]
d = [0.0, 0.0, 0.0]


EXButcherTableau{EX_SSP_3_0_3}(s,p,q,a,b,c,d)
end

function EXButcherTableau(type::Symbol)
eval(:(EXButcherTableau($type())))
end
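
# For instance, EXButcherTableau(:EX_SSP_3_0_3) builds the three-stage SSP tableau defined above.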