Shape derivative #653

Merged: 44 commits, merged on Feb 27, 2022.

The diff below shows the changes from 7 of the 44 commits.

Commits (44):
7801aba  Generalising (ConnorMallon, Sep 4, 2021)
e1c75ae  revert accidental change (ConnorMallon, Sep 5, 2021)
d5f7ef7  reverting changes 2 (ConnorMallon, Sep 6, 2021)
5b65777  revert 3 (ConnorMallon, Sep 6, 2021)
7f082e4  clean (ConnorMallon, Sep 6, 2021)
fae5379  add cellfield op and some more (ConnorMallon, Sep 7, 2021)
1834af6  Merge remote-tracking branch 'upstream/master' into shape_derivative (ConnorMallon, Sep 7, 2021)
a2459dd  type unstable change to get stuff working (ConnorMallon, Sep 21, 2021)
0172f8e  fix type stability (ConnorMallon, Sep 21, 2021)
56bea13  manual type inference for bimaterial autodiff (ConnorMallon, Sep 21, 2021)
1ae1762  making it work in the general case (ConnorMallon, Sep 22, 2021)
518cfad  making work for mutifield (ConnorMallon, Sep 26, 2021)
d21be90  making all tests work (ConnorMallon, Sep 27, 2021)
e6e8885  Revert "making all tests work" (ConnorMallon, Sep 27, 2021)
dcb91ff  Revert "making work for mutifield" (ConnorMallon, Sep 26, 2021)
3e65f7e  Revert "making it work in the general case" (ConnorMallon, Sep 22, 2021)
ff71270  Revert "manual type inference for bimaterial autodiff" (ConnorMallon, Sep 21, 2021)
018e18d  Revert "fix type stability" (ConnorMallon, Sep 21, 2021)
99bd36e  Revert "type unstable change to get stuff working" (ConnorMallon, Sep 21, 2021)
6682724  Merge remote-tracking branch 'upstream/master' into shape_derivative (ConnorMallon, Dec 1, 2021)
a318539  fix type stability (ConnorMallon, Dec 1, 2021)
426c3bd  using promote_type (ConnorMallon, Dec 5, 2021)
d228d73  adding promotion rule + clean (ConnorMallon, Dec 5, 2021)
1f083df  Merge tag 'v0.17.7' into shape_derivative (ConnorMallon, Dec 5, 2021)
6a8c137  Merge branch 'master' of github.com:Gridap/Gridap.jl into shape_deriv… (amartinhuertas, Feb 11, 2022)
140413d  add news (ConnorMallon, Feb 14, 2022)
1941946  generalising finding type (ConnorMallon, Feb 14, 2022)
4186210  removing old function (ConnorMallon, Feb 14, 2022)
67608be  Merge branch 'master' into shape_derivative (amartinhuertas, Feb 15, 2022)
4ae4dea  converting objects before appending (ConnorMallon, Feb 17, 2022)
ce66f50  _gradient_nd! dispatch based on isbits (ConnorMallon, Feb 17, 2022)
dcf4648  fix error (ConnorMallon, Feb 21, 2022)
9111354  add name TisbitsType (ConnorMallon, Feb 21, 2022)
dca3992  convert before constructor (ConnorMallon, Feb 22, 2022)
e62a3f2  isbits based cache allocation (ConnorMallon, Feb 22, 2022)
038fc46  remove revise (ConnorMallon, Feb 24, 2022)
6abba75  revert AppendedArray constrcuter to original form (ConnorMallon, Feb 24, 2022)
34f7937  changes as per comments (ConnorMallon, Feb 24, 2022)
7b7e168  Removing redefinition of Tisbitsparameter (amartinhuertas, Feb 24, 2022)
5e37245  Undo old changes (amartinhuertas, Feb 25, 2022)
fe01bb6  More undo old changes (amartinhuertas, Feb 25, 2022)
3910286  More undo old changes (amartinhuertas, Feb 25, 2022)
56dd373  Paremeterized types of member variables of CellPoint (amartinhuertas, Feb 25, 2022)
eb39b81  Trying to fix failing tests (amartinhuertas, Feb 25, 2022)
4 changes: 3 additions & 1 deletion src/Arrays/AppendedArrays.jl
@@ -63,7 +63,9 @@ struct AppendedArray{T,A,B} <: AbstractVector{T}
     B = typeof(b)
     ai = testitem(a)
     bi = testitem(b)
-    T = eltype(collect((ai,bi)))
+    #T = eltype(collect((ai,bi)))
+    #new{T,A,B}(a,b)
+    T = Union{typeof(ai),typeof(bi)}
     new{T,A,B}(a,b)
   end
 end
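The constructor now takes T as the Union of the two item types instead of the eltype of a collected tuple. A standalone sketch of the difference, using plain Int and Float64 items rather than anything from Gridap:

```julia
a = [1, 2, 3]          # items of type Int64
b = [1.0, 2.0, 3.0]    # items of type Float64
ai, bi = first(a), first(b)

T_old = eltype(collect((ai, bi)))        # joins the two types into a common abstract supertype
T_new = Union{typeof(ai), typeof(bi)}    # Union{Int64, Float64}: both exact types are kept
```

Keeping the exact types in a small Union lets Julia union-split accesses, whereas an abstract element type forces dynamic dispatch on every getindex, which is presumably why the Union form was preferred here.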
6 changes: 5 additions & 1 deletion src/Arrays/LazyArrays.jl
@@ -140,7 +140,11 @@ end

 function _array_cache!(hash::Dict,a::LazyArray)
   @boundscheck begin
-    @notimplementedif ! all(map(isconcretetype, map(eltype, a.args)))
+    if ! all(map(isconcretetype, map(eltype, a.args)))
+      for n in 1:length(a.args)
+        @notimplementedif ! all(map(isconcretetype, map(eltype, a.args[n])))
+      end
+    end
     if ! (eltype(a.maps) <: Function)
       @notimplementedif ! isconcretetype(eltype(a.maps))
     end
2 changes: 1 addition & 1 deletion src/CellData/CellData.jl
@@ -33,7 +33,7 @@ import Gridap.Geometry: get_triangulation

 import Gridap.TensorValues: inner, outer, double_contraction, symmetric_part
 import LinearAlgebra: det, tr, cross, dot, ⋅
-import Base: inv, abs, abs2, *, +, -, /, adjoint, transpose, real, imag, conj
+import Base: inv, abs, abs2, *, +, -, /, adjoint, transpose, real, imag, conj, exp

 export gradient, ∇
 export ∇∇
4 changes: 2 additions & 2 deletions src/CellData/CellFields.jl
@@ -637,7 +637,7 @@ Base.:(∘)(f::Function,g::Tuple{Vararg{Union{Function,CellField}}}) = Operation

 # Unary ops

-for op in (:symmetric_part,:inv,:det,:abs,:abs2,:+,:-,:tr,:transpose,:adjoint,:grad2curl,:real,:imag,:conj)
+for op in (:symmetric_part,:inv,:det,:abs,:abs2,:+,:-,:tr,:transpose,:adjoint,:grad2curl,:real,:imag,:conj,:exp)
   @eval begin
     ($op)(a::CellField) = Operation($op)(a)
   end
@@ -861,4 +861,4 @@ struct Interpolable{M,A} <: Function
 end

 return_cache(a::Interpolable,x::Point) = return_cache(a.uh,x)
-evaluate!(cache,a::Interpolable,x::Point) = evaluate!(cache,a.uh,x)
+evaluate!(cache,a::Interpolable,x::Point) = evaluate!(cache,a.uh,x)

(The last hunk is a whitespace-only change at the end of the file.)
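With exp imported in CellData.jl and added to the unary-op list above, it can be applied directly to a CellField. A small usage sketch; the mesh, the quadrature degree, and the function u are arbitrary choices made for illustration:

```julia
using Gridap

model = CartesianDiscreteModel((0, 1, 0, 1), (4, 4))   # unit square, 4x4 cells
Ω = Triangulation(model)
dΩ = Measure(Ω, 2)

u(x) = x[1] + x[2]
uh = CellField(u, Ω)

# exp(uh) builds Operation(exp)(uh) lazily; it can then be integrated as usual.
s = sum(∫(exp(uh))dΩ)
```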
10 changes: 9 additions & 1 deletion src/CellData/CellQuadratures.jl
@@ -204,7 +204,14 @@ function get_cell_measure(trian::Triangulation)
   cell_to_dV = integrate(1,quad)
   cell_to_bgcell = get_cell_to_bgcell(trian)
   bgtrian = get_background_triangulation(trian)
-  bgcell_to_dV = zeros(num_cells(bgtrian))
+
+  if typeof(trian) <: AppendedTriangulation
+    T = eltype(eltype(trian.a.subcells.point_to_coords))
+  else
+    T = eltype(cell_to_dV)
+  end
+
+  bgcell_to_dV = zeros(T,num_cells(bgtrian))
   _meas_K_fill!(bgcell_to_dV,cell_to_dV,cell_to_bgcell)
   bgcell_to_dV
 end
@@ -216,3 +223,4 @@ function _meas_K_fill!(bgcell_to_dV,cell_to_dV,cell_to_bgcell)
     bgcell_to_dV[bgcell] += dV
   end
 end
+
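get_cell_measure now allocates the background-cell measure vector with an element type taken from the appended triangulation's sub-cell coordinates instead of a hard-coded Float64. A plausible reading, given the shape-derivative context of this PR, is that those coordinates may carry ForwardDiff dual numbers; a standalone illustration (not code from the PR):

```julia
using ForwardDiff

T = typeof(ForwardDiff.Dual(0.0, 1.0))   # a dual number: value 0.0 with one partial derivative
bgcell_to_dV = zeros(T, 4)               # allocate with the (possibly dual) coordinate scalar type
bgcell_to_dV[1] += ForwardDiff.Dual(0.25, 0.1)
# A plain zeros(4)::Vector{Float64} could not have stored the derivative part of the entries.
```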
23 changes: 19 additions & 4 deletions src/Polynomials/MonomialBases.jl
@@ -119,12 +119,21 @@ return_type(::MonomialBasis{D,T}) where {D,T} = T

 function return_cache(f::MonomialBasis{D,T},x::AbstractVector{<:Point}) where {D,T}
   @assert D == length(eltype(x)) "Incorrect number of point components"
+
+  point_primitive_type = eltype(eltype(x))
+
+  if isempty(T.parameters)
+    Tp = point_primitive_type
+  else
+    Tp = VectorValue{T.parameters[1],point_primitive_type}
+  end
+
   np = length(x)
   ndof = length(f.terms)*num_components(T)
   n = 1 + _maximum(f.orders)
-  r = CachedArray(zeros(T,(np,ndof)))
-  v = CachedArray(zeros(T,(ndof,)))
-  c = CachedArray(zeros(eltype(T),(D,n)))
+  r = CachedArray(zeros(Tp,(np,ndof)))
+  v = CachedArray(zeros(Tp,(ndof,)))
+  c = CachedArray(zeros(eltype(Tp),(D,n)))
   (r, v, c)
 end
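return_cache now sizes its cached arrays with a type Tp derived from the scalar type of the evaluation points, so that points carrying dual numbers propagate into the cache instead of being forced back to the basis value type T. A rough sketch of what Tp works out to for a vector-valued basis with ordinary Float64 points (an assumed example, not from the PR):

```julia
using Gridap.TensorValues: VectorValue

T = VectorValue{2,Float64}                           # value type of a vector-valued monomial basis
x = [VectorValue(0.0, 0.0), VectorValue(0.5, 0.5)]   # evaluation points
point_primitive_type = eltype(eltype(x))             # Float64 here; a dual number type under autodiff

if isempty(T.parameters)
  Tp = point_primitive_type                          # scalar basis: follow the point scalar type
else
  Tp = VectorValue{T.parameters[1], point_primitive_type}
end
# Tp == VectorValue{2,Float64} here; with dual-number points it would embed the dual type.
```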

@@ -380,7 +389,13 @@ function _gradient_nd!(
     _gradient_1d!(g,x,orders[d],d)
   end

-  z = zero(Mutable(VectorValue{D,T}))
+  #z = zero(Mutable(VectorValue{D,T}))
+  if isbitstype(T)
+    z = zero(Mutable(VectorValue{D,T}))
+  else
+    z = zeros(T,D)
+  end
+
   o = one(T)
   k = 1

Review thread attached to the "if isbitstype(T)" line:

Member: Can you check whether z is inferable, e.g. with @code_warntype? I would say that this breaks the type-stability of this function, unless the compiler is smart enough to take advantage of the if statement. We want to preserve type-stability here; this is a performance-critical point.

Author: I have done

InteractiveUtils.@code_warntype Gridap.Polynomials._gradient_nd!(v,xi,orders,terms,c,g,Float64)

and there is no red, so I think it is fine. Is that the right way to go about it?

Member: Thanks for running the test. Which were the types of the parameters in the call you tried? I mean the types of v, xi, orders, terms, c, g.

Author: I used the types of the parameters that were used when that function was called in the MWE here. Namely:

Tv=Gridap.Arrays.CachedVector{Gridap.TensorValues.VectorValue{2, Float64}, Vector{Gridap.TensorValues.VectorValue{2, Float64}}}
Txi=Gridap.TensorValues.VectorValue{2, Float64}
Torders=Tuple{Int64, Int64}
Tterms=Vector{CartesianIndex{2}}
Tc=Gridap.Arrays.CachedMatrix{Float64, Matrix{Float64}}
Tg=Gridap.Arrays.CachedMatrix{Float64, Matrix{Float64}}

Member: OK, good call. Thus, in your example call isbitstype(T) is going to be true, right? As T = Float64, and isbitstype(Float64) is true. Have you tried what happens with type stability when isbitstype(T) is false? On the other hand, why do you need this if/else construct? (For my understanding.)

Member: By the way, I forgot to mention that the VSCode extension for Julia also has some support for integrated profiling: https://discourse.julialang.org/t/julia-vs-code-extension-version-v0-17-released/42865

Author:
> For my understanding. Did you identify any difference between your manual call to the function and the one that you triggered from the debugger in the particular context in which the function is called? I would like to know if there is any cause for the difference.

The difference is that I made an error when trying to check the function manually (I did not import all the functions required in my scope). With that amended, there is now no red even when checking it that way.

Member: OK, perfect. We are on the safe side here, then (except for the dynamic memory allocation issue).

Author:
> Thus, the only thing that comes to my mind to fix this is to return a regular dynamically allocated array, as you are doing now. The only thing is that I would try to avoid allocating the array on each call to _gradient_nd!, for performance reasons. You could maybe have two different implementations of _gradient_nd!, one for isbits types and another for non-isbits types. The latter would take an additional argument with a work array that is allocated once by the calling process. This may involve propagating this into the cache of FieldGradientArray. These are just thoughts; the idea has to be refined further.

I have made an attempt at this in the most recent commit. Is this the right idea?

Member:
> I have made an attempt at this in the most recent commit. Is this the right idea?

Yes, it is the right idea. Well done. There are still some comments to be addressed; I have put them along with the corresponding LOC.
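The refinement agreed on in the thread, two code paths where the non-isbits one reuses a work array allocated once by the caller (for example from within the evaluation cache), can be sketched as follows. This is a generic illustration of the pattern, not the actual Gridap implementation; the names _scratch and _kernel! are made up for the example:

```julia
# Pattern sketch: the non-isbits path reuses a work array allocated once by the
# caller, so nothing is allocated inside the hot loop.
_scratch(::Type{T}, D) where {T} = zeros(T, D)   # created once, outside the hot loop

function _kernel!(out, xs, ::Type{T}, work) where {T}
  if isbitstype(T)
    z = ntuple(i -> zero(T), Val(3))  # isbits: cheap, stack-friendly scratch per call
  else
    fill!(work, zero(T))              # non-isbits: reuse the preallocated array
    z = work
  end
  for (i, x) in enumerate(xs)
    out[i] = x + sum(z)               # placeholder for the real gradient computation
  end
  out
end

T = BigFloat                          # a non-isbits scalar type
work = _scratch(T, 3)                 # allocated once
out = zeros(T, 4)
_kernel!(out, T[1, 2, 3, 4], T, work)
```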