Refactor/tensor values #239

Merged · 41 commits · Jun 10, 2020
Commits
03f6193
First stage of TensorValues refactoring #210
victorsndvg Mar 25, 2020
182d80e
Fix pending adaptation from previous commit.
victorsndvg Mar 26, 2020
a8b6557
Fix to override get_array from TensorValues
victorsndvg Mar 26, 2020
215ccfa
Last fix to make initial TensorValues refactoring work
victorsndvg Mar 27, 2020
8f64acd
Enhanced TensorValues types signatures #210
victorsndvg Mar 27, 2020
6843699
TensorValues code cleaning (ongoing)
victorsndvg Mar 30, 2020
f8703f2
Disambiguate Vararg function arguments
victorsndvg Mar 31, 2020
9325b15
fix previous commit: add extra constructor for TensorValues (dnt unde…
victorsndvg Mar 31, 2020
70ec735
fix last commit for julia 1.3.1
victorsndvg Apr 1, 2020
ed98f87
Prettify and add get_index from CartesianIndex
victorsndvg Apr 2, 2020
b2e59e0
Group code by Type. Start SymTensorValue implementation. Tests pending
victorsndvg Apr 2, 2020
b68e7ed
SymTensorValue implementation almost complete. #210
victorsndvg Apr 7, 2020
be0775c
SymTensorValue operations implemented #210
victorsndvg Apr 7, 2020
3b91585
Minor fixes in SymTensorValueType
victorsndvg Apr 14, 2020
88d52ad
Initial work with SymFourthOrderTensorValue type
victorsndvg Apr 14, 2020
2fbfa0a
Complete constructors and add indexing features for SymFourthOrderTen…
victorsndvg Apr 15, 2020
939c1c9
Add basic +,- operations for SymFourthOrderTensorValues. Add some tests.
victorsndvg Apr 15, 2020
43f8cb2
Merge branch 'master' of https://github.com/gridap/Gridap.jl into ref…
victorsndvg Apr 15, 2020
8dc4ba4
Fix typo
victorsndvg Apr 16, 2020
739b40a
Improve TensorValue types according to @fverdugo's comments.
victorsndvg Apr 20, 2020
4302358
Adapt LinearIndex access for MultiValue as @fverdugo requests
victorsndvg Apr 21, 2020
9f3eec4
Fix eachindex function and tests for multivalues indexing
victorsndvg Apr 21, 2020
739077f
Rename files according to @fverdugo's comments
victorsndvg Apr 21, 2020
4f2cc69
Add changes discussed in latest meeting. Ready to PR
victorsndvg Apr 23, 2020
786ebe5
Merge branch 'master' of github.com:gridap/Gridap.jl into refactor/Te…
fverdugo May 22, 2020
1370375
Make refactoring of tensor values work
fverdugo May 22, 2020
fc5f3d5
Enhancements in the TensorValues constructors
fverdugo May 22, 2020
90d4eed
More enhancements in TensorValues
fverdugo May 22, 2020
3669dc4
Merge branch 'master' of github.com:gridap/Gridap.jl into refactor/Te…
fverdugo Jun 8, 2020
0716a6d
Saving unfinished changes
fverdugo Jun 8, 2020
e484f61
Some fixes
fverdugo Jun 9, 2020
62e053c
More efficient implementation of inverse
fverdugo Jun 9, 2020
870710c
Some improvements
fverdugo Jun 9, 2020
6910034
Removing src/TensorValues/Misc.jl
fverdugo Jun 9, 2020
ab78343
More work on symmetric 2nd and 4th order tensors
fverdugo Jun 9, 2020
7fd0766
Deprecate * for single contraction and n_components
fverdugo Jun 9, 2020
98b245a
Minor bugfix
fverdugo Jun 10, 2020
101cb88
Ironing some rough corners
fverdugo Jun 10, 2020
01ffe8c
Merge branch 'master' of github.com:gridap/Gridap.jl into refactor/Te…
fverdugo Jun 10, 2020
85bfcdf
Update NEWS.md
fverdugo Jun 10, 2020
a2d662b
Moving to v0.11.0
fverdugo Jun 10, 2020
19 changes: 19 additions & 0 deletions NEWS.md
@@ -4,6 +4,25 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

### Added

- Added `⊙` (\odot) as an alias of `inner`. Since PR [#239](https://github.com/gridap/Gridap.jl/pull/239).
- Added `⊗` (\otimes) as an alias of `outer`. Since PR [#239](https://github.com/gridap/Gridap.jl/pull/239).

### Changed

- Major refactoring in the module `Gridap.TensorValues`.
Since PR [#239](https://github.com/gridap/Gridap.jl/pull/239).
**The following changes are likely to affect all users:**
- The operator `*` is no longer allowed for expressing the dot product. Use the `LinearAlgebra.dot`
function, aka `⋅` (\cdot), instead.
- The syntax `∇*u` is not allowed anymore. Use `∇⋅u` instead.
- Gridap re-exports `dot`, `⋅`, and other names from LinearAlgebra that are often used
in Gridap code.
- Function `n_components` is renamed to `num_components`.

## [0.10.4] - 2020-06-08

### Added
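Taken together, the entries above amount to the following migration, shown here as a minimal sketch (variable names and values are illustrative, not taken from the PR):

```julia
using Gridap  # since #239, Gridap re-exports dot, ⋅, norm, det, ... from LinearAlgebra

a = VectorValue(1.0, 2.0)
b = VectorValue(3.0, 4.0)

# v0.10: a*b                v0.11: a⋅b, or dot(a,b)
c = a ⋅ b

# v0.10: n_components(a)    v0.11: num_components(a)
n = num_components(a)

# New aliases introduced by this PR
t = a ⊗ b   # same as outer(a,b)
s = t ⊙ t   # same as inner(t,t)
```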
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,7 +1,7 @@
name = "Gridap"
uuid = "56d4f2e9-7ea1-5844-9cf6-b9c51ca7ce8e"
authors = ["Santiago Badia <santiago.badia@monash.edu>", "Francesc Verdugo <fverdugo@cimne.upc.edu>"]
version = "0.10.4"
version = "0.11.0"

[deps]
AbstractTrees = "1520ce14-60c1-5f80-bbc7-55ef81b5835c"
7 changes: 7 additions & 0 deletions src/Exports.jl
@@ -5,6 +5,10 @@ macro publish(mod,name)
end
end

# Reexport from LinearAlgebra (just for convenience)
using LinearAlgebra: det, inv, tr, cross, dot, norm, ×, ⋅
export det, inv, tr, cross, dot, norm, ×, ⋅

@publish Helpers operate
@publish Helpers GridapType

@@ -33,6 +37,9 @@ end
@publish TensorValues inner
@publish TensorValues outer
@publish TensorValues diagonal_tensor
@publish TensorValues num_components
using Gridap.TensorValues: ⊙; export ⊙
using Gridap.TensorValues: ⊗; export ⊗

@publish Fields gradient
@publish Fields ∇
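For reference, each `@publish Mod name` line above expands to a `using`/`export` pair; a sketch, inferred from the explicit pattern used for `⊙` and `⊗`:

```julia
# `@publish TensorValues num_components` behaves like:
using Gridap.TensorValues: num_components
export num_components
```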
4 changes: 2 additions & 2 deletions src/FESpaces/CLagrangianFESpaces.jl
@@ -138,7 +138,7 @@ function _generate_dof_layout_component_major(::Type{<:Real},nnodes::Integer)
end

function _generate_dof_layout_component_major(::Type{T},nnodes::Integer) where T
ncomps = n_components(T)
ncomps = num_components(T)
V = change_eltype(T,Int)
ndofs = ncomps*nnodes
dof_to_comp = zeros(Int8,ndofs)
Expand Down Expand Up @@ -175,7 +175,7 @@ function _generate_cell_dofs_clagrangian_fespace(
cell_to_ctype,
node_and_comp_to_dof) where T

ncomps = n_components(T)
ncomps = num_components(T)

ctype_to_lnode_to_comp_to_ldof = map(get_node_and_comp_to_dof,reffes)
ctype_to_num_ldofs = map(num_dofs,reffes)
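Both hunks above switch `n_components` to `num_components`; a small sketch of the dof count they compute (the types and sizes are illustrative):

```julia
using Gridap

T = VectorValue{2,Float64}    # value type of a vector-valued FE space
nnodes = 3                    # illustrative node count
ncomps = num_components(T)    # 2 components per node
ndofs  = ncomps * nnodes      # 6 dofs, as in _generate_dof_layout_component_major
```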
2 changes: 1 addition & 1 deletion src/FESpaces/ExtendedFESpaces.jl
@@ -175,7 +175,7 @@ function get_cell_basis(f::ExtendedFESpace)
vi = testitem(cell_to_val)
Tv = field_return_type(vi,xi)
T = eltype(Tv)
D = n_components(eltype(xi))
D = num_components(eltype(xi))
void_to_val = Fill(VoidBasis{T,D}(),length(f.trian.void_to_oldcell))

array = ExtendedVector(
8 changes: 4 additions & 4 deletions src/Fields/AffineMaps.jl
@@ -2,9 +2,9 @@
"""
"""
struct AffineMap{D,T,L} <:Field
jacobian::TensorValue{D,T,L}
jacobian::TensorValue{D,D,T,L}
origin::Point{D,T}
function AffineMap(jacobian::TensorValue{D,T,L}, origin::Point{D,T}) where {D,T,L}
function AffineMap(jacobian::TensorValue{D,D,T,L}, origin::Point{D,T}) where {D,T,L}
new{D,T,L}(jacobian,origin)
end
end
@@ -28,11 +28,11 @@ end
function _apply_affine_map(h,x)
t = h.origin
s = h.jacobian
(s*x)+t
(s⋅x)+t
end

struct AffineMapGrad{D,T,L} <: Field
jacobian::TensorValue{D,T,L}
jacobian::TensorValue{D,D,T,L}
end

function field_gradient(h::AffineMap)
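`TensorValue` is now parameterized by both dimensions, so a Jacobian has type `TensorValue{D,D,T,L}`. A rough sketch of the computation `_apply_affine_map` performs, built directly from exported constructors (values are illustrative):

```julia
using Gridap

J = TensorValue(2.0, 0.0, 0.0, 1.0)  # 2×2 Jacobian, i.e. TensorValue{2,2,Float64,4}
x = VectorValue(0.5, 0.5)
t = VectorValue(1.0, 1.0)            # origin of the map

y = J ⋅ x + t                        # the affine map x ↦ J⋅x + t
```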
4 changes: 2 additions & 2 deletions src/Fields/Attachmap.jl
@@ -32,7 +32,7 @@ function kernel_cache(k::PhysGrad,a,b)
_attachmap_checks(a,b)
Ta = eltype(a)
Tb = eltype(b)
T = return_type(*,return_type(inv,Tb),Ta)
T = return_type(⋅,return_type(inv,Tb),Ta)
r = zeros(T,size(a))
CachedArray(r)
end
@@ -53,7 +53,7 @@ end
for p in 1:np
@inbounds jacinv = inv(b[p])
for i in 1:ni
@inbounds c[p,i] = jacinv * a[p,i]
@inbounds c[p,i] = jacinv ⋅ a[p,i]
end
end
c
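A hypothetical standalone version of the kernel loop above, to make its intent explicit: reference gradients `a[p,i]` are pushed to physical space with the inverse cell Jacobian `b[p]` (a sketch, not the PR's API):

```julia
using Gridap

# Sketch: c[p,i] = inv(b[p]) ⋅ a[p,i] for each point p and basis function i
function push_gradients(a::Matrix{<:VectorValue}, b::Vector{<:TensorValue})
  c = similar(a)
  np, ni = size(a)
  for p in 1:np
    jacinv = inv(b[p])
    for i in 1:ni
      c[p, i] = jacinv ⋅ a[p, i]
    end
  end
  c
end
```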
30 changes: 20 additions & 10 deletions src/Fields/DiffOperators.jl
@@ -49,14 +49,24 @@ function laplacian(f)
end

"""
∇*f
∇⋅f

Equivalent to

divergence(f)
"""
(*)(::typeof(∇),f) = divergence(f)
(*)(::typeof(∇),f::GridapType) = divergence(f)
dot(::typeof(∇),f) = divergence(f)
dot(::typeof(∇),f::GridapType) = divergence(f)

function (*)(::typeof(∇),f)
msg = "Syntax ∇*f has been removed, use ∇⋅f (\\nabla \\cdot f) instead"
error(msg)
end

function (*)(::typeof(∇),f::GridapType)
msg = "Syntax ∇*f has been removed, use ∇⋅f (\\nabla \\cdot f) instead"
error(msg)
end

"""
outer(∇,f)
@@ -118,23 +128,23 @@ function gradient(f::Function)
end

function _grad_f(f,x,fx)
VectorValue(ForwardDiff.gradient(f,x.array))
VectorValue(ForwardDiff.gradient(f,get_array(x)))
end

function _grad_f(f,x,fx::VectorValue)
TensorValue(transpose(ForwardDiff.jacobian(y->f(y).array,x.array)))
TensorValue(transpose(ForwardDiff.jacobian(y->get_array(f(y)),get_array(x))))
end

function _grad_f(f,x,fx::MultiValue)
@notimplemented
end

function divergence(f::Function)
x -> tr(ForwardDiff.jacobian(y->f(y).array,x.array))
x -> tr(ForwardDiff.jacobian(y->get_array(f(y)),get_array(x)))
end

function curl(f::Function)
x -> grad2curl(TensorValue(transpose(ForwardDiff.jacobian(y->f(y).array,x.array))))
x -> grad2curl(TensorValue(transpose(ForwardDiff.jacobian(y->get_array(f(y)),get_array(x)))))
end

function laplacian(f::Function)
@@ -144,14 +154,14 @@ function laplacian(f::Function)
end

function _lapl_f(f,x,fx)
tr(ForwardDiff.jacobian(y->ForwardDiff.gradient(f,y), x.array))
tr(ForwardDiff.jacobian(y->ForwardDiff.gradient(f,y), get_array(x)))
end

function _lapl_f(f,x,fx::VectorValue)
A = length(x)
B = length(fx)
a = ForwardDiff.jacobian(y->transpose(ForwardDiff.jacobian(z->f(z).array,y)), x.array)
tr(MultiValue{Tuple{A,A,B}}(Tuple(transpose(a))))
a = ForwardDiff.jacobian(y->transpose(ForwardDiff.jacobian(z->get_array(f(z)),y)), get_array(x))
tr(ThirdOrderTensorValue{A,A,B}(Tuple(transpose(a))))
end

function _lapl_f(f,x,fx::MultiValue)
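A usage sketch of the new operator syntax and the ForwardDiff-based differential operators (the function `u` is illustrative):

```julia
using Gridap

u(x) = VectorValue(x[1]^2, x[2]^2)

divu = divergence(u)          # x -> tr of the Jacobian of u
divu(VectorValue(1.0, 2.0))   # 2*1.0 + 2*2.0 == 6.0

∇⋅u    # equivalent, via dot(::typeof(∇), f)
# ∇*u  # now raises: "Syntax ∇*f has been removed, use ∇⋅f (\nabla \cdot f) instead"
```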
2 changes: 2 additions & 0 deletions src/Fields/Fields.jl
@@ -19,6 +19,7 @@ using Gridap.Arrays: BCasted
using Gridap.Arrays: NumberOrArray
using Gridap.Arrays: AppliedArray
using Gridap.Arrays: Contracted
using LinearAlgebra: ⋅

using Test
using DocStringExtensions
@@ -81,6 +82,7 @@ import Gridap.TensorValues: symmetric_part
import Base: +, - , *
import LinearAlgebra: cross
import LinearAlgebra: tr
import LinearAlgebra: dot
import Base: transpose
import Base: adjoint

4 changes: 2 additions & 2 deletions src/Geometry/GenericBoundaryTriangulations.jl
@@ -416,8 +416,8 @@ function kernel_evaluate(k::NormalVectorValued,x,J,refn)
apply(k,Jx,refn)
end

function _map_normal(J::TensorValue{D,T},n::VectorValue{D,T}) where {D,T}
v = inv(J)*n
function _map_normal(J::TensorValue{D,D,T},n::VectorValue{D,T}) where {D,T}
v = inv(J)⋅n
m = sqrt(inner(v,v))
if m < eps()
return zero(n)
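The computation in `_map_normal` can be sketched standalone (illustrative values):

```julia
using Gridap

J  = TensorValue(2.0, 0.0, 0.0, 1.0)  # cell Jacobian
n̂  = VectorValue(1.0, 0.0)            # reference-face normal

v = inv(J) ⋅ n̂
m = sqrt(inner(v, v))
n = m < eps() ? zero(n̂) : v / m       # unit physical normal, zero for degenerate cells
```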
1 change: 1 addition & 0 deletions src/Geometry/Geometry.jl
@@ -8,6 +8,7 @@ module Geometry
using Test
using DocStringExtensions
using FillArrays
using LinearAlgebra: ⋅

using Gridap.Helpers
using Gridap.Arrays
4 changes: 2 additions & 2 deletions src/Gridap.jl
@@ -36,10 +36,10 @@ include("Io/Io.jl")

include("Algebra/Algebra.jl")

include("TensorValues/TensorValues.jl")

include("Arrays/Arrays.jl")

include("TensorValues/TensorValues.jl")

include("Fields/Fields.jl")

include("Polynomials/Polynomials.jl")
14 changes: 7 additions & 7 deletions src/Polynomials/MonomialBases.jl
@@ -116,7 +116,7 @@ get_value_type(::Type{MonomialBasis{D,T}}) where {D,T} = T
function field_cache(f::MonomialBasis{D,T},x) where {D,T}
@assert D == length(eltype(x)) "Incorrect number of point components"
np = length(x)
ndof = length(f.terms)*n_components(T)
ndof = length(f.terms)*num_components(T)
n = 1 + _maximum(f.orders)
r = CachedArray(zeros(T,(np,ndof)))
v = CachedArray(zeros(T,(ndof,)))
@@ -127,7 +127,7 @@ end
function evaluate_field!(cache,f::MonomialBasis{D,T},x) where {D,T}
r, v, c = cache
np = length(x)
ndof = length(f.terms)*n_components(T)
ndof = length(f.terms)*num_components(T)
n = 1 + _maximum(f.orders)
setsize!(r,(np,ndof))
setsize!(v,(ndof,))
@@ -145,7 +145,7 @@ end
function gradient_cache(f::MonomialBasis{D,V},x) where {D,V}
@assert D == length(eltype(x)) "Incorrect number of point components"
np = length(x)
ndof = length(f.terms)*n_components(V)
ndof = length(f.terms)*num_components(V)
xi = testitem(x)
T = gradient_type(V,xi)
n = 1 + _maximum(f.orders)
@@ -159,7 +159,7 @@ end
function evaluate_gradient!(cache,f::MonomialBasis{D,T},x) where {D,T}
r, v, c, g = cache
np = length(x)
ndof = length(f.terms) * n_components(T)
ndof = length(f.terms) * num_components(T)
n = 1 + _maximum(f.orders)
setsize!(r,(np,ndof))
setsize!(v,(ndof,))
@@ -178,7 +178,7 @@ end
function hessian_cache(f::MonomialBasis{D,V},x) where {D,V}
@assert D == length(eltype(x)) "Incorrect number of point components"
np = length(x)
ndof = length(f.terms)*n_components(V)
ndof = length(f.terms)*num_components(V)
xi = testitem(x)
T = gradient_type(gradient_type(V,xi),xi)
n = 1 + _maximum(f.orders)
@@ -193,7 +193,7 @@ end
function evaluate_hessian!(cache,f::MonomialBasis{D,T},x) where {D,T}
r, v, c, g, h = cache
np = length(x)
ndof = length(f.terms) * n_components(T)
ndof = length(f.terms) * num_components(T)
n = 1 + _maximum(f.orders)
setsize!(r,(np,ndof))
setsize!(v,(ndof,))
@@ -386,7 +386,7 @@ function _hessian_nd!(
_hessian_1d!(h,x,orders[d],d)
end

z = zero(mutable(TensorValue{D,T,D*D}))
z = zero(mutable(TensorValue{D,D,T}))
o = one(T)
k = 1

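The dof count `ndof = length(f.terms) * num_components(T)` recurs throughout this file; a small sketch of what it evaluates to (the `MonomialBasis{D}(T, order)` constructor form is an assumption about the Gridap API, not shown in this diff):

```julia
using Gridap
using Gridap.Polynomials

# Assumed constructor: order-1, vector-valued monomial basis in 2D
b = MonomialBasis{2}(VectorValue{2,Float64}, 1)

# 4 scalar terms (1, x, y, x*y) times 2 components = 8 dofs
ndof = length(b.terms) * num_components(VectorValue{2,Float64})
```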