Merge branch 'master' into feature/eigsolve
mofeing authored Jul 7, 2024
2 parents 3c7bf85 + 5a09e3d commit be81daa
Showing 34 changed files with 2,594 additions and 11 deletions.
6 changes: 6 additions & 0 deletions Project.toml
@@ -28,6 +28,8 @@ Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
KrylovKit = "0b1a1467-8014-51b9-945f-bf0ae24f4b77"
Makie = "ee78f7c6-11fb-53f2-987a-cfe4a2b5a57a"
Reactant = "3c362404-f566-11ee-1572-e11a4b42c853"
Quac = "b9105292-1415-45cf-bff1-d6ccf71e6143"
Yao = "5872b779-8223-5990-8dd0-5abbb0748c8c"

[extensions]
TenetAdaptExt = "Adapt"
@@ -39,6 +41,8 @@ TenetFiniteDifferencesExt = "FiniteDifferences"
TenetGraphMakieExt = ["GraphMakie", "Makie"]
TenetKrylovKitExt = ["KrylovKit"]
TenetReactantExt = "Reactant"
TenetQuacExt = "Quac"
TenetYaoExt = "Yao"

[compat]
AbstractTrees = "0.4"
@@ -58,9 +62,11 @@ LinearAlgebra = "1.9"
Makie = "0.18, 0.19, 0.20, 0.21"
Muscle = "0.2"
OMEinsum = "0.7, 0.8"
Quac = "0.3"
Random = "1.9"
Reactant = "0.1"
ScopedValues = "1"
SparseArrays = "1.9"
UUIDs = "1.9"
Yao = "0.8, 0.9"
julia = "1.9"
12 changes: 2 additions & 10 deletions README.md
@@ -7,9 +7,6 @@
[![Documentation: stable](https://img.shields.io/badge/docs-stable-blue.svg)](https://bsc-quantic.github.io/Tenet.jl/)
[![Documentation: dev](https://img.shields.io/badge/docs-dev-blue.svg)](https://bsc-quantic.github.io/Tenet.jl/dev/)

> [!IMPORTANT]
> The code for quantum tensor networks has been moved to the new [`Qrochet`](https://github.com/bsc-quantic/Qrochet.jl) library.
A Julia library for **Ten**sor **Net**works. `Tenet` can run both in local environments and on large supercomputers. Its goals are:

- **Expressiveness** _Simple to use._ 👶
@@ -22,14 +19,9 @@ A Julia library for **Ten**sor **Net**works. `Tenet` can be executed both at loc
- Tensor Network slicing/cuttings
- Automatic Differentiation of TN contraction
- Distributed contraction
- Local Tensor Network transformations
- Hyperindex converter
- Rank simplification
- Diagonal reduction
- Anti-diagonal gauging
- Column reduction
- Split simplification
- Local Tensor Network transformations/simplifications
- 2D & 3D visualization of large networks, powered by [`Makie`](https://github.com/MakieOrg/Makie.jl)
- Quantum Tensor Networks
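
A transformation from the feature list above is applied in place with `transform!`, as the example scripts in this repository do. A minimal sketch (the random-network constructor arguments here are illustrative, not taken from this diff):

```julia
using Tenet

# build a random tensor network (10 tensors, regularity 3 — hypothetical parameters)
tn = rand(TensorNetwork, 10, 3)

# simplify it in place by eagerly contracting trivial pairs of tensors
transform!(tn, Tenet.ContractSimplification())
```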

## Preview

4 changes: 4 additions & 0 deletions docs/make.jl
@@ -24,6 +24,10 @@ makedocs(;
        "Tensor Networks" => "tensor-network.md",
        "Contraction" => "contraction.md",
        "Transformations" => "transformations.md",
        "Quantum" => [
            "Introduction" => "quantum.md",
            "Ansatzes" => ["`Product` ansatz" => "ansatz/product.md", "`Chain` ansatz" => "ansatz/chain.md"],
        ],
        "Visualization" => "visualization.md",
        "Alternatives" => "alternatives.md",
        "References" => "references.md",
58 changes: 58 additions & 0 deletions docs/src/ansatz/chain.md
@@ -0,0 +1,58 @@
# Matrix Product States (MPS)

Matrix Product States (MPS) are a Quantum Tensor Network ansatz whose tensors are laid out in a 1D chain.
Because of this layout, these networks are also known as _Tensor Trains_ in other fields of mathematics.
Depending on the boundary conditions, the chain can be open or closed (i.e. have periodic boundary conditions).

```@setup viz
using Makie
Makie.inline!(true)
set_theme!(resolution=(800,200))
using CairoMakie
using Tenet
using NetworkLayout
```

```@example viz
fig = Figure() # hide
tn_open = rand(MatrixProduct{State,Open}, n=10, χ=4) # hide
tn_periodic = rand(MatrixProduct{State,Periodic}, n=10, χ=4) # hide
plot!(fig[1,1], tn_open, layout=Spring(iterations=1000, C=0.5, seed=100)) # hide
plot!(fig[1,2], tn_periodic, layout=Spring(iterations=1000, C=0.5, seed=100)) # hide
Label(fig[1,1, Bottom()], "Open") # hide
Label(fig[1,2, Bottom()], "Periodic") # hide
fig # hide
```

## Matrix Product Operators (MPO)

Matrix Product Operators (MPO) are the operator version of [Matrix Product States (MPS)](#matrix-product-states-mps).
The major difference between them is that MPOs have 2 indices per site (1 input and 1 output), while MPSs have only 1 index per site (an output).

```@example viz
fig = Figure() # hide
tn_open = rand(MatrixProduct{Operator,Open}, n=10, χ=4) # hide
tn_periodic = rand(MatrixProduct{Operator,Periodic}, n=10, χ=4) # hide
plot!(fig[1,1], tn_open, layout=Spring(iterations=1000, C=0.5, seed=100)) # hide
plot!(fig[1,2], tn_periodic, layout=Spring(iterations=1000, C=0.5, seed=100)) # hide
Label(fig[1,1, Bottom()], "Open") # hide
Label(fig[1,2, Bottom()], "Periodic") # hide
fig # hide
```

In `Tenet`, the generic `MatrixProduct` ansatz implements this topology. Type parameters select the functionality (`State` or `Operator`) and the boundary conditions (`Open` or `Periodic`).

```@docs
MatrixProduct
MatrixProduct(::Any)
```
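
As a quick sketch of the constructors described above (the calls mirror the visualization examples on this page):

```julia
using Tenet

# random open-boundary MPS with 10 sites and bond dimension χ = 4
mps = rand(MatrixProduct{State,Open}, n=10, χ=4)

# random periodic-boundary MPO over the same number of sites
mpo = rand(MatrixProduct{Operator,Periodic}, n=10, χ=4)
```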
1 change: 1 addition & 0 deletions docs/src/ansatz/product.md
@@ -0,0 +1 @@
# `Product` ansatz
39 changes: 39 additions & 0 deletions docs/src/quantum.md
@@ -0,0 +1,39 @@
# `Quantum` Tensor Networks

```@docs
Quantum
Tenet.TensorNetwork(::Quantum)
Base.adjoint(::Quantum)
sites
nsites
```

## Queries

```@docs
Tenet.inds(::Quantum; kwargs...)
Tenet.tensors(::Quantum; kwargs...)
```

## Connecting `Quantum` Tensor Networks

```@docs
inputs
outputs
lanes
ninputs
noutputs
nlanes
```

```@docs
Socket
socket(::Quantum)
Scalar
State
Operator
```

```@docs
Base.merge(::Quantum, ::Quantum...)
```
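
The example scripts in this repository use `merge` to combine `Quantum` networks into a ⟨ψ|H|ψ⟩ sandwich. A sketch based on `examples/dagger.jl`:

```julia
using Tenet
using Yao: Yao

# a random circuit as the operator, and a product state as input/output
circuit = Yao.EasyBuild.rand_google53(10)
H = Quantum(circuit)
ψ = Product(fill([1, 0], Yao.nqubits(circuit)))

# merge input state, operator, and conjugated output state into one network
qtn = merge(Quantum(ψ), H, Quantum(ψ)')
tn = Tenet.TensorNetwork(qtn)
```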
9 changes: 9 additions & 0 deletions examples/Project.toml
@@ -4,8 +4,17 @@ Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
Chairmarks = "0ca39b1e-fe0b-4e98-acfc-b1656634c4de"
ClusterManagers = "34f1f09b-3a8b-5176-ab39-66d58a4d544e"
Dagger = "d58978e5-989f-55fb-8d15-ea34adc7bf54"
Distributed = "8ba89e20-285c-5b6f-9357-94700520ee1b"
EinExprs = "b1794770-133b-4de1-afb4-526377e9f4c5"
Enzyme = "7da242da-08ed-463a-9acd-ee780be4f1d9"
IterTools = "c8e1da08-722c-5040-9ed9-7db0dc04731e"
KaHyPar = "2a6221f6-aa48-11e9-3542-2d9e0ef01880"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
ProgressMeter = "92933f4c-e287-5a05-a399-4b506db050ca"
Reactant = "3c362404-f566-11ee-1572-e11a4b42c853"
Revise = "295af30f-e4ad-537b-8983-00126c2a3abe"
Tenet = "85d41934-b9cd-44e1-8730-56d86f15f3ec"
TimespanLogging = "a526e669-04d3-4846-9525-c66122c55f63"
Yao = "5872b779-8223-5990-8dd0-5abbb0748c8c"
70 changes: 70 additions & 0 deletions examples/dagger.jl
@@ -0,0 +1,70 @@
using Tenet
using Yao: Yao
using EinExprs
using AbstractTrees
using Distributed
using Dagger
using TimespanLogging
using KaHyPar

m = 10
circuit = Yao.EasyBuild.rand_google53(m);
H = Quantum(circuit)
ψ = Product(fill([1, 0], Yao.nqubits(circuit)))
qtn = merge(Quantum(ψ), H, Quantum(ψ)')
tn = Tenet.TensorNetwork(qtn)

contract_smaller_dims = 20
target_size = 24

Tenet.transform!(tn, Tenet.ContractSimplification())
path = einexpr(
    tn;
    optimizer=HyPar(;
        parts=2,
        imbalance=0.41,
        edge_scaler=(ind_size) -> 10 * Int(round(log2(ind_size))),
        vertex_scaler=(prod_size) -> 100 * Int(round(exp2(prod_size))),
    ),
);

max_dims_path = @show maximum(ndims, Branches(path))
flops_path = @show mapreduce(flops, +, Branches(path))
@show log10(flops_path)

grouppath = deepcopy(path);
function recursiveforeach!(f, expr)
    f(expr)
    return foreach(arg -> recursiveforeach!(f, arg), args(expr))
end
sizedict = merge(Iterators.map(i -> i.size, Leaves(path))...);
recursiveforeach!(grouppath) do expr
    merge!(expr.size, sizedict)
    if all(<(contract_smaller_dims) ∘ ndims, expr.args)
        empty!(expr.args)
    end
end

max_dims_grouppath = maximum(ndims, Branches(grouppath))
flops_grouppath = mapreduce(flops, +, Branches(grouppath))
targetinds = findslices(SizeScorer(), grouppath; size=2^(target_size));

subexprs = map(Leaves(grouppath)) do expr
    only(EinExprs.select(path, tuple(head(expr)...)))
end

addprocs(3)
@everywhere using Dagger, Tenet

disttn = Tenet.TensorNetwork(
    map(subexprs) do subexpr
        Tensor(
            distribute( # data
                parent(Tenet.contract(tn; path=subexpr)),
                Blocks([i ∈ targetinds ? 1 : 2 for i in head(subexpr)]...),
            ),
            head(subexpr), # inds
        )
    end,
)
@show Tenet.contract(disttn; path=grouppath)
102 changes: 102 additions & 0 deletions examples/distributed.jl
@@ -0,0 +1,102 @@
using Yao: Yao
using Tenet
using EinExprs
using KaHyPar
using Random
using Distributed
using ClusterManagers
using AbstractTrees

n = 64
depth = 6

circuit = Yao.chain(n)

for _ in 1:depth
    perm = randperm(n)

    for (i, j) in Iterators.partition(perm, 2)
        push!(circuit, Yao.put((i, j) => Yao.EasyBuild.FSimGate(2π * rand(), 2π * rand())))
        # push!(circuit, Yao.control(n, i, j => Yao.phase(2π * rand())))
    end
end

H = Quantum(circuit)
ψ = zeros(Product, n)

tn = TensorNetwork(merge(Quantum(ψ), H, Quantum(ψ)'))
transform!(tn, Tenet.ContractSimplification())

path = einexpr(
    tn;
    optimizer=HyPar(;
        parts=2,
        imbalance=0.41,
        edge_scaler=(ind_size) -> 10 * Int(round(log2(ind_size))),
        vertex_scaler=(prod_size) -> 100 * Int(round(exp2(prod_size))),
    ),
)

@show maximum(ndims, Branches(path))
@show maximum(length, Branches(path)) * sizeof(eltype(tn)) / 1024^3

@show log10(mapreduce(flops, +, Branches(path)))

cutinds = findslices(SizeScorer(), path; size=2^24)
cuttings = [[i => dim for dim in 1:size(tn, i)] for i in cutinds]

# mock sliced path - valid for all slices
proj_inds = first.(cuttings)
slice_path = view(path.path, proj_inds...)

expr = Tenet.codegen(Val(:outplace), slice_path)

manager = SlurmManager(2 * 112 - 1)
addprocs(manager; cpus_per_task=1, exeflags="--project=$(Base.active_project())")
# @everywhere using LinearAlgebra
# @everywhere LinearAlgebra.BLAS.set_num_threads(2)

@everywhere using Tenet, EinExprs, IterTools, LinearAlgebra, Reactant, AbstractTrees
@everywhere tn = $tn
@everywhere slice_path = $slice_path
@everywhere cuttings = $cuttings
@everywhere expr = $expr

partial_results = map(enumerate(workers())) do (i, worker)
    Distributed.@spawnat worker begin
        # interleaved chunking without instantiation
        it = takenth(Iterators.drop(Iterators.product(cuttings...), i - 1), nworkers())

        f = @eval $expr
        mock_slice = view(tn, first(it)...)
        tensors′ = [
            Tensor(
                Reactant.ConcreteRArray(copy(parent(mock_slice[head(leaf)...]))),
                inds(mock_slice[head(leaf)...]),
            ) for leaf in Leaves(slice_path)
        ]
        g = Reactant.compile(f, Tuple(tensors′))

        # local reduction of chunk
        accumulator = zero(eltype(tn))

        for proj_inds in it
            slice = view(tn, proj_inds...)
            tensors′ = [
                Tensor(
                    Reactant.ConcreteRArray(copy(parent(slice[head(leaf)...]))),
                    inds(slice[head(leaf)...]),
                ) for leaf in Leaves(slice_path)
            ]
            res = only(g(tensors′...))

            # avoid OOM due to garbage accumulation
            GC.gc()

            accumulator += res
        end

        return accumulator
    end
end

@show result = sum(Distributed.fetch.(partial_results))

rmprocs(workers())
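
The interleaved chunking in the script above (`takenth` over a dropped iterator) hands worker `i` every `nworkers()`-th slice starting at offset `i - 1`, so no worker materializes the full slice product. A minimal dependency-free sketch of the same assignment pattern (the values below are illustrative stand-ins):

```julia
slices = 1:8   # stand-in for Iterators.product(cuttings...)
nw = 3         # stand-in for nworkers()

# worker i takes slices i, i + nw, i + 2nw, ...
chunks = [[s for (k, s) in enumerate(slices) if (k - 1) % nw == i - 1] for i in 1:nw]
# chunks == [[1, 4, 7], [2, 5, 8], [3, 6]]
```

Each worker then reduces its chunk locally, and the driver sums the fetched partial results, exactly as `sum(Distributed.fetch.(partial_results))` does above.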
5 changes: 4 additions & 1 deletion ext/TenetAdaptExt.jl
@@ -4,7 +4,10 @@ using Tenet
using Adapt

Adapt.adapt_structure(to, x::Tensor) = Tensor(adapt(to, parent(x)), inds(x))

Adapt.adapt_structure(to, x::TensorNetwork) = TensorNetwork(adapt.(Ref(to), tensors(x)))

Adapt.adapt_structure(to, x::Quantum) = Quantum(adapt(to, TensorNetwork(x)), x.sites)
Adapt.adapt_structure(to, x::Product) = Product(adapt(to, Quantum(x)))
Adapt.adapt_structure(to, x::Chain) = Chain(adapt(to, Quantum(x)), boundary(x))

end
12 changes: 12 additions & 0 deletions ext/TenetChainRulesCoreExt/frules.jl
@@ -22,3 +22,15 @@ function ChainRulesCore.frule((_, ȧ, ḃ), ::typeof(contract), a::Tensor, b::T
    ċ = contract(ȧ, b; kwargs...) + contract(a, ḃ; kwargs...)
    return c, ċ
end

function ChainRulesCore.frule((_, ẋ, _), ::Type{Quantum}, x::TensorNetwork, sites)
    y = Quantum(x, sites)
    ẏ = Tangent{Quantum}(; tn=ẋ)
    return y, ẏ
end

ChainRulesCore.frule((_, ẋ), ::Type{T}, x::Quantum) where {T<:Ansatz} = T(x), Tangent{T}(; super=ẋ)

function ChainRulesCore.frule((_, ẋ, _), ::Type{T}, x::Quantum, boundary) where {T<:Ansatz}
    return T(x, boundary), Tangent{T}(; super=ẋ, boundary=NoTangent())
end