Merge branch 'master' into feature/eigsolve

Showing 34 changed files with 2,594 additions and 11 deletions.
@@ -0,0 +1,58 @@
# Matrix Product States (MPS)

Matrix Product States (MPS) are a Quantum Tensor Network ansatz whose tensors are laid out in a 1D chain.
Because of this layout, these networks are also known as _Tensor Trains_ in other fields of mathematics.
Depending on the boundary conditions, the chain can be open or closed (i.e. have periodic boundary conditions).
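
For an open chain of n sites, the state amplitudes factorize into a product of one tensor per site, while periodic boundary conditions close the chain with a trace. Schematically (a standard textbook form, independent of `Tenet`'s internal index conventions):

```math
|\psi_\text{open}\rangle = \sum_{s_1,\dots,s_n} A^{[1]}_{s_1} A^{[2]}_{s_2} \cdots A^{[n]}_{s_n} \, |s_1 \dots s_n\rangle ,
\qquad
|\psi_\text{periodic}\rangle = \sum_{s_1,\dots,s_n} \operatorname{Tr}\!\left( A^{[1]}_{s_1} \cdots A^{[n]}_{s_n} \right) |s_1 \dots s_n\rangle
```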

```@setup viz
using Makie
Makie.inline!(true)
set_theme!(resolution=(800,200))
using CairoMakie
using Tenet
using NetworkLayout
```

```@example viz
fig = Figure() # hide
tn_open = rand(MatrixProduct{State,Open}, n=10, χ=4) # hide
tn_periodic = rand(MatrixProduct{State,Periodic}, n=10, χ=4) # hide
plot!(fig[1,1], tn_open, layout=Spring(iterations=1000, C=0.5, seed=100)) # hide
plot!(fig[1,2], tn_periodic, layout=Spring(iterations=1000, C=0.5, seed=100)) # hide
Label(fig[1,1, Bottom()], "Open") # hide
Label(fig[1,2, Bottom()], "Periodic") # hide
fig # hide
```

## Matrix Product Operators (MPO)

Matrix Product Operators (MPO) are the operator version of [Matrix Product States (MPS)](#matrix-product-states-mps).
The major difference between them is that MPOs have 2 indices per site (1 input and 1 output) while MPSs only have 1 index per site (i.e. an output).
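
In the same schematic notation as above, an MPO carries one input and one output physical index on every site tensor (open-boundary case shown):

```math
\hat{O} = \sum_{s_1,\dots,s_n} \sum_{s'_1,\dots,s'_n} W^{[1]}_{s_1 s'_1} W^{[2]}_{s_2 s'_2} \cdots W^{[n]}_{s_n s'_n} \, |s_1 \dots s_n\rangle \langle s'_1 \dots s'_n|
```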

```@example viz
fig = Figure() # hide
tn_open = rand(MatrixProduct{Operator,Open}, n=10, χ=4) # hide
tn_periodic = rand(MatrixProduct{Operator,Periodic}, n=10, χ=4) # hide
plot!(fig[1,1], tn_open, layout=Spring(iterations=1000, C=0.5, seed=100)) # hide
plot!(fig[1,2], tn_periodic, layout=Spring(iterations=1000, C=0.5, seed=100)) # hide
Label(fig[1,1, Bottom()], "Open") # hide
Label(fig[1,2, Bottom()], "Periodic") # hide
fig # hide
```

In `Tenet`, the generic `MatrixProduct` ansatz implements this topology. Its type parameters select the functionality (`State` or `Operator`) and the boundary conditions (`Open` or `Periodic`).
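
For instance, the random constructors used in the figures above can be called directly with the desired combination of type parameters:

```julia
using Tenet

# random MPS with open boundaries: 10 sites, bond dimension χ = 4
mps = rand(MatrixProduct{State,Open}, n=10, χ=4)

# random MPO with periodic boundaries
mpo = rand(MatrixProduct{Operator,Periodic}, n=10, χ=4)
```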

```@docs
MatrixProduct
MatrixProduct(::Any)
```
@@ -0,0 +1 @@
# `Product` ansatz
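
As a minimal usage sketch, the example scripts added in this commit construct `Product` states in two ways (both calls are taken verbatim from those scripts):

```julia
using Tenet

n = 4
ψ = Product(fill([1, 0], n))  # product state built from explicit per-site vectors, here |0…0⟩
ϕ = zeros(Product, n)         # constructor used in the second example script for the reference state
```
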
@@ -0,0 +1,39 @@
# `Quantum` Tensor Networks

```@docs
Quantum
Tenet.TensorNetwork(::Quantum)
Base.adjoint(::Quantum)
sites
nsites
```
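
A brief sketch of how these are used, mirroring the example scripts added in this commit (the `Product` state is taken from those scripts; see the docstrings above for the exact signatures):

```julia
using Tenet

ψ = Product(fill([1, 0], 4))   # |0000⟩ as a product state, as in the example scripts
qψ = Quantum(ψ)                # wrap it as a `Quantum` tensor network

qψ'                            # `Base.adjoint`: conjugates and swaps inputs ↔ outputs
nsites(qψ)                     # number of sites
Tenet.TensorNetwork(qψ)        # strip the quantum metadata, keeping the underlying network
```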

## Queries

```@docs
Tenet.inds(::Quantum; kwargs...)
Tenet.tensors(::Quantum; kwargs...)
```

## Connecting `Quantum` Tensor Networks

```@docs
inputs
outputs
lanes
ninputs
noutputs
nlanes
```

```@docs
Socket
socket(::Quantum)
Scalar
State
Operator
```

```@docs
Base.merge(::Quantum, ::Quantum...)
```
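
For example, the scripts added in this commit assemble an expectation-value network ⟨ψ|H|ψ⟩ as `merge(Quantum(ψ), H, Quantum(ψ)')`. A self-contained sketch of the same idea, using only a product state and its adjoint (so the result is the overlap ⟨ψ|ψ⟩):

```julia
using Tenet

ψ = Product(fill([1, 0], 4))          # |0000⟩, as in the example scripts
qtn = merge(Quantum(ψ), Quantum(ψ)')  # outputs of the ket are connected to the inputs of the bra

socket(qtn)  # expected: `Scalar()`, since no open inputs or outputs remain
nlanes(qtn)  # number of lanes shared by the merged networks
```
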
@@ -0,0 +1,70 @@
using Tenet
using Yao: Yao
using EinExprs
using AbstractTrees
using Distributed
using Dagger
using TimespanLogging
using KaHyPar

# Sycamore-like random circuit on 53 qubits, m layers deep
m = 10
circuit = Yao.EasyBuild.rand_google53(m);
H = Quantum(circuit)
ψ = Product(fill([1, 0], Yao.nqubits(circuit)))

# ⟨ψ|H|ψ⟩ as a tensor network
qtn = merge(Quantum(ψ), H, Quantum(ψ)')
tn = Tenet.TensorNetwork(qtn)

contract_smaller_dims = 20
target_size = 24

# simplify the network and find a contraction path via hypergraph partitioning
Tenet.transform!(tn, Tenet.ContractSimplification())
path = einexpr(
    tn;
    optimizer=HyPar(;
        parts=2,
        imbalance=0.41,
        edge_scaler=(ind_size) -> 10 * Int(round(log2(ind_size))),
        vertex_scaler=(prod_size) -> 100 * Int(round(exp2(prod_size))),
    ),
);

max_dims_path = @show maximum(ndims, Branches(path))
flops_path = @show mapreduce(flops, +, Branches(path))
@show log10(flops_path)

# flatten sub-trees whose intermediates all stay below `contract_smaller_dims`
# dimensions into single leaves of the grouped path
grouppath = deepcopy(path);
function recursiveforeach!(f, expr)
    f(expr)
    return foreach(arg -> recursiveforeach!(f, arg), args(expr))
end
sizedict = merge(Iterators.map(i -> i.size, Leaves(path))...);
recursiveforeach!(grouppath) do expr
    merge!(expr.size, sizedict)
    if all(<(contract_smaller_dims) ∘ ndims, expr.args)
        empty!(expr.args)
    end
end

max_dims_grouppath = maximum(ndims, Branches(grouppath))
flops_grouppath = mapreduce(flops, +, Branches(grouppath))

# indices along which the distributed tensors will be blocked
targetinds = findslices(SizeScorer(), grouppath; size=2^(target_size));

# map each grouped leaf back to its sub-expression in the original path
subexprs = map(Leaves(grouppath)) do expr
    only(EinExprs.select(path, tuple(head(expr)...)))
end

addprocs(3)
@everywhere using Dagger, Tenet

# contract each sub-expression locally and distribute the resulting tensors
# as Dagger block-arrays (single-element blocks along `targetinds`)
disttn = Tenet.TensorNetwork(
    map(subexprs) do subexpr
        Tensor(
            distribute( # data
                parent(Tenet.contract(tn; path=subexpr)),
                Blocks([i ∈ targetinds ? 1 : 2 for i in head(subexpr)]...),
            ),
            head(subexpr), # inds
        )
    end,
)

# contract the distributed network following the grouped path
@show Tenet.contract(disttn; path=grouppath)
@@ -0,0 +1,102 @@
using Yao: Yao
using Tenet
using EinExprs
using KaHyPar
using Random
using Distributed
using ClusterManagers
using AbstractTrees

n = 64
depth = 6

# brickwork circuit of random two-qubit fSim gates on randomly paired qubits
circuit = Yao.chain(n)

for _ in 1:depth
    perm = randperm(n)

    for (i, j) in Iterators.partition(perm, 2)
        push!(circuit, Yao.put((i, j) => Yao.EasyBuild.FSimGate(2π * rand(), 2π * rand())))
        # push!(circuit, Yao.control(n, i, j => Yao.phase(2π * rand())))
    end
end

# ⟨ψ|H|ψ⟩ as a tensor network, with ψ = |0…0⟩
H = Quantum(circuit)
ψ = zeros(Product, n)

tn = TensorNetwork(merge(Quantum(ψ), H, Quantum(ψ)'))
transform!(tn, Tenet.ContractSimplification())

# contraction path from hypergraph partitioning
path = einexpr(
    tn;
    optimizer=HyPar(;
        parts=2,
        imbalance=0.41,
        edge_scaler=(ind_size) -> 10 * Int(round(log2(ind_size))),
        vertex_scaler=(prod_size) -> 100 * Int(round(exp2(prod_size))),
    ),
)

@show maximum(ndims, Branches(path))
@show maximum(length, Branches(path)) * sizeof(eltype(tn)) / 1024^3  # largest intermediate, in GiB

@show log10(mapreduce(flops, +, Branches(path)))

# slice the largest indices so that no intermediate exceeds 2^24 elements
cutinds = findslices(SizeScorer(), path; size=2^24)
cuttings = [[i => dim for dim in 1:size(tn, i)] for i in cutinds]

# mock sliced path - valid for all slices
proj_inds = first.(cuttings)
slice_path = view(path.path, proj_inds...)

expr = Tenet.codegen(Val(:outplace), slice_path)

# request workers from Slurm
manager = SlurmManager(2 * 112 - 1)
addprocs(manager; cpus_per_task=1, exeflags="--project=$(Base.active_project())")
# @everywhere using LinearAlgebra
# @everywhere LinearAlgebra.BLAS.set_num_threads(2)

@everywhere using Tenet, EinExprs, IterTools, LinearAlgebra, Reactant, AbstractTrees
@everywhere tn = $tn
@everywhere slice_path = $slice_path
@everywhere cuttings = $cuttings
@everywhere expr = $expr

partial_results = map(enumerate(workers())) do (i, worker)
    Distributed.@spawnat worker begin
        # interleaved chunking without instantiation
        it = takenth(Iterators.drop(Iterators.product(cuttings...), i - 1), nworkers())

        # compile the sliced contraction once, using the first slice as a mock input
        f = @eval $expr
        mock_slice = view(tn, first(it)...)
        tensors′ = [
            Tensor(Reactant.ConcreteRArray(copy(parent(mock_slice[head(leaf)...]))), inds(mock_slice[head(leaf)...])) for
            leaf in Leaves(slice_path)
        ]
        g = Reactant.compile(f, Tuple(tensors′))

        # local reduction of chunk
        accumulator = zero(eltype(tn))

        for proj_inds in it
            # read the tensors from the current `slice`, not from `mock_slice`
            slice = view(tn, proj_inds...)
            tensors′ = [
                Tensor(
                    Reactant.ConcreteRArray(copy(parent(slice[head(leaf)...]))),
                    inds(slice[head(leaf)...]),
                ) for leaf in Leaves(slice_path)
            ]
            res = only(g(tensors′...))

            # avoid OOM due to garbage accumulation
            GC.gc()

            accumulator += res
        end

        return accumulator
    end
end

# sum the per-worker partial results
@show result = sum(Distributed.fetch.(partial_results))

rmprocs(workers())