Add sparseH for LGPU #526

Merged: 52 commits merged on Oct 24, 2023
Changes from 42 commits
Commits (52)
beaa17a
Init commit
multiphaseCFD Oct 20, 2023
97d3f85
Fix std::endl;
vincentmr Oct 20, 2023
705f549
Use more generic indices in base std::size_t.
vincentmr Oct 20, 2023
6781f3d
merge add_py_LGPUMPI
multiphaseCFD Oct 20, 2023
343c62a
add pybind layer
multiphaseCFD Oct 20, 2023
1c0bda2
add python layer
multiphaseCFD Oct 20, 2023
255f6d5
Quick and dirty spham bindings.
vincentmr Oct 20, 2023
25b90c8
Add sparse_ham serialization.
vincentmr Oct 20, 2023
e815e6f
Add sparse_ham tests in tests/test_adjoint_jacobian.py'
vincentmr Oct 20, 2023
7d8f0d9
Bug fix sparse product.
vincentmr Oct 20, 2023
002daee
Merge remote-tracking branch 'origin/add_sparseH_LGPU' into add_spars…
vincentmr Oct 20, 2023
71ebe4b
add sparseH
multiphaseCFD Oct 21, 2023
c06593d
Trigger CI
multiphaseCFD Oct 21, 2023
1cbc50f
Fix python bindings LGPU idxT
vincentmr Oct 23, 2023
5ba180a
Merge remote-tracking branch 'origin/add_sparseH_LGPU' into add_spars…
vincentmr Oct 23, 2023
6956e78
Fix serial tests and update changelog.
vincentmr Oct 23, 2023
b256ead
add more unit tests for sparseH base class
multiphaseCFD Oct 23, 2023
5813cf4
Fix tidy & sparse adjoint test device name.
vincentmr Oct 23, 2023
0045e0f
Merge branch 'add_sparseH_LGPU' into add_sparseH_LKokkos
vincentmr Oct 23, 2023
cc66546
Fix tidy warning for sparse_ham.
vincentmr Oct 23, 2023
dccfe3e
Send backend-specific ops in respective modules.
vincentmr Oct 23, 2023
a014177
Fix sparse_hamiltonianmpi_c and add getWires test.
vincentmr Oct 23, 2023
2f9d14b
Add sparseH diff capability in LQ.
vincentmr Oct 23, 2023
812a1c6
Add sparse Hamiltonian support for Lightning-Kokkos (#527)
vincentmr Oct 23, 2023
bf2bb3f
Merge branch 'add_sparseH_LQubit' into add_sparseH_LGPU
vincentmr Oct 23, 2023
8b2f752
Fix clang tidy
vincentmr Oct 23, 2023
cf2866b
Comment workflows but tidy.
vincentmr Oct 23, 2023
ccace74
Fix tidy warn
vincentmr Oct 23, 2023
ae615e4
Add override to sp::getWires
vincentmr Oct 23, 2023
4d108f3
Restore triggers
vincentmr Oct 23, 2023
f015e2f
Update tests_linux_x86_mpi.yml
vincentmr Oct 23, 2023
ca93578
Add constructibility tests.
vincentmr Oct 24, 2023
62850d7
Move L-Kokkos-CUDA tests to workflow call, called from tests_gpu_cu11…
vincentmr Oct 24, 2023
8777cbe
Merge remote-tracking branch 'origin/add_py_LGPUMPI' into add_sparseH…
vincentmr Oct 24, 2023
a72e2f7
Remove GPU deadlock.
vincentmr Oct 24, 2023
b7b81de
Merge remote-tracking branch 'origin/add_py_LGPUMPI' into add_sparseH…
vincentmr Oct 24, 2023
58f742b
Bug fix Python MPI.
vincentmr Oct 24, 2023
9535f48
Upload both outputs.
vincentmr Oct 24, 2023
6648230
Update gcc version in format.yml.
vincentmr Oct 24, 2023
d0125f3
Merge remote-tracking branch 'origin/add_py_LGPUMPI' into add_sparseH…
vincentmr Oct 24, 2023
200d81d
Update .github/CHANGELOG.md [skip ci]
vincentmr Oct 24, 2023
1a91db5
Update .github/workflows/tests_gpu_kokkos.yml [skip ci]
vincentmr Oct 24, 2023
049cc30
Merge remote-tracking branch 'origin/add_py_LGPUMPI' into add_sparseH…
vincentmr Oct 24, 2023
32236d1
rename argn [skip ci]
vincentmr Oct 24, 2023
999b2c2
Remove unused lines [skip ci]
multiphaseCFD Oct 24, 2023
e3f4854
Fix SparseHamiltonianBase::isEqual. [skip ci]
vincentmr Oct 24, 2023
c26c99b
Trigger CI
vincentmr Oct 24, 2023
031d3fb
Auto update version
github-actions[bot] Oct 24, 2023
4c319a9
Trigger CI
vincentmr Oct 24, 2023
3a53340
resolve comments
multiphaseCFD Oct 24, 2023
3df482b
rename dev_kokkos to dev
multiphaseCFD Oct 24, 2023
b7eb831
Fix tidy.
vincentmr Oct 24, 2023
6 changes: 6 additions & 0 deletions .github/CHANGELOG.md
@@ -2,6 +2,12 @@

### New features since last release

* Add `SparseHamiltonian` support for Lightning-GPU.
[(#526)](https://github.com/PennyLaneAI/pennylane-lightning/pull/526)

* Add `SparseHamiltonian` support for Lightning-Kokkos.
[(#527)](https://github.com/PennyLaneAI/pennylane-lightning/pull/527)

* Integrate python/pybind layer of distributed Lightning-GPU into the Lightning monorepo with python unit tests.
[(#518)](https://github.com/PennyLaneAI/pennylane-lightning/pull/518)

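For context, a minimal usage sketch of the feature the SparseHamiltonian entries above describe is given below; it is not taken from this PR's test suite, and the toy circuit, observable, and parameter values are illustrative assumptions.

import pennylane as qml
from pennylane import numpy as np

n_wires = 3
dev = qml.device("lightning.gpu", wires=n_wires)

# Build a SciPy CSR matrix for a toy observable and wrap it as a SparseHamiltonian.
ham = qml.Hamiltonian([0.5, 0.3], [qml.PauliZ(0), qml.PauliX(1) @ qml.PauliZ(2)])
sparse_obs = qml.SparseHamiltonian(ham.sparse_matrix(wire_order=range(n_wires)), wires=range(n_wires))

@qml.qnode(dev, diff_method="adjoint")
def circuit(weights):
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(sparse_obs)

weights = np.array([0.4, 0.7], requires_grad=True)
print(circuit(weights))            # expectation value of the sparse observable
print(qml.grad(circuit)(weights))  # adjoint gradient with a sparse observable (cf. tests/test_adjoint_jacobian.py)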
5 changes: 3 additions & 2 deletions .github/workflows/format.yml
@@ -84,7 +84,7 @@ jobs:
cp -rf ${{ github.workspace}}/Kokkos_install/${{ matrix.exec_model }}/* Kokkos/

- name: Install dependencies
run: sudo apt update && sudo apt -y install clang-tidy-14 cmake g++-10 ninja-build libomp-14-dev
run: sudo apt update && sudo apt -y install clang-tidy-14 cmake gcc-11 g++-11 ninja-build libomp-14-dev
env:
DEBIAN_FRONTEND: noninteractive

@@ -96,5 +96,6 @@ jobs:
-DBUILD_TESTS=ON \
-DENABLE_WARNINGS=ON \
-DPL_BACKEND=${{ matrix.pl_backend }} \
-DCMAKE_CXX_COMPILER="$(which g++-10)"
-DCMAKE_CXX_COMPILER="$(which g++-11)" \
-DCMAKE_CXX_COMPILER="$(which gcc-11)"
cmake --build ./Build
11 changes: 7 additions & 4 deletions .github/workflows/tests_gpu_cu11.yml
@@ -1,9 +1,9 @@
name: Testing::Linux::x86_64::LGPU
on:
pull_request:
push:
branches:
- master
workflow_run:
workflows: ["Testing::LKokkos::CUDA"]
types:
- completed

env:
CI_CUDA_ARCH: 86
@@ -17,7 +17,10 @@ concurrency:
cancel-in-progress: true

jobs:


builddeps:
needs: ["test_lightning_kokkos"]
runs-on:
- self-hosted
- ubuntu-22.04
@@ -1,4 +1,4 @@
name: Testing (GPU)
name: Testing::LKokkos::GPU
on:
pull_request:
push:
5 changes: 3 additions & 2 deletions .github/workflows/tests_linux_x86_mpi.yml
@@ -246,15 +246,16 @@ jobs:
run: |
source /etc/profile.d/modules.sh && module use /opt/modules/ && module load ${{ matrix.mpilib }}
PL_DEVICE=lightning.gpu python -m pytest ./tests/ $COVERAGE_FLAGS
mv coverage.xml coverage-${{ github.job }}-lightning_gpu_${{ matrix.mpilib }}-0.xml
PL_DEVICE=lightning.gpu /opt/mpi/${{ matrix.mpilib }}/bin/mpirun -np 2 python -m pytest ./mpitests $COVERAGE_FLAGS
mv coverage.xml coverage-${{ github.job }}-lightning_gpu_${{ matrix.mpilib }}-1.xml
# PL_DEVICE=lightning.gpu /opt/mpi/${{ matrix.mpilib }}/bin/mpirun --oversubscribe -n 4 pytest -s -x mpitests/test_device.py -k test_create_device $COVERAGE_FLAGS
mv coverage.xml coverage-${{ github.job }}-lightning_gpu_${{ matrix.mpilib }}.xml

- name: Upload code coverage results
uses: actions/upload-artifact@v3
with:
name: ubuntu-codecov-results-python
path: coverage-${{ github.job }}-lightning_gpu_${{ matrix.mpilib }}.xml
path: coverage-${{ github.job }}-lightning_gpu_${{ matrix.mpilib }}-*.xml
if-no-files-found: error

- name: Cleanup
76 changes: 62 additions & 14 deletions pennylane_lightning/core/_serialize.py
@@ -25,6 +25,8 @@
Identity,
StatePrep,
Rot,
Hamiltonian,
SparseHamiltonian,
)
from pennylane.operation import Tensor
from pennylane.tape import QuantumTape
@@ -52,6 +54,7 @@ class QuantumScriptSerializer:
# pylint: disable=import-outside-toplevel, too-many-instance-attributes, c-extension-no-member
def __init__(self, device_name, use_csingle: bool = False, use_mpi: bool = False):
self.use_csingle = use_csingle
self.device_name = device_name
if device_name == "lightning.qubit":
try:
import pennylane_lightning.lightning_qubit_ops as lightning_ops
@@ -84,20 +87,24 @@ def __init__(self, device_name, use_csingle: bool = False, use_mpi: bool = False
self.tensor_prod_obs_c128 = lightning_ops.observables.TensorProdObsC128
self.hamiltonian_c64 = lightning_ops.observables.HamiltonianC64
self.hamiltonian_c128 = lightning_ops.observables.HamiltonianC128
self.sparse_hamiltonian_c64 = lightning_ops.observables.SparseHamiltonianC64
self.sparse_hamiltonian_c128 = lightning_ops.observables.SparseHamiltonianC128

self._use_mpi = False

if use_mpi:
self._use_mpi = use_mpi
self.statevectormpi_c128 = lightning_ops.StateVectorMPIC128
self.named_obsmpi_c64 = lightning_ops.observablesMPI.NamedObsMPIC64
self.named_obsmpi_c128 = lightning_ops.observablesMPI.NamedObsMPIC128
self.hermitian_obsmpi_c64 = lightning_ops.observablesMPI.HermitianObsMPIC64
self.hermitian_obsmpi_c128 = lightning_ops.observablesMPI.HermitianObsMPIC128
self.tensor_prod_obsmpi_c64 = lightning_ops.observablesMPI.TensorProdObsMPIC64
self.tensor_prod_obsmpi_c128 = lightning_ops.observablesMPI.TensorProdObsMPIC128
self.hamiltonianmpi_c64 = lightning_ops.observablesMPI.HamiltonianMPIC64
self.hamiltonianmpi_c128 = lightning_ops.observablesMPI.HamiltonianMPIC128
self.statevector_mpi_c128 = lightning_ops.StateVectorMPIC128
self.named_obs_mpi_c64 = lightning_ops.observablesMPI.NamedObsMPIC64
self.named_obs_mpi_c128 = lightning_ops.observablesMPI.NamedObsMPIC128
self.hermitian_obs_mpi_c64 = lightning_ops.observablesMPI.HermitianObsMPIC64
self.hermitian_obs_mpi_c128 = lightning_ops.observablesMPI.HermitianObsMPIC128
self.tensor_prod_obs_mpi_c64 = lightning_ops.observablesMPI.TensorProdObsMPIC64
self.tensor_prod_obs_mpi_c128 = lightning_ops.observablesMPI.TensorProdObsMPIC128
self.hamiltonian_mpi_c64 = lightning_ops.observablesMPI.HamiltonianMPIC64
self.hamiltonian_mpi_c128 = lightning_ops.observablesMPI.HamiltonianMPIC128

self._mpi_manager = lightning_ops.MPIManager

@property
def ctype(self):
@@ -113,37 +120,44 @@ def rtype(self):
def sv_type(self):
"""State vector matching ``use_csingle`` precision (and MPI if it is supported)."""
if self._use_mpi:
return self.statevectormpi_c128
return self.statevector_mpi_c128
return self.statevector_c128

@property
def named_obs(self):
"""Named observable matching ``use_csingle`` precision."""
if self._use_mpi:
return self.named_obsmpi_c64 if self.use_csingle else self.named_obsmpi_c128
return self.named_obs_mpi_c64 if self.use_csingle else self.named_obs_mpi_c128
return self.named_obs_c64 if self.use_csingle else self.named_obs_c128

@property
def hermitian_obs(self):
"""Hermitian observable matching ``use_csingle`` precision."""
if self._use_mpi:
return self.hermitian_obsmpi_c64 if self.use_csingle else self.hermitian_obsmpi_c128
return self.hermitian_obs_mpi_c64 if self.use_csingle else self.hermitian_obs_mpi_c128
return self.hermitian_obs_c64 if self.use_csingle else self.hermitian_obs_c128

@property
def tensor_obs(self):
"""Tensor product observable matching ``use_csingle`` precision."""
if self._use_mpi:
return self.tensor_prod_obsmpi_c64 if self.use_csingle else self.tensor_prod_obsmpi_c128
return (
self.tensor_prod_obs_mpi_c64 if self.use_csingle else self.tensor_prod_obs_mpi_c128
)
return self.tensor_prod_obs_c64 if self.use_csingle else self.tensor_prod_obs_c128

@property
def hamiltonian_obs(self):
"""Hamiltonian observable matching ``use_csingle`` precision."""
if self._use_mpi:
return self.hamiltonianmpi_c64 if self.use_csingle else self.hamiltonianmpi_c128
return self.hamiltonian_mpi_c64 if self.use_csingle else self.hamiltonian_mpi_c128
return self.hamiltonian_c64 if self.use_csingle else self.hamiltonian_c128

@property
def sparse_hamiltonian_obs(self):
"""SparseHamiltonian observable matching ``use_csingle`` precision."""
return self.sparse_hamiltonian_c64 if self.use_csingle else self.sparse_hamiltonian_c128

def _named_obs(self, observable, wires_map: dict):
"""Serializes a Named observable"""
wires = [wires_map[w] for w in observable.wires]
@@ -168,6 +182,38 @@ def _hamiltonian(self, observable, wires_map: dict):
terms = [self._ob(t, wires_map) for t in observable.ops]
return self.hamiltonian_obs(coeffs, terms)

def _sparse_hamiltonian(self, observable, wires_map: dict):
"""Serialize an observable (Sparse Hamiltonian)

Args:
observable (Observable): the input observable (Sparse Hamiltonian)
wires_map (dict): a dictionary mapping input wires to the device's backend wires

Returns:
sparse_hamiltonian_obs (SparseHamiltonianC64 or SparseHamiltonianC128): A Sparse Hamiltonian observable object compatible with the C++ backend
"""

if self._use_mpi:
Hmat = Hamiltonian([1.0], [Identity(0)]).sparse_matrix()
H_sparse = SparseHamiltonian(Hmat, wires=range(1))
spm = H_sparse.sparse_matrix()
# Only root 0 needs the full sparse matrix data
if self._mpi_manager().getRank() == 0:
spm = observable.sparse_matrix()
self._mpi_manager().Barrier()
else:
spm = observable.sparse_matrix()
data = np.array(spm.data).astype(self.ctype)
indices = np.array(spm.indices).astype(np.int64)
offsets = np.array(spm.indptr).astype(np.int64)

wires = []
wires_list = observable.wires.tolist()
wires.extend([wires_map[w] for w in wires_list])

return self.sparse_hamiltonian_obs(data, indices, offsets, wires)

def _pauli_word(self, observable, wires_map: dict):
"""Serialize a :class:`pennylane.pauli.PauliWord` into a Named or Tensor observable."""
if len(observable) == 1:
Expand Down Expand Up @@ -195,6 +241,8 @@ def _ob(self, observable, wires_map):
return self._tensor_ob(observable, wires_map)
if observable.name == "Hamiltonian":
return self._hamiltonian(observable, wires_map)
if observable.name == "SparseHamiltonian":
return self._sparse_hamiltonian(observable, wires_map)
if isinstance(observable, (PauliX, PauliY, PauliZ, Identity, Hadamard)):
return self._named_obs(observable, wires_map)
if observable._pauli_rep is not None:
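To make the serialization path above concrete, here is a minimal standalone sketch of the CSR unpacking that `_sparse_hamiltonian` performs on the non-MPI branch; the toy observable, wire map, and double-precision dtype are assumptions for illustration only.

import numpy as np
import pennylane as qml

obs = qml.SparseHamiltonian(
    qml.Hamiltonian([1.0], [qml.PauliZ(0)]).sparse_matrix(), wires=[0]
)
wires_map = {0: 0}

spm = obs.sparse_matrix()                          # SciPy CSR matrix
data = np.array(spm.data).astype(np.complex128)    # self.ctype when use_csingle=False
indices = np.array(spm.indices).astype(np.int64)   # CSR column indices
offsets = np.array(spm.indptr).astype(np.int64)    # CSR row pointers
wires = [wires_map[w] for w in obs.wires.tolist()]
# These four objects are exactly the arguments handed to sparse_hamiltonian_obs(...),
# i.e. SparseHamiltonianC64/C128 on the C++ side.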
7 changes: 5 additions & 2 deletions pennylane_lightning/core/src/bindings/Bindings.hpp
@@ -287,7 +287,8 @@ void registerInfo(py::module_ &m) {
* @tparam StateVectorT
* @param m Pybind module
*/
template <class StateVectorT> void registerObservables(py::module_ &m) {
template <class StateVectorT>
void registerBackendAgnosticObservables(py::module_ &m) {
using PrecisionT =
typename StateVectorT::PrecisionT; // Statevector's precision.
using ComplexT =
@@ -627,7 +628,9 @@ template <class StateVectorT> void lightningClassBindings(py::module_ &m) {
/* Observables submodule */
py::module_ obs_submodule =
m.def_submodule("observables", "Submodule for observables classes.");
registerObservables<StateVectorT>(obs_submodule);
registerBackendAgnosticObservables<StateVectorT>(obs_submodule);
registerBackendSpecificObservables<StateVectorT>(obs_submodule);

//***********************************************************************//
// Measurements
46 changes: 46 additions & 0 deletions pennylane_lightning/core/src/bindings/BindingsMPI.hpp
@@ -82,6 +82,10 @@ template <class StateVectorT> void registerObservablesMPI(py::module_ &m) {

using np_arr_c = py::array_t<std::complex<ParamT>, py::array::c_style>;
using np_arr_r = py::array_t<ParamT, py::array::c_style>;
using np_arr_sparse_ind = typename std::conditional<
std::is_same<ParamT, float>::value,
py::array_t<int32_t, py::array::c_style | py::array::forcecast>,
py::array_t<int64_t, py::array::c_style | py::array::forcecast>>::type;

std::string class_name;

@@ -191,6 +195,48 @@ template <class StateVectorT> void registerObservablesMPI(py::module_ &m) {
return self == other_cast;
},
"Compare two observables");
#ifdef _ENABLE_PLGPU
class_name = "SparseHamiltonianMPIC" + bitsize;
using SpIDX = typename SparseHamiltonianMPI<StateVectorT>::IdxT;
py::class_<SparseHamiltonianMPI<StateVectorT>,
std::shared_ptr<SparseHamiltonianMPI<StateVectorT>>,
Observable<StateVectorT>>(m, class_name.c_str(),
py::module_local())
.def(py::init([](const np_arr_c &data, const np_arr_sparse_ind &indices,
const np_arr_sparse_ind &offsets,
const std::vector<std::size_t> &wires) {
const py::buffer_info buffer_data = data.request();
const auto *data_ptr = static_cast<ComplexT *>(buffer_data.ptr);

const py::buffer_info buffer_indices = indices.request();
const auto *indices_ptr = static_cast<SpIDX *>(buffer_indices.ptr);

const py::buffer_info buffer_offsets = offsets.request();
const auto *offsets_ptr = static_cast<SpIDX *>(buffer_offsets.ptr);

return SparseHamiltonianMPI<StateVectorT>{
std::vector<ComplexT>({data_ptr, data_ptr + data.size()}),
std::vector<SpIDX>({indices_ptr, indices_ptr + indices.size()}),
std::vector<SpIDX>({offsets_ptr, offsets_ptr + offsets.size()}),
wires};
}))
.def("__repr__", &SparseHamiltonianMPI<StateVectorT>::getObsName)
.def("get_wires", &SparseHamiltonianMPI<StateVectorT>::getWires,
"Get wires of observables")
.def(
"__eq__",
[](const SparseHamiltonianMPI<StateVectorT> &self,
py::handle other) -> bool {
if (!py::isinstance<SparseHamiltonianMPI<StateVectorT>>(
other)) {
return false;
}
auto other_cast =
other.cast<SparseHamiltonianMPI<StateVectorT>>();
return self == other_cast;
},
"Compare two observables");
#endif
}

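For orientation only, the sketch below drives the new MPI binding directly from CSR arrays; the Python module path, the MPI-enabled Lightning-GPU build, and the 2x2 toy matrix are all assumptions (in practice `QuantumScriptSerializer(..., use_mpi=True)` constructs this object), and the script is expected to run under mpirun.

import numpy as np
import scipy.sparse as sp

# CSR data for a 2x2 diagonal observable acting on wire 0.
spm = sp.csr_matrix(np.diag([1.0 + 0j, -1.0 + 0j]))
data = np.asarray(spm.data, dtype=np.complex128)
indices = np.asarray(spm.indices, dtype=np.int64)
offsets = np.asarray(spm.indptr, dtype=np.int64)

# Assumed module path for an MPI-enabled Lightning-GPU build.
from pennylane_lightning.lightning_gpu_ops import observablesMPI

obs = observablesMPI.SparseHamiltonianMPIC128(data, indices, offsets, [0])
print(obs.get_wires())  # -> [0], via the get_wires binding added in this hunk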