
Allow broadcasting in the numerical representations of standard operations #2609

Merged
merged 26 commits into from
Jun 2, 2022
Changes from 12 commits
Commits
d317048
commit old changes
dwierichs May 23, 2022
dde9478
intermed
dwierichs May 24, 2022
393784f
clean up, move broadcast dimension first
dwierichs May 24, 2022
d3bc1b3
Merge branch 'parameter-broadcasting-1' into parameter-broadcasting-2
dwierichs May 26, 2022
2ade7cf
update tests that manually set ndim_params for default ops
dwierichs May 26, 2022
c438d45
pin protobuf<4.21.0
dwierichs May 26, 2022
2fb80bd
improve shape coersion order
dwierichs May 26, 2022
bb9bf14
changelog formatting
dwierichs May 26, 2022
bfe8d77
Merge branch 'parameter-broadcasting-1' into parameter-broadcasting-2
dwierichs May 28, 2022
24088d8
broadcasted pow tests
dwierichs May 29, 2022
e318ce2
attribute test, ControlledQubitUnitary update
dwierichs May 29, 2022
32d1286
test kwargs attributes
dwierichs May 29, 2022
9956534
Apply suggestions from code review
dwierichs May 30, 2022
bb1538a
changelog
dwierichs May 30, 2022
4cf4d94
review
dwierichs May 30, 2022
f88dc2c
remove prints
dwierichs May 30, 2022
6b5cf8b
explicit attribute supports_broadcasting tests
dwierichs May 30, 2022
28cfc4c
tests disentangle
dwierichs May 31, 2022
9846d23
fix
dwierichs May 31, 2022
20e689a
PauliRot broadcasted identity compatible with TF
dwierichs May 31, 2022
e12edbf
rename "batched" into "broadcasted" for uniform namespace
dwierichs May 31, 2022
ab77959
old TF version support in qubitunitary unitarity check
dwierichs May 31, 2022
a4062e7
python3.7 support
dwierichs May 31, 2022
bc3f5f1
merge
dwierichs May 31, 2022
cfa60f6
Apply suggestions from code review
dwierichs Jun 2, 2022
0a454f0
linebreak
dwierichs Jun 2, 2022
23 changes: 23 additions & 0 deletions doc/releases/changelog-dev.md
@@ -4,6 +4,29 @@

<h3>New features since last release</h3>

* Many parametrized operations now define the attribute `ndim_params` and
accept arguments with an additional broadcasting dimension. See also the
entries for #2590 and #2575 below for details.
[(#2609)](https://github.com/PennyLaneAI/pennylane/pull/2609)

Broadcasted parameters, which previously were not supported, are now allowed,
for example, in standard rotation gates and matrix operations. The broadcasting
dimension is the first dimension in numerical operator representations. Note that
for most operations the broadcasted parameter has to be passed as an `array`
rather than as a Python `list` or `tuple`.

```pycon
>>> op = qml.RX(np.array([0.1, 0.2, 0.3], requires_grad=True), 0)
>>> np.round(op.matrix(), 4)
tensor([[[0.9988+0.j , 0. -0.05j ],
[0. -0.05j , 0.9988+0.j ]],
[[0.995 +0.j , 0. -0.0998j],
[0. -0.0998j, 0.995 +0.j ]],
[[0.9888+0.j , 0. -0.1494j],
[0. -0.1494j, 0.9888+0.j ]]], requires_grad=True)
>>> op.matrix().shape
(3, 2, 2)
```
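The convention that the broadcasting axis comes first can be illustrated in plain NumPy. The helper `rx_matrix` below is hypothetical (not part of the PennyLane API); it shows how a leading axis on the parameter becomes a leading batch axis on the matrix:

```python
import numpy as np

def rx_matrix(theta):
    """Build the RX rotation matrix; a leading axis on ``theta``
    becomes a leading batch axis of the result."""
    theta = np.asarray(theta)
    c = np.cos(theta / 2)
    s = -1j * np.sin(theta / 2)
    # Stack into shape (..., 2, 2): any broadcast axes stay in front.
    return np.stack(
        [np.stack([c + 0j, s], axis=-1), np.stack([s, c + 0j], axis=-1)],
        axis=-2,
    )

print(rx_matrix(0.1).shape)                         # (2, 2)
print(rx_matrix(np.array([0.1, 0.2, 0.3])).shape)   # (3, 2, 2)
```

This matches the `(3, 2, 2)` shape shown in the `op.matrix()` example above.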

* Devices have a new capability flag `capabilities()["supports_broadcasting"]`
and are now able to handle broadcasting of tapes. In addition, the tape transform
`broadcast_expand` was added, which allows a tape that uses broadcasting
1 change: 1 addition & 0 deletions pennylane/math/single_dispatch.py
@@ -48,6 +48,7 @@ def _i(name):
# qml.SparseHamiltonian are not automatically 'unwrapped' to dense NumPy arrays.
ar.register_function("scipy", "to_numpy", lambda x: x)
ar.register_function("scipy", "shape", np.shape)
ar.register_function("scipy", "ndim", np.ndim)


def _scatter_element_add_numpy(tensor, index, value):
27 changes: 19 additions & 8 deletions pennylane/operation.py
@@ -190,29 +190,38 @@ def expand_matrix(base_matrix, wires, wire_order):
# TODO[Maria]: In future we should consider making ``utils.expand`` differentiable and calling it here.
wire_order = Wires(wire_order)
n = len(wires)
interface = qml.math._multi_dispatch(base_matrix) # pylint: disable=protected-access
shape = qml.math.shape(base_matrix)
batch_dim = shape[0] if len(shape) == 3 else None
interface = qml.math.get_interface(base_matrix)

# operator's wire positions relative to wire ordering
op_wire_pos = wire_order.indices(wires)

identity = qml.math.reshape(
qml.math.eye(2 ** len(wire_order), like=interface), [2] * len(wire_order) * 2
qml.math.eye(2 ** len(wire_order), like=interface), [2] * (len(wire_order) * 2)
)
axes = (list(range(n, 2 * n)), op_wire_pos)
# With negative indices, the contracted axes are range(n, 2*n) if batch_dim is None
# and range(n+1, 2*n+1) otherwise, so both cases are covered uniformly
axes = (list(range(-n, 0)), op_wire_pos)

# reshape op.matrix()
op_matrix_interface = qml.math.convert_like(base_matrix, identity)
mat_op_reshaped = qml.math.reshape(op_matrix_interface, [2] * n * 2)
shape = [batch_dim] + [2] * (n * 2) if batch_dim else [2] * (n * 2)
mat_op_reshaped = qml.math.reshape(op_matrix_interface, shape)
mat_tensordot = qml.math.tensordot(
mat_op_reshaped, qml.math.cast_like(identity, mat_op_reshaped), axes
)

unused_idxs = [idx for idx in range(len(wire_order)) if idx not in op_wire_pos]
# permute matrix axes to match wire ordering
perm = op_wire_pos + unused_idxs
mat = qml.math.moveaxis(mat_tensordot, wire_order.indices(wire_order), perm)
sources = wire_order.indices(wire_order)
if batch_dim:
perm = [p + 1 for p in perm]
sources = [s + 1 for s in sources]

mat = qml.math.reshape(mat, (2 ** len(wire_order), 2 ** len(wire_order)))
mat = qml.math.moveaxis(mat_tensordot, sources, perm)
shape = [batch_dim] + [2 ** len(wire_order)] * 2 if batch_dim else [2 ** len(wire_order)] * 2
mat = qml.math.reshape(mat, shape)

return mat
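The tensordot-plus-moveaxis approach in `expand_matrix` contracts only the wire axes while a leading batch axis rides along untouched. A plain-NumPy sketch of the same idea (illustrative only, not the PennyLane implementation): embedding a batch of single-qubit matrices on the first of two wires and comparing against a per-element `np.kron`:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((3, 2, 2))  # batch of three single-qubit matrices

# Per batch element, embedding U on the first of two wires is np.kron(u, I).
# With einsum, the batch axis "b" simply stays in front.
I2 = np.eye(2)
batched = np.einsum("bij,kl->bikjl", U, I2).reshape(3, 4, 4)

reference = np.stack([np.kron(u, I2) for u in U])
print(np.allclose(batched, reference))  # True
```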

@@ -804,7 +813,9 @@ def label(self, decimals=None, base_label=None, cache=None):

if len(qml.math.shape(params[0])) != 0:
# assume that if the first parameter is matrix-valued, there is only a single parameter
# this holds true for all current operations and templates
# this holds true for all current operations and templates unless tensor-batching
# is used
# TODO[dwierichs]: Implement a proper label for tensor-batched operators
if (
cache is None
or not isinstance(cache.get("matrices", None), list)
@@ -1404,7 +1415,7 @@ def matrix(self, wire_order=None):
canonical_matrix = self.compute_matrix(*self.parameters, **self.hyperparameters)

if self.inverse:
canonical_matrix = qml.math.conj(qml.math.T(canonical_matrix))
canonical_matrix = qml.math.conj(qml.math.moveaxis(canonical_matrix, -2, -1))

if wire_order is None or self.wires == Wires(wire_order):
return canonical_matrix
2 changes: 1 addition & 1 deletion pennylane/ops/functions/matrix.py
@@ -141,6 +141,6 @@ def _matrix(tape, wire_order=None):

for op in tape.operations:
U = matrix(op, wire_order=wire_order)
unitary_matrix = qml.math.dot(U, unitary_matrix)
unitary_matrix = qml.math.tensordot(U, unitary_matrix, axes=[[-1], [-2]])

return unitary_matrix
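Replacing `qml.math.dot` with `tensordot` over the last axis of `U` and the row axis of the accumulated matrix keeps the contraction valid when `U` carries a leading batch axis. A NumPy sketch of why this generalizes matrix multiplication:

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.standard_normal((3, 4, 4))   # broadcasted operator matrix
M = rng.standard_normal((4, 4))      # accumulated circuit matrix (unbatched)

# Contract the last axis of U with the second-to-last axis of M;
# the batch axis of U stays in front.
out = np.tensordot(U, M, axes=[[-1], [-2]])
print(np.allclose(out, np.stack([u @ M for u in U])))  # True
```

One caveat worth noting: if the *second* operand carried the batch axis instead, `tensordot` would order the free axes as (row of `U`, batch, column), so the operand order matters in the batched case.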
26 changes: 26 additions & 0 deletions pennylane/ops/qubit/attributes.py
@@ -199,3 +199,29 @@ def __contains__(self, obj):
representation using ``np.linalg.eigvals``, which fails for some tensor types that the matrix
may be cast in on backpropagation devices.
"""

supports_tensorbatching = Attribute(
[
"QubitUnitary",
"ControlledQubitUnitary",
"DiagonalQubitUnitary",
"RX",
"RY",
"RZ",
"PhaseShift",
"ControlledPhaseShift",
"Rot",
"MultiRZ",
"PauliRot",
"CRX",
"CRY",
"CRZ",
"CRot",
"U1",
"U2",
"U3",
"IsingXX",
"IsingYY",
"IsingZZ",
]
)
79 changes: 60 additions & 19 deletions pennylane/ops/qubit/matrix_ops.py
@@ -32,6 +32,7 @@ class QubitUnitary(Operation):

* Number of wires: Any (the operation can act on any number of wires)
* Number of parameters: 1
* Number of dimensions per parameter: (2,)
* Gradient recipe: None

Args:
@@ -55,6 +56,9 @@
num_params = 1
"""int: Number of trainable parameters that the operator depends on."""

ndim_params = (2,)
"""tuple[int]: Number of dimensions per trainable parameter that the operator depends on."""

grad_method = None
"""Gradient computation method."""

@@ -65,20 +69,28 @@ def __init__(self, *params, wires, do_queue=True):
# of wires fits the dimensions of the matrix
if not isinstance(self, ControlledQubitUnitary):
U = params[0]
U_shape = qml.math.shape(U)

dim = 2 ** len(wires)

if qml.math.shape(U) != (dim, dim):
if not (len(U_shape) in {2, 3} and U_shape[-2:] == (dim, dim)):
raise ValueError(
f"Input unitary must be of shape {(dim, dim)} to act on {len(wires)} wires."
f"Input unitary must be of shape {(dim, dim)} or "
f"(batch_size, {dim}, {dim}) to act on {len(wires)} wires."
)

# Check for unitarity; due to variable precision across the different ML frameworks,
# here we issue a warning to check the operation, instead of raising an error outright.
if not qml.math.is_abstract(U) and not qml.math.allclose(
qml.math.dot(U, qml.math.T(qml.math.conj(U))),
qml.math.eye(qml.math.shape(U)[0]),
atol=1e-6,
if not (
qml.math.is_abstract(U)
or all(
qml.math.allclose(
qml.math.dot(_U, qml.math.T(qml.math.conj(_U))),
qml.math.eye(dim),
atol=1e-6,
)
for _U in (U if len(U_shape) == 3 else [U])
)
):
warnings.warn(
f"Operator {U}\n may not be unitary."
@@ -142,16 +154,24 @@ def compute_decomposition(U, wires):
"""
# Decomposes arbitrary single-qubit unitaries as Rot gates (RZ - RY - RZ format),
# or a single RZ for diagonal matrices.
if qml.math.shape(U) == (2, 2):
shape = qml.math.shape(U)
if shape == (2, 2):
return qml.transforms.decompositions.zyz_decomposition(U, Wires(wires)[0])

if qml.math.shape(U) == (4, 4):
if shape == (4, 4):
return qml.transforms.two_qubit_decomposition(U, Wires(wires))

# TODO[dwierichs]: Implement decomposition of broadcasted unitary
if len(shape) == 3:
raise DecompositionUndefinedError(
"The decomposition of QubitUnitary does not support broadcasting."
)

return super(QubitUnitary, QubitUnitary).compute_decomposition(U, wires=wires)

def adjoint(self):
return QubitUnitary(qml.math.T(qml.math.conj(self.matrix())), wires=self.wires)
U = self.matrix()
return QubitUnitary(qml.math.moveaxis(qml.math.conj(U), -2, -1), wires=self.wires)
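Replacing `qml.math.T` with `moveaxis(-2, -1)` in `adjoint` matters because a plain transpose reverses *all* axes, including the batch axis. A NumPy sketch:

```python
import numpy as np

# A batch of three 2x2 matrices, shape (3, 2, 2).
U = (np.arange(12) + 1j * np.arange(12)).reshape(3, 2, 2)

# Swap only the last two (matrix) axes; the batch axis stays in front.
dagger = np.conj(np.moveaxis(U, -2, -1))
print(dagger.shape)  # (3, 2, 2)

# A full transpose would also move the batch axis, which is wrong here:
print(U.T.shape)  # (2, 2, 3)
```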

def pow(self, z):
if isinstance(z, int):
@@ -179,6 +199,7 @@ class ControlledQubitUnitary(QubitUnitary):

* Number of wires: Any (the operation can act on any number of wires)
* Number of parameters: 1
* Number of dimensions per parameter: (2,)
* Gradient recipe: None

Args:
@@ -215,6 +236,9 @@ class ControlledQubitUnitary(QubitUnitary):
num_params = 1
"""int: Number of trainable parameters that the operator depends on."""

ndim_params = (2,)
"""tuple[int]: Number of dimensions per trainable parameter that the operator depends on."""

grad_method = None
"""Gradient computation method."""

@@ -281,8 +305,9 @@ def compute_matrix(
[ 0. +0.j 0. +0.j -0.31594146+0.j 0.94877869+0.j]]
"""
target_dim = 2 ** len(u_wires)
if len(U) != target_dim:
raise ValueError(f"Input unitary must be of shape {(target_dim, target_dim)}")
shape = qml.math.shape(U)
if not (len(shape) in {2, 3} and shape[-2:] == (target_dim, target_dim)):
raise ValueError(
f"Input unitary must be of shape {(target_dim, target_dim)} or "
f"(batch_size, {target_dim}, {target_dim})."
)

# A multi-controlled operation is a block-diagonal matrix partitioned into
# blocks where the operation being applied sits in the block positioned at
@@ -303,19 +328,21 @@
raise ValueError("Length of control bit string must equal number of control wires.")

# Make sure all values are either 0 or 1
if any(x not in ["0", "1"] for x in control_values):
if not set(control_values).issubset({"0", "1"}):
raise ValueError("String of control values can contain only '0' or '1'.")

control_int = int(control_values, 2)
else:
raise ValueError("Alternative control values must be passed as a binary string.")

padding_left = control_int * len(U)
padding_right = 2 ** len(total_wires) - len(U) - padding_left
padding_left = control_int * target_dim
padding_right = 2 ** len(total_wires) - target_dim - padding_left

interface = qml.math.get_interface(U)
left_pad = qml.math.cast_like(qml.math.eye(padding_left, like=interface), 1j)
right_pad = qml.math.cast_like(qml.math.eye(padding_right, like=interface), 1j)
if len(qml.math.shape(U)) == 3:
return qml.math.stack([qml.math.block_diag([left_pad, _U, right_pad]) for _U in U])
return qml.math.block_diag([left_pad, U, right_pad])
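Since `block_diag` operates on single matrices, the broadcasted branch above stacks one block-diagonal embedding per batch element. A SciPy/NumPy sketch of the same padding scheme (the concrete padding sizes are illustrative: one control wire with control value "1", so the target block sits in the lower-right corner and needs no right padding):

```python
import numpy as np
from scipy.linalg import block_diag

# Batch of three 2x2 "target" unitaries (global phases of the identity).
U = np.stack([np.eye(2) * p for p in (1.0, 1j, -1.0)])

left_pad = np.eye(2)  # identity block acting when the control is |0>
controlled = np.stack([block_diag(left_pad, u) for u in U])
print(controlled.shape)  # (3, 4, 4)
```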

@property
@@ -348,6 +375,7 @@ class DiagonalQubitUnitary(Operation):

* Number of wires: Any (the operation can act on any number of wires)
* Number of parameters: 1
* Number of dimensions per parameter: (1,)
* Gradient recipe: None

Args:
Expand All @@ -360,6 +388,9 @@ class DiagonalQubitUnitary(Operation):
num_params = 1
"""int: Number of trainable parameters that the operator depends on."""

ndim_params = (1,)
"""tuple[int]: Number of dimensions per trainable parameter that the operator depends on."""

grad_method = None
"""Gradient computation method."""

@@ -389,6 +420,10 @@ def compute_matrix(D): # pylint: disable=arguments-differ
if not qml.math.allclose(D * qml.math.conj(D), qml.math.ones_like(D)):
raise ValueError("Operator must be unitary.")

# The diagonal is supposed to be one-dimensional; if it is broadcasted, it has two dimensions
if len(qml.math.shape(D)) == 2:
return qml.math.stack([qml.math.diag(_D) for _D in D])

return qml.math.diag(D)
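The dimension check above guards against a `diag` pitfall: on a 2D input, `np.diag` *extracts* a diagonal instead of building matrices, so the broadcasted case must stack one `diag` call per batch element. A NumPy sketch:

```python
import numpy as np

d = np.array([1.0, 1j])          # a single unitary diagonal
D = np.stack([d, np.conj(d)])    # broadcasted diagonals, shape (2, 2)

# On 2D input, np.diag extracts the main diagonal [1, -1j] -- not what we want.
print(np.diag(D).shape)                    # (2,)

# Stacking per-element diag calls builds the batch of diagonal matrices.
mats = np.stack([np.diag(x) for x in D])
print(mats.shape)                          # (2, 2, 2)
```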

@staticmethod
@@ -419,8 +454,9 @@ def compute_eigvals(D): # pylint: disable=arguments-differ
"""
D = qml.math.asarray(D)

if not qml.math.is_abstract(D) and not qml.math.allclose(
D * qml.math.conj(D), qml.math.ones_like(D)
if not (
qml.math.is_abstract(D)
or qml.math.allclose(D * qml.math.conj(D), qml.math.ones_like(D))
):
raise ValueError("Operator must be unitary.")

@@ -450,20 +486,25 @@ def compute_decomposition(D, wires):
[QubitUnitary(array([[1, 0], [0, 1]]), wires=[0])]

"""
return [QubitUnitary(qml.math.diag(D), wires=wires)]
return [QubitUnitary(DiagonalQubitUnitary.compute_matrix(D), wires=wires)]

def adjoint(self):
return DiagonalQubitUnitary(qml.math.conj(self.parameters[0]), wires=self.wires)

def pow(self, z):
if isinstance(self.data[0], list):
return [DiagonalQubitUnitary([(x + 0.0j) ** z for x in self.data[0]], wires=self.wires)]
if isinstance(self.data[0][0], list):
# Support broadcasted list
new_data = [[(el + 0j) ** z for el in x] for x in self.data[0]]
else:
new_data = [(x + 0.0j) ** z for x in self.data[0]]
return [DiagonalQubitUnitary(new_data, wires=self.wires)]
casted_data = qml.math.cast(self.data[0], np.complex128)
return [DiagonalQubitUnitary(casted_data**z, wires=self.wires)]
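Casting the diagonal to `complex128` before exponentiating, as in the `pow` method above, keeps fractional powers of negative entries well defined. A NumPy sketch of the failure mode being avoided:

```python
import numpy as np

D = np.array([-1.0, 1.0])   # a real-valued unitary diagonal
z = 0.5

# Fractional powers of negative reals are undefined over the floats ...
with np.errstate(invalid="ignore"):
    print(np.power(D, z))   # first entry is nan

# ... but well defined after casting to complex.
Dc = D.astype(np.complex128)
print(np.allclose(Dc**z, [1j, 1.0]))  # True
```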

def _controlled(self, control):
DiagonalQubitUnitary(
qml.math.concatenate([np.ones_like(self.parameters[0]), self.parameters[0]]),
qml.math.hstack([np.ones_like(self.parameters[0]), self.parameters[0]]),
wires=Wires(control) + self.wires,
)
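Switching `_controlled` from `concatenate` to `hstack` makes doubling the diagonal batch-safe: `hstack` joins along axis 0 for 1D input but along the last axis for 2D input, which is the diagonal axis in both cases. A NumPy sketch:

```python
import numpy as np

d = np.array([1j, -1j])                  # unbatched diagonal
db = np.array([[1j, -1j], [1.0, -1.0]])  # broadcasted diagonals, shape (2, 2)

print(np.hstack([np.ones_like(d), d]).shape)    # (4,)
print(np.hstack([np.ones_like(db), db]).shape)  # (2, 4)

# concatenate defaults to axis=0 and would merge the batch axis instead:
print(np.concatenate([np.ones_like(db), db]).shape)  # (4, 2)
```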
