[Closes #6047] Implements custom `compute_decomposition` method in `Hermitian` #6062
Conversation
Hi @lillian542, I've opened up this PR as a draft in case you have some early comments. Two things I could use some clarification on: (1) Any feedback on my approach to benchmarking the method's performance? To me, over a second of computation time could warrant some warning from the program, although perhaps my experience is a bit biased since I'm used to relatively small quantum system sizes 😁.

Just wanted to double check, thanks!
**Codecov Report:** All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##           master    #6062      +/-   ##
==========================================
- Coverage   99.66%   99.65%   -0.01%
==========================================
  Files         430      430
  Lines       41845    41560     -285
==========================================
- Hits        41704    41418     -286
- Misses        141      142       +1
```

☔ View full report in Codecov by Sentry.
Thanks for opening this PR @andrijapau ! It looks really polished. I've left a few comments that need to be addressed before I can approve the changes, but overall, looking great 😄
Thanks for addressing all the comments so fast! I just left a few more comments pertaining to tests. I'm very happy to approve once these comments are addressed.
Nice work @andrijapau ! Thanks for the fast responses and for adding this feature 🎉
I'll request a 2nd review from the team.
Thanks for addressing @mudit2812's comments 👍 The benchmarking graph looks great :) thanks for producing that.
No concerns from my side, and I'm happy to approve 🚀
Great! Looks like the PR is ready to be merged whenever you are. Thank you all for your help and guidance, and I'm excited to be a PennyLane contributor! 😊
Thank you so much, @andrijapau! I appreciate the effort you put into the implementation. I have a few questions about your plot: I noticed you plotted the theoretical time complexity O(n 4^n). Could you explain why this is the expected complexity? Also, how should I interpret the horizontal bars on your plot?
Certainly, @Alex-Preciado!
The short answer is that I read it in the documentation.

<details>
<summary>⏰ Open for Time Complexity Analysis ⏰</summary>

The long answer is that we can analyze the operations in `_generalized_pauli_decompose` that won't take constant time and determine the dominant time complexity.

```python
# Permute by XORing
indices = [qml.math.array(range(shape[0]))]
for idx in range(shape[0] - 1):  # O(2^n)
    indices.append(qml.math.bitwise_xor(indices[-1], (idx + 1) ^ (idx)))
term_mat = qml.math.cast(
    qml.math.stack(
        [qml.math.gather(matrix[idx], indice) for idx, indice in enumerate(indices)]
    ),
    complex,
)  # O((2^n)^2), stacking 2^n arrays of size 2^n
```

Permuting the indices would overall be dominated by the `O((2^n)^2) = O(4^n)` stacking step.

```python
# Perform Hadamard transformation on columns
hadamard_transform_mat = _walsh_hadamard_transform(qml.math.transpose(term_mat))  # O(n 2^n), from documentation

# Account for the phases from Y
phase_mat = qml.math.ones(shape, dtype=complex).reshape((2,) * (2 * num_qubits))
for idx in range(num_qubits):
    index = [slice(None)] * (2 * num_qubits)
    index[idx] = index[idx + num_qubits] = 1
    phase_mat[tuple(index)] *= 1j
phase_mat = qml.math.convert_like(qml.math.reshape(phase_mat, shape), matrix)  # O(n)
```

The WHT would be dominated by `O(n 2^n)`.

```python
# c_00 + c_11 -> I; c_00 - c_11 -> Z; c_01 + c_10 -> X; 1j*(c_10 - c_01) -> Y
# https://quantumcomputing.stackexchange.com/a/31790
term_mat = qml.math.transpose(qml.math.multiply(hadamard_transform_mat, phase_mat))  # O((2^n)^2)
```

Element-wise multiplication has complexity `O((2^n)^2) = O(4^n)`.

```python
# Obtain the coefficients for each Pauli word
coeffs, obs = [], []
for pauli_rep in product("IXYZ", repeat=num_qubits):
    bit_array = qml.math.array(
        [[(rep in "YZ"), (rep in "XY")] for rep in pauli_rep], dtype=int
    ).T
    coefficient = term_mat[tuple(int("".join(map(str, x)), 2) for x in bit_array)]
    if not is_abstract(matrix) and qml.math.allclose(coefficient, 0):
        continue
    observables = (
        [(o, w) for w, o in zip(wire_order, pauli_rep) if o != I]
        if hide_identity and not all(t == I for t in pauli_rep)
        else [(o, w) for w, o in zip(wire_order, pauli_rep)]
    )
    if observables:
        coeffs.append(coefficient)
        obs.append(observables)
coeffs = qml.math.stack(coeffs)
```

I believe this is where the poor time complexity comes from. We need to loop through all `4^n` possible Pauli words, and each iteration does `O(n)` work to build the bit array and the observable list, giving `O(n 4^n)`.

The total time complexity of this algorithm would be the sum of each of these operations, and the `O(n 4^n)` term dominates.

</details>
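To make the exponential loop concrete, here is a minimal, unoptimized NumPy sketch of a Pauli decomposition. This is only an illustration, not the PennyLane implementation: the standalone `pauli_decompose` helper here is hypothetical, and it materializes every Pauli matrix explicitly (which the optimized routine avoids), but the loop over all 4^n Pauli words is the same source of exponential scaling.

```python
import itertools

import numpy as np

# Single-qubit Pauli matrices
PAULIS = {
    "I": np.eye(2),
    "X": np.array([[0, 1], [1, 0]]),
    "Y": np.array([[0, -1j], [1j, 0]]),
    "Z": np.array([[1, 0], [0, -1]]),
}


def pauli_decompose(matrix, num_qubits):
    """Brute-force Pauli decomposition of a 2^n x 2^n Hermitian matrix.

    Illustrative only: the loop visits all 4^n Pauli words, which is the
    source of the exponential factor in the scaling.
    """
    dim = 2**num_qubits
    coeffs = {}
    for word in itertools.product("IXYZ", repeat=num_qubits):  # 4^n iterations
        pauli = PAULIS[word[0]]
        for p in word[1:]:
            pauli = np.kron(pauli, PAULIS[p])
        # Tr(P @ H) / 2^n is the coefficient of the Pauli word P in H
        c = np.trace(pauli @ matrix) / dim
        if not np.isclose(c, 0):
            coeffs["".join(word)] = float(c.real)  # real for Hermitian inputs
    return coeffs


# Example: H = Z on the first qubit plus X on the second
H = np.kron(PAULIS["Z"], PAULIS["I"]) + np.kron(PAULIS["I"], PAULIS["X"])
print(pauli_decompose(H, 2))  # {'IX': 1.0, 'ZI': 1.0}
```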
The horizontal bars are auto-generated box-and-whisker plots for the performance of each Hermitian decomposition test case. When the benchmarking test is called, a summary table is automatically generated, and the error bars in the histogram come from the spread of the timing samples reported in that table.
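For reference, the summary statistics that such a box-and-whisker plot is drawn from can be reproduced from repeated timing samples. This is a minimal sketch with stdlib `timeit` and NumPy; the statistic names mirror what benchmark tools typically report and are illustrative, not the exact output of the benchmarking tool used in the PR.

```python
import timeit

import numpy as np


def benchmark(fn, rounds=20):
    """Time `fn` repeatedly and return the summary statistics that a
    box-and-whisker plot is typically drawn from."""
    samples = np.array([timeit.timeit(fn, number=1) for _ in range(rounds)])
    return {
        "min": float(samples.min()),
        "median": float(np.median(samples)),
        "mean": float(samples.mean()),
        "max": float(samples.max()),
        "stddev": float(samples.std(ddof=1)),  # sample standard deviation
    }


# Example: time a small dense eigendecomposition
stats = benchmark(lambda: np.linalg.eigh(np.eye(64)))
print(sorted(stats))  # ['max', 'mean', 'median', 'min', 'stddev']
```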
Awesome! Thank you so much for the detailed explanation, @andrijapau!! I’ll go ahead and update this branch and merge it 😎
After the merge of #6062 , the legacy op math tests in the plugin test matrix fail (see [here](https://github.com/PennyLaneAI/pennylane/actions/runs/10362479768/job/28685492331)), since the test uses the `Operator.matrix` method to verify the decomposition, but the decomposed operator is a legacy `Hamiltonian` with new op math disabled, which doesn't have a matrix. This PR just updates the test to use `qml.matrix` instead of `Operator.matrix`.
**Context:**

Prior to this fix, the `compute_decomposition` method of `Hermitian` raised a `qml.DecompositionUndefinedError` (due to its `Operator` base class inheritance). The desired fix, as laid out by the issue, was to use the existing `pauli_decompose` method to perform the desired decomposition.

**Description of the Change:**

Implemented a custom `compute_decomposition` static method in `Hermitian` to have it call the existing `pauli_decompose` method.

**Examples:**
⏰ Benchmarking Performance ⏰
Performance was benchmarked as a function of matrix size (see figure below). I heuristically chose to raise an inefficiency warning if the decomposition time would take longer than a second (open to suggestions 🙂). Any observable with more than seven wires will raise a user warning indicating that the decomposition may be inefficient.
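A minimal sketch of the wire-count heuristic described above. The seven-wire threshold matches the description, but the function name and message wording here are illustrative, not the actual PennyLane code.

```python
import warnings


def check_decomposition_efficiency(num_wires, threshold=7):
    """Warn when decomposing a Hermitian observable is likely to be slow.

    Illustrative heuristic: the Pauli decomposition scales as O(n 4^n) in
    the number of wires n, so runtime grows quickly past a handful of wires.
    """
    if num_wires > threshold:
        warnings.warn(
            f"Decomposition of a Hermitian observable on {num_wires} wires "
            "may be inefficient (O(n 4^n) scaling).",
            UserWarning,
        )


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_decomposition_efficiency(8)  # above the threshold, so it warns
print(len(caught))  # 1
```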
I've also plotted the theoretical time complexity of the `pauli_decompose` method, and the data is in agreement.

**Benefits:**
Can now compute the decomposition of any Hermitian matrix.
Possible Drawbacks:
N/A
Related GitHub Issues:
Closes #6047