
[WIP] Add the adjoint method #83

Closed
thisac wants to merge 39 commits into master from adjoint-jacobian

Conversation

@thisac commented Mar 10, 2021

Adds the adjoint method to lightning.qubit.

@thisac self-assigned this Mar 24, 2021
codecov bot commented Mar 25, 2021

Codecov Report

Merging #83 (f873e45) into master (aba70e1) will decrease coverage by 18.43%.
The diff coverage is 23.52%.

❗ Current head f873e45 differs from pull request most recent head 5ea0513. Consider uploading reports for the commit 5ea0513 to get more accurate results.

```
@@             Coverage Diff             @@
##           master      #83       +/-   ##
===========================================
- Coverage   98.14%   79.71%   -18.44%
===========================================
  Files           3        3
  Lines          54       69       +15
===========================================
+ Hits           53       55        +2
- Misses          1       14       +13
```

Impacted Files | Coverage Δ
-- | --
pennylane_lightning/lightning_qubit.py | 78.12% <23.52%> (-19.84%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update aba70e1...5ea0513.

```cpp
    false,
    obsWires[i].size()
);
lambdas.push_back(phiCopy);
```
Contributor:

So we know how large lambdas will be going in. Why not preallocate and fill? Is there a reason we need vector functionality?
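
A minimal sketch of the preallocate-and-fill pattern being suggested; the StateVector alias, applyObservable helper, and buildLambdas function are placeholders, not names from this PR:

```cpp
#include <complex>
#include <cstddef>
#include <utility>
#include <vector>

// Placeholder alias; the PR's real statevector type differs.
using StateVector = std::vector<std::complex<double>>;

// Hypothetical stand-in for applying the i-th observable in place.
void applyObservable(StateVector& state, std::size_t obsIndex);

void buildLambdas(const StateVector& phi, std::size_t numObs) {
    // Preallocate once: no reallocation or push_back inside the loop.
    std::vector<StateVector> lambdas(numObs);
    for (std::size_t i = 0; i < numObs; ++i) {
        StateVector phiCopy = phi;           // copy |phi>
        applyObservable(phiCopy, i);         // hypothetical helper
        lambdas[i] = std::move(phiCopy);     // fill by index
    }
}
```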

Contributor:

OK, I think I see. lambdas is a vector of statevectors? Why not just use an array with an additional dimension?
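
And a sketch of that flat-array alternative: one contiguous block of shape (numObs, stateLen), indexed manually, assuming the state length is known up front (all names here are illustrative):

```cpp
#include <algorithm>
#include <complex>
#include <cstddef>
#include <vector>

void buildLambdasFlat(const std::complex<double>* phi, std::size_t stateLen,
                      std::size_t numObs) {
    // One allocation holds all numObs statevectors back to back.
    std::vector<std::complex<double>> lambdas(numObs * stateLen);
    for (std::size_t i = 0; i < numObs; ++i) {
        std::complex<double>* lambda_i = lambdas.data() + i * stateLen;
        std::copy(phi, phi + stateLen, lambda_i);  // copy |phi> into row i
        // ... apply the i-th observable to lambda_i in place ...
    }
}
```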

```cpp
    throw std::invalid_argument("The operation is not supported using the adjoint differentiation method");
} else if ((operations[i] != "QubitStateVector") && (operations[i] != "BasisState")) {
    // copy |phi> to |mu> before applying Uj*
    CplxType* phiCopyArr = new CplxType[phi.length];
```
Contributor:

Could you just allocate the memory once and then always copy the phi statevector there, overwriting its value from the previous iteration?
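
A sketch of hoisting that allocation out of the loop, reusing one scratch buffer for |mu>; CplxType and phi.length echo the snippet above, while the function signature and loop structure are assumptions:

```cpp
#include <algorithm>
#include <complex>
#include <cstddef>
#include <vector>

using CplxType = std::complex<double>;  // placeholder for the PR's alias

void applyAdjointOps(const CplxType* phi, std::size_t phiLength,
                     std::size_t numOps) {
    // Allocate the |mu> scratch buffer once, outside the loop.
    std::vector<CplxType> mu(phiLength);
    for (std::size_t i = 0; i < numOps; ++i) {
        // Overwrite the buffer with |phi> each iteration: no new/delete
        // inside the loop, and no stale values from the previous pass.
        std::copy(phi, phi + phiLength, mu.data());
        // ... apply Uj* to mu.data() ...
    }
}
```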


```python
# send in flattened array of zeros to be populated by adjoint_jacobian
jac = np.zeros(len(tape.observables) * len(tape.trainable_params))
adjoint_jacobian(
```
Contributor:

Maybe name this something slightly different from the method?

```cpp
    opWires[i].size()
);

if (std::find(trainableParams.begin(), trainableParams.end(), paramNumber) != trainableParams.end()) {
```
Contributor:

I think here you could take advantage of the fact that trainableParams is ordered and paramNumber is continuously decreasing.

You could keep a next_trainableParam_index and compare trainableParams[next_trainableParam_index] to paramNumber. If paramNumber is trainable, you decrement next_trainableParam_index, as in the sketch below.
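
A sketch of that bookkeeping, assuming trainableParams is sorted ascending and paramNumber decreases monotonically during the backward pass (the function and loop are illustrative, not the PR's code):

```cpp
#include <cstddef>
#include <vector>

void backwardPass(const std::vector<std::size_t>& trainableParams,
                  std::size_t numParams) {
    // Start at the largest trainable index; paramNumber is visited in
    // decreasing order, so one O(1) comparison replaces std::find.
    int next_trainableParam_index =
        static_cast<int>(trainableParams.size()) - 1;
    for (std::size_t paramNumber = numParams; paramNumber-- > 0;) {
        if (next_trainableParam_index >= 0 &&
            trainableParams[next_trainableParam_index] == paramNumber) {
            // paramNumber is trainable: compute its Jacobian entry here,
            // then move on to the next-smaller trainable index.
            --next_trainableParam_index;
        }
    }
}
```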

@mlxd closed this Sep 24, 2021
@mlxd deleted the adjoint-jacobian branch October 25, 2023