[WIP] Add the adjoint method #83
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##           master      #83       +/-   ##
============================================
- Coverage   98.14%   79.71%   -18.44%
============================================
  Files           3        3
  Lines          54       69       +15
============================================
+ Hits          53       55        +2
- Misses         1       14       +13
```

Continue to review full report at Codecov.
```cpp
    false,
    obsWires[i].size()
);
lambdas.push_back(phiCopy);
```
So we know how large `lambdas` is going in. Why not preallocate and fill? Is there a reason we need vector functionality?
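A minimal sketch of that suggestion, assuming `lambdas` holds one statevector per observable; `StateVector`, `buildLambdas`, and `numObs` are illustrative names, not the PR's actual API:

```cpp
#include <complex>
#include <cstddef>
#include <vector>

using CplxType = std::complex<double>;
using StateVector = std::vector<CplxType>;  // stand-in for the PR's statevector type

void buildLambdas(const StateVector& phi, std::size_t numObs) {
    // numObs is known before the loop, so construct `lambdas` at its
    // final size instead of growing it with push_back.
    std::vector<StateVector> lambdas(numObs);
    for (std::size_t i = 0; i < numObs; ++i) {
        lambdas[i] = phi;  // fill slot i in place with a copy of |phi>
        // ... apply the i-th observable to lambdas[i] here ...
    }
}
```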
Ok. I think I see. `lambdas` is a vector of statevectors? Why not just have an array with an additional dimension?
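For comparison, a hedged sketch of that flat-array alternative: one contiguous buffer with the observable index as the extra dimension. The sizes and names here are illustrative only:

```cpp
#include <complex>
#include <cstddef>
#include <vector>

using CplxType = std::complex<double>;

int main() {
    const std::size_t numObs = 3;  // illustrative sizes only
    const std::size_t dim = 8;     // 2^(number of qubits)

    // One contiguous allocation; statevector i lives in the half-open
    // range [i * dim, (i + 1) * dim) instead of in its own vector.
    std::vector<CplxType> lambdas(numObs * dim);

    for (std::size_t i = 0; i < numObs; ++i) {
        CplxType* lambda_i = lambdas.data() + i * dim;
        (void)lambda_i;  // ... write the i-th statevector through lambda_i ...
    }
    return 0;
}
```

Contiguous storage trades slightly less convenient indexing for a single heap allocation and better cache behavior across observables.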
```cpp
throw std::invalid_argument("The operation is not supported using the adjoint differentiation method");
} else if ((operations[i] != "QubitStateVector") && (operations[i] != "BasisState")) {
    // copy |phi> to |mu> before applying Uj*
    CplxType* phiCopyArr = new CplxType[phi.length];
```
Could you just allocate the memory once and then always copy the `phi` statevector there, overwriting its value from the last loop?
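A sketch of the allocate-once pattern being suggested, under the assumption that the buffer only ever needs to hold the current |phi>; `adjointLoop`, `mu`, and `numOps` are hypothetical names:

```cpp
#include <algorithm>
#include <complex>
#include <cstddef>
#include <vector>

using CplxType = std::complex<double>;

void adjointLoop(const std::vector<CplxType>& phi, std::size_t numOps) {
    // Allocate the |mu> scratch buffer once, outside the loop ...
    std::vector<CplxType> mu(phi.size());
    for (std::size_t j = 0; j < numOps; ++j) {
        // ... and overwrite it with the current |phi> on each pass,
        // replacing the per-iteration `new CplxType[phi.length]`.
        std::copy(phi.begin(), phi.end(), mu.begin());
        // ... apply Uj* to mu here ...
    }
}
```

Using a `std::vector` for the scratch buffer also removes the need to `delete[]` manually on every path out of the loop.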
```python
# send in flattened array of zeros to be populated by adjoint_jacobian
jac = np.zeros(len(tape.observables) * len(tape.trainable_params))
adjoint_jacobian(
```
Maybe name this something slightly different from the method?
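For reference, a small sketch of the flattened-buffer convention the snippet above relies on, written in C++ since that is where the buffer is ultimately filled; `writeJacEntry`, `numParams`, `o`, and `p` are illustrative names, not the PR's API:

```cpp
#include <cstddef>
#include <vector>

// Write the Jacobian entry for observable o and trainable parameter p
// into the caller-supplied flat buffer, assuming the row-major layout
// implied by np.zeros(len(observables) * len(trainable_params)).
void writeJacEntry(std::vector<double>& jac, std::size_t numParams,
                   std::size_t o, std::size_t p, double value) {
    jac[o * numParams + p] = value;
}
```

On the Python side the flat buffer would presumably be reshaped to `(len(tape.observables), len(tape.trainable_params))` after the call.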
```cpp
    opWires[i].size()
);
```
```cpp
if (std::find(trainableParams.begin(), trainableParams.end(), paramNumber) != trainableParams.end()) {
```
I think here you could take advantage of the fact that `trainableParams` is ordered and `paramNumber` is continuously decreasing. You could have a `next_trainable_param_index` and compare `trainableParams[next_trainable_param_index]` to `paramNumber`. If `paramNumber` is trainable, you decrease `next_trainable_param_index`.
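A sketch of that bookkeeping, with hypothetical names; it assumes `trainableParams` is sorted ascending and that `paramNumber` visits every parameter index in decreasing order during the backward pass:

```cpp
#include <cstddef>
#include <vector>

// Returns true when paramNumber is trainable, consuming one entry of the
// sorted trainableParams list per match. The caller initializes
// next_trainable_param_index to trainableParams.size() - 1 before the
// backward loop and lets it walk down toward -1, replacing the
// std::find call on every operation.
bool isTrainable(const std::vector<std::size_t>& trainableParams,
                 std::size_t paramNumber,
                 std::ptrdiff_t& next_trainable_param_index) {
    if (next_trainable_param_index >= 0 &&
        trainableParams[static_cast<std::size_t>(next_trainable_param_index)] == paramNumber) {
        --next_trainable_param_index;  // advance to the next-smaller trainable parameter
        return true;
    }
    return false;
}
```

This turns the per-operation membership test from a linear search into a single comparison.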
Adds the adjoint method to `lightning.qubit`.