Solver rewrite #66
Conversation
…ts in compatibility with EWF.
… correction calculation.
I've fixed a few things leading to test failures
…tol option to CISD solver.
The symmetry check via the environment orbitals could be replaced with checking the cluster orbitals + effective 1-electron Hamiltonian or Fock matrix inside the cluster space, although this is a somewhat delayed test (only once the cluster space has been built, vs on fragment creation). It's also not completely equivalent, since, e.g., a single He atom placed 1 m away from the molecule would break the symmetry as detected via the environment orbitals, but not as detected via the effective mean-field potential. But I guess this is fine, since a cluster with the same cluster orbitals + effective potential should still give the same result in this case, even though it's not technically a symmetry of the system.
This is true, but the different types of ERIs also have some justification, as different sets of ERI blocks are needed depending on the solver (e.g. MP2 vs CCSD), integral symmetry (RHF vs UHF, real vs complex), and density fitting ("vvvv" vs "vvL"). Frequently there are also incore and outcore versions. Does this rewrite still make sure that we use the minimum required resources?
OK, that's nice. Roughly how many auxiliaries per cluster orbital do we generally get with the compression?
We have two "tailorings": a "TCCSD solver" and tailoring from other fragments. The second approach offers all the functionality of the former, but we might want to keep the former anyway, for its simplicity. Both approaches are working on master.
Does this mean you construct the full cluster 2-DM for MP2 fragments? This would limit the applicability of the DM-energy functional for MP2 fragments: for example, with 300 occupied and 700 virtual orbitals the full 2-DM would require 8 TB of storage, but the occ-vir-occ-vir part only 350 GB.
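For reference, these storage figures follow directly from the array dimensions at double precision (8 bytes per element):

```python
nocc, nvir = 300, 700
norb = nocc + nvir

full_2dm_tb = norb**4 * 8 / 1e12      # full 2-DM: 1000^4 doubles = 8.0 TB
ovov_gb = (nocc * nvir)**2 * 8 / 1e9  # occ-vir-occ-vir block only: ~352.8 GB

print(full_2dm_tb, ovov_gb)
```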
```diff
@@ -1591,3 +1597,11 @@ def brueckner_scmf(self, *args, **kwargs):
     """Decorator for Brueckner-DMET."""
     self.with_scmf = Brueckner(self, *args, **kwargs)
     self.kernel = self.with_scmf.kernel.__get__(self)
+
+def check_solver(self, solver):
```
Duplicated from the fragment class?
Just so we can check both that the default configuration is valid at initialisation and that the specific configuration for each fragment is supported. It could be amalgamated, but the checks are slightly different and it's a very short function.
```diff
@@ -278,6 +266,8 @@ def kernel(self, solver=None, init_guess=None, eris=None):
     self._results = results = self.Results(fid=self.id, n_active=cluster.norb_active,
                                            converged=cluster_solver.converged, wf=cluster_solver.wf, pwf=pwf)
+
+    eris = cluster_solver.hamil.get_eris_bare()
```
Does this recalculate the ERIs?
Can we use the convention `eris` and `eris_screened`, rather than `eris_bare` and `eris`?
> Does this recalculate the ERIs?

The ERIs are currently cached within the Hamiltonian once calculated (an unused configuration option, defaulting to True, controls this), so they won't necessarily be recalculated; nonetheless, this is suboptimal if they haven't previously been calculated. There are some ways to get around this that might be interesting; I'll add them to the conversation below.
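A minimal sketch of the caching behaviour described above; the class, attribute, and option names here are illustrative rather than the actual Vayesta API:

```python
class CachedHamiltonian:
    """Illustrative stand-in for the hamiltonian's ERI caching."""

    def __init__(self, cache_eris=True):
        self.cache_eris = cache_eris  # the config option defaulting to True
        self._eris_bare = None

    def get_eris_bare(self):
        if self._eris_bare is not None:
            return self._eris_bare        # cache hit: no recalculation
        eris = self._compute_eris_bare()  # the expensive integral transform
        if self.cache_eris:
            self._eris_bare = eris
        return eris

    def _compute_eris_bare(self):
        raise NotImplementedError  # stands in for the real AO->cluster transform
```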
> Can we use the convention: eris and eris_screened rather than eris_bare and eris

I don't have a hugely strong preference in this context, but thinking about it, when we have two possible sets of ERIs, ensuring it's totally clear which is being used might be useful. Could we do `eris_bare` and `eris_screened`?
```python
def get_solver(self, solver=None):
    # This detects based on fragment what kind of Hamiltonian is appropriate (restricted and/or EB).
    cl_ham = ClusterHamiltonian(self, self.mf, self.log, screening=self.opts.screening)
```
It might be useful to keep the Hamiltonian outside the cluster solver scope, i.e. to generate it first? This would help in cases where you want to keep the cluster Hamiltonian stored after the solver has completed.
This could be nice, though if we cache the ERIs within it we obviously need to ensure it is deleted afterwards to avoid using too much memory, as usual. This would then allow the energy calculation to access whatever ERIs are needed directly, potentially making use of cderis if available.
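A rough sketch of the restructuring being suggested; the constructor call mirrors the diff above (with `fragment` in place of `self`), while the solver instantiation and energy call are illustrative:

```python
# Build the Hamiltonian first, outside the solver, so it outlives the solver
# and its cached integrals can be reused afterwards.
hamil = ClusterHamiltonian(fragment, fragment.mf, fragment.log,
                           screening=fragment.opts.screening)

solver_cls = _get_solver_class(is_uhf=False, is_eb=False, solver="CCSD")
cluster_solver = solver_cls(hamil)
cluster_solver.kernel()

# The energy functional can now access whatever integrals it needs directly,
# making use of cderis (three-centre DF integrals) if available.
eris = hamil.get_eris_bare()
e_dm = fragment.make_fragment_dm2cumulant_energy(eris=eris)

del hamil  # drop the cached ERIs once finished, to limit memory use
```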
```diff
@@ -493,12 +479,12 @@ def make_fragment_dm2cumulant_energy(self, eris=None, t_as_lambda=False, sym_t2=
     if eris is None:
         eris = self.base.get_eris_array(self.cluster.c_active)
     dm2 = self.make_fragment_dm2cumulant(t_as_lambda=t_as_lambda, sym_t2=sym_t2, approx_cumulant=approx_cumulant,
                                          full_shape=False)
```
Full shape is a waste of memory for MP2
This was just a stopgap to get the energy calculation to work.
vayesta/solver_rewrite/__init__.py (outdated)
```python
raise e

def _get_solver_class(is_uhf, is_eb, solver):
```
Maybe this would be cleaner with a dictionary: `solver_dict[(solver, is_uhf, is_eb)]`
That could work, though giving good error messages for the different cases (unrecognised solver, not implemented for UHF, or not implemented for EB) might be more difficult with a dictionary. I might leave it for now?
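For reference, the error-message concern could be handled while keeping the dictionary, by testing the lookup key in stages; a sketch with placeholder solver-class names:

```python
_SOLVERS = {
    # (solver, is_uhf, is_eb) -> solver class; the class names are placeholders.
    ("MP2",  False, False): RMP2_Solver,
    ("CCSD", False, False): RCCSD_Solver,
    ("CCSD", True,  False): UCCSD_Solver,
    ("FCI",  False, True):  EB_RFCI_Solver,
}

def _get_solver_class(is_uhf, is_eb, solver):
    key = (solver.upper(), is_uhf, is_eb)
    if key in _SOLVERS:
        return _SOLVERS[key]
    # Distinguish an unrecognised solver from a missing UHF/EB specialisation.
    if not any(k[0] == key[0] for k in _SOLVERS):
        raise ValueError("Unknown solver: %s" % solver)
    raise NotImplementedError("Solver %s not implemented for is_uhf=%s, is_eb=%s"
                              % (solver, is_uhf, is_eb))
```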
So when using density fitting you can now either request this when using a … The main shortcoming of all this currently is the inability to perform calculations with the ERIs outcore (with the important exception of DF-CCSD, which can generate outcore …
I haven't done any systematic testing; it's system-dependent whether you get a considerable reduction beyond the dimension of the product space spanned by the original cderis, but even at that limit it can be a fair reduction.
Yeah, the TCCSD is implemented and I can have a look at the tailoring from other fragments.
That's the current workaround just to show we're getting the correct results; moving to using the Hamiltonian itself to get the ERIs for a cluster calculation will get rid of this.
…eened() to avoid ambiguity.
This is the long-awaited rewrite of all cluster solvers to use a single shared interface to generate all integrals within the cluster.
The major change in this PR is the introduction of a new class of solvers, under the temporary folder `vayesta/solver_rewrite/` (to be moved to `vayesta/solver/` prior to merger), which use a new set of intermediate classes, `ClusterHamiltonian`, to generate all information about the cluster within a calculation without direct reference to any fragments. This can take the form of either an appropriate effective pyscf mean-field or the base integrals themselves; in either case, all information is expressed within only the active space of a given cluster, avoiding any reference to the environment orbitals. Compared to our previous approaches, this allows much more code reuse and therefore much more streamlined code within the new solvers themselves.
As an example, consider a new solver which takes as input a pyscf mean-field object.
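A minimal sketch of what this could look like; the `to_pyscf_mf` helper name is an assumption, while the `allow_dummy_orbs` keyword and `frozen` return value are the features described just below:

```python
from pyscf import cc

def kernel(hamil):
    # Express the cluster as an effective pyscf mean-field object. If the spin
    # channels have unequal numbers of orbitals, dummy virtuals are padded in
    # and their indices are returned via `frozen` (otherwise None).
    mf, frozen = hamil.to_pyscf_mf(allow_dummy_orbs=True)

    # Any pyscf post-mean-field method that supports freezing orbitals can be run.
    mycc = cc.CCSD(mf, frozen=frozen)
    mycc.kernel()
    return mycc
```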
Here, the `allow_dummy_orbs` keyword determines whether dummy virtual orbitals can be included within the `hf` object, to then be frozen without effect, in the case that the spin channels have uneven numbers of orbitals. These orbital indices are specified in the `frozen` return value, which is otherwise `None`. If this parameter is set to `False`, a `NotImplementedError` will be raised wherever dummy orbitals would be required, since the fix would be for that solver to support freezing specified orbitals. If the one- and two-body integrals can be used directly, this would instead become:
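A corresponding sketch for the integral-based route; `get_eris_bare` appears in the diffs above, while `get_heff` and `nelec` are assumed names for the effective one-body Hamiltonian and cluster electron number:

```python
from pyscf import fci

def kernel(hamil):
    # Effective one-body Hamiltonian and bare ERIs, both expressed purely
    # within the cluster active space (no environment orbitals).
    h1e = hamil.get_heff()
    eris = hamil.get_eris_bare()
    norb = h1e.shape[-1]

    # For example, exact diagonalisation over the cluster space:
    e_fci, civec = fci.direct_spin1.kernel(h1e, eris, norb, hamil.nelec)
    return e_fci, civec
```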
The new solvers can be obtained via a call to `self.get_solver(solver)` within all subclasses of `qemb.fragment`. This automatically generates a Hamiltonian for the current cluster (appropriately spin (un)restricted and purely fermionic or coupled electron-boson), then obtains the required solver class. An equivalent call to `self.check_solver(solver)` uses similar code to identify whether the current fragment solver configuration is supported, without any overhead from generating the Hamiltonian, allowing us to avoid long lists specifying valid solvers on an embedding-method-by-embedding-method basis. Obviously, there could still be issues with support for different functionalities within `ewf` with different solvers, so I might need to add some more wavefunction conversions, but this is hopefully broadly reasonable.

Implementation of solver methods currently in master:
Current limitations:

- Using an intermediate pyscf mf object currently requires equal numbers of active alpha and beta orbitals. This could be avoided by introducing additional dummy orbitals to be frozen without affecting the calculation. This currently isn't implemented, but once added we'll have the same limitations as current methods, i.e. requiring that the solver approach supports freezing orbitals. I'd lean towards having this before merging, to avoid feature regression, but could be persuaded otherwise. Update: I've now pushed an update which implements this, allowing treatment of arbitrary spin symmetry-broken CAS spaces with CISD and CCSD solvers, hopefully removing any feature regression compared to the previous implementations. I've also updated the above examples for these features.
- I've left coupling alone for now, but there shouldn't be any additional surprises in its implementation. I figured @maxnus would probably be best placed to set this up, but I would be happy to get it working otherwise.
- I've implemented TCCSD in the new solvers, which I quite like as a demonstration of the benefits of the new approach. I don't know what's currently supported in terms of tailoring/coupling between CCSD fragments, as I can't find obviously pertinent tests.
- We currently explicitly construct and store the 4-centre ERIs within the active space of each cluster during calculations, which could cause memory issues for very large clusters. We've discussed previously directly compressing `cderi` to allow the use of RI within the cluster space, but I haven't currently explored this any further. Hopefully this isn't critical, but shout if it is. Update: DF-MP2 and DF-RCCSD are now added, matching previous functionality.

Currently all tests not caught by the caveats above pass on my machine, while those requiring unsupported functionality obviously fail. As usual, we want to have all tests pass (or a good justification for why a result has changed) before merger; please excuse initial failures!