Integration test: quasi-2D #578
What is the adaptive B-spline mesh? "truncate"? We will probably deprecate it.
These should be split. Vacuum options are apparently useful and used, but eventually need to be moved inside the input XML specification. The adaptive mesh is worth testing provided it is not much work to set up. Indeed it might be deprecated, but we should not go out of our way to do so. It has some champions, but considering that there is no published data and we already have multiple bets on different basis functions to save memory, it is very much a research fallback.
I can add two tests instead of a single combined one.
I have found truncate useful in a number of instances. @ye-luo Why do you say we will deprecate it? Are you thinking the memory savings from hybrid are enough to offset its usefulness? In particular, I think for attempting catalysis calculations on a surface there are no really good alternatives to using plane waves and a big box.
@jtkrogel Could you clarify what exact option you mean by "adaptive B-spline mesh"?
@lshulen Using hybrid + the vacuum option is enough to offset its usefulness. This way also reduces the pain of DFT. The "truncate" two-grid scheme has not been updated to SoA.
Before making a decision about future support or deprecation, we will examine actual data for problems such as surface catalysis.
By "adaptive" I mean the two grid scheme enabled by truncate (+buffer). |
Actual data is a bit tricky, but I can give a couple of back-of-the-envelope calculations for reference.

The first case is the aforementioned catalysis. I'm imagining a slab of material, say platinum, with a molecule landing on top, say CO. For these purposes, I would imagine needing ~6 layers of Pt, which comes to ~22 bohr deep. Then add about 4 bohr for the CO. To prevent interactions with the periodic image in espresso, I would probably add about 20 bohr of vacuum on top. I would estimate a buffer layer of 2 bohr would be sufficient, so using truncate I would expect 30 bohr of a fully dense grid in a box that is ~46 bohr tall. The fully dense grid costs 8x the memory of the coarse one currently used for a given volume, so without truncate the entire 46 bohr box would be stored at the dense-grid cost.

Another possibility would be docking a molecule onto a wire. Picking a random example, imagine a carbon nanotube with a 22 bohr diameter. For simplicity, I'll imagine that the interaction is going on inside the pore, so just add 20 bohr of vacuum in each of the x and y directions so the total cell is 44 bohr x 44 bohr. A buffer of 3 bohr would probably be sufficient here, so the dense grid occupies a 28 bohr by 28 bohr portion of the cell when using truncate. Here the savings is roughly a factor of 2.

These are not the most aggressive cases I can think of, but I believe they are realistic. The strongest case I can put forward is for some calculations I was working on of doped diamondoids. You could argue that such calculations of isolated clusters are possible using a Gaussian basis, but in this case the very tenuously bound electron made converging a Gaussian basis extremely difficult. One case I considered had a cluster that was about 9 bohr on each side and needed to be in a box ~33 bohr on each side to provide adequate insulation from periodic images in quantum espresso. So in this ~nightmare scenario, truncate saves you nearly a factor of 5 in memory.

Assuming all of this analysis is correct, the question is whether the potential memory savings are worth the added code / testing complexity. I think the first case, where we achieve a ~25% memory savings for a molecule on a surface, is a likely application. The second, where we are working on a wire and gain a factor of 2, is a bit more specialized but potentially useful, and the last, where we gain a factor of 5, is likely an edge case where most applications would use a Gaussian basis rather than plane waves. What does everyone else think? Does my analysis look correct? If so, what do you think of the utility vs programming effort?
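For reference, here is a minimal Python sketch (my addition, not part of the original comment) that reproduces the back-of-the-envelope numbers above, assuming the dense grid costs 8x the memory of the coarse grid per unit volume and that the total cost is just the volume-weighted sum of the two; the 15 bohr dense region in the diamondoid case is my guess for cluster plus buffer.

```python
import math

# Back-of-the-envelope check of the estimates above (assumptions, not measured data):
# the dense grid costs 8x the memory of the coarse grid per unit volume, and the
# truncated scheme stores a dense core plus a coarse remainder.

def memory_ratio(cell_edges, dense_edges):
    """Memory with a uniform dense grid divided by memory with truncate.

    `cell_edges` / `dense_edges` list only the directions that are truncated
    (in bohr); directions left untouched cancel out of the ratio.
    """
    cell_vol = math.prod(cell_edges)
    dense_vol = math.prod(dense_edges)
    uniform = 8 * cell_vol                              # everything on the dense grid
    truncated = 8 * dense_vol + (cell_vol - dense_vol)  # dense core + coarse rest
    return uniform / truncated

print(memory_ratio([46], [30]))                  # Pt slab + CO: ~1.4, i.e. ~25-30% saved
print(memory_ratio([44, 44], [28, 28]))          # nanotube: ~2.1, "roughly a factor of 2"
print(memory_ratio([33, 33, 33], [15, 15, 15]))  # diamondoid: ~4.8, "nearly a factor of 5"
```

With these assumptions the three cases come out to roughly 30% saved, a factor of ~2, and nearly a factor of 5, consistent with the estimates quoted above.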
@lshulen Your analysis looks correct to me. In your examples, a factor of 8 is the theoretical maximum.
@ye-luo No argument that hybrid is generally a bigger and more general improvement. This has the potential to add an additional factor of memory savings in some limited cases. The question is whether it is worth the effort. It sounds like your answer is no. Does anyone disagree?
@lshulen If we consider using this beyond hybrid rep, I think there is potential. I did a rough estimate: when we reduce the grid by half in each direction, the max G vector reduces by half and the plane-wave cutoff drops by a factor of 4. So in a calculation with a 400 Ry cutoff, first use hybrid and the effective PW cutoff becomes 100 Ry. If we scale down the mesh again, 100 -> 25 Ry may still work for the vacuum region, but the border is pushed further towards the vacuum and thus the memory saving is reduced.

I looked at the code and noticed that "truncate" with complex k-points has never been implemented. We only had it with real k-points and no supercell. If we really need the double-grid scheme, we should write a more flexible wrapper for the double grids and support various boundary conditions. The current one can just be removed. Before doing that, I prefer to check the energy difference between the small and big box in DFT and check whether the vacuum option can recover the big-box energy.
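A tiny sketch (my illustration, not from the comment) of the scaling argument: the plane-wave cutoff goes as the square of the maximum G vector, so halving the B-spline grid in each direction corresponds to a 4x lower effective cutoff.

```python
# E_cut ~ |G_max|^2, and G_max is proportional to the number of grid points per
# direction, so scaling the grid by 1/2 scales the effective cutoff by 1/4.

def effective_cutoff(ecut_ry, grid_scale):
    """Effective plane-wave cutoff (Ry) after scaling the grid per direction."""
    return ecut_ry * grid_scale ** 2

print(effective_cutoff(400.0, 0.5))  # 100 Ry after the first halving (hybrid)
print(effective_cutoff(100.0, 0.5))  # 25 Ry after halving the mesh again
```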
@ye-luo I agree about writing a more flexible wrapper. However, the cases where I think this should be useful are not likely to involve k-points much, if at all. (If you are using symmetry, you can probably get the memory footprint low enough to not need this functionality. Unlike hybrid rep, there is no advantage to this beyond reducing memory.) Where I would like to see additional flexibility, if this were re-implemented, is in allowing the user to specify the meshfactor of the coarse grid rather than just making it a factor of 2 less dense in each direction. You might be able to go considerably coarser than this depending on the geometry, and that might allow more memory savings.
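To illustrate the point about a user-specified coarse mesh factor (a sketch under my own assumptions, not an existing input option): if the coarse region is `factor`x coarser per direction, its cost falls as 1/factor^3, so the extra savings beyond a factor of 2 depend on how much of the cell is coarse.

```python
# Fraction of the uniform-dense-grid memory used by a two-grid scheme when the
# coarse region is `factor`x coarser per direction (assumption: memory scales
# with the number of grid points, i.e. coarse cost = 1/factor**3 per volume).

def relative_memory(dense_fraction, factor):
    coarse_fraction = 1.0 - dense_fraction
    return dense_fraction + coarse_fraction / factor ** 3

# Nanotube-like case: the dense region is ~40% of the cell volume.
for factor in (2, 3, 4):
    print(factor, round(relative_memory(0.40, factor), 3))
# 2 -> 0.475, 3 -> 0.422, 4 -> 0.409 of the uniform dense-grid memory
```

In this parametrization the additional gain saturates once the dense region dominates the total, so the biggest wins from a coarser factor would be in cells that are mostly vacuum.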
@lshulen The current implementation is not flexible; it only supports the old spline implementation with the R2R case. If we envision using it with hybrid rep, we need to rewrite it anyway. I agree with you that the factor should not be hard-coded.
@ye-luo Which functionality here still exists in the current develop branch? Important to know before writing tests.
The ppn type of boundary conditions (periodic in x and y, open in z) is available. We need to test distance tables under such BCs.
The feature no longer exists.
Combined test of the vacuum option and quasi-2D ~~adaptive B-spline mesh~~.