
compute adjoint gradient using restriction/interpolation only when anisotropic materials are present #1886

Merged
merged 3 commits into from
Jan 7, 2022

Conversation

oskooi
Collaborator

@oskooi oskooi commented Jan 7, 2022

#1780 added support for subpixel smoothing of the adjoint gradients, in which ε is an anisotropic tensor. However, in the current implementation of material_grids_addgradient in meepgeom.cpp, the adjoint gradient is always computed for the off-diagonal components of ε regardless of the value of the do_averaging property of the MaterialGrid object, which is a bug. Also, as reported in #1861, computing these off-diagonal elements involves a memory violation in which the array for the forward fields is read outside of its bounds. The memory violation is exacerbated by the fact that do_averaging=True is the default setting in MaterialGrid:

meep/python/geom.py

Lines 538 to 547 in efe3b4b

def __init__(self,
             grid_size,
             medium1,
             medium2,
             weights=None,
             grid_type="U_DEFAULT",
             do_averaging=True,
             beta=0,
             eta=0.5,
             damping=0):

This PR provides a small fix to material_grids_addgradient to ensure that when do_averaging=False, only the diagonal ε elements are computed (and the memory violation is therefore avoided).

@oskooi oskooi added the bug label Jan 7, 2022
@oskooi oskooi requested a review from smartalecH January 7, 2022 07:29
@oskooi oskooi changed the title from "compute adjoint gradient only for diagonal epsilon when subpixel smoothing is disabled" to "compute adjoint gradient for off-diagonal epsilon only when subpixel smoothing is enabled" on Jan 7, 2022
@smartalecH
Collaborator

This isn't quite right.

  1. Even if subpixel smoothing is turned off (e.g. do_averaging=false), there are still occasions when we need to average the other components. Specifically, media with off-diagonal ε components need this (that was the whole point of Anisotropic material gradient support #1801). In theory, there should be no overstepping when we just care about the diagonal components, as we are always using the same field data, regardless of the averaging condition.
  2. Right now, if averaging is turned off (and there are no off-diagonal eps components) then dA/du should be 0, meaning that if there is a bug here, it will be on those routines (e.g. material_grids_addgradient_point()). Perhaps a better check would be something like
else if (md->do_averaging && !(md->medium_1->epsilon_offdiag) && !(md->medium_2->epsilon_offdiag))

such that the routine is bypassed (this needs a close look, however).

  3. The real issue is the overstepping due to the GET_FIELDS macro (as discussed in the above-linked issue, Out-of-bounds indexing in material_grids_addgradient #1861). The logic for this is horrendous. There are a lot of corner cases.
  4. It is more work to get this patched than to get Non-materialized dft fields for adjoint calculations #1855 working (it's very close). The new method of accessing data directly from the chunks removes the need for GET_FIELDS's crazy logic.

@oskooi
Collaborator Author

oskooi commented Jan 7, 2022

I added a check for off-diagonal ε when deciding to execute the restriction/interpolation portion of material_grids_addgradient.

Also, I verified that with this PR, and with subpixel smoothing turned off via do_averaging=False in the MaterialGrid object, the memory violation reported in #1861 goes away for a test problem involving a waveguide bend from the tutorials.

@oskooi oskooi changed the title from "compute adjoint gradient for off-diagonal epsilon only when subpixel smoothing is enabled" to "compute adjoint gradient using restriction/interpolation only when anisotropic materials are present" on Jan 7, 2022
@ahoenselaar
Contributor

It is more work to get this patched than to get Non-materialized dft fields for adjoint calculations #1855 working (it's very close). The new method of accessing data directly from the chunks removes the need for GET_FIELDS's crazy logic.

Attempting to fix-forward and pile more changes on top of an incorrect implementation seems very risky.

@smartalecH
Collaborator

Attempting to fix forward and pile more changes on top of an incorrect implementation seems very risky.

But the implementation is changing... normally I would agree with you but in this case, it's apples and oranges.

@oskooi
Collaborator Author

oskooi commented Jan 7, 2022

It turns out this PR actually prevents the memory violations reported in #1861 even when subpixel smoothing is enabled (do_averaging=True). I tracked down the cause of the memory violations to LOOP_OVER_IVECS(gv, is[ci_adjoint], ie[ci_adjoint], idx) in material_grids_addgradient, which iterates over points outside the grid_volume of the MaterialGrid. Only points outside the MaterialGrid (for which do_averaging is effectively false) trigger the memory violation. With this PR, any grid point with do_averaging=False never executes the restriction/interpolation code, which is why the memory violation is avoided.

I suspect there is a bug somewhere in this block, which determines the starting and ending ivecs for the LOOP_OVER_IVECS:

meep/src/meepgeom.cpp

Lines 2903 to 2910 in efe3b4b

FOR_MY_COMPONENTS(cgrid) {
  meep::vec yee_c(gv.yee_shift(meep::Centered) - gv.yee_shift(cgrid));
  meep::ivec iyee_c(gv.iyee_shift(meep::Centered) - gv.iyee_shift(cgrid));
  meep::volume wherec(where + yee_c);
  is[c_i] = meep::ivec(meep::fields::vec2diel_floor(wherec.get_min_corner(), gv.a, zero_ivec(gv.dim)));
  ie[c_i] = meep::ivec(meep::fields::vec2diel_ceil(wherec.get_max_corner(), gv.a, zero_ivec(gv.dim)));
  c_i++;
}

Anyhow, at the very least, this PR fixes a bug in material_grids_addgradient related to disabling subpixel smoothing when the user specifies do_averaging=False.
