Distributed PatchBasedSmoothers #6

Closed
JordiManyer opened this issue Feb 14, 2023 · 0 comments · Fixed by #7
Labels: enhancement (New feature or request)

I've been looking at how to efficiently implement patch-based smoothers, starting from the version we already have. Here are the notes left by Alberto within the code:

 Rationale behind distributed PatchFESpace:
 1. Patches have an owner. Only owners compute the subspace correction.
    If I am not the owner of a patch, all dofs in my patch become -1.
 2. The subspace correction on an owned patch may affect DoFs which
    are non-owned. These corrections should be sent to the owner
    process, i.e., NO -> O (reversed) communication. [PENDING]
 3. Each processor needs to know how many patches "touch" its owned DoFs.
    This requires NO -> O communication as well. [PENDING]
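Rule 1 can be sketched as follows. This is a serial Python illustration only: `patch_owner` and `patch_dofs` are hypothetical stand-ins for the actual PatchFESpace data structures, not the real API.

```python
# Sketch of rule 1: on each processor, dofs belonging to patches it
# does not own are masked out with -1, so only patch owners compute
# the subspace correction. All data below is hypothetical.

my_rank = 0
patch_owner = {0: 0, 1: 1}                  # patch -> owning rank
patch_dofs  = {0: [3, 4, 5], 1: [5, 6, 7]}  # patch -> local dof ids

masked = {p: (dofs if patch_owner[p] == my_rank else [-1] * len(dofs))
          for p, dofs in patch_dofs.items()}

print(masked)  # patch 1 is owned by rank 1, so its dofs become -1
```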

I have thought about it, and I think there are a couple of improvements that can be made:

  1. First, I think communicating the counts is not general enough. I believe some literature uses weights which can be non-homogeneous and depend, for instance, on the model parameters. So we should have patch-dependent weights.
  2. Second, I believe we should communicate the corrections per dof, not per patch. This would save us some communication.

This is what I thought of implementing (let me know if you think I've missed something, which is very possible). Throughout, the index $i$ runs over dofs, $j$ over patches, and $k$ over the processors sharing a given dof.

Weights:

In general, we probably want weights which are different for each patch, i.e.,

$$w_{ij} = w[\text{dof}_i][\text{patch}_j] \quad \text{s.t.} \quad w_i = \sum_j w_{ij} = 1.0$$

The $w_{ij}$ are local (counts, etc.) and get reduced using their partial sums to obtain

$$w_{ik} = \sum_{j_k} w_{i j_k} \quad \forall k \in \{ \text{processors sharing } i \}$$

These local sums then get all-reduced via nearest-neighbor (NN) communications to obtain the global $w_i$.
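The two reduction steps can be sketched serially. The dictionaries below are hypothetical stand-ins for the per-processor data (`w_local[k][i]` plays the role of the per-patch weights $w_{i j_k}$ contributed by processor $k$), and the NN all-reduce is replaced by a plain sum over processors:

```python
# Hypothetical per-patch weights, keyed by processor k, then dof i.
w_local = {
    0: {"dof_a": [0.5, 0.5], "dof_b": [1.0]},  # processor 0's patches
    1: {"dof_a": [1.0],      "dof_b": [2.0]},  # processor 1's patches
}

# Step 1 (local): w_{ik} = sum over the processor's patches j_k of w_{i j_k}
w_ik = {k: {i: sum(ws) for i, ws in dofs.items()}
        for k, dofs in w_local.items()}

# Step 2 (NN all-reduce, simulated): w_i = sum_k w_{ik}
all_dofs = {i for dofs in w_ik.values() for i in dofs}
w_i = {i: sum(w_ik[k].get(i, 0.0) for k in w_ik) for i in all_dofs}

print(w_i)  # global weight per shared dof
```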

Injection:

Again, we create the local reductions

$$y_{ik} = \sum_{j_k} y_{i j_k} \cdot w_{i j_k}$$

which then get all-reduced to get the final

$$y_i = \frac{1}{w_i} \sum_k y_{ik}$$

If we consider the weights constant, we could also normalize the $w_{i j_k}$ beforehand, and then we have simply

$$y_i = \sum_k y_{ik}$$
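The injection can be sketched the same way. Again this is a serial stand-in for the actual NN communication, with entirely hypothetical data; `y_local[k][i]` holds the per-patch corrections $y_{i j_k}$ and `w_local[k][i]` the matching per-patch weights:

```python
# Hypothetical per-patch corrections and weights for one shared dof,
# keyed by processor k, then dof i.
y_local = {0: {"dof_a": [1.0, 3.0]}, 1: {"dof_a": [2.0]}}
w_local = {0: {"dof_a": [0.5, 0.5]}, 1: {"dof_a": [1.0]}}

# Local weighted reduction: y_{ik} = sum_{j_k} y_{i j_k} * w_{i j_k}
y_ik = {k: {i: sum(y * w for y, w in zip(ys, w_local[k][i]))
            for i, ys in dofs.items()}
        for k, dofs in y_local.items()}

# Global weight w_i for this dof (sum of all its per-patch weights,
# as produced by the weight reduction): 0.5 + 0.5 + 1.0 = 2.0
w_i = {"dof_a": 2.0}

# All-reduce (simulated) and normalization: y_i = (1/w_i) * sum_k y_{ik}
y_i = {i: sum(y_ik[k].get(i, 0.0) for k in y_ik) / w_i[i] for i in w_i}

print(y_i)
```

If the weights were normalized beforehand so that $w_i = 1$, the final division drops out, as noted above.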
JordiManyer added the enhancement label and self-assigned this issue on Feb 14, 2023; a pull request that will close this issue was linked on Feb 15, 2023.