Commit

Extend validation section with EBchannelFlow and 1D PMF. (#180)
* Add EBChannelFlow validation test case. LES results are still in progress.

* Add data from a laminar premixed flame case and comparisons
with Cantera.

* A fresh pass on the doc.

* Add a couple of missing files.

* Add the Re_t = 934 case to the validation and stop there for now.

* Trailing whitespace.
esclapez authored Mar 1, 2023
1 parent 3fd1187 commit 5751900
Showing 15 changed files with 327 additions and 34 deletions.
12 changes: 6 additions & 6 deletions Docs/source/manual/Model.rst
@@ -176,11 +176,11 @@ An overview of the `PeleLMeX` time-advance function is provided in the figure below

.. figure:: images/model/PeleLMeX_Algorithm.png
:align: center
-   :figwidth: 70%
+   :figwidth: 50%

-The three steps of the low Mach number projection scheme described in Section `ssec:projScheme`_ are referenced to better emphasize how the thermodynamic solve is
-closely weaved into the fractional step appraoch. Striped boxes indicate where the Godunov procedure described in Section `ssec:advScheme`_ is employed while
-the four different linear solves are highlighted.
+The three steps of the low Mach number projection scheme described :ref:`below <ssec:projScheme>` are referenced to better
+emphasize how the thermodynamic solve is closely woven into the fractional-step approach. Striped boxes indicate where the
+:ref:`Godunov procedure <ssec:advScheme>` is employed while the four different linear solves are highlighted.
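
For orientation, the interleaving sketched in the figure can be caricatured in a few lines of pseudo-C++; the stage names below are hypothetical stand-ins, not `PeleLMeX`'s actual API: ::

    // Illustrative-only sketch of the fractional-step / SDC interleaving;
    // the real ordering and data structures differ in detail.
    #include <cstdio>

    void macProject()         { std::puts("Step 1: MAC projection (linear solve)"); }
    void godunovAdvection()   { std::puts("  Godunov advection terms"); }
    void implicitDiffusion()  { std::puts("  implicit diffusion (linear solves)"); }
    void integrateChemistry() { std::puts("  stiff chemistry integration"); }
    void nodalProject()       { std::puts("Step 3: nodal projection (linear solve)"); }

    int main() {
        const int nSDC = 2;                // SDC iterations per time step
        macProject();                      // face velocities satisfy the constraint
        for (int k = 0; k < nSDC; ++k) {   // Step 2: SDC corrector loop
            godunovAdvection();            // striped boxes in the figure
            implicitDiffusion();
            integrateChemistry();          // thermodynamic solve woven in here
        }
        nodalProject();                    // closes the time step
    }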

Low Mach number projection scheme
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -345,7 +345,7 @@ This difference is illustrated in the figure below comparing the multi-level tim

.. figure:: images/model/PeleLMeX_Subcycling.png
:align: center
-   :figwidth: 90%
+   :figwidth: 60%

* `PeleLM` will recursively advance finer levels, halving the time step size (when using a refinement ratio of 2) at each level. For instance, considering a 3-level simulation, `PeleLM` advances the coarse `Level0` over a :math:`\Delta t_0` step, then `Level1` over a :math:`\Delta t_1` step and `Level2` over two :math:`\Delta t_2` steps, interpolating the `Level1` data after the first `Level2` step. At this point, a synchronization step is performed to ensure that fluxes are conserved at the coarse-fine interface; a second `Level1` step then follows, along with the same two `Level2` steps, after which two synchronizations are needed between the two pairs of levels.
* In order to reach the same physical time, `PeleLMeX` will perform 4 time steps of a size similar to `PeleLM`'s :math:`\Delta t_2`, advancing all the levels at once. Coarse-fine flux consistency is instead ensured by averaging down the face-centered fluxes from fine to coarse levels. Additionally, the state itself is averaged down at the end of each SDC iteration (see the sketch below).
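
The contrast between the two strategies can be sketched schematically, assuming a refinement ratio of 2 and purely illustrative function names (this is not the actual Pele code): ::

    #include <cstdio>

    // PeleLM-style: recurse to finer levels, halving dt, syncing pairwise.
    void advanceSubcycled(int lev, int finest, double dt) {
        std::printf("advance Level%d by dt = %g\n", lev, dt);
        if (lev < finest) {
            advanceSubcycled(lev + 1, finest, 0.5 * dt);
            advanceSubcycled(lev + 1, finest, 0.5 * dt);
            std::printf("sync fluxes between Level%d and Level%d\n", lev, lev + 1);
        }
    }

    // PeleLMeX-style: all levels advance together with the finest-level dt;
    // fluxes and state are averaged down fine-to-coarse instead.
    void advanceNonSubcycled(int finest, double dtFine, int nSteps) {
        for (int n = 0; n < nSteps; ++n) {
            std::printf("advance Level0-Level%d by dt = %g\n", finest, dtFine);
        }
    }

    int main() {
        advanceSubcycled(0, 2, 1.0);       // one coarse step, recursively subcycled
        advanceNonSubcycled(2, 0.25, 4);   // same physical time in 4 uniform steps
    }
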
@@ -361,7 +361,7 @@ mesh is uniform and block-structured, but the boundary of the irregular-shaped c
through this mesh. Each cell in the mesh becomes labeled as regular, cut or covered, and the finite-volume
based discretization methods traditionally used in AMReX applications need to be modified to incorporate these cell shapes.
AMReX provides the necessary EB data structures, including volume and area fractions, surface normals and centroids,
-as well as local connectivity information. The fluxes described in Section `ssec:projScheme`_ are then modified to account
+as well as local connectivity information. The fluxes described in :ref:`the projection scheme section <ssec:projScheme>` are then modified to account
for the aperture opening between adjacent cells, and the additional EB fluxes are included when constructing the cell flux divergences.
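
In simplified form (an assumed 2D stencil, not AMReX's actual EB data structures), the resulting cut-cell divergence can be written as: ::

    #include <array>

    // Simplified 2D cut-cell divergence: face fluxes are weighted by their
    // apertures (area fractions), an EB-face flux is added, and the result is
    // rescaled by the cell volume fraction. AMReX's real EB machinery is richer.
    double cutCellDivergence(const std::array<double, 4>& flux,     // xlo, xhi, ylo, yhi
                             const std::array<double, 4>& aperture, // area fractions in [0,1]
                             double ebFlux, double ebAreaFrac,
                             double volFrac, double dx)
    {
        double div = (aperture[1] * flux[1] - aperture[0] * flux[0]
                    + aperture[3] * flux[3] - aperture[2] * flux[2]) / dx;
        div += ebAreaFrac * ebFlux / dx;   // additional embedded-boundary contribution
        return div / volFrac;              // account for the reduced cut-cell volume
    }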

A common problem arising with EB is the presence of small cut cells, which can either introduce undesirable constraints on
4 changes: 2 additions & 2 deletions Docs/source/manual/Performances.rst
@@ -106,7 +106,7 @@ of the stiff chemistry integration, especially on the GPU.
Results on Crusher (ORNL)
^^^^^^^^^^^^^^^^^^^^^^^^^

-Crusher is the testbed for DOE's first ExaScale platform Frontier. Crusher's `nodes <https://docs.olcf.ornl.gov/systems/crusher_quick_start_guide.html#crusher-compute-nodes>`_ consists of a single AMD EPYC 7A53 (Trento), 64 cores CPU connected to 4 AMD MI250X,
+Crusher is the testbed for DOE's first exascale platform, Frontier. `Crusher's nodes <https://docs.olcf.ornl.gov/systems/crusher_quick_start_guide.html#crusher-compute-nodes>`_ consist of a single AMD EPYC 7A53 (Trento) 64-core CPU connected to 4 AMD MI250X accelerators,
each containing 2 Graphics Compute Dies (GCDs) for a total of 8 GCDs per node. When running with GPU acceleration, `PeleLMeX` will use 8 MPI ranks, each with access to one GCD, while when running on flat MPI, we will use 64 MPI ranks.
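
The rank-to-GCD binding can be pictured with the following minimal sketch, assuming a single node and the HIP runtime; in practice AMReX performs the device selection internally: ::

    #include <mpi.h>
    #include <hip/hip_runtime.h>

    // Illustration only: bind each of the 8 node-local MPI ranks to one GCD.
    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        int nDevices = 0;
        hipGetDeviceCount(&nDevices);   // 8 GCDs visible on a Crusher node
        hipSetDevice(rank % nDevices);  // one GCD per rank with 8 ranks per node
        MPI_Finalize();
        return 0;
    }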

The FlameSheet case is run using 2 levels of refinement (3 levels total) and the following domain size and cell count:
@@ -149,7 +149,7 @@ Results on Summit (ORNL)
^^^^^^^^^^^^^^^^^^^^^^^^

Summit was launched in 2018 as DOE's first fully GPU-accelerated platform.
-Summit's `nodes <https://docs.olcf.ornl.gov/systems/summit_user_guide.html#summit-nodes>`_ consists
+`Summit's nodes <https://docs.olcf.ornl.gov/systems/summit_user_guide.html#summit-nodes>`_ consist
of two IBM Power9 CPUs connected to 6 NVIDIA V100 GPUs. When running with GPU acceleration, `PeleLMeX` will
use 6 MPI ranks, each with access to one V100, while when running on flat MPI, we will use 42 MPI ranks.
Note that in contrast with the newer GPUs available on Perlmutter or Crusher, Summit's V100s only have 16 GB of
22 changes: 11 additions & 11 deletions Docs/source/manual/Tutorials_FlowPastCyl.rst
@@ -33,25 +33,25 @@ Follow the steps listed below to get to this point:

#. Move into the Exec folder containing the ``FlowPastCylinder`` case. To do so: ::

-   cd PeleLMeX/Exec/RegTests/EB_FlowPastCylinder
+   cd PeleLMeX/Exec/RegTests/EB_FlowPastCylinder

#. Finally, set up the environment variables providing paths to `PeleLMeX` and its dependencies. This can be done in
one of two ways:

#. Directly into the `GNUmakefile` by updating the top-most lines as follows: ::

-   PELELMEX_HOME = <path_to_PeleLMeX>
-   AMREX_HOME =${PELELMEX_HOME}/Submodules/amrex
-   AMREX_HYDRO_HOME =${PELELMEX_HOME}/Submodules/AMReX-Hydro
-   PELE_PHYSICS_HOME =${PELELMEX_HOME}/Submodules/PelePhysics
+   PELELMEX_HOME = <path_to_PeleLMeX>
+   AMREX_HOME =${PELELMEX_HOME}/Submodules/amrex
+   AMREX_HYDRO_HOME =${PELELMEX_HOME}/Submodules/AMReX-Hydro
+   PELE_PHYSICS_HOME =${PELELMEX_HOME}/Submodules/PelePhysics


#. Exporting shell environment variables (using *bash* for instance): ::

-   export PELELMEX_HOME=<path_to_PeleLMeX>
-   export AMREX_HOME=${PELELMEX_HOME}/Submodules/amrex
-   export AMREX_HYDRO_HOME=${PELELMEX_HOME}/Submodules/AMReX-Hydro
-   export PELE_PHYSICS_HOME=${PELELMEX_HOME}/Submodules/PelePhysics
+   export PELELMEX_HOME=<path_to_PeleLMeX>
+   export AMREX_HOME=${PELELMEX_HOME}/Submodules/amrex
+   export AMREX_HYDRO_HOME=${PELELMEX_HOME}/Submodules/AMReX-Hydro
+   export PELE_PHYSICS_HOME=${PELELMEX_HOME}/Submodules/PelePhysics

Both options require providing the path to where you cloned `PeleLMeX`. Note that using the first option will overwrite any
environment variables you might have previously defined when using this `GNUmakefile`.
@@ -61,7 +61,7 @@ You're good to go!
Numerical setup
---------------

-In this section we review the content of the various input files for the flow past cylinder test case. To get additional information about the keywords discussed, the user is referred to section :ref:`sec:control`.
+In this section we review the content of the various input files for the flow past cylinder test case. To get additional information about the keywords discussed, the user is referred to :doc:`LMeXControls`.

Test case and boundary conditions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -173,7 +173,7 @@ This initial solution is constructed via the routine ``pelelm_initdata()``, in t
Numerical scheme
^^^^^^^^^^^^^^^^

-The ``NUMERICS CONTROL`` block can be modified by the user to increase the number of SDC iterations. Note that there are many other parameters controlling the numerical algorithm that the advanced user can tweak, but we will not talk about them in the present Tutorial. The interested user can refer to section :ref:`sec:control`.
+The ``NUMERICS CONTROL`` block can be modified by the user to increase the number of SDC iterations. Note that there are many other parameters controlling the numerical algorithm that the advanced user can tweak, but we will not talk about them in the present Tutorial. The interested user can refer to :doc:`LMeXControls`.
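
For instance, the number of SDC iterations is set from the input file along the following lines (keyword assumed here; check :doc:`LMeXControls` for the authoritative name): ::

    peleLM.sdc_iterMax = 2    # number of SDC iterations per time step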


Building the executable