
ucvm2mesh


Example ucvm2mesh_mpi

UCVM can be used to generate large meshes. The UCVM distribution contains a serial meshing program called ucvm2mesh. This entry describes a parallel version of the meshing code, used to speed up the meshing process, called ucvm2mesh-mpi.

Notice: ucvm2mesh-mpi is meant to be run as an MPI job, not as a standalone executable.

ucvm2mesh-mpi generates a mesh in either IJK-12, IJK-20, IJK-32, or SORD format. Unlike its serial version, ucvm2mesh, this command can use multiple cores to generate the mesh. It does so by reading in the specifications in a given configuration file.

Preparing ucvm2mesh-mpi configuration files

First, we'll describe special considerations about running on a computing cluster. Then, we'll show an example of running on a specific cluster, one on campus at USC.

The configuration files required by ucvm2mesh-mpi are more extensive than those of other ucvm commands. A working example of a ucvm2mesh-mpi configuration is provided below.

First, ucvmc must be installed on a computer cluster. Most current HPC clusters are based on Linux operating systems, and in most cases they provide the software tools needed to compile and build the ucvmc distribution, including the GNU compilers, a Python interpreter, and the automake tools.

To successfully install and use ucvm2mesh-mpi on a cluster, consider the following special conditions.

Clusters typically have a variety of disks and filesystems, and the user must figure out how to use them appropriately when building a ucvm mesh. Often a cluster will set a quota on a specific user directory. A cluster will usually provide a large temporary disk for use by running jobs. Sometimes a cluster will provide a group directory, where members of a group can install codes or store data used by the group.

An important consideration for each cluster filesystem is whether it is visible from the compute nodes. In some cases, the computers and disks that users see when they log into the cluster are not visible when a job is running on the cluster. In some cases, the compute nodes are a different type than the login nodes. UCVMC should be installed on a filesystem that is visible from the compute nodes.

The user should identify locations for the UCVMC source directory (25GB) and the UCVM installation directory (20GB). After installation, the source directory can be deleted.

Large meshes require large disk storage. ucvm2mesh-mpi can be used to generate multi-terabyte meshes. When planning to use ucvm2mesh-mpi, the user should identify an output filesystem with room to store the mesh being created.

In some cases, the user can write the output files to the cluster /scratch filesystem. However, files written to /scratch can be removed by the cluster software, so typically the user should move the output files off of /scratch as part of their cluster job.
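
As an illustration only, the end of a cluster job script might copy the mesh and grid files from /scratch to a longer-term project directory. In the minimal sketch below, both paths are hypothetical and cluster-specific.

# Hypothetical end of a job script: move outputs off the temporary /scratch filesystem
OUTDIR=/project/mygroup/meshes            # hypothetical longer-term storage location
mkdir -p $OUTDIR
cp /scratch/$USER/mesh_example_2000m.media $OUTDIR/
cp /scratch/$USER/mesh_example_2000m.grid  $OUTDIR/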

Selecting the number of nodes and processors (or cores)

Typically, ucvm2mesh-mpi configurations are organized around processors. The mesh extraction job is divided among multiple processors, which speeds up the mesh building process.

Cluster hardware is often described in terms of nodes, and a cluster node will often have multiple processors. Typically, you cannot schedule a job on anything less than one node. If your node has 32 processors and you schedule 1 processor per node, the other 31 processors will be idle. However, the RAM on a node is often shared by all the processors, so if you try to run 32 processes on a node, each process will have less RAM to work with. If your job runs out of memory, you may need to run with fewer processors per node.

Worst case, you can run ucvm2mesh-mpi on one processor per node. If your nodes have more than one processor, you can run ucvm2mesh-mpi on multiple processors per node. If you have a 32-processor node, you might be able to run ucvm2mesh-mpi on 32 processors per node.
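
As a minimal sketch, the same 20-process job could be requested in different node layouts using PBS resource requests like the ones used later on this page; the right choice depends on how much memory each process needs.

# Two hypothetical PBS layouts for the same 20-process job:
#PBS -l nodes=4:ppn=5     # 4 nodes, 5 processes per node: more RAM available per process
#PBS -l nodes=1:ppn=20    # 1 node, 20 processes per node: RAM on the node is shared by all 20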

static versus dynamic linking

Some clusters require that all executables be statically linked. By default, ucvm2mesh-mpi is dynamically linked. Dynamically linked executables are typically smaller, but they may achieve lower performance than statically linked programs. Building a statically linked ucvm2mesh-mpi is an advanced topic covered in the advanced user guide on the SCEC wiki.
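
To check whether an installed binary is dynamically or statically linked, the standard ldd utility can be used; the install path below is only an example.

# ldd lists the shared libraries of a dynamically linked binary;
# for a statically linked binary it reports "not a dynamic executable"
ldd /path/to/ucvm/install/bin/ucvm2mesh_mpi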

Example processor division

To figure out how to divide your ucvm2mesh-mpi job onto multiple processors, first calculate the total number of mesh points. We will use the following example, which defines a mesh of 2,380,800 mesh points (384 * 248 * 25).

# Number of cells along each dim
nx=384
ny=248
nz=25

To divide the ucvm queries among several processors, we first specify the length, width, and depth of our simulation volume. The length of the volume is defined as nx * spacing, the width as ny * spacing, and the depth as nz * spacing.
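
For the example configuration on this page, with a spacing of 2000.0 meters, this works out to:

length = nx * spacing = 384 * 2000 m = 768 km
width  = ny * spacing = 248 * 2000 m = 496 km
depth  = nz * spacing = 25 * 2000 m = 50 km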

For the MPI version of the code, we also need to specify the number of processors to dedicate to each axis: px, py, and pz. The total number of processors required is px * py * pz.

An important constraint is that the number of mesh points in each direction must be evenly divisible by the number of processors in that direction: nx / px, ny / py, and nz / pz must all be whole numbers, and px * py * pz must equal the number of processes requested for the job. For example, if nx is 1000, px cannot be 3, but it can be 5. If a direction has a prime number of mesh points, you must use either 1 or that prime number of processors. Using our example above, we can divide nx by 2, ny by 2, and nz by 5:

# Partitioning of grid among processors
px=2
py=2
pz=5

For this example, the total number of processors required is px * py * pz = 2 * 2 * 5 = 20 cores. A quick way to sanity-check a planned partitioning is sketched below.
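
This minimal sketch uses the values from the example configuration; the script itself is not part of the UCVM distribution.

# Check that each dimension divides evenly and report the required MPI process count
nx=384; ny=248; nz=25
px=2;   py=2;   pz=5
if [ $((nx % px)) -ne 0 ] || [ $((ny % py)) -ne 0 ] || [ $((nz % pz)) -ne 0 ]; then
  echo "Error: nx/px, ny/py, and nz/pz must all be whole numbers"
else
  echo "Partitioning OK: request $((px * py * pz)) MPI processes"
fi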

Seismological parameters

The user must also specify the ucvm model, or models, to be queried, as well as the minimum Vp and Vs values for the mesh. These are floor values for Vp and Vs: if the user specifies a min Vp or min Vs and ucvm retrieves a lower Vp or Vs from the CVM while it is building the mesh, it replaces the lower value with the minimum defined here. If these values are set to 0, ucvm will not modify the material properties it receives from the CVMs in use.

model=cvmsi,bbp1d
# Vs/Vp minimum
vp_min=0
vs_min=0

Specify Output filesystems

We must also specify the locations for our mesh file and the grid file. The grid file is the list of i,j indices and lon,lat coordinates of the mesh points; the mesh (media) file is populated with the material properties at each point.

# Mesh and grid files, format
meshfile=/lustre/scratch/user/mesh_cvmh_ijk12_2000m.media
gridfile=/lustre/scratch/user/mesh_cvmh_ijk12_2000m.grid

Running ucvm2mesh-mpi

ucvm2mesh-mpi needs to be submitted as a job to the cluster scheduler. In some cases, this can be done on the command line, like this:

mpirun -np 20 ucvm2mesh_mpi -f ./ucvm2mesh_example.conf

However, in most cases, the user will create a job description script that defines the program, the required input parameters, and other information.

Creating HPC configuration files

In this example, we assume that ucvmc has been installed on a USC HPC filesystem visible to the compute nodes. Then, to run ucvm2mesh-mpi on the USC HPC cluster, the user defines two main configuration files.

First, the user defines a specification for the mesh they wish to create. This file defines the CVM or CVMs to use, the mesh size, the number of processors to use, and the output file names and directories. A working example file for a roughly 2.4 million mesh point mesh is given here:

This file contains important information about the ucvm software, specifically the location of the ucvm.conf file in the ucvmc installation directory. The ucvm2mesh-mpi binary and the ucvm.conf file must be visible from the compute nodes.

# List of CVMs to query
ucvmlist=cvmh,bbp1d
# UCVM conf file
ucvmconf=/auto/scec-00/maechlin/t1/conf/ucvm.conf
# Gridding cell centered or vertex
gridtype=CENTER
# Spacing of cells
spacing=2000.0
# Projection
proj=+proj=utm +datum=WGS84 +zone=11
rot=-40.0
x0=-122.3
y0=34.7835
z0=0.0
# Number of cells along each dim
nx=384
ny=248
nz=25
# Partitioning of grid among processors
px=2
py=2
pz=5
# Vs/Vp minimum
vp_min=0
vs_min=0
# Mesh and grid files, format
meshfile=/auto/scec-00/maechlin/ucvm_mpi/mesh_cvmhbbp1d_ijk12_2000m.media
gridfile=/auto/scec-00/maechlin/ucvm_mpi/mesh_cvmhbbp1d_ijk12_2000m.grid
meshtype=IJK-12
# Location of scratch dir
scratch=/scratch

As you can see, this defines specific parameters about the mesh to be created.

USC HPC uses a PBS scheduler, so we create a PBS script that submits a job (via qsub) to run ucvm2mesh-mpi and passes this configuration file on the command line.

Our ucvm2mesh.pbs file looks like this:

-bash-4.2$ more ucvm2mesh.pbs
#!/bin/bash
#
# This is the pbs script ucvm2mesh 
#
#PBS -l nodes=4:ppn=5
#PBS -l walltime=00:10:00
#PBS -m ae
#PBS -M maechlin@usc.edu
##PBS -q main
#PBS -o $PBS_JOBID.out
#PBS -e $PBS_JOBID.err
cd $PBS_O_WORKDIR
np=$(cat $PBS_NODEFILE | wc -l)
cat $PBS_NODEFILE

mpirun -np $np -machinefile $PBS_NODEFILE /auto/scec-00/maechlin/t1/bin/ucvm2mesh-mpi -f /auto/scec-00/maechlin/ucvm_mpi/ucvm2mesh_20cores.conf
exit

This job runs on 4 nodes, with 5 processors per node. It outputs the mesh files to a disk visible from the compute nodes. The output files created when this job ran were up to 28 MB in size:

-rw-r--r-- 1 maechlin scec 2.2M Feb  2 21:51 mesh_cvmhbbp1d_ijk12_2000m.grid
-rw-r--r-- 1 maechlin scec  28M Feb  2 21:52 mesh_cvmhbbp1d_ijk12_2000m.media
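
These sizes are consistent with the IJK-12 record layout described later on this page; the arithmetic below is only a consistency check, assuming 12 bytes (three 4-byte floats) per mesh point in the media file.

# Expected IJK-12 media file size: 384 * 248 * 25 points * 12 bytes per point
echo $((384 * 248 * 25 * 12))     # prints 28569600 bytes, roughly the 28M listed above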

Submitting a job on Titan

In this section we describe the specific configuration files we used to run on OLCF Titan. These are similar to the files used to run on USC HPC, with some differences.

configuration file

# List of CVMs to query
ucvmlist=cvmh

# UCVM conf file
ucvmconf=../conf/kraken/ucvm.conf

# Gridding cell centered or vertex
gridtype=CENTER

# Spacing of cells
spacing=2000.0

# Projection
proj=+proj=utm +datum=WGS84 +zone=11
rot=-40.0
x0=-122.3
y0=34.7835
z0=0.0

# Number of cells along each dim
nx=384
ny=248
nz=25

# Partitioning of grid among processors
px=2
py=2
pz=5

# Vs/Vp minimum
vp_min=0
vs_min=0

# Mesh and grid files, format
meshfile=/lustre/scratch/user/mesh_cvmh_ijk12_2000m.media
gridfile=/lustre/scratch/user/mesh_cvmh_ijk12_2000m.grid
meshtype=IJK-12

# Location of scratch dir
scratch=/lustre/scratch/user/scratch

ucvm2mesh_mpi configuration files

There are two inter-related configuration files used with the ucvm2mesh-mpi software. One is the ucvm2mesh.conf file; the other is the job submission script that runs ucvm2mesh-mpi on a cluster. Examples of these two files are below.

A key constraint: the ucvm2mesh configuration file specifies the partitioning of the grid among processors (as px, py, pz). The number of processes requested in the MPI job submission must equal the product of these three values (in the example below, 2 * 2 * 5 = 20). If the MPI job submission script requests more or fewer processes (aka tasks) than this number, the program exits with a configuration error. A small helper for keeping the two files consistent is sketched below.
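
One way to keep the job script and the configuration file consistent is to read px, py, and pz out of the conf file and compute the task count from them. This is only a sketch; point CONF at whatever conf file your own run uses.

# Compute the required MPI task count from the px/py/pz lines of a ucvm2mesh conf file
CONF=./ucvm2mesh.conf                       # hypothetical path to your conf file
PX=$(grep '^px=' $CONF | cut -d= -f2)
PY=$(grep '^py=' $CONF | cut -d= -f2)
PZ=$(grep '^pz=' $CONF | cut -d= -f2)
echo "Request $((PX * PY * PZ)) MPI tasks for $CONF"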

example ucvm2mesh.conf

-bash-4.2$ more ucvm2mesh.conf
# List of CVMs to query
ucvmlist=cvmsi

## UCVM conf file
ucvmconf=/auto/scec-00/maechlin/ucvmc/conf/ucvm.conf

## Gridding cell centered or vertex (CENTER, VERTEX)
gridtype=CENTER

## Spacing of cells
spacing=2000.0

## Projection
proj=+proj=utm +datum=WGS84 +zone=11
rot=-40.0
x0=-122.3
y0=34.7835
z0=0.0

## Number of cells along each dim
nx=384
ny=248
nz=25

## Partitioning of grid among processors (request px*py*pz processes in mpi submit)
px=2
py=2
pz=5

## Vs/Vp minimum
vp_min=200
vs_min=200

## Mesh and grid files. Meshtype must be one of valid formats (IJK-12, IJK-20, IJK-32, SORD)
meshfile=/auto/scec-00/maechlin/ucvmc_mpi_testing/mesh_cvms5_sord_2000m_mpi.media
gridfile=/auto/scec-00/maechlin/ucvmc_mpi_testing/mesh_cvms5_sord_2000m_mpi.grid
meshtype=SORD

## Location of scratch dir
scratch=/staging/scratch

Then, the associated MPI job submission script must request the same number of processes. USC HPC, where we test ucvmc, uses the Slurm job manager. The following job script was used:

ucvm2mesh.slurm

-bash-4.2$ more *.slurm
#!/bin/bash
##SBATCH -p scec
#SBATCH --ntasks=20 #number of tasks with one per processor
#SBATCH -N 10
#SBATCH --mem 0 # Set to unlimited memory
#SBATCH --time=02:30:00
#SBATCH -o ucvm2mesh_mpi_large_%A.out
#SBATCH -e ucvm2mesh_mpi_large_%A.err
#SBATCH --export=NONE
#SBATCH --mail-user=maechlin@usc.edu
#SBATCH --mail-type=END

cd /auto/scec-00/maechlin/ucvmc_mpi_testing
srun -v --mpi=pmi2 /auto/scec-00/maechlin/ucvmc/bin/ucvm2mesh_mpi -f /auto/scec-00/maechlin/ucvmc_mpi_testing/ucvm2mesh.con
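
This script is then submitted to the Slurm scheduler with sbatch, for example:

sbatch ucvm2mesh.slurm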

Installing and Running UCVM2mesh_mpi

1.	Building and running UCVMC v17.1:
We built and ran UCVMC v17.1 on USC HPC. We re-built it using OpenMPI libraries and used updated job submit scripts for Slurm.
- Examples of how to run ucvm2mesh_mpi are in my directory at
- /home/scec-00/maechlin/ucvmc_mpi_testing/testcase6/
- There is a ucvm2mesh.conf and a makemesh.slurm which run successfully for small meshes.

We tested ucvm2mesh and ucvm2mesh_mpi on small meshes and confirmed that they produce identical grid and media file results. So the rest of our testing was done with ucvm2mesh_mpi.

2.	Output File Formats:

We looked into the output files and formats, and the explanation I made on our call was wrong. The numbers are not related to significant digits in the output files.

We determined that a standard ucvm2mesh_mpi run produces an output grid file and a mesh file (also called a media file).

The grid file contains the i,j and lat,lon values in the mesh. There is a utility called dumpgridp.py in the ucvmc scripts directory that will read the grid file and print the i,j values and the associated lat,lon values.

The .media file formats include IJK-12, IJK-20, IJK-32 and SORD. I believe all mesh file formats are fast x, then y, then z (although this is an assumption based on the grid file format). The formats differ in what is stored for each mesh point. The size of each point depends on the size of a structure, and the structures contain different information, shown below:

typedef struct mesh_ijk12_t {
  float vp;
  float vs;
  float rho;
} mesh_ijk12_t;

typedef struct mesh_ijk20_t {
  float vp;
  float vs;
  float rho;
  float qp;
  float qs;
} mesh_ijk20_t;

typedef struct mesh_ijk32_t {
  int i;
  int j;
  int k;
  float vp;
  float vs;
  float rho;
  float qp;
  float qs;
} mesh_ijk32_t;
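
Assuming 4-byte floats and ints and no structure padding, these records are 12, 20, and 32 bytes per mesh point, so an estimate of a media file size is simply the mesh point count times the record size. The sketch below is only a consistency check, not a UCVM utility.

# Estimate media file sizes for the example mesh (384 * 248 * 25 = 2,380,800 points),
# assuming 12-, 20-, and 32-byte records with no padding
points=$((384 * 248 * 25))
for bytes in 12 20 32; do
  echo "IJK-$bytes: $((points * bytes)) bytes"
done
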
The SORD output format produces three files. Each file apparently contains only one material property (Vp, Vs, or rho) per mesh point and has no i,j,k values, written like this:

case MESH_FORMAT_SORD:
    for (i = 0; i < node_count; i++) {
      ptr1_sord[i].val = nodes[i].vp;
      ptr2_sord[i].val = nodes[i].vs;
      ptr3_sord[i].val = nodes[i].rho;
    }

The IJK-32 mesh format includes Qp and Qs values derived from Vs using the following formula from the source code (for example, Vs = 1000 m/s gives Qs = 50 and Qp = 100):

/* Qp/Qs via Kim Olsen */
qs = 50.0 * (vs / 1000.0);
qp = 2.0 * qs;

Also, we noted that if an undefined file format is given (e.g., if you leave out the dash in a string like IJK-12), the code complains but doesn't exit, and it will run until it is killed by the wall clock timer.

3.	Impact of Vertex/Center:

My explanation on the call for vertex/center was wrong. Testing shows that the VERTEX setting places the origin at exactly the given point, while the CENTER setting moves the lat/lon point to the center of the cell (roughly half a cell away from the origin) and queries properties there. For example, given an origin at 34.00, -118.0, these two grid files are produced:

Center result:
0, 0 : -117.994639, 34.004553
1, 0 : -117.983811, 34.004640
2, 0 : -117.972984, 34.004726

Vertex  result:
0, 0 : -118.000000, 34.000000
1, 0 : -117.989173, 34.000088
2, 0 : -117.978346, 34.000174
4.	Orientation of x, y and z:

We set up some small-scale meshes, with and without rotations, with a different number of points in each direction. Our tests indicate that negative rotation is clockwise, which is consistent with the source code comment that says "proj rotation angle in degrees, (+ is counter-clockwise)".

We set up a mesh with 1 km spacing, CENTER gridding, and dimensions of x=40 points, y=20 points, z=10 points, and tried different rotations:

no rotation end points:
0, 0 : -117.994639, 34.004553
39, 19 : -117.573488, 34.178591

negative 90 rotation end points:
0, 0 : -117.994534, 33.995535
39, 19 : -117.785601, 33.64531

positive 90 rotation end points:
0, 0 : -118.005467, 34.004465
39, 19 : -118.216216, 34.354259

This is one result that I have concerns about. As expected, the three end points form right angles, but the mesh edges do not run east/west or north/south as expected. Possibly the projection is making them look like they are not east/west, but this is what we got.

5.	Additional Notes:

A few other things we learned in our testing.

- If we tried to query a point and none of the velocity models had data for that point, ucvm2mesh-mpi exited with an error. For this reason, when querying cvms5 (aka CVM-S4.26, which has no background model), we needed to add a background model to our model list in order to build our example meshes.

- ucvm2mesh_mpi produces a useful summary about the mesh it just produced when it completes. This summary includes the maximum and minimum Vp, Vs, and rho, and the points in the mesh where they are found.

With min Vp and min Vs set to 0 (we updated ucvm2mesh.conf to set vp_min and vs_min to 0), the summary was:
[0] Max Vp: 8530.947266 at
[0]	i,j,k : 237, 53, 14
[0] Max Vs: 5225.743652 at
[0]	i,j,k : 237, 54, 13
[0] Max Rho: 3174.164062 at
[0]	i,j,k : 119, 60, 15
[0] Min Vp: 283.636932 at
[0]	i,j,k : 204, 110, 0
[0] Min Vs: 138.720001 at
[0]	i,j,k : 194, 82, 0
[0] Min Rho: 1909.786255 at
[0]	i,j,k : 204, 110, 0
[0] Min Ratio: 1.414214 at
[0]	i,j,k : 125, 62, 0

- When you specify a min Vs and min Vp, ucvm2mesh-mpi only checks the min Vs value. If it finds a Vs lower than the min Vs, it sets both Vs and Vp to the minimum values specified in the ucvm2mesh config file:

if (vs < vs_min)
{
	vs = vs_min;
	vp = vp_min;
}

- There is a mesh_check utility in the distribution. It checks that each record in the file is the correct size, and that each value is not NaN, infinity, or negative. Its usage message, taken from the source code, is:

  printf("Usage: %s input format\n\n",arg);
  printf("input: path to the input file\n");
  printf("format: format of file, IJK-12, IJK-20, IJK-32\n\n");

We generated a mesh and ran mesh_check, and it returned the following (note that 2,380,800 = 384 * 248 * 25, the number of mesh points):
-bash-4.2$ ../ucvmc/bin/mesh_check mesh_cvmsi_ijk32_2000m_mpi.media IJK-32
Record size is 32 bytes
Opening input file mesh_cvmsi_ijk32_2000m_mpi.media
Checked 2380800 vals total

Example Configuration Files from OLCF Frontier

File: titan_mesh.conf

# List of CVMs to query
ucvmlist=cvmsi

## UCVM conf file
ucvmconf=/lustre/orion/proj-shared/geo112/pmaech/ucvm227/conf/ucvm.conf

## Gridding cell centered or vertex (CENTER, VERTEX)
gridtype=CENTER

## Spacing of cells
spacing=2000.0

## Projection
proj=+proj=utm +datum=WGS84 +zone=11
rot=-40.0
x0=-122.3
y0=34.7835
z0=0.0

## Number of cells along each dim
nx=384
ny=248
nz=25

## Partitioning of grid among processors (request px*py*pz processes in mpi submit)
px=2
py=2
pz=5

## Vs/Vp minimum
vp_min=200
vs_min=200

## Mesh and grid files. Meshtype must be one of valid formats (IJK-12, IJK-20, IJK-32, SORD)
meshfile=/lustre/orion/scratch/pmaech/geo112/ucvm/mesh_cvmsi_sord_2000m_mpi.media
gridfile=/lustre/orion/scratch/pmaech/geo112/ucvm/mesh_cvmsi_sord_2000m_mpi.grid
meshtype=IJK-32
## Location of scratch dir
scratch=/lustre/orion/scratch/pmaech/geo112/ucvm

frontier_mesh.sl file

#!/bin/bash
#SBATCH -A geo112
#SBATCH -J titan_mesh
#SBATCH -o %x-%j.out
#SBATCH -t 0:10:00
#SBATCH -p batch
#SBATCH -N 2
#SBATCH --threads-per-core=1
#SBATCH --mail-type=ALL
#SBATCH --mail-user=maechlin@usc.edu
cd $MEMBERWORK/geo112/ucvm
cp $PROJWORK/geo112/pmaech/test_ucvm/titan_mesh.conf ./titan_mesh.conf
srun -N 2 -n 20 -c 1 --cpu-bind=threads --threads-per-core=1 $PROJWORK/geo112/pmaech/ucvm227/bin/ucvm2mesh_mpi -f ./titan_mesh.conf
cp mesh_cvmsi_sord_2000m_mpi.media $PROJWORK/geo112/pmaech/test_ucvm/mesh_cvmsi_sord_2000m_mpi.media
cp mesh_cvmsi_sord_2000m_mpi.grid  $PROJWORK/geo112/pmaech/test_ucvm/mesh_cvmsi_sord_2000m_mpi.grid

Older, but more detailed documentation on ucvm2mesh is posted on a SCEC website at: UCVM Documentation
