Compiling WRFv4.0 on ARC3
These instructions describe how WRFv4.0 was installed under the earwrfa account on the University of Leeds HPC facility, ARC3 (http://www.arc.leeds.ac.uk).
This page can also be used by University of Leeds researchers who wish to build their own version of WRFv4 on ARC3. In such cases, applications for an account on ARC3 can be made by following the instructions at http://arc.leeds.ac.uk/apply/getting-an-account/
Requirements
The basic requirements for installing WRF on an HPC system are listed below:
- WRF model source code (including WPS); see http://www2.mmm.ucar.edu/wrf/users/download/get_source.html
- Netcdf libraries;
- A suitable Fortran compiler;
- An implementation of the Message Passing Interface (MPI), to allow WRF to run across multiple processors.
The WRFv4 installation on earwrfa uses netcdf v4.4.1, the Intel Fortran Compiler v17.0, and the OpenMPI library (https://www.open-mpi.org/). These are already available as modules on ARC3; more details are given in the instructions below.
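If you want to confirm which versions of these libraries are currently provided (module names and versions on ARC3 may well have changed since this was written), a quick check along the following lines should work:
# List the netcdf and hdf5 modules installed on ARC3
module avail netcdf
module avail hdf5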
Compilation Instructions
Given the size of the WRF installation, it is advisable to build the model within your /nobackup directory. Remember that files on /nobackup are automatically deleted if they have not been accessed for 90 days, so it is advisable to set up a cron job that uses the 'touch' command to prevent files from being removed.
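As a minimal sketch of such a cron job (the path and schedule here are assumptions; adjust them to your own /nobackup area and replace <username> with your ARC3 username), an entry added via crontab -e could look like:
# At 03:00 every Monday, refresh the access times of everything under the WRF build area
0 3 * * 1 find /nobackup/<username>/WRFV4 -exec touch {} +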
First we need to download the WRF source code. Change to your /nobackup directory (/nobackup/$USER) and create a new sub-directory called 'WRFV4'. Change into this directory and enter the following commands:
git clone --branch v4.0 https://github.com/wrf-model/WRF
git clone --branch v4.0 https://github.com/wrf-model/WPS
The ls command should now reveal two new directories: WRF and WPS. Change to the WRF directory using cd WRF. Now type:
module list
to reveal which modules are already loaded by default. You should see something similar to the following:
Currently Loaded Modulefiles:
  1) licenses   2) sge   3) intel/17.0.1   4) openmpi/2.0.2   5) user
This indicates that the Intel Fortran compiler v17.0.1 (intel/17.0.1) and the OpenMPI library v2.0.2 (openmpi/2.0.2) are already loaded and available within our current environment. WRF still requires the netcdf and hdf5 libraries, though; thankfully these are also available as modules on ARC3, the only difference being that we must explicitly load them into our environment before we can use them. This is done by typing:
module load netcdf
module load hdf5
Next, we need to set some environment variables, to enable large file support and to tell WRF the location of the netcdf and intel libraries. Enter the lines below directly at the command line:
export WRFIO_NCD_LARGE_FILE_SUPPORT=1
export NETCDF=/apps/developers/libraries/netcdf/4.4.1/1/intel-17.0.1-openmpi-2.0.2
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$NETCDF/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/apps/developers/compilers/intel/17.0.1/1/default/lib/intel64
It is also useful to add these four lines to your $HOME/.bashrc file, to avoid having to re-enter them every time you log in to ARC3. At the time of writing, the NETCDF and LD_LIBRARY_PATH paths above point to the latest versions on ARC3, although these are likely to change over time. Please check the paths yourself to make sure they are up-to-date and correct.
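One way to do this is to append the block to your .bashrc from the command line with a heredoc; this is just a sketch, and the paths should be updated if the module versions change:
# Append the WRF-related environment settings to ~/.bashrc
cat >> $HOME/.bashrc <<'EOF'
export WRFIO_NCD_LARGE_FILE_SUPPORT=1
export NETCDF=/apps/developers/libraries/netcdf/4.4.1/1/intel-17.0.1-openmpi-2.0.2
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$NETCDF/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/apps/developers/compilers/intel/17.0.1/1/default/lib/intel64
EOF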
We can now run the configure script by typing
./configure
When prompted, select option 20:
Xeon (SNB with AVX mods) ifort compiler with icc (dmpar)
Upon completion, a file called configure.wrf will be generated, containing the settings needed for the compilation to proceed.
Due to memory and wallclock time requirements, it is important to run the compilation as a serial batch job. An example submission script to compile WRF is given below:
# Run with current set of modules and in current directory
#$ -cwd -V
# Request some time; 2 hours 30 mins should be enough
#$ -l h_rt=02:30:00
# Request some memory; 6GB should be enough for WRF (no CHEM, yet...)
#$ -l h_vmem=6G
# Get mail at start and end of the job
#$ -m be
#$ -M <username>@leeds.ac.uk
module load netcdf
module load hdf5
export WRFIO_NCD_LARGE_FILE_SUPPORT=1
export NETCDF=/apps/developers/libraries/netcdf/4.4.1/1/intel-17.0.1-openmpi-2.0.2
# Now run the compilation script
./compile em_real >& compile_wrf.log
In the above script, be sure to set <username>@leeds.ac.uk accordingly, to ensure that the scheduler sends notifications to the correct email address. Save your script with the name compile_wrf.sh, and then submit it using qsub:
qsub ./compile_wrf.sh
Output from the compilation will be saved in a text file called compile_wrf.log. To check on the status of your serial job, use qstat. To check for any errors in the compile log file, use the grep command, e.g.
grep Error compile_wrf.log
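Note that grep is case-sensitive, so a slightly broader, case-insensitive check can also be useful, for example:
grep -iE "error|undefined reference" compile_wrf.log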
Compilation can take around 2 hours. If you have set up your submission script correctly, you should receive email notifications when the script starts to run and when it has completed. If it has finished successfully, you should see something similar to the following at the bottom of your compile_wrf.log file:
Executables successfully built
-rwxr-xr-x 1 earcdea EAR 51822816 Jul 6 23:20 main/ndown.exe
-rwxr-xr-x 1 earcdea EAR 51823328 Jul 6 23:20 main/real.exe
-rwxr-xr-x 1 earcdea EAR 50877216 Jul 6 23:20 main/tc.exe
-rwxr-xr-x 1 earcdea EAR 58426512 Jul 6 23:18 main/wrf.exe
Compiling WPS
Now that we have built the WRF executables, we can proceed with installation of WPS, the WRF Pre-processing System. In addition to netcdf, WPS requires some other external libraries, namely jasper, libpng and zlib. These libraries are needed to ensure that WPS can make use of input data in GRIB2 format, now widely used by NCEP (http://www.ncep.noaa.gov). Thankfully these libraries are already available on ARC3, under /usr/lib64 and /usr/include.
The first step in compiling WPS is to run the configure script within the WPS directory:
./configure
When prompted, select option 19 - Intel x86_64, Intel compiler (dmpar). This should result in the generation of the file configure.wps. Before we can use this file to compile WPS, we need to make a small change in order to set the correct path for the external libraries. Open configure.wps and scroll down until you find the environment variables COMPRESSION_LIBS and COMPRESSION_INC, then set them as follows:
COMPRESSION_LIBS = -L/usr/lib64 -ljasper -lpng -lz
COMPRESSION_INC = -I/usr/include
Save your changes and exit.
Just as we did for WRF, it is good practice to compile WPS as a serial job, even though it does not require nearly as much memory or wallclock time as the main WRF program. An example script, compile_wps.sh, for compiling WPS is given below:
# Run with current set of modules and in current directory
#$ -cwd -V
# Request some time; 30 mins should be enough
#$ -l h_rt=00:30:00
# Request some memory; 1GB should be enough for WPS
#$ -l h_vmem=1G
# Get mail at start and end of the job
#$ -m be
#$ -M <username>@leeds.ac.uk
module load netcdf
module load hdf5
export WRFIO_NCD_LARGE_FILE_SUPPORT=1
export NETCDF=/apps/developers/libraries/netcdf/4.4.1/1/intel-17.0.1-openmpi-2.0.2
# Now run the compilation script
./compile >& compile_wps.log
Submit the serial job as before using qsub:
qsub ./compile_wps.sh
Upon successful completion, you should see three soft links within the WPS directory, corresponding to the three main WPS tools, namely geogrid, ungrib and metgrid:
lrwxrwxrwx 1 earcdea EAR 23 Jul 9 12:13 geogrid.exe -> geogrid/src/geogrid.exe
lrwxrwxrwx 1 earcdea EAR 21 Jul 9 12:14 ungrib.exe -> ungrib/src/ungrib.exe
lrwxrwxrwx 1 earcdea EAR 23 Jul 9 12:15 metgrid.exe -> metgrid/src/metgrid.exe
Testing your new WRF installation
A good starting point is the on-line tutorial at http://www2.mmm.ucar.edu/wrf/OnLineTutorial/CASES/JAN00/index.html. This is a case study of a winter cyclone that hit the East Coast of the US on January 24-25, 2000.
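Note that wrf.exe is only the final step of the tutorial: the WPS tools and real.exe must be run first, as described on the tutorial pages. Purely as an illustrative sketch (assuming the tutorial's default single-domain namelists, with the input GRIB files downloaded to a hypothetical directory; use whichever Vtable matches your input data), the sequence looks roughly like this:
# In the WPS directory: create the model domain and static fields
./geogrid.exe
# Link the tutorial GRIB files, select a matching Vtable, and decode them
./link_grib.csh /nobackup/<username>/JAN00_data/*
ln -sf ungrib/Variable_Tables/Vtable.AWIP Vtable
./ungrib.exe
# Interpolate the decoded fields horizontally onto the model domain
./metgrid.exe
# In the WRF run directory (e.g. WRF/test/em_real): link the metgrid output
# and run real.exe to generate wrfinput_d01 and wrfbdy_d01
ln -sf /nobackup/<username>/WRFV4/WPS/met_em.d01* .
./real.exe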
Below is an example of a parallel job submission script called run_wrf.sh for running the WRF tutorial case study on ARC3:
# Run with current set of modules and in current directory
#$ -cwd -V
# Request some time; 30 minutes for default case study described at
# http://www2.mmm.ucar.edu/wrf/OnLineTutorial/CASES/JAN00/index.html
#$ -l h_rt=00:30:00
# Request some cores
#$ -l np=4
# Request some memory; 2GB per core
#$ -l h_vmem=2G
# Get mail at start and end of the job
#$ -m be
#$ -M <username>@leeds.ac.uk
module load netcdf
module load hdf5
export WRFIO_NCD_LARGE_FILE_SUPPORT=1
export NETCDF=/apps/developers/libraries/netcdf/4.4.1/1/intel-17.0.1-openmpi-2.0.2
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$NETCDF/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/apps/developers/compilers/intel/17.0.1/1/default/lib/intel64
# Now run WRF
mpirun ./wrf.exe
Submit using qsub:
qsub ./run_wrf.sh
As before, you can check on the status of your batch job using the qstat command. When the job starts to run, you can check on progress using the command:
more rsl.error.0000
Scroll to the end of this file to see how many timesteps have been completed.
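Since rsl.error.0000 grows quickly, a convenient alternative is to filter for the per-timestep timing lines that WRF writes, for example:
# Show the most recently completed timesteps
grep "Timing for main" rsl.error.0000 | tail -5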
Installing NCL (NCAR Command Language) on ARC3
It is recommended to have NCL installed within your environment, since some of the WPS tools are written in NCL (e.g. the very useful plotgrids_new.ncl script, which lives in WPS/util/). NCL also provides an extensive library of scripts specifically designed for post-processing and visualisation of WRF output (see the NCL website). NCL is not available as a standard module on ARC3, although it is straightforward to install it within your own environment using Miniconda. If you are not familiar with the Conda package management system, it is advisable to first read the ARC team's helpful guidance at http://arc.leeds.ac.uk/using-the-systems/advanced-topics/installing-applications-and-libraries-with-conda/. Specific instructions on how to install NCL with Conda are available at https://www.ncl.ucar.edu/Download/conda.shtml.
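As a sketch of that process (the commands below follow the NCL conda page; the environment name ncl_stable is just an example, and the recommended channel or package may change over time):
# Create a dedicated conda environment containing NCL from conda-forge
conda create -n ncl_stable -c conda-forge ncl
# Activate the environment before using ncl
source activate ncl_stable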
Where to obtain input data for driving WRF
At some point you will want to start creating your own case studies using WRF, for which you will need to provide the model with relevant input data (e.g. analysis products from the archives of operational Met services) to generate initial and boundary conditions for your domains. The UCAR Research Data Archive (RDA) is a useful source of global analyses (see https://rda.ucar.edu/index.html). This data is free to download; you only need to register for their services first.
A useful global dataset is the NCEP GFS 0.25 Degree Global Forecast Grids Historical Archive (starting from 2015-01-15). This can be found by visiting the UCAR RDA website and typing 'GFS' into the 'DataSet Search' dialog box. Once you have found the dataset, it can be downloaded by going to 'Data Access', then selecting 'Web File Listing' under 'Web Server Holdings'. Then select 'Complete File List' and the year of interest (2015 - present). You can then select a specific day to retrieve the data for. For analysis products, choose the files that end in f000.grib2.
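Once downloaded, these GRIB2 analysis files are fed to WPS in the usual way; a minimal sketch (the download directory is hypothetical, and Vtable.GFS is the variable table shipped with WPS for GFS data) is:
# From the WPS directory: link the downloaded GFS analysis files
./link_grib.csh /nobackup/<username>/gfs_data/*f000.grib2
# Select the GFS variable table supplied with WPS, then decode the files
ln -sf ungrib/Variable_Tables/Vtable.GFS Vtable
./ungrib.exe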
There is also a 0.5 degree global GFS analysis product, available from https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/global-forcast-system-gfs, which spans the period 1 Jan 2007 - present.
For case studies over the US, another option is the NCEP North American Mesoscale (NAM) 12 km Analysis. This product can be found by searching for 'NAM' in the Dataset Search on the UCAR RDA website home page.