JEDI increment write to cubed sphere history #983
Conversation
…d for ensemble analysis
Need to do a touch more testing before re-opening.
OK, re-opening. I thought there was an error, but things look good.
@DavidNew-NOAA before merging, do you have any plots/statistics summarizing your testing that show this produces comparable results?
I compared the min, max, and std of the delp and delz increments from the new OOPS app and from the old Python script. The relative error of the delp and delz increments was 10^-7 and 10^-2, respectively, for each of these three statistics.
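For reference, a minimal sketch of how such a min/max/std comparison might be scripted; the file names are placeholders and xarray is assumed available, so this is illustrative rather than the actual test code:

```python
# Illustrative sketch only: compare min/max/std of the delp and delz increments
# produced by the old Python script and the new OOPS app. File names are placeholders.
import numpy as np
import xarray as xr

old = xr.open_dataset("fv3_increment_old.nc")  # e.g. output of jediinc2fv3.py
new = xr.open_dataset("fv3_increment_new.nc")  # e.g. output of fv3jedi_fv3inc.x

for var in ("delp", "delz"):
    for name, stat in (("min", np.min), ("max", np.max), ("std", np.std)):
        a = float(stat(old[var].values))
        b = float(stat(new[var].values))
        rel_err = abs(b - a) / max(abs(a), 1e-30)  # guard against division by zero
        print(f"{var} {name}: relative error = {rel_err:.3e}")
```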
* origin/develop:
  Use <filesystem> on a non c++17 supported machine (WCOSS ACORN) (#1026)
  Change generate_com to declare_from_tmpl (#1025)
  Commenting out more of the marine bufr 2 ioda stuff (#1018)
  make driver consistent with workflow driver (#1016)
  Update hashes now that GSI-B is working for EnVar (#1015)
  Add GitHub CLI to path for CI (#1014)
  Use _anl rather than _ges dimensions for increments in FV3 increment converter YAML (#1013)
  Fix inconsistent VIIRS preprocessing test (#1012)
  remove gdas_ prefix from executable filename in test_gdasapp_fv3jedi_fv3inc (#1010)
  Bugfix on Broken GHRSST Ioda Converter (#1004)
  Moved the marine converters to a "safe" place (#1007)
  restore ATM local ensemble ctest functionality (#1003)
  Add BUFR2IODA python API converter to prepoceanobs task (#914)
  Remove sst's from obs proc (#1001)
  JEDI increment write to cubed sphere history (#983)
  [End-to-End Test code sprint] Add SEVIRI METEOSAT-8 and METEOSAT-11 to end-to-end testing (#766)
This PR, a companion to GDASApp PR [#983](NOAA-EMC/GDASApp#983), creates a new Rocoto job called "atmanlfv3inc" that computes the FV3 atmosphere increment from the JEDI variational increment using a JEDI OOPS app in GDASApp, fv3jedi_fv3inc.x, which replaces the GDASApp Python script jediinc2fv3.py for the variational analysis. The "atmanlrun" job is renamed "atmanlvar" to better reflect its role, since it now runs one of two JEDI executables for the atmospheric analysis jobs.

Previously, the JEDI variational executable would interpolate its increment to the Gaussian grid and write it during the atmanlrun job, and the Python script jediinc2fv3.py would then read it and write the FV3 increment on the Gaussian grid during the atmanlfinal job. With these changes, the JEDI increment is written directly to the cubed sphere. During the atmanlfv3inc job, the OOPS app then reads it, computes the FV3 increment directly on the cubed sphere, and writes it out onto the Gaussian grid. The reason for writing first to the cubed sphere grid is that all the underlying computations in JEDI are done on the native grid, so the OOPS app would otherwise have to interpolate twice: from Gaussian to cubed sphere before computing the increment, and then back to the Gaussian grid.

The motivation for this new app and job is that we eventually wish to transition all intermediate data to the native cubed sphere grid, and the OOPS framework gives us the flexibility to read and write to/from any grid format by changing the YAML configuration file rather than hardcoding. When we do switch to the cubed sphere, it will be an easy transition. Moreover, the computations in the OOPS app are done by a compiled executable rather than an interpreted Python script, providing some performance increase.

This has been tested with a cycling experiment with JEDI on both Hera and Orion to show that it runs without issues, and I have compared the FV3 increments computed by the original and new codes. The delp and hydrostatic delz increments, the key increments produced during this step, differ by relative errors of 10^-7 and 10^-2, respectively. This difference is most likely due to the original Python script doing its internal computation on the interpolated Gaussian grid, while the new OOPS app does its computations on the native cubed sphere before interpolating to the Gaussian grid.
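To illustrate the read/write flexibility described above, here is a schematic YAML sketch for the increment conversion step. The key names follow common FV3-JEDI IO conventions ("cube sphere history", "auxgrid"/"gauss") but are assumptions; the actual fv3jedi_fv3inc.x configuration in GDASApp may differ:

```yaml
# Schematic sketch only: read the variational increment from cubed sphere history,
# compute the FV3 increment on the native grid, then write it to the Gaussian grid.
# Key names and file names are assumptions, not the GDASApp template.
variational increment:
  filetype: cube sphere history   # native cubed sphere input
  datapath: ./
  filename: atminc.nc
fv3 increment:
  filetype: auxgrid               # interpolate to Gaussian only on output
  gridtype: gauss
  filename: atminc_gauss.nc
```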
This PR, a companion to Global Workflow PR #2420, changes the variational YAML for JEDI to write to cubed sphere history rather than the Gaussian grid. With the new changes to Global Workflow, the new gdas_fv3jedi_jediinc2fv3.x OOPS app will read the JEDI increment from the cubed sphere history, compute the FV3 increment, and interpolate/write it to the Gaussian grid. The only meaningful difference is that the internal calculations, namely the computation of the hydrostatic layer thickness increment, will be done on the native grid rather than on the Gaussian grid, before interpolation rather than after. This makes more sense physically. Eventually the FV3 increment will be written to and read from cubed sphere history anyway.
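As a rough illustration of the YAML change, the increment output block moves from the Gaussian writer to the cubed-sphere history writer. The keys below are a hedged sketch based on common FV3-JEDI output options, not the exact GDASApp variational template:

```yaml
# Hedged sketch of the output change; surrounding variational YAML omitted.
# Before: increment interpolated to and written on the Gaussian grid
# output:
#   filetype: auxgrid
#   gridtype: gauss
#   filename: atminc_gauss.nc
#
# After: increment written directly to cubed sphere history on the native grid
output:
  filetype: cube sphere history
  provider: ufs
  filename: atminc.nc
```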