how to add a new gridded dataset to ogh #32
Comments
I like the moxie in the instructions. However, the key step is missing: Step 0, File and Metadata Management. You're including a new dataset, and its providers may have their own ways of doing things. This includes the time period of the files, the organization of the files, the gridding schema, and the variables represented. Working backwards from the functions is the hard way forward.
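As a rough illustration of what that Step 0 inventory might capture, here is a minimal sketch of a catalog entry for a hypothetical new dataset; the key and field names below are illustrative placeholders, not the actual ogh_meta schema.

```python
# Hypothetical Step 0 inventory for a new gridded dataset (placeholder values).
# The real keys and values must come from the dataset provider's documentation
# and from whatever schema ogh_meta actually uses.
new_dataset_meta = {
    'dailymet_example2018': {
        'spatial_resolution': '1/16-degree',                   # gridding schema
        'temporal_resolution': 'daily',                        # time step of each record
        'start_date': '1915-01-01',                            # time period covered
        'end_date': '2011-12-31',
        'variable_list': ['precip', 'tmax', 'tmin', 'wind'],   # variables represented
        'file_format': 'ASCII, one file per grid cell',        # organization of the files
    }
}
```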
Maybe I should have made a Gist. But above there are the "Three main code additions".
Jim, do you have any time today or this evening when I could try modifying oxl (ogh_xarray_landlab.py) to include an oxl.get_x_hourlywrf_pnnl2018 function? I cloned your fork of the Observatory, ran the Observatory_usecase_7_xmapLandlab notebook, and am looking at the oxl set of functions. The usecase_7 notebook has a number of errors when I run it from HydroShare, like:

Do I need to use the ogh module you updated at geohack to run the notebook?

The Pacific Northwest National Laboratory data is saved at:

Thanks for your help
Sorry, I'm at a workshop today, and I won't be able to get around to this error until Wednesday. In the usecase7 notebook, the intention is to use the OGH v0.1.11 conda library (the current most stable version) while using functionalities from the oxl module, which will later become the OGH.xarray_landlab module. If you've paired an ogh.py within the same folder as the notebook, just rename it to something else, as I have (e.g., ogh_old.py). To make sure you can run ogh v0.1.11, run the following in bash on the HydroShare JupyterHub to get through the necessary installations. In the long run, HydroShare-JupyterHub needs to keep up with these versioning issues in their Docker image.

conda install -c conda-forge ogh fiona ncurses libgdal gdal pygraphviz --yes
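As a quick sanity check after the install, something like the sketch below can confirm which ogh is being picked up (it assumes, but does not require, that the package exposes a __version__ attribute):

```python
# Confirm the conda-installed ogh is the one being imported,
# rather than a local ogh.py sitting next to the notebook.
import ogh

print(ogh.__file__)                          # path should point into the conda environment
print(getattr(ogh, '__version__', 'n/a'))    # expect 0.1.11 if the install succeeded
```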
I've just pushed my changes to my fork. It should work better if you've run the conda install code.
Hi Jim, I added two functions (copied and modified two of your functions) so that we can download the PNNL data. Should I push the changes to your fork of Observatory?

def compile_x_wrfpnnl2018_raw_locations(time_increments):

def get_x_hourlywrf_PNNL2018(homedir,
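For reference, a compile function in this pattern usually just assembles the list of remote file locations that the matching get function will download. Here is a minimal sketch of what compile_x_wrfpnnl2018_raw_locations could look like; the base_url keyword, the filename pattern, and the example time increments are placeholders made up for illustration, not the actual PNNL archive layout.

```python
def compile_x_wrfpnnl2018_raw_locations(time_increments,
                                        base_url='http://example.org/pnnl2018/'):
    """
    Sketch only: build one URL per requested time increment for the
    PNNL 2018 hourly WRF output. The URL and naming pattern are placeholders.
    """
    locations = []
    for t in time_increments:
        # e.g. time_increments = ['2005010100', '2005010200', ...]
        locations.append('{0}wrfout_{1}.nc'.format(base_url, t))
    return locations
```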
into a sensible folder outside of the HS folder structure (e.g. make a folder called Github)
4. Copy ogh.py and ogh_meta to your HS working directory
5. Test the existing Notebook and functions for your case study location
6. Change the name, and edit the Notebook to import and use the local versions of ogh and ogh_meta (see the import sketch after this list)
7. Create functions for metadata, get, and compile (A, B, C below)
8. Test and debug
9. Download your data!! Yeahhh. Explore your data with other OGH functions.
10. Click Pull request at https://github.com/Freshwater-Initiative/Observatory
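The sketch below illustrates step 6, pointing the Notebook at the local copies rather than the conda-installed package; it assumes ogh.py and ogh_meta were copied into the same directory as the Notebook (adjust the path if your layout differs).

```python
import sys

# Search the Notebook's own directory first, so the local ogh.py
# (and ogh_meta) shadow the conda-installed ogh package.
sys.path.insert(0, '.')

import ogh
print(ogh.__file__)  # should now point at the local ogh.py
```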
Three main code additions:
A. Edit ogh_meta for your new dataset
B. Create a new ogh get function for your new dataset. For example, create your own version of this:
def getDailyMET_livneh2013(homedir, mappingfile,
                           subdir='livneh2013/Daily_MET_1915_2011/raw',
                           catalog_label='dailymet_livneh2013'):
    """
    Get the Livneh et al. 2013 Daily Meteorology files of interest using the reference mapping file
C. Create a new compile function for your dataset (a combined sketch of how the get and compile functions fit together follows below):
def compile_bc_Livneh2013_locations(maptable):
    """
    Compile a list of file URLs for bias corrected Livneh et al. 2013 (CIG)
supp_table1.pdf
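To show how items B and C fit together, here is a rough, self-contained sketch of the pattern: the get function reads the mapping file, asks the compile function for one URL per grid cell, and downloads each file. Everything in it is illustrative; the function names, the assumption that the mapping file is a CSV with LAT and LONG_ columns, the base_url, and the filename pattern are placeholders rather than the real OGH implementation.

```python
import os
import urllib.request

import pandas as pd


def compile_example_locations(maptable, base_url='http://example.org/daily_met/'):
    """Item C sketch: one placeholder URL per grid-cell row in the mapping table."""
    return [base_url + 'data_{0}_{1}'.format(row['LAT'], row['LONG_'])
            for _, row in maptable.iterrows()]


def getDailyMET_example(homedir, mappingfile,
                        subdir='example/daily_met/raw',
                        catalog_label='dailymet_example'):
    """Item B sketch: download every file listed by the compile function.
    The real OGH get functions also record catalog_label in the mapping file;
    that bookkeeping is omitted here."""
    maptable = pd.read_csv(mappingfile)   # assumes a CSV mapping file with LAT and LONG_ columns
    destdir = os.path.join(homedir, subdir)
    os.makedirs(destdir, exist_ok=True)
    for url in compile_example_locations(maptable):
        outfile = os.path.join(destdir, os.path.basename(url))
        urllib.request.urlretrieve(url, outfile)   # fetch each grid-cell file
    return destdir
```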