Bloom Compression by Marine Heatwaves Contemporary With the Oregon Upwelling Season

Black, I., Kavanaugh, M.T., and Reimers, C.E. -- In Revision


Please use the repository Issues page to report code problems or to flag items in this README that need updating. All other correspondence can be directed to Ian Black (blackia@oregonstate.edu).


Project Installation

This project was built around the OOI JupyterHub. You can sign up for a free account here.

Steps

  1. First navigate to your user directory (/home/jovyan).
    • cd ~
  2. Clone this repository to your user directory.
    • git clone https://github.com/IanTBlack/oregon-shelf-mhw.git
  3. Create a new conda environment and register it as a Jupyter kernel.
    • cd ~
    • conda create -n ormhw python=3.12
    • conda activate ormhw
    • python -m ipykernel install --user --name ormhw --display-name "Python (ormhw)"
  4. Install ooijh. ooijh is a Python package specifically built for the OOI JupyterHub that streamlines data access and analysis of OOI datasets.
    • cd ~
    • conda activate ormhw
    • git clone https://github.com/IanTBlack/ooijh.git
    • cd ooijh
    • pip install -r requirements.txt
    • pip install .
  5. Install required packages.
    • cd ~
    • cd oregon-shelf-mhw
    • pip install -r requirements.txt
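
As a quick sanity check after these steps, the short sketch below can be run from a notebook using the "Python (ormhw)" kernel. It confirms that the kernel can import ooijh and a couple of common packages (xarray and numpy are assumptions about typical dependencies; adjust to match requirements.txt).

```python
# Sanity check: run inside a notebook with the "Python (ormhw)" kernel selected.
# ooijh comes from step 4; the other package names are assumptions about typical
# dependencies and can be adjusted to match requirements.txt.
import importlib

for pkg in ("ooijh", "xarray", "numpy"):
    try:
        importlib.import_module(pkg)
        print(f"{pkg}: OK")
    except ImportError as err:
        print(f"{pkg}: MISSING ({err})")
```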

How To Rerun

Notebooks in the main project repository are ordered. If you are running the project for the first time, run the notebooks in that order; many of them rely on data that was downloaded or curated by an earlier notebook.
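
If you prefer to rerun everything unattended, the sketch below uses jupyter nbconvert to execute each notebook in place. It assumes the notebooks live in the repository root and sort into the intended order by filename; adjust the path and pattern as needed.

```python
# A minimal sketch for executing the project notebooks in order from a script.
# Assumptions: notebooks are in the repository root and sort correctly by name.
import subprocess
from pathlib import Path

notebooks = sorted(Path("~/oregon-shelf-mhw").expanduser().glob("*.ipynb"))
for nb in notebooks:
    print(f"Running {nb.name} ...")
    subprocess.run(
        ["jupyter", "nbconvert", "--to", "notebook", "--execute", "--inplace", str(nb)],
        check=True,
    )
```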


Running Multiple Notebooks

By default, the NASA data are curated iteratively, one year at a time, which can take roughly 24 hours in total. It is therefore beneficial to run multiple notebooks at once: copy the curation notebook and assign each copy a different year, as sketched below. This reduces processing time considerably.
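
One way to parallelize the yearly copies is with papermill. This is a hypothetical sketch, not part of the project code: it assumes papermill is installed, uses a placeholder notebook name, and assumes the curation notebook exposes a year variable in a tagged parameters cell.

```python
# Hypothetical sketch: run several yearly copies of the curation notebook in
# parallel with papermill. The notebook name, year range, and `year` parameter
# are placeholders/assumptions; adjust them to match the repository.
from concurrent.futures import ProcessPoolExecutor
import papermill as pm

def run_year(year: int) -> None:
    pm.execute_notebook(
        "nasa_curation.ipynb",            # placeholder input notebook
        f"nasa_curation_{year}.ipynb",    # per-year output copy
        parameters={"year": year},
    )

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        list(pool.map(run_year, range(2014, 2024)))  # example year range
```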


Datasets Used In Analysis

OOI Datasets

Brief Intro to OOI Dataset Syntax

OOI datasets are identified by a site, node, instrument, method, and stream. The site describes the spatial location; the node identifies the logger (and can loosely indicate vertical position on the platform); the instrument identifies the sensor; the method describes how the data were delivered; and the stream identifies the data stream (most sensors produce a single stream, but some datasets are split into separate scientific and engineering streams, which is where the stream id comes in).

Example:

  • site: CE01ISSM
  • node: RID16
  • instrument: 03-CTDBPC000
  • method: recovered_host
  • stream: ctdbp_cdef_dcl_instrument_recovered
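
For orientation, the sketch below is plain string composition (not the ooijh API) showing how these five identifiers combine into the reference designator and a fully qualified stream identifier used throughout OOI tooling:

```python
# Illustrative only: compose the OOI reference designator and a fully qualified
# dataset identifier from the five pieces above. This is not the ooijh API.
site = "CE01ISSM"
node = "RID16"
instrument = "03-CTDBPC000"
method = "recovered_host"
stream = "ctdbp_cdef_dcl_instrument_recovered"

reference_designator = f"{site}-{node}-{instrument}"
dataset_id = f"{reference_designator}-{method}-{stream}"

print(reference_designator)  # CE01ISSM-RID16-03-CTDBPC000
print(dataset_id)
```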

Below are THREDDS catalog links to the recovered_host datasets used in this study. Note that the ooijh package (under development) automatically merges datasets delivered by different methods to achieve the most complete time series.

Additional information about each platform can be found in the list below.


Downloading Data

The provided notebooks will automatically download files, with two exceptions.

  1. The CUTI/BEUTI dataset must be manually downloaded and then uploaded to the OOI JupyterHub.
  2. The NASA data requests require Earthdata Login credentials in a .netrc file within the user directory, containing:
  • machine urs.earthdata.nasa.gov
  • login your-login-here
  • password your-password-here
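
If you prefer to create the file programmatically, a minimal sketch (with placeholder credentials; not part of the project code) is:

```python
# Minimal sketch: write ~/.netrc with Earthdata Login credentials and
# restrictive permissions. Replace the placeholders with your own credentials.
from pathlib import Path

netrc_path = Path.home() / ".netrc"
netrc_path.write_text(
    "machine urs.earthdata.nasa.gov\n"
    "login your-login-here\n"
    "password your-password-here\n"
)
netrc_path.chmod(0o600)  # many clients refuse to use a world-readable .netrc
print(f"Wrote {netrc_path}")
```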