diff --git a/_episodes/07-development-setup.md b/_episodes/07-development-setup.md
index 0d70e22a..b515aac2 100644
--- a/_episodes/07-development-setup.md
+++ b/_episodes/07-development-setup.md
@@ -204,7 +204,7 @@ with changes from the ``main`` branch.
 ## Contribution
 
 We have seen how to install ESMValTool in a ``develop`` mode.
-Now, we try to contribute to its development. Let's see how we this can be achieved.
+Now, we try to contribute to its development. Let's see how this can be achieved.
 We first discuss our ideas in an
 **[issue](https://github.com/ESMValGroup/ESMValTool/issues)** in ESMValTool repository.
 This can avoid disappointment at a later stage, for example,
@@ -236,7 +236,7 @@ However, a few (command line) tools can get you a long way,
 and we’ll cover those essentials in the next sections.
 
 **Tip**: we encourage you to keep the pull requests small.
-Reviewing small incremental changes are more efficient.
+Reviewing small incremental changes is more efficient.
 
 ### Working example
@@ -347,7 +347,7 @@ To explore other tools, have a look at ESMValTool documentation on
 ### Run unit tests
 
 Previous section introduced some tools to check code style and quality.
-What it has not done is show us how to tell whether our code is getting the right answer.
+What we still lack is a way to check whether our code is getting the right answer.
 To achieve that, we need to write and run tests for widely-used functions.
 ESMValTool comes with a lot of tests that are in the folder ``tests``.
diff --git a/_episodes/08-diagnostics.md b/_episodes/08-diagnostics.md
index 2c1468f1..967df0fe 100644
--- a/_episodes/08-diagnostics.md
+++ b/_episodes/08-diagnostics.md
@@ -340,17 +340,17 @@ available functions and their description can be found in
 ## Diagnostic computation
 
 After grouping and selecting data, we can read individual attributes (such as filename)
-of each item. Here we have grouped the input data by ``variables``
-so we loop over the variables (line 88). 
Following this, is a call to the
-function ``compute_diagnostic`` (line 93). Let's have a look at the
-definition of this function in line 42 where the actual analysis on the data is done.
+of each item. Here, we have grouped the input data by ``variables``,
+so we loop over the variables (line 88). Following this is a call to the
+function ``compute_diagnostic`` (line 93). Let's look at the
+definition of this function in line 42, where the actual analysis of the data is done.
 
 Note that output from the ESMValCore preprocessor is in the form of NetCDF files.
 Here, ``compute_diagnostic`` uses
 [Iris](https://scitools-iris.readthedocs.io/en/latest/index.html)
 to read data from a netCDF file and performs an operation ``squeeze`` to remove
 any dimensions
-of length one. We can adapt this function to add our own analysis. As an
-example, here we calculate the bias using the average of the data using Iris cubes.
+of length one. We can adapt this function to add our own analysis. As an example,
+here we calculate the bias from the average of the data using Iris cubes.
 
 ~~~python
 def compute_diagnostic(filename):
diff --git a/_episodes/09-cmorization.md b/_episodes/09-cmorization.md
index c131c96e..92a888c5 100644
--- a/_episodes/09-cmorization.md
+++ b/_episodes/09-cmorization.md
@@ -40,11 +40,11 @@ the data. This process is called "CMORization".
 >
 > Concretely, the CMOR standards dictate e.g. the variable names and units,
 > coordinate information, how the data should be structured (e.g. 1 variable per
-> file), additional metadata requirements, but also file naming conventions a.k.a.
+> file), additional metadata requirements, and file naming conventions a.k.a.
 > the data reference syntax
 > ([DRS](https://docs.esmvaltool.org/projects/esmvalcore/en/latest/quickstart/find_data.html)).
> All this information is stored in so-called CMOR tables. 
-> As an example, the CMOR tables for the CMIP6 project can be found +> For example, the CMOR tables for the CMIP6 project can be found [here](https://github.com/PCMDI/cmip6-cmor-tables). {: .callout} @@ -105,11 +105,11 @@ Note: you'll need a user-friendly ftp client. On Linux, `ncftp` works okay. > - **Tier3**: datasets with access restrictions (most of these datasets will also > need some kind of cmorization) > -> These access restrictions are also the reason why the ESMValTool developers +> These access restrictions are also why the ESMValTool developers > cannot distribute copies or automate downloading of all observations and -> reanalysis data used in the recipes. As a compromise we provide the +> reanalysis data used in the recipes. As a compromise, we provide the > CMORization scripts so that each user can CMORize their own copy of the access -> restricted datasets if they need them. +> restricted datasets if needed. > {: .callout} diff --git a/_episodes/10-debugging.md b/_episodes/10-debugging.md index 4861252d..ebcf0d54 100644 --- a/_episodes/10-debugging.md +++ b/_episodes/10-debugging.md @@ -14,10 +14,10 @@ keypoints: --- Every user encounters errors. Once you know why you get certain types of errors, -they become much easier to fix. The good news is, ESMValTool creates a record of +they become much easier to fix. The good news is that ESMValTool creates a record of the output messages and stores them in log files. They can be used for debugging -or monitoring the process. This lesson helps to understand what the different -types of errors are and when you are likely to encounter them. +or monitoring the process. This lesson helps you understand the different +types of errors and when you are likely to encounter them. ## Log files @@ -326,7 +326,7 @@ save the pre-processed data. 
More information about this setting can be found at
 
 > ## save_intermediary_cubes
 >
-> Note that this setting should be only used for debugging, as it significantly
+> Note that this setting should only be used for debugging, as it significantly
 > slows down the recipe and increases disk usage because a lot of output files
 > need to be stored.
 {: .callout}
@@ -434,7 +434,7 @@ How to re-run the diagnostic script?
 > If you run out of memory, try setting ``max_parallel_tasks`` to 1 in the
 > configuration file. Then, check the amount of memory you need for that by
 > inspecting the file ``run/resource_usage.txt`` in the output directory. Using
-> the number there you can increase the number of parallel tasks again to a
+> the number there, you can increase the number of parallel tasks again to a
 > reasonable number for the amount of memory available in your system.
 {: .callout}
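
The episode 08 hunk above ends just as ``compute_diagnostic`` begins, so the "squeeze, then take the bias against the average" idea it describes never appears in full here. A minimal sketch of that logic in plain NumPy (the function name ``compute_bias_sketch`` and the use of raw arrays instead of Iris cubes are illustrative assumptions, not ESMValTool's actual code):

```python
import numpy as np

def compute_bias_sketch(data):
    """Illustrative stand-in for the lesson's compute_diagnostic.

    Squeeze out any length-one dimensions (as the Iris ``squeeze``
    operation mentioned in the lesson does), then return each value's
    bias relative to the average of the whole field.
    """
    data = np.squeeze(np.asarray(data, dtype=float))
    return data - data.mean()

# bias of each value relative to the mean of the field
print(compute_bias_sketch([[1.0, 2.0, 3.0]]))
```

With Iris, the same idea would read the cube with ``iris.load_cube(filename)`` and use cube arithmetic instead of raw arrays.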