Fix spelling errors #128

Merged · 4 commits · Aug 31, 2023
22 changes: 22 additions & 0 deletions .readthedocs.yaml
@@ -0,0 +1,22 @@
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the version of Python and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.11"

# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/conf.py

# We recommend specifying your dependencies to enable reproducible builds:
# https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
# python:
#   install:
#     - requirements: docs/requirements.txt
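
If the commented block above is enabled, Read the Docs installs the listed Python dependencies before building. A minimal ``docs/requirements.txt`` for a Sphinx build might contain nothing more than a pinned Sphinx (the pin shown is purely illustrative):

    # docs/requirements.txt (hypothetical contents)
    sphinx==7.1.2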
2 changes: 1 addition & 1 deletion docs/endian_issues.rst
@@ -19,7 +19,7 @@ There is an endianness test in the Makefile and two ZFP_ compressed example data

Again, because most CPUs are now little-endian and because ZFP_ became available only after the industry mostly moved away from big-endian, it is highly unlikely that this inefficiency will be triggered.

-Finally, *endian-targetting*, which is setting the file datatype for an endianness that is possibly different than the native endianness of the writer, is explicitly disallowed.
+Finally, *endian-targeting*, which is setting the file datatype for an endianness that is possibly different than the native endianness of the writer, is explicitly disallowed.
For example, data may be produced on a big-endian system, but most consumers will be little-endian.
Therefore, to alleviate downstream consumers from having to always byte-swap, it is desirable to byte-swap to little-endian when the data is written.
However, the juxtaposition of HDF5_'s type conversion and filter operations in a pipeline makes this impractical for the H5Z-ZFP_ filter.
4 changes: 2 additions & 2 deletions docs/h5repack.rst
@@ -49,7 +49,7 @@ To use ZFP_ filter in *rate* mode with a rate of ``4.5`` bits per value, first,

% ./print_h5repack_farg zfpmode=1 rate=4.5

-Print cdvals for set of ZFP compression paramaters...
+Print cdvals for set of ZFP compression parameters...
zfpmode=1 set zfp mode (1=rate,2=prec,3=acc,4=expert,5=rev)
rate=4.5 set rate for rate mode of filter
acc=0 set accuracy for accuracy mode of filter
@@ -79,7 +79,7 @@ To use ZFP_ filter in *accuracy* mode with an accuracy of ``0.075``, first, use

% ./print_h5repack_farg zfpmode=3 acc=0.075

-Print cdvals for set of ZFP compression paramaters...
+Print cdvals for set of ZFP compression parameters...
zfpmode=3 set zfp mode (1=rate,2=prec,3=acc,4=expert,5=rev)
rate=3.5 set rate for rate mode of filter
acc=0.075 set accuracy for accuracy mode of filter
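
The printed ``cd_values`` are passed to ``h5repack`` through its user-defined-filter option. A hypothetical invocation is sketched below; ``32013`` is H5Z-ZFP_'s registered filter ID, the ``<...>`` tokens are placeholders for numbers taken from the tool's output, and the exact ``-f UD=`` grammar varies slightly across HDF5 releases::

    % env HDF5_PLUGIN_PATH=<plugin-dir> \
        h5repack -f UD=32013,6,<cd0>,<cd1>,<cd2>,<cd3>,<cd4>,<cd5> in.h5 out.h5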
6 changes: 3 additions & 3 deletions docs/hdf5_chunking.rst
@@ -93,7 +93,7 @@ competing interests. One is optimizing the chunk_ size and shape for access
patterns anticipated by downstream consumers. The other is optimizing the chunk_
size and shape for compression. These two interests may not be compatible
and you may have to compromise between them. We illustrate the issues and
-tradeoffs using an example.
+trade-offs using an example.

---------------------------------------------------
Compression *Along* the *State Iteration* Dimension
@@ -114,7 +114,7 @@ along those dimensions *before* H5Dwrite_'s can be issued.
For example, suppose you have a tensor-valued field (e.g. a 3x3 matrix
at every *point*) over a 4D (3 spatial dimensions and 1 time dimension),
regularly sampled domain? Conceptually, this is a 6 dimensional dataset
-in HDF5_ with one of the dimensions (the *time* dimension) *extendible*.
+in HDF5_ with one of the dimensions (the *time* dimension) *extendable*.
So, you are free to define this as a 6 dimensional dataset in HDF5_. But, you
will also have to chunk_ the dataset. You can select any chunk_ shape
you want, except that no more than 3 (or 4 for ZFP_ versions 0.5.4 and
@@ -131,7 +131,7 @@ can issue an H5Dwrite_ call doing
-`hyperslab <https://docs.hdfgroup.org/hdf5/develop/_h5_d__u_g.html#subsubsec_dataset_transfer_partial>`__
+can issue an H5Dwrite_
+call doing `hyperslab <https://docs.hdfgroup.org/hdf5/develop/_h5_d__u_g.html#subsubsec_dataset_transfer_partial>`__
-partial I/O on the 6D, `extendible <https://docs.hdfgroup.org/hdf5/develop/_l_b_ext_dset.html>`__
+partial I/O on the 6D, `extendable <https://docs.hdfgroup.org/hdf5/develop/_l_b_ext_dset.html>`__
dataset. But, notice that the chunk_ dimensions (line 10) are such that only 4 of the
6 dimensions are non-unity. This means ZFP_ will only ever see something to
compress that is essentially 4D.
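
To make the chunk-shape constraint concrete, the following C sketch creates such a 6D, time-extendable, chunked dataset. The dimension ordering, the extents ``NX``/``NY``/``NZ``, and the function and dataset names are illustrative assumptions, chosen so that only 4 of the 6 chunk dimensions are non-unity::

    #include "hdf5.h"

    #define NX 16 /* illustrative spatial extents */
    #define NY 16
    #define NZ 16

    /* hypothetical dimension order: {time, tensor-row, tensor-col, z, y, x} */
    static hid_t make_6d_extendable(hid_t fid)
    {
        hsize_t dims[6]    = {1, 3, 3, NZ, NY, NX};
        hsize_t maxdims[6] = {H5S_UNLIMITED, 3, 3, NZ, NY, NX}; /* grows in time */
        hsize_t chunk[6]   = {1, 1, 3, NZ, NY, NX}; /* only 4 non-unity dims */

        hid_t sid  = H5Screate_simple(6, dims, maxdims);
        hid_t cpid = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(cpid, 6, chunk); /* chunking is required to use a filter */
        /* ...apply the ZFP_ filter to cpid here (see the interfaces docs)... */
        return H5Dcreate2(fid, "tensor", H5T_NATIVE_DOUBLE, sid,
                          H5P_DEFAULT, cpid, H5P_DEFAULT);
    }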
24 changes: 12 additions & 12 deletions docs/installation.rst
@@ -14,9 +14,9 @@ For Spack_ installations, Spack_ will handle installation of dependencies as wel

.. _gnumake:

-----------------------------------
-Installiang via Generic (GNU) Make
-----------------------------------
+---------------------------------
+Installing via Generic (GNU) Make
+---------------------------------

H5Z-ZFP_ installation supports both vanilla (`GNU <https://www.gnu.org/software/make/>`__) Make (described below) as well as :ref:`CMake <ceemake>`.

@@ -74,7 +74,7 @@ where ``<path-to-zfp>`` is a directory containing ZFP_ ``inc[lude]`` and ``lib``
If you don't specify a C compiler, it will try to guess one from your path.
Fortran compilation is optional.
If you do not specify a Fortran compiler, it will not attempt to build the Fortran interface.
-However, if the variable ``FC`` is already defined in your enviornment (as in Spack_ for example), then H5Z-ZFP_ will attempt to build Fortran.
+However, if the variable ``FC`` is already defined in your environment (as in Spack_ for example), then H5Z-ZFP_ will attempt to build Fortran.
If this is not desired, the solution is to pass an *empty* ``FC`` on the make command line as in...

::
@@ -229,27 +229,27 @@ To use the ``develop`` version of H5Z-ZFP_ with version 1.10.6 of HDF5_ ::
By default, H5Z-ZFP_ will attempt to build with Fortran support which requires a Fortran compiler.
If you wish to exclude support for Fortran, use the command::

-spack install h5z-zfp ~fortran
+spack install h5z-zfp~fortran

Spack_ packages can sometimes favor the use of dependencies you may not need.
For example, the HDF5_ package favors the use of MPI.
-Since H5Z-ZFP_ depends on HDF5_, this behavior will then create a dependency on MPI.
-To avoid this, you can force Spack_ to use a version of HDF5_ *without* MPI using.
-In the example command below, we do force Spack_ to not use MPI with HDF5_ and to not use OpenMP with ZFP_::
+Since H5Z-ZFP_ depends on HDF5_, this behavior will then create a dependency of H5Z-ZFP_ on MPI.
+To avoid this, you can force Spack_ to use a version of HDF5_ *without* MPI.
+In the example command below, we force Spack_ to not use MPI with HDF5_ and to not use OpenMP with ZFP_::

spack install h5z-zfp~fortran ^hdf5~mpi~fortran ^zfp~openmp

-This can have the effect of substantially reducing the number of dependencies Spack_ winds up having to build in order to install H5Z_ZFP_.
+This can have the effect of substantially reducing the number of dependencies Spack_ winds up having to build (from 35 in one case to 10) in order to install H5Z-ZFP_ which, in turn, speeds up the install process.

.. note::

Spack_ will build H5Z-ZFP_ **and** all of its dependencies including the HDF5_ library *as well as a number of other dependencies you may not initially expect*.
Be patient and let the build complete.
-It may take more than an hour.
+It may take as much as an hour.

In addition, by default, Spack_ installs packages to directory *hashes within* the cloned Spack_ repository's directory tree, ``$spack/opt/spack``.
You can find the resulting installed HDF5_ library with the command ``spack find -vp hdf5`` and the resulting H5Z-ZFP_ plugin installation with the command ``spack find -vp h5z-zfp``.
-If you wish to exercise more control over where Spack_ installs things, have a look at
+If you wish to exercise more control over how and where Spack_ installs, have a look at
`configuring Spack <https://spack.readthedocs.io/en/latest/config_yaml.html#install-tree>`_

--------------------------------
@@ -286,6 +286,6 @@ In the source code for H5Z-ZFP_ this manifests as something like what is shown i

In the code snippet above, note the funny ``Z`` in front of calls to various methods in the ZFP_ library.
When compiling H5Z-ZFP_ normally, that ``Z`` normally resolves to the empty string.
-But, when the code is compiled with ``-DAS_SILO_BUILTIN`` (which is supported and should be done *only* when ``H5Zzfp.c`` is being compiled *within* the Silo library and *next to* a version of ZFP_ that is embedded in Silo) that ``Z`` resolves to the name of a struct and struct-member dereferncing operator as in ``zfp.``.
+But, when the code is compiled with ``-DAS_SILO_BUILTIN`` (which is supported and should be done *only* when ``H5Zzfp.c`` is being compiled *within* the Silo library and *next to* a version of ZFP_ that is embedded in Silo) that ``Z`` resolves to the name of a struct and struct-member dereferencing operator as in ``zfp.``.
There is a similar ``B`` used for a similar purpose ahead of calls to ZFP_'s bitstream library.
This is something to be aware of and to adhere to if you plan to contribute any code changes here.
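
As an illustrative sketch of the pattern being described (not the actual H5Z-ZFP_ sources), the arrangement looks roughly like::

    #ifdef AS_SILO_BUILTIN
    #define Z zfp.   /* calls dispatch through Silo's embedded ZFP struct */
    #else
    #define Z        /* normal build: expands to nothing */
    #endif

    /* a call written as Z zfp_stream_open(0) thus compiles to either
       zfp.zfp_stream_open(0) or zfp_stream_open(0) */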
8 changes: 4 additions & 4 deletions docs/interfaces.rst
@@ -62,14 +62,14 @@ For reference, the ``cd_values`` array for this ZFP_ filter is defined like
+-----------+--------+--------+---------+---------+---------+---------+
| expert | 4 | unused | minbits| maxbits| maxprec| minexp |
+-----------+--------+--------+---------+---------+---------+---------+
-| reversible| 5 | unused | unused | unused | unused | unsued |
+| reversible| 5 | unused | unused | unused | unused | unused |
+-----------+--------+--------+---------+---------+---------+---------+

A/B are high/low 32-bit words of a double.

Note that the cd_values used in the generic interface to ``H5Pset_filter()`` are **not the same** cd_values ultimately stored to the HDF5_ dataset header for a compressed dataset.
The values are transformed in the set_local method to use ZFP_'s internal routines for 'meta' and 'mode' data.
-So, don't make the mistake of examining the values you find in a file and think you can use those same values, for example, in an invokation of h5repack.
+So, don't make the mistake of examining the values you find in a file and think you can use those same values, for example, in an invocation of h5repack.
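
A minimal sketch of the intended flow, assuming the ``H5Z_FILTER_ZFP`` constant and the ``cd_values`` convenience macros shipped in ``H5Zzfp_plugin.h``, sets the values programmatically rather than copying numbers out of a file::

    #include "hdf5.h"
    #include "H5Zzfp_plugin.h"

    unsigned int cd_values[10];
    size_t cd_nelmts = 10;
    hid_t cpid = H5Pcreate(H5P_DATASET_CREATE);

    /* encode rate mode at 4.5 bits/value into cd_values */
    H5Pset_zfp_rate_cdata(4.5, cd_nelmts, cd_values);
    H5Pset_filter(cpid, H5Z_FILTER_ZFP, H5Z_FLAG_MANDATORY, cd_nelmts, cd_values);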

.. _properties-interface:

@@ -125,12 +125,12 @@ The filter is designed to be compiled for use as both a standalone HDF5_ `dynami
When it is used as a plugin, it is a best practice to link the ZFP_ library into the plugin dynamic/shared object as a *static* library.
Why? In so doing, we ensure that all ZFP_ public namespace symbols remain *confined* to the plugin so as not to interfere with any application that may be directly explicitly linking to the ZFP_ library for other reasons.

-All HDF5_ applications are *required* to *find* the plugin dynamic library (named ``lib*.{so,dylib}``) in a directory specified by the enviornment variable, ``HDF5_PLUGIN_PATH``.
+All HDF5_ applications are *required* to *find* the plugin dynamic library (named ``lib*.{so,dylib}``) in a directory specified by the environment variable, ``HDF5_PLUGIN_PATH``.
Currently, the HDF5 library offers no mechanism for applications themselves to have pre-programmed paths in which to search for a plugin.
Applications are then always vulnerable to an incorrectly specified or unspecified ``HDF5_PLUGIN_PATH`` environment variable.

However, the plugin can also be used explicitly as a *library*.
-In this case, **do** **not** specify the ``HDF5_PLUGIN_PATH`` enviornment variable and instead have the application link to ``libH5Zzfp.a`` in the ``lib`` dir of the installation.
+In this case, **do** **not** specify the ``HDF5_PLUGIN_PATH`` environment variable and instead have the application link to ``libH5Zzfp.a`` in the ``lib`` dir of the installation.
Instead two initialization and finalization routines are defined::

int H5Z_zfp_initialize(void);
6 changes: 3 additions & 3 deletions docs/tests.rst
@@ -13,7 +13,7 @@ of the filter as an explicitly linked library. By default, these test a simple
1D array with and without ZFP_ compression using either the :ref:`generic-interface` (for plugin)
or the :ref:`properties-interface` (for library). You can use the code there as an
example of using the ZFP_ filter either as a plugin or as a library. However, these
-also include some advanced usages for 4D and 6D, time-varying (e.g. *extendible*)
+also include some advanced usages for 4D and 6D, time-varying (e.g. *extendable*)
datasets. The command ``test_write_lib help`` or ``test_write_plugin help`` will print a
list of the example's options and how to use them.
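
For instance, a hypothetical run compressing the default 1D dataset in *rate* mode at 4 bits per value (option spellings as printed by ``help``)::

    % ./test_write_plugin zfpmode=1 rate=4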

@@ -34,7 +34,7 @@ Write Test Options
chunk=256 set chunk size for 1D dataset
doint=0 also do integer 1D data

-ZFP compression paramaters...
+ZFP compression parameters...
zfpmode=3 (1=rate,2=prec,3=acc,4=expert,5=reversible)
rate=4 set rate for rate mode of filter
acc=0 set accuracy for accuracy mode of filter
@@ -56,7 +56,7 @@ on a 4D dataset where two of the 4 dimensions are not correlated.
This tests the plugin's ability to properly set chunking for
HDF5 such that chunks span **only** correlated dimensions and
have non-unity sizes in 3 or fewer dimensions. The ``sixd``
-test runs a test on a 6D, extendible dataset representing an
+test runs a test on a 6D, extendable dataset representing an
example of using ZFP_ for compression along the *time* axis.

There is a companion, `test_read.c <https://github.com/LLNL/H5Z-ZFP/blob/master/test/test_read.c>`_
2 changes: 1 addition & 1 deletion test/print_h5repack_farg.c
@@ -98,7 +98,7 @@ int main(int argc, char **argv)
int help = 0;

/* ZFP filter arguments */
-HANDLE_SEP(Print cdvals for set of ZFP compression paramaters)
+HANDLE_SEP(Print cdvals for set of ZFP compression parameters)
HANDLE_ARG(zfpmode,(int) strtol(argv[i]+len2,0,10),"%d",set zfp mode (1=rate,2=prec,3=acc,4=expert,5=rev));
HANDLE_ARG(rate,(double) strtod(argv[i]+len2,0),"%g",set rate for rate mode of filter);
HANDLE_ARG(acc,(double) strtod(argv[i]+len2,0),"%g",set accuracy for accuracy mode of filter);
2 changes: 1 addition & 1 deletion test/test_error.c
@@ -91,7 +91,7 @@ int main(int argc, char **argv)
int minexp = -1074;

/* ZFP filter arguments */
-HANDLE_SEP(ZFP compression paramaters)
+HANDLE_SEP(ZFP compression parameters)
HANDLE_ARG(zfpmode,(int) strtol(argv[i]+len2,0,10),"%d", (1=rate,2=prec,3=acc,4=expert,5=reversible));
HANDLE_ARG(rate,(double) strtod(argv[i]+len2,0),"%g",set rate for rate mode);
HANDLE_ARG(acc,(double) strtod(argv[i]+len2,0),"%g",set accuracy for accuracy mode);
2 changes: 1 addition & 1 deletion test/test_write.c
@@ -324,7 +324,7 @@
HANDLE_ARG(ofile,strndup(argv[i]+len2,NAME_LEN), "\"%s\"",set output filename);

/* ZFP filter arguments */
-HANDLE_SEP(ZFP compression paramaters)
+HANDLE_SEP(ZFP compression parameters)
HANDLE_ARG(zfpmode,(int) strtol(argv[i]+len2,0,10),"%d", (1=rate,2=prec,3=acc,4=expert,5=reversible));
HANDLE_ARG(rate,(double) strtod(argv[i]+len2,0),"%g",set rate for rate mode);
HANDLE_ARG(acc,(double) strtod(argv[i]+len2,0),"%g",set accuracy for accuracy mode);