diff --git a/Changelog b/Changelog
index f1e82e2b3..4015f67a1 100644
--- a/Changelog
+++ b/Changelog
@@ -22,7 +22,7 @@ version 1.6.5 (tag v1.6.5rel)
 ===============================
- * fix for issue #1271 (mask ignored if bool MA assinged to uint8 var)
+ * fix for issue #1271 (mask ignored if bool MA assigned to uint8 var)
  * include information on specific object when reporting errors from netcdf-c
  * python 3.12 wheels added, support for python 3.7 removed.
@@ -341,7 +341,7 @@
  * Fix for auto scaling and masking when _Unsigned attribute set (create
    view as unsigned type after scaling and masking). Issue #671.
  * Always mask values outside valid_min, valid_max (not just when
-   missing_value attribue present). Issue #672.
+   missing_value attribute present). Issue #672.
  * Fix setup.py so pip install doesn't fail if cython not installed.
    setuptools >= 18.0 now required for installation (Issue #666).
@@ -415,7 +415,7 @@
    reading, a vlen string array attribute is returned as a list of strings.
    To write, use var.setncattr_string("name", ["two", "strings"]).)
  * Fix for issue #596 - julian day calculations wrong for negative years,
-   caused incorrect rountrip num2date(date2num(date)) roundtrip for dates with year
+   caused incorrect num2date(date2num(date)) roundtrip for dates with year
    < 0.
  * Make sure negative years work in utime.num2date (issue #596).
  * raise NotImplementedError when trying to pickle Dataset, Variable,
@@ -958,7 +958,7 @@
    lib after the 4.2 release). Controlled by kwarg 'diskless' to
    netCDF4.Dataset (default False). diskless=True when creating a file
    results in a file that exists only in memory, closing the file
-   makes the data disapper, except if persist=True keyword given in
+   makes the data disappear, except if persist=True keyword given in
    which case it is persisted to a disk file on close. diskless=True
    when opening a file creates an in-memory copy of the file for faster
    access.
@@ -1196,7 +1196,7 @@ version 0.8.1 (svn revision 744)
  * Experimental variable-length (vlen) data type support added.
- * changes to accomodate compound types in netcdf-4.1-beta snapshots.
+ * changes to accommodate compound types in netcdf-4.1-beta snapshots.
    Compound types now work correctly for snapshots >= 20090603.
  * Added __len__ method and 'size' property to Variable class.
@@ -1207,7 +1207,7 @@ version 0.8.1 (svn revision 744)
  * Fixed bug occurring when indexing with a numpy array of length 1.
- * Fixed bug that occured when -1 was used as a variable index.
+ * Fixed bug that occurred when -1 was used as a variable index.
  * enabled 'shared access' mode for NETCDF3 formatted files (mode='ws',
    'r+s' or 'as'). Writes in shared mode are unbuffered, which can
@@ -1376,7 +1376,7 @@ version 0.7.3 (svn revision 501)
    to work as slice indices.
  * (netCDF4_classic only) try to make sure file is not left in 'define mode'
-   when execption is raised.
+   when exception is raised.
  * if slicing a variable results in a array with shape (1,), just return
    a scalar (except for compound types).
diff --git a/README.md b/README.md
index 59cf74311..8fa99985b 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@
 For details on the latest updates, see the [Changelog](https://github.com/Unidat
 
 06/13/2024: Version [1.7.0](https://pypi.python.org/pypi/netCDF4/1.7.0) released. Add support for complex numbers via `auto_complex` keyword to `Dataset` ([PR #1295](https://github.com/Unidata/netcdf4-python/pull/1295))
 
 10/20/2023: Version [1.6.5](https://pypi.python.org/pypi/netCDF4/1.6.5) released.
-Fix for issue #1271 (mask ignored if bool MA assinged to uint8 var),
+Fix for issue #1271 (mask ignored if bool MA assigned to uint8 var),
 support for python 3.12 (removal of python 3.7 support), more informative error messages.
diff --git a/README.wheels.md b/README.wheels.md
deleted file mode 100644
index 83626b78f..000000000
--- a/README.wheels.md
+++ /dev/null
@@ -1,100 +0,0 @@
-# Building and uploading wheels
-
-## For OSX
-
-We automate OSX wheel building using a custom github repository that builds on
-the travis-ci OSX machines.
-
-The travis-ci interface for the builds is :
-https://travis-ci.org/MacPython/netcdf4-python-wheels
-
-The driving github repository is :
-https://github.com/MacPython/netcdf4-python-wheels
-
-### How it works
-
-The wheel-building repository:
-
-* does a fresh build of the required C / C++ libraries;
-* builds a netcdf4-python wheel, linking against these fresh builds;
-* processes the wheel using [delocate](https://pypi.python.org/pypi/delocate).
-  `delocate` copies the required dynamic libraries into the wheel and relinks
-  the extension modules against the copied libraries;
-* uploads the built wheel to http://wheels.scipy.org (a Rackspace container
-  kindly donated by Rackspace to scikit-learn).
-
-The resulting wheel is therefore self-contained and does not need any external
-dynamic libraries apart from those provided as standard by OSX.
-
-### Triggering a build
-
-You will need write permission to the github repository to trigger new builds
-on the travis-ci interface. Contact us on the mailing list if you need this.
-
-You can trigger a build by:
-
-* making a commit to the `netcdf4-python-wheels` repository (e.g. with `git
-  commit --allow-empty`); or
-* clicking on the circular arrow icon towards the top right of the travis-ci
-  page, to rerun the previous build.
-
-In general, it is better to trigger a build with a commit, because this makes
-a new set of build products and logs, keeping the old ones for reference.
-Keeping the old build logs helps us keep track of previous problems and
-successful builds.
-
-### Which netcdf4-python commit does the repository build?
-
-By default, the `netcd4-python-wheels` repository is usually set up to build
-the latest git tag. To check whether this is so have a look around line 5 of
-`.travis.yml` in the `netcdf4-python-wheels` repository. You should see
-something like:
-
-```
-- BUILD_COMMIT='latest-tag'
-```
-
-If this is commented out, then the repository is set up to build the current
-commit in the `netcdf4-python` submodule of the repository. If it is set to
-another value then it will be specifying a commit to build.
-
-You can therefore build any arbitrary commit by specificying the commit hash
-or branch name or tag name in this line of the `.travis.yml` file.
-
-### Uploading the built wheels to pypi
-
-Be careful, http://wheels.scipy.org points to a container on a distributed
-content delivery network. It can take up to 15 minutes for the new wheel file
-to get updated into the container at http://wheels.scipy.org.
-
-When the wheels are updated, you can of course just download them to your
-machine manually, and then upload them manually to pypi, or by using
-[twine][twine]. You can also use a script for doing this, housed at :
-https://github.com/MacPython/terryfy/blob/master/wheel-uploader
-
-You'll need [twine][twine] and [beautiful soup 4][bs4].
-
-You will typically have a directory on your machine where you store wheels,
-called a `wheelhouse`. The typical call for `wheel-uploader` would then
-be something like:
-
-```
-wheel-uploader -v -w ~/wheelhouse netCDF4 1.1.8
-```
-
-where:
-
-* `-v` means give verbose messages;
-* `-w ~/wheelhouse` means download the wheels from https://wheels.scipy.org to
-  the directory `~/wheelhouse`;
-* `netCDF4` is the root name of the wheel(s) to download / upload;
-* `1.1.8` is the version to download / upload.
-
-So, in this case, `wheel-uploader` will download all wheels starting with
-`netCDF4-1.1.8-` from http://wheels.scipy.org to `~/wheelhouse`, then upload
-them to pypi.
-
-Of course, you will need permissions to upload to pypi, for this to work.
-
-[twine]: https://pypi.python.org/pypi/twine
-[bs4]: https://pypi.python.org/pypi/beautifulsoup4
diff --git a/docs/index.html b/docs/index.html
index f120d187c..29e5a7c3c 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -289,7 +289,7 @@

Dimensions in a netCDF file

<class 'netCDF4._netCDF4.Dimension'>: name = 'lon', size = 144

Dimension names can be changed using the
-Datatset.renameDimension method of a Dataset or
+Dataset.renameDimension method of a Dataset or
Group instance.
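A minimal sketch of that call (the file and dimension names here are hypothetical, not taken from the docs above):

```python
from netCDF4 import Dataset

nc = Dataset("example.nc", "w")          # hypothetical file
nc.createDimension("lon", 144)
nc.renameDimension("lon", "longitude")   # rename via the Dataset instance
print(nc.dimensions["longitude"])
nc.close()
```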

Variables in a netCDF file

netCDF variables behave much like python multidimensional array objects
@@ -901,7 +901,7 @@

Parallel IO

The optional comm keyword may be used to specify a particular MPI
communicator (MPI_COMM_WORLD is used by default). Each process (or rank)
-can now write to the file indepedently.
+can now write to the file independently.
In this example the process rank is written to a different variable index
on each task

>>> d = nc.createDimension('dim',4)
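A runnable sketch of the pattern described above (assumes mpi4py and an MPI-enabled netCDF4/HDF5 build; the file name is hypothetical):

```python
# run with e.g.: mpirun -np 4 python parallel_write.py
from mpi4py import MPI
from netCDF4 import Dataset

rank = MPI.COMM_WORLD.rank
nc = Dataset('parallel_test.nc', 'w', parallel=True,
             comm=MPI.COMM_WORLD, info=MPI.Info())
d = nc.createDimension('dim', 4)
v = nc.createVariable('var', 'i8', ('dim',))
v[rank] = rank   # each rank writes independently to its own index
nc.close()
```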
@@ -1164,7 +1164,7 @@ 

Functions

Will be converted to an array of strings, where each string has a fixed length of b.shape[-1] characters.

optional kwarg encoding can be used to specify character encoding (default
-utf-8). If encoding is 'none' or 'bytes', a numpy.string_ btye array is
+utf-8). If encoding is 'none' or 'bytes', a numpy.string_ byte array is
returned.

returns a numpy string array with datatype 'UN' (or 'SN') and shape b.shape[:-1] where N=b.shape[-1].
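A short sketch of this conversion and its inverse, stringtochar (array contents are illustrative):

```python
import numpy as np
from netCDF4 import chartostring, stringtochar

s = np.array(['foo', 'bar'], dtype='S3')  # two fixed-length strings
c = stringtochar(s)                       # char array: shape (2, 3), dtype 'S1'
s2 = chartostring(c)                      # back to shape b.shape[:-1] == (2,)
print(c.shape, s2)
```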

@@ -2884,7 +2884,7 @@

Methods

The value of _Encoding is the unicode encoding that is used to decode the bytes into strings.

When numpy string data is written to a variable it is converted back to
-indiviual bytes, with the number of bytes in each string equalling the
+individual bytes, with the number of bytes in each string equalling the
rightmost dimension of the variable.

The default value of chartostring() is True (automatic conversions are performed).
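A sketch of that automatic conversion via the _Encoding attribute (file, dimension, and variable names are hypothetical):

```python
import numpy as np
from netCDF4 import Dataset

nc = Dataset('strings.nc', 'w')               # hypothetical file
nc.createDimension('nstrings', 2)
nc.createDimension('nchars', 3)
v = nc.createVariable('names', 'S1', ('nstrings', 'nchars'))
v._Encoding = 'ascii'                         # enables string <-> char conversion
v[:] = np.array(['foo', 'bar'], dtype='S3')   # strings split into individual bytes
print(v[:])                                   # read back as an array of strings
nc.close()
```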

diff --git a/examples/reading_netCDF.ipynb b/examples/reading_netCDF.ipynb
index 670b06340..95d33957d 100644
--- a/examples/reading_netCDF.ipynb
+++ b/examples/reading_netCDF.ipynb
@@ -479,7 +479,7 @@
     "### Finding the latitude and longitude indices of 50N, 140W\n",
     "\n",
     "- The `X` and `Y` dimensions don't look like longitudes and latitudes\n",
-    "- Use the auxilary coordinate variables named in the `coordinates` variable attribute, `Latitude` and `Longitude`"
+    "- Use the auxiliary coordinate variables named in the `coordinates` variable attribute, `Latitude` and `Longitude`"
    ]
   },
   {
diff --git a/examples/writing_netCDF.ipynb b/examples/writing_netCDF.ipynb
index 2e2fef5ef..61927929f 100644
--- a/examples/writing_netCDF.ipynb
+++ b/examples/writing_netCDF.ipynb
@@ -710,7 +710,7 @@
     "\n",
     "netCDF version 4 added support for organizing data in hierarchical groups.\n",
     "\n",
-    "- analagous to directories in a filesystem. \n",
+    "- analogous to directories in a filesystem. \n",
     "- Groups serve as containers for variables, dimensions and attributes, as well as other groups. \n",
     "- A `netCDF4.Dataset` creates a special group, called the 'root group', which is similar to the root directory in a unix filesystem. \n",
     "\n",
diff --git a/include/membuf.pyx b/include/membuf.pyx
index b964453e9..21a916db4 100644
--- a/include/membuf.pyx
+++ b/include/membuf.pyx
@@ -12,7 +12,7 @@ cdef memview_fromptr(void *memory, size_t size):
     buf.size = size # size of pointer in bytes
     return memoryview(buf)
 
-# private extension type that implements buffer protocal.
+# private extension type that implements buffer protocol.
 cdef class _MemBuf:
     cdef const void *memory
     cdef size_t size
diff --git a/include/netCDF4.pxi b/include/netCDF4.pxi
index f748bf82c..62b9be609 100644
--- a/include/netCDF4.pxi
+++ b/include/netCDF4.pxi
@@ -55,7 +55,7 @@ cdef extern from "netcdf.h":
         NC_NETCDF4 # Use netCDF-4/HDF5 format
         NC_CLASSIC_MODEL # Enforce strict netcdf-3 rules.
     # Use these 'mode' flags for both nc_create and nc_open.
-        NC_SHARE # Share updates, limit cacheing
+        NC_SHARE # Share updates, limit caching
     # The following flag currently is ignored, but use in
     # nc_open() or nc_create() may someday support use of advisory
     # locking to prevent multiple writers from clobbering a file
@@ -111,7 +111,7 @@ cdef extern from "netcdf.h":
         NC_FILL
         NC_NOFILL
     # Starting with version 3.6, there are different format netCDF
-    # files. 4.0 instroduces the third one. These defines are only for
+    # files. 4.0 introduces the third one. These defines are only for
     # the nc_set_default_format function.
         NC_FORMAT_CLASSIC
        NC_FORMAT_64BIT
diff --git a/setup.cfg b/setup.cfg
index 55736d8b9..c0afd570c 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -32,7 +32,7 @@ use_ncconfig=True
 # use szip_libdir and szip_incdir.
 #szip_dir = /usr/local
 # if netcdf lib was build statically with HDF4 support,
-# uncomment and set to hdf4 lib (libmfhdf and libdf) nstall location.
+# uncomment and set to hdf4 lib (libmfhdf and libdf) install location.
 # If the libraries and include files are installed in separate locations,
 # use hdf4_libdir and hdf4_incdir.
 #hdf4_dir = /usr/local
diff --git a/setup.py b/setup.py
index c23eb0348..b17f09ef5 100644
--- a/setup.py
+++ b/setup.py
@@ -182,7 +182,7 @@ def extract_version(CYTHON_FNAME):
     HAS_NCCONFIG = False
 
 # make sure USE_NCCONFIG from environment takes
-# precendence over use_ncconfig from setup.cfg (issue #341).
+# precedence over use_ncconfig from setup.cfg (issue #341).
 if use_ncconfig and not USE_NCCONFIG:
     USE_NCCONFIG = use_ncconfig
 elif not USE_NCCONFIG:
diff --git a/src/netCDF4/_netCDF4.pyx b/src/netCDF4/_netCDF4.pyx
index 6023406c9..b8573270d 100644
--- a/src/netCDF4/_netCDF4.pyx
+++ b/src/netCDF4/_netCDF4.pyx
@@ -285,7 +285,7 @@ and whether it is unlimited.
 ```
 
 `Dimension` names can be changed using the
-`Datatset.renameDimension` method of a `Dataset` or
+`Dataset.renameDimension` method of a `Dataset` or
 `Group` instance.
 
 ## Variables in a netCDF file
@@ -997,7 +997,7 @@ use the `parallel` keyword to enable parallel access.
 
 The optional `comm` keyword may be used to specify a particular MPI
 communicator (`MPI_COMM_WORLD` is used by default). Each process (or rank)
-can now write to the file indepedently. In this example the process rank is
+can now write to the file independently. In this example the process rank is
 written to a different variable index on each task
 
 ```python
@@ -5625,7 +5625,7 @@ of the the rightmost dimension of the variable).
 The value of `_Encoding` is the unicode encoding that is used to decode the bytes into strings.
 
 When numpy string data is written to a variable it is converted back to
-indiviual bytes, with the number of bytes in each string equalling the
+individual bytes, with the number of bytes in each string equalling the
 rightmost dimension of the variable.
 
 The default value of `chartostring` is `True`
@@ -6751,7 +6751,7 @@ Will be converted to a array of strings, where each string has a fixed
 length of `b.shape[-1]` characters.
 
 optional kwarg `encoding` can be used to specify character encoding (default
-`utf-8`). If `encoding` is 'none' or 'bytes', a `numpy.string_` btye array is
+`utf-8`). If `encoding` is 'none' or 'bytes', a `numpy.string_` byte array is
 returned.
 
 returns a numpy string array with datatype `'UN'` (or `'SN'`) and shape
diff --git a/test/test_alignment.py b/test/test_alignment.py
index 3f7333cd6..39c437535 100644
--- a/test/test_alignment.py
+++ b/test/test_alignment.py
@@ -134,7 +134,7 @@ def test_setting_alignment(self):
         for line in h5ls_results.split('\n'):
             if not line.startswith(' '):
                 data_variable = line.split(' ')[0]
-                # only process the data variables we care to inpsect
+                # only process the data variables we care to inspect
                 if data_variable not in addresses:
                     continue
                 line = line.strip()