Merge pull request #1359 from ocefpaf/remove_unused_readme
Remove unused readme
jswhit authored Aug 13, 2024
2 parents 3615623 + 2aeb78b commit f4a97c0
Showing 12 changed files with 24 additions and 124 deletions.
14 changes: 7 additions & 7 deletions Changelog
@@ -22,7 +22,7 @@

version 1.6.5 (tag v1.6.5rel)
===============================
-* fix for issue #1271 (mask ignored if bool MA assinged to uint8 var)
+* fix for issue #1271 (mask ignored if bool MA assigned to uint8 var)
* include information on specific object when reporting errors from netcdf-c
* python 3.12 wheels added, support for python 3.7 removed.

@@ -341,7 +341,7 @@
* Fix for auto scaling and masking when _Unsigned attribute set (create
view as unsigned type after scaling and masking). Issue #671.
* Always mask values outside valid_min, valid_max (not just when
-missing_value attribue present). Issue #672.
+missing_value attribute present). Issue #672.
* Fix setup.py so pip install doesn't fail if cython not installed.
setuptools >= 18.0 now required for installation (Issue #666).

@@ -415,7 +415,7 @@
reading, a vlen string array attribute is returned as a list of
strings. To write, use var.setncattr_string("name", ["two", "strings"]).)
* Fix for issue #596 - julian day calculations wrong for negative years,
-caused incorrect rountrip num2date(date2num(date)) roundtrip for dates with year
+caused incorrect roundtrip num2date(date2num(date)) roundtrip for dates with year
< 0.
* Make sure negative years work in utime.num2date (issue #596).
* raise NotImplementedError when trying to pickle Dataset, Variable,
@@ -958,7 +958,7 @@
lib after the 4.2 release). Controlled by kwarg 'diskless' to
netCDF4.Dataset (default False). diskless=True when creating a file
results in a file that exists only in memory, closing the file
-makes the data disapper, except if persist=True keyword given in
+makes the data disappear, except if persist=True keyword given in
which case it is persisted to a disk file on close. diskless=True
when opening a file creates an in-memory copy of the file for faster access.

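As an illustrative aside (not part of this commit): a minimal sketch of the diskless/persist behaviour described in the entry above, with a placeholder filename.

```python
from netCDF4 import Dataset

# diskless=True keeps the dataset in memory only; persist=True writes it
# to "scratch.nc" when the dataset is closed (otherwise the data is lost).
nc = Dataset("scratch.nc", mode="w", diskless=True, persist=True)
nc.createDimension("x", 10)
nc.close()
```
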
@@ -1196,7 +1196,7 @@ version 0.8.1 (svn revision 744)

* Experimental variable-length (vlen) data type support added.

-* changes to accomodate compound types in netcdf-4.1-beta snapshots.
+* changes to accommodate compound types in netcdf-4.1-beta snapshots.
Compound types now work correctly for snapshots >= 20090603.

* Added __len__ method and 'size' property to Variable class.
@@ -1207,7 +1207,7 @@ version 0.8.1 (svn revision 744)

* Fixed bug occurring when indexing with a numpy array of length 1.

-* Fixed bug that occured when -1 was used as a variable index.
+* Fixed bug that occurred when -1 was used as a variable index.

* enabled 'shared access' mode for NETCDF3 formatted files (mode='ws',
'r+s' or 'as'). Writes in shared mode are unbuffered, which can
@@ -1376,7 +1376,7 @@ version 0.7.3 (svn revision 501)
to work as slice indices.

* (netCDF4_classic only) try to make sure file is not left in 'define mode'
-when execption is raised.
+when exception is raised.

* if slicing a variable results in a array with shape (1,), just return
a scalar (except for compound types).
2 changes: 1 addition & 1 deletion README.md
@@ -15,7 +15,7 @@ For details on the latest updates, see the [Changelog](https://github.com/Unidat
06/13/2024: Version [1.7.0](https://pypi.python.org/pypi/netCDF4/1.7.0) released. Add support for complex numbers via `auto_complex` keyword to `Dataset` ([PR #1295](https://github.com/Unidata/netcdf4-python/pull/1295))

10/20/2023: Version [1.6.5](https://pypi.python.org/pypi/netCDF4/1.6.5) released.
-Fix for issue #1271 (mask ignored if bool MA assinged to uint8 var),
+Fix for issue #1271 (mask ignored if bool MA assigned to uint8 var),
support for python 3.12 (removal of python 3.7 support), more
informative error messages.

100 changes: 0 additions & 100 deletions README.wheels.md

This file was deleted.

8 changes: 4 additions & 4 deletions docs/index.html
@@ -289,7 +289,7 @@ <h2 id="dimensions-in-a-netcdf-file">Dimensions in a netCDF file</h2>
&lt;class 'netCDF4._netCDF4.Dimension'&gt;: name = 'lon', size = 144
</code></pre>
<p><code><a title="netCDF4.Dimension" href="#netCDF4.Dimension">Dimension</a></code> names can be changed using the
-<code>Datatset.renameDimension</code> method of a <code><a title="netCDF4.Dataset" href="#netCDF4.Dataset">Dataset</a></code> or
+<code>Dataset.renameDimension</code> method of a <code><a title="netCDF4.Dataset" href="#netCDF4.Dataset">Dataset</a></code> or
<code><a title="netCDF4.Group" href="#netCDF4.Group">Group</a></code> instance.</p>
<h2 id="variables-in-a-netcdf-file">Variables in a netCDF file</h2>
<p>netCDF variables behave much like python multidimensional array objects
@@ -901,7 +901,7 @@ <h2 id="parallel-io">Parallel IO</h2>
<p>The optional <code>comm</code> keyword may be used to specify a particular
MPI communicator (<code>MPI_COMM_WORLD</code> is used by default).
Each process (or rank)
-can now write to the file indepedently.
+can now write to the file independently.
In this example the process rank is
written to a different variable index on each task</p>
<pre><code class="language-python">&gt;&gt;&gt; d = nc.createDimension('dim',4)
@@ -1164,7 +1164,7 @@ <h2 class="section-title" id="header-functions">Functions</h2>
Will be converted to a array of strings, where each string has a fixed
length of <code>b.shape[-1]</code> characters.</p>
<p>optional kwarg <code>encoding</code> can be used to specify character encoding (default
-<code>utf-8</code>). If <code>encoding</code> is 'none' or 'bytes', a <code>numpy.string_</code> btye array is
+<code>utf-8</code>). If <code>encoding</code> is 'none' or 'bytes', a <code>numpy.string_</code> byte array is
returned.</p>
<p>returns a numpy string array with datatype <code>'UN'</code> (or <code>'SN'</code>) and shape
<code>b.shape[:-1]</code> where where <code>N=b.shape[-1]</code>.</p></div>
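
As an illustrative aside (not part of this commit): a minimal sketch of the chartostring/stringtochar round trip described above; the array values are placeholders.

```python
import numpy as np
from netCDF4 import chartostring, stringtochar

s = np.array(["foo", "bar"], dtype="S3")
c = stringtochar(s)                       # shape (2, 3) array of single characters
print(chartostring(c))                    # back to a string array of shape (2,)
print(chartostring(c, encoding="bytes"))  # a byte array instead
```
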
@@ -2884,7 +2884,7 @@ <h3>Methods</h3>
The value of <code>_Encoding</code>
is the unicode encoding that is used to decode the bytes into strings.</p>
<p>When numpy string data is written to a variable it is converted back to
-indiviual bytes, with the number of bytes in each string equalling the
+individual bytes, with the number of bytes in each string equalling the
rightmost dimension of the variable.</p>
<p>The default value of <code><a title="netCDF4.chartostring" href="#netCDF4.chartostring">chartostring()</a></code> is <code>True</code>
(automatic conversions are performed).</p></div>
2 changes: 1 addition & 1 deletion examples/reading_netCDF.ipynb
@@ -479,7 +479,7 @@
"### Finding the latitude and longitude indices of 50N, 140W\n",
"\n",
"- The `X` and `Y` dimensions don't look like longitudes and latitudes\n",
"- Use the auxilary coordinate variables named in the `coordinates` variable attribute, `Latitude` and `Longitude`"
"- Use the auxiliary coordinate variables named in the `coordinates` variable attribute, `Latitude` and `Longitude`"
]
},
{
2 changes: 1 addition & 1 deletion examples/writing_netCDF.ipynb
@@ -710,7 +710,7 @@
"\n",
"netCDF version 4 added support for organizing data in hierarchical groups.\n",
"\n",
"- analagous to directories in a filesystem. \n",
"- analogous to directories in a filesystem. \n",
"- Groups serve as containers for variables, dimensions and attributes, as well as other groups. \n",
"- A `netCDF4.Dataset` creates a special group, called the 'root group', which is similar to the root directory in a unix filesystem. \n",
"\n",
2 changes: 1 addition & 1 deletion include/membuf.pyx
@@ -12,7 +12,7 @@ cdef memview_fromptr(void *memory, size_t size):
buf.size = size # size of pointer in bytes
return memoryview(buf)

-# private extension type that implements buffer protocal.
+# private extension type that implements buffer protocol.
cdef class _MemBuf:
cdef const void *memory
cdef size_t size
4 changes: 2 additions & 2 deletions include/netCDF4.pxi
@@ -55,7 +55,7 @@ cdef extern from "netcdf.h":
NC_NETCDF4 # Use netCDF-4/HDF5 format
NC_CLASSIC_MODEL # Enforce strict netcdf-3 rules.
# Use these 'mode' flags for both nc_create and nc_open.
-NC_SHARE # Share updates, limit cacheing
+NC_SHARE # Share updates, limit caching
# The following flag currently is ignored, but use in
# nc_open() or nc_create() may someday support use of advisory
# locking to prevent multiple writers from clobbering a file
@@ -111,7 +111,7 @@ cdef extern from "netcdf.h":
NC_FILL
NC_NOFILL
# Starting with version 3.6, there are different format netCDF
-# files. 4.0 instroduces the third one. These defines are only for
+# files. 4.0 introduces the third one. These defines are only for
# the nc_set_default_format function.
NC_FORMAT_CLASSIC
NC_FORMAT_64BIT
2 changes: 1 addition & 1 deletion setup.cfg
@@ -32,7 +32,7 @@ use_ncconfig=True
# use szip_libdir and szip_incdir.
#szip_dir = /usr/local
# if netcdf lib was build statically with HDF4 support,
-# uncomment and set to hdf4 lib (libmfhdf and libdf) nstall location.
+# uncomment and set to hdf4 lib (libmfhdf and libdf) install location.
# If the libraries and include files are installed in separate locations,
# use hdf4_libdir and hdf4_incdir.
#hdf4_dir = /usr/local
2 changes: 1 addition & 1 deletion setup.py
@@ -182,7 +182,7 @@ def extract_version(CYTHON_FNAME):
HAS_NCCONFIG = False

# make sure USE_NCCONFIG from environment takes
-# precendence over use_ncconfig from setup.cfg (issue #341).
+# precedence over use_ncconfig from setup.cfg (issue #341).
if use_ncconfig and not USE_NCCONFIG:
USE_NCCONFIG = use_ncconfig
elif not USE_NCCONFIG:
8 changes: 4 additions & 4 deletions src/netCDF4/_netCDF4.pyx
@@ -285,7 +285,7 @@ and whether it is unlimited.
```
`Dimension` names can be changed using the
-`Datatset.renameDimension` method of a `Dataset` or
+`Dataset.renameDimension` method of a `Dataset` or
`Group` instance.
## Variables in a netCDF file
@@ -997,7 +997,7 @@ use the `parallel` keyword to enable parallel access.
The optional `comm` keyword may be used to specify a particular
MPI communicator (`MPI_COMM_WORLD` is used by default). Each process (or rank)
-can now write to the file indepedently. In this example the process rank is
+can now write to the file independently. In this example the process rank is
written to a different variable index on each task
```python
@@ -5625,7 +5625,7 @@ of the the rightmost dimension of the variable). The value of `_Encoding`
is the unicode encoding that is used to decode the bytes into strings.
When numpy string data is written to a variable it is converted back to
-indiviual bytes, with the number of bytes in each string equalling the
+individual bytes, with the number of bytes in each string equalling the
rightmost dimension of the variable.
The default value of `chartostring` is `True`
@@ -6751,7 +6751,7 @@ Will be converted to a array of strings, where each string has a fixed
length of `b.shape[-1]` characters.
optional kwarg `encoding` can be used to specify character encoding (default
-`utf-8`). If `encoding` is 'none' or 'bytes', a `numpy.string_` btye array is
+`utf-8`). If `encoding` is 'none' or 'bytes', a `numpy.string_` byte array is
returned.
returns a numpy string array with datatype `'UN'` (or `'SN'`) and shape
2 changes: 1 addition & 1 deletion test/test_alignment.py
@@ -134,7 +134,7 @@ def test_setting_alignment(self):
for line in h5ls_results.split('\n'):
if not line.startswith(' '):
data_variable = line.split(' ')[0]
-# only process the data variables we care to inpsect
+# only process the data variables we care to inspect
if data_variable not in addresses:
continue
line = line.strip()
