Releases: descarteslabs/descarteslabs-python
v2.1.1
Compute
- Filtering on datetime attributes (such as `Function.created_at`) previously worked only with `datetime` instances. Now it also handles ISO format strings and unix timestamps (int or float).
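The accepted input forms can be illustrated with a small normalization helper (a sketch only; `normalize_datetime` is a hypothetical name, not part of the client):

```python
from datetime import datetime, timezone

def normalize_datetime(value):
    """Coerce a datetime, ISO-format string, or unix timestamp (int or
    float) into a datetime, mirroring the filter inputs now accepted."""
    if isinstance(value, datetime):
        return value
    if isinstance(value, str):
        return datetime.fromisoformat(value)
    if isinstance(value, (int, float)):
        return datetime.fromtimestamp(value, tz=timezone.utc)
    raise TypeError(f"unsupported datetime value: {value!r}")
```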
v2.1.0
General
- Following our lifecycle policy, client versions v1.11.0 and earlier are no longer supported. They may
cease to work with the Platform at any time.
Catalog
- The Catalog `Blob` class now has a `get_data()` method which can be used to retrieve the blob data directly given the id, without having to first retrieve the `Blob` metadata.
Compute
- **Breaking Change** The status values for `Function` and `Job` objects have changed, to provide a better experience managing the flow of jobs. Please see the updated Compute guide for a full explanation. Because of the required changes to the back end, older clients (i.e. v2.0.3) are supported in a best-effort manner. Upgrading to this new client release is strongly advised for all users of the Compute service.
- **Breaking Change** The base images for Compute have been put on a diet. They are now themselves built from "slim" Python images, and they no longer include the wide variety of extra Python packages that were formerly included (e.g. TensorFlow, scikit-learn, PyTorch). This has reduced the base image size by an order of magnitude, making function build times and job startup overhead commensurately faster. Any functions which require such additional packages can add them as needed via the `requirements=` parameter. While doing so will increase image size, it will generally still be much smaller and faster than the prior "Everything and the kitchen sink" approach. Existing `Function`s with older images will continue to work as always, but any newly minted `Function` using the new client will use one of the new slim images.
- Base images are now available for Python 3.10 and Python 3.11, in addition to Python 3.8 and Python 3.9.
- Job results and logs are now integrated with Catalog Storage, so that results and logs can be searched and retrieved directly using the Catalog client as well as using the methods in the Compute client. Results are organized under `storage_type=StorageType.COMPUTE`, while logs are organized under `storage_type=StorageType.LOGS`.
- The new `ComputeResult` class can be used to wrap results from a `Function`, allowing the user to specify additional attributes for the result which will be stored in the Catalog `Blob` metadata for the result. This allows the function to specify properties such as `geometry`, `description`, `expires`, `extra_attributes`, `writers` and `readers` for the result `Blob`. The use of `ComputeResult` is not required.
- A `Job` can now be assigned arbitrary tags (strings), and searched based on them.
- A `Job` can now be retried on errors, and jobs track error reasons, exit codes, and execution counts.
- `Function` and `Job` objects can now be filtered by class attributes (e.g. `Job.search().filter(Job.status == JobStatus.PENDING).collect()`).
- The `Job.cancel()` method can now be used to cancel the execution of a job which is currently pending or running. Pending jobs will immediately transition to `JobStatus.CANCELED` status, while running jobs will pass through `JobStatus.CANCEL` (waiting for the cancelation to be signaled to the execution engine), `JobStatus.CANCELING` (waiting for the execution to terminate), and `JobStatus.CANCELED` (once the job is no longer executing). Cancelation of running jobs is not guaranteed; a job may terminate successfully, or with a failure or timeout, before it can be canceled.
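The cancel flow above is effectively a small state machine, which can be sketched as follows (only the status names come from these notes; the string values and the `cancel_transition` helper are illustrative, not the client's implementation):

```python
from enum import Enum

class JobStatus(str, Enum):
    # Illustrative values; only the names mirror the release notes.
    PENDING = "pending"
    RUNNING = "running"
    CANCEL = "cancel"          # cancelation requested, not yet signaled
    CANCELING = "canceling"    # signaled, waiting for execution to stop
    CANCELED = "canceled"      # no longer executing

def cancel_transition(status: JobStatus) -> JobStatus:
    """One step of the cancel flow: pending jobs are canceled
    immediately, running jobs begin the handshake, and a cancel
    already in progress advances one stage."""
    transitions = {
        JobStatus.PENDING: JobStatus.CANCELED,
        JobStatus.RUNNING: JobStatus.CANCEL,
        JobStatus.CANCEL: JobStatus.CANCELING,
        JobStatus.CANCELING: JobStatus.CANCELED,
    }
    return transitions.get(status, status)
```

A real running job may still finish (successfully or not) before the handshake completes, which is why cancelation is not guaranteed.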
- The `Job.result()` method will raise an exception if the job does not have a status of `JobStatus.SUCCESS`. If `Job.result()` yields a `None` value, this means that there was no result (i.e. the execution returned `None`).
- The `Job.result_blob()` method will return the Catalog Storage `Blob` holding the result, if any.
- The `Job.delete()` method will delete any job logs, but will not delete the job result unless the `delete_results` parameter is supplied.
- The `Function` object now has attributes `namespace` and `owner`.
- The `Function.wait_for_completion()` and new `Function.as_completed()` methods provide a richer set of functionality for waiting on and handling job completion.
- The `Function.build_log()` method now returns the log contents as a string, rather than printing the log contents.
- The `Job.log()` method now returns the log contents as a list of strings, rather than printing the log contents. Because logs can be unbounded in size, there is also a new `Job.iter_log()` method which returns an iterator over the log lines.
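The trade-off between the two log accessors can be sketched generically (`read_log` and `iter_log_lines` are hypothetical stand-ins, not the client API):

```python
def read_log(lines):
    """Materialize every log line at once, as Job.log() now does."""
    return [line.rstrip("\n") for line in lines]

def iter_log_lines(lines):
    """Yield log lines lazily, as Job.iter_log() does; memory use
    stays bounded no matter how large the log grows."""
    for line in lines:
        yield line.rstrip("\n")
```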
- The `requirements=` parameter to `Function` objects now supports more `pip` magic, allowing the use of special `pip` controls such as `-f`. Also, parsing of package versions has been loosened to allow some more unusual version designators.
- Changes to the `Function.map()` method: the parameter `iterargs` has been renamed to `kwargs` (the old name is still honored but deprecated), the documentation has been corrected, and enhancements support more general iterators and mappings, allowing for a more functional programming style.
- The compute package was restructured to make all the useful and relevant classes available at the top level.
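The more functional mapping style can be sketched generically; `map_jobs` below is a hypothetical stand-in for `Function.map`, drawing positional arguments from an iterable of tuples and per-call keyword arguments from a mapping of name to iterable:

```python
def map_jobs(func, iterargs, kwargs=None):
    """Invoke func once per element of iterargs (each element a tuple
    of positional arguments), zipping in per-call keyword arguments
    taken from the kwargs mapping of name -> iterable."""
    kwargs = kwargs or {}
    keys = list(kwargs)
    for args, *kwvalues in zip(iterargs, *kwargs.values()):
        yield func(*args, **dict(zip(keys, kwvalues)))
```

For example, `list(map_jobs(lambda x, y, scale=1: (x + y) * scale, [(1, 2), (3, 4)], {"scale": [10, 100]}))` yields `[30, 700]`.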
Utils
- Property filters can now be deserialized as well as serialized.
v2.1.0rc1
[2.1.0] - 2023-09-21
General
- Following our lifecycle policy, client versions v1.11.0 and earlier are no longer supported. They may
cease to work with the Platform at any time.
Catalog
- The Catalog `Blob` class now has a `get_data()` method which can be used to retrieve the blob data directly given the id, without having to first retrieve the `Blob` metadata.
Compute
- **Breaking Change** The status values for `Function` and `Job` objects have changed, to provide a better experience managing the flow of jobs. Please see the updated Compute guide for a full explanation. Because of the required changes to the back end, older clients (i.e. v2.0.3) are supported in a best-effort manner. Upgrading to this new client release is strongly advised for all users of the Compute service.
- **Breaking Change** The base images for Compute have been put on a diet. They are now themselves built from "slim" Python images, and they no longer include the wide variety of extra Python packages that were formerly included (e.g. TensorFlow, scikit-learn, PyTorch). This has reduced the base image size by an order of magnitude, making function build times and job startup overhead commensurately faster. Any functions which require such additional packages can add them as needed via the `requirements=` parameter. While doing so will increase image size, it will generally still be much better than the prior "Everything and the kitchen sink" approach. Existing `Function`s with older images will continue to work as always, but any newly minted `Function` using the new client will use one of the new slim images.
- Base images are now available for Python 3.10 and Python 3.11, in addition to Python 3.8 and Python 3.9.
- Job results and logs are now integrated with Catalog Storage, so that results and logs can be searched and retrieved directly using the Catalog client as well as using the methods in the Compute client.
- The new `ComputeResult` class can be used to wrap results from a `Function`, allowing the user to specify additional attributes for the result which will be stored in the Catalog `Blob` metadata for the result. This allows the function to specify properties such as `geometry`, `description`, `expires` and `extra_attributes` for the result `Blob`. The use of `ComputeResult` is not required.
- A `Job` can now be assigned arbitrary tags (strings), and searched based on them.
- A `Job` can now be retried on errors, and jobs track error reasons, exit codes, and execution counts.
- `Function` and `Job` objects can now be filtered by class attributes (e.g. `Job.search().filter(Job.status == JobStatus.PENDING).collect()`).
- The `Job.cancel()` method can now be used to cancel the execution of a job which is currently pending or running. Pending jobs will immediately transition to `JobStatus.CANCELED` status, while running jobs will pass through `JobStatus.CANCEL` (waiting for the cancelation to be signaled to the execution engine), `JobStatus.CANCELING` (waiting for the execution to terminate), and `JobStatus.CANCELED` (once the job is no longer executing). Cancelation of running jobs is not guaranteed; a job may terminate successfully, or with a failure or timeout, before it can be canceled.
- The `Job.result()` method will raise an exception if the job does not have a status of `JobStatus.SUCCESS`. If `Job.result()` yields a `None` value, this means that there was no result (i.e. the execution returned `None`).
- The `Job.result_blob()` method will return the Catalog Storage `Blob` holding the result, if any.
- The `Function` object now has attributes `namespace` and `owner`.
- The `Function.wait_for_completion()` and new `Function.as_completed()` methods provide a richer set of functionality for waiting on and handling job completion.
- The `Function.build_log()` method now returns the log contents as a string, rather than printing the log contents.
- The `Job.log()` method now returns the log contents as a list of strings, rather than printing the log contents. Because logs can be unbounded in size, there is also a new `Job.iter_log()` method which returns an iterator over the log lines.
- The `requirements=` parameter to `Function` objects now supports more `pip` magic, allowing the use of special `pip` controls such as `-f`. Also, parsing of package versions has been loosened to allow some more unusual version designators.
- Changes to the `Function.map()` method: the parameter `iterargs` has been renamed to `kwargs` (the old name is still honored but deprecated), the documentation has been corrected, and enhancements support more general iterators and mappings, allowing for a more functional programming style.
- The compute package was restructured to make all the useful and relevant classes available at the top level.
Utils
- Property filters can now be deserialized as well as serialized.
v2.1.0rc0
General
- Following our lifecycle policy, client versions v1.11.0 and earlier are no longer supported. They may
cease to work with the Platform at any time.
Catalog
- The Catalog `Blob` class now has a `get_data()` method which can be used to retrieve the blob data directly given the id, without having to first retrieve the `Blob` metadata.
Compute
- **Breaking Change** The base images for Compute have been put on a diet. They are now themselves built from "slim" Python images, and they no longer include the wide variety of extra Python packages that were formerly included (e.g. TensorFlow, scikit-learn, PyTorch). This has reduced the base image size by an order of magnitude, making function build times and job startup overhead commensurately faster. Any functions which require such additional packages can add them as needed via the `requirements=` parameter. While doing so will increase image size, it will generally still be much better than the prior "Everything and the kitchen sink" approach. Existing `Function`s with older images will continue to work as always, but any newly minted `Function` using the new client will use one of the new slim images.
- Base images are now available for Python 3.10 and Python 3.11, in addition to Python 3.8 and Python 3.9.
- Job results and logs are now integrated with Catalog Storage, so that results and logs can be searched and retrieved directly using the Catalog client as well as using the methods in the Compute client.
- The new `ComputeResult` class can be used to wrap results from a `Function`, allowing the user to specify additional attributes for the result which will be stored in the Catalog `Blob` metadata for the result. This allows the function to specify properties such as `geometry`, `description`, `expires` and `extra_attributes` for the result `Blob`. The use of `ComputeResult` is not required.
- A `Job` can now be assigned arbitrary tags (strings), and searched based on them.
- A `Job` can now be retried on errors, and jobs track error reasons, exit codes, and execution counts.
- `Function` and `Job` objects can now be filtered by class attributes (e.g. `Job.search().filter(Job.status == JobStatus.PENDING).collect()`).
- The `requirements=` parameter to `Function` objects now supports more `pip` magic, allowing the use of special `pip` controls such as `-f`. Also, parsing of package versions has been loosened to allow some more unusual version designators.
- Changes to the `Function.map()` method: the parameter `iterargs` has been renamed to `kwargs` (the old name is still honored but deprecated), the documentation has been corrected, and enhancements support more general iterators and mappings, allowing for a more functional programming style.
- The compute package was restructured to make all the useful and relevant classes available at the top level.
Utils
- Property filters can now be deserialized as well as serialized.
v2.0.3
Compute
- Allow deletion of `Function` objects.
  - Deleting a `Function` will delete all associated `Job`s.
- Allow deletion of `Job` objects.
  - Deleting a `Job` will delete all associated resources (logs, results, etc.).
- Added attribute filter to `Function` and `Job` objects.
  - Attributes marked `filterable=True` can be used to filter objects on the compute backend API.
  - Minor optimization to `Job.iter_results`, which now uses backend filters to load successful jobs.
- `Function` bundling has been enhanced.
  - New `include_modules` and `include_data` parameters allow for multiple other modules, non-code data files, etc. to be added to the code bundle.
  - The `requirements` parameter has been improved to allow a user to pass a path to their own `requirements.txt` file instead of a list of strings.
v2.0.2
Catalog
- Allow data type `int32` in geotiff downloads.
- `BlobCollection` now importable from `descarteslabs.catalog`.
Documentation
- Added API documentation for dynamic compute and vector.
v2.0.1
Raster
- Due to recent changes in `urllib3`, rastering operations were failing to retry certain errors which ought to be retried, causing more failures to propagate to the user than was desirable. This is now fixed.
v2.0.0
(Release notes from all the 2.0.0 release candidates are summarized here for completeness.)
Supported platforms
- Deprecated support for Python 3.7 (which reaches end of life in July).
- Added support for Python 3.10 and Python 3.11.
- AWS-only client. For the time being, the AWS client can be used to communicate with the legacy GCP platform (e.g. `DESCARTESLABS_ENV=gcp-production`), but it only supports those services that are supported on AWS (`catalog` and `scenes`). This support may break at any point in the future, so it is strictly transitional.
Dependencies
- Removed many dependencies no longer required due to the removal of GCP-only features.
- Added support for Shapely 2.X. Note that user code may also be affected by breaking changes in Shapely 2.X. Use of Shapely 1.8 is still supported.
- Updated requirements to avoid `urllib3>=2.0.0`, which breaks all kinds of things.
Configuration
- Major overhaul of the internals of the config process. To support other clients using namespaced packages within the `descarteslabs` package, the top level has been cleaned up, and almost all of the real code now lives inside `descarteslabs.core`. End users should never have to import anything from `descarteslabs.core`. No more magic packages means that `pylint` will work well with code using `descarteslabs`.
- Configuration no longer depends upon the authorized user.
Catalog
- Added support for data storage. The `Blob` class provides a mechanism to upload, index, share, and retrieve arbitrary byte sequences (e.g. files). `Blob`s can be searched by namespace and name, geospatial coordinates (points, polygons, etc.), and tags. `Blob`s can be downloaded to a local file, or retrieved directly as a Python `bytes` object. `Blob`s support the same sharing mechanisms as `Product`s, with `owners`, `writers`, and `readers` attributes.
- Added support to `Property` for `prefix` filtering.
- The default geocontext for image objects no longer specifies a `resolution` but rather a `shape`, to ensure that default rastering preserves the original data and alignment (i.e. no warping of the source image).
- As with `resolution`, you can now pass a `crs` parameter to the rastering methods (e.g. `Image.ndarray`, `ImageCollection.stack`, etc.) to override the `crs` of the default geocontext.
- A bug in the code handling the default context for image collections when working with a product with a CRS based on degrees rather than meters has been fixed. Resolutions should always be specified in the units used by the CRS.
Compute
- Added support for managed batch compute under the `compute` module.
Raster Client
- Fixed a bug in the handling of small blocks (less than 512 x 512) that caused rasterio to generate bad download files (the desired image block would appear as a smaller sub-block rather than filling the resulting raster).
Geo
- The defaulting of `align_pixels` has changed slightly for the `AOI` class. Previously it always defaulted to `True`. Now the default is `True` if `resolution` is set, `False` otherwise. This ensures that when specifying a `shape` and a `bounds` rather than a resolution, the `shape` is actually honored.
- When assigning a `resolution` to an `AOI`, any existing `shape` attribute is automatically unset, since the two attributes are mutually exclusive.
- The validation of bounds for a geographic CRS has been slightly modified to account for some of the irregularities of whole-globe image products, correcting unintended failures in the past.
- Fixed a problem handling MultiPolygon and GeometryCollection when using Shapely 2.0.
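The new `align_pixels` defaulting rule is simple enough to state as a one-line predicate (a sketch; `default_align_pixels` is a hypothetical helper illustrating the rule, not the `AOI` API):

```python
def default_align_pixels(resolution=None):
    # align_pixels now defaults to True only when a resolution is
    # given, so an AOI built from shape and bounds keeps its exact
    # shape instead of being warped by pixel alignment.
    return resolution is not None
```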
v2.0.0rc5
Catalog
- Loosen up the restrictions on the allowed alphabet for `Blob` names. Now almost any printable character is accepted, save for newlines and commas.
- Added new storage types for `Blob`s: `StorageType.COMPUTE` (for Compute job results) and `StorageType.DYNCOMP` (for saved `dynamic-compute` operations).
Compute
- Added testing of the client.
v2.0.0rc4
Catalog
- The defaulting of the `namespace` value for `Blob`s has changed slightly. If no namespace is specified, it will default to `<org>:<hash>` with the user's org name and unique user hash. Otherwise, any other value, as before, will be prefixed with the user's org name if it isn't already so.
- `Blob.get` no longer requires a full id. Alternatively, you can give it a `name` and optionally a `namespace` and a `storage_type`, and it will retrieve the `Blob`.
- Fixed a bug causing summaries of `Blob` searches to fail.
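The namespace defaulting rule can be sketched as follows (`resolve_namespace` is a hypothetical helper illustrating the rule as described above, not the client's implementation):

```python
def resolve_namespace(namespace, org, user_hash):
    """Apply the Blob namespace defaulting rule: no namespace means
    '<org>:<hash>'; any other value is prefixed with the org name
    unless it already carries it."""
    if not namespace:
        return f"{org}:{user_hash}"
    if namespace == org or namespace.startswith(f"{org}:"):
        return namespace
    return f"{org}:{namespace}"
```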
Compute
- `Function.map` and `Function.rerun` now save the created `Job`s before returning.
- `Job.get` return values fixed, and removed an extraneous debug print.
General
- Updated requirements to avoid `urllib3>=2.0.0`, which breaks all kinds of things.