Releases: descarteslabs/descarteslabs-python

v2.1.1

16 Oct 19:19

Compute

  • Filtering on datetime attributes (such as Function.created_at) previously worked only with datetime
    instances. It now also accepts ISO-format strings and Unix timestamps (int or float).
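
    For example, a minimal sketch of filtering by creation date with an ISO-format string (the date value
    is illustrative, and this assumes the usual comparison operators apply to filterable datetime attributes):

        from descarteslabs.compute import Function

        # The filter value may now be a datetime, an ISO-format string (as here),
        # or a unix timestamp (int or float).
        recent = (
            Function.search()
            .filter(Function.created_at >= "2023-10-01T00:00:00Z")
            .collect()
        )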

v2.1.0

05 Oct 02:20

General

  • Following our lifecycle policy, client versions v1.11.0 and earlier are no longer supported. They may
    cease to work with the Platform at any time.

Catalog

  • The Catalog Blob class now has a get_data() method which can be used to retrieve the blob
    data directly given the id, without having to first retrieve the Blob metadata.
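
    For example, a minimal sketch (the blob id shown is hypothetical):

        from descarteslabs.catalog import Blob

        # Retrieve the raw bytes directly by id, without first fetching the Blob metadata.
        data = Blob.get_data("data/myorg:userhash/my-blob")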

Compute

  • Breaking Change The status values for Function and Job objects have changed, to provide a
    better experience managing the flow of jobs. Please see the updated Compute guide for a full explanation.
    Because of the required changes to the back end, older clients (i.e. v2.0.3) are supported in a
    best effort manner. Upgrading to this new client release is strongly advised for all users of the
    Compute service.

  • Breaking Change The base images for Compute have been put on a diet. They are now themselves built
    from "slim" Python images, and they no longer include the wide variety of extra Python packages that were
    formerly included (e.g. TensorFlow, scikit-learn, PyTorch). This has reduced the base image size by
    an order of magnitude, making function build times and job startup overhead commensurately faster.
    Any functions which require such additional packages can add them as needed via the requirements=
    parameter. While doing so will increase image size, the result will generally still be much smaller and faster
    than the prior "everything and the kitchen sink" approach. Existing Functions with older images will continue
    to work as always, but any newly minted Function created with the new client will use one of the new
    slim images.
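
    For example, a minimal sketch of adding packages back in when creating a Function (the image tag and
    package pins are illustrative):

        from descarteslabs.compute import Function

        def predict(x):
            import torch  # available inside the job because it is listed in requirements=
            return float(torch.tensor([x]).sum())

        async_func = Function(
            predict,
            name="predict-with-torch",
            image="python3.10:latest",  # illustrative slim base image tag
            requirements=["torch==2.0.1", "scikit-learn"],
        )
        async_func.save()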

  • Base images are now available for Python 3.10 and Python 3.11, in addition to Python 3.8 and Python 3.9.

  • Job results and logs are now integrated with Catalog Storage, so that results and logs can be
    searched and retrieved directly using the Catalog client as well as using the methods in the Compute
    client. Results are organized under storage_type=StorageType.COMPUTE, while logs are organized under
    storage_type=StorageType.LOGS.
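
    For example, a minimal sketch of finding Compute results through the Catalog client (this assumes
    storage_type can be filtered via the catalog properties helper):

        from descarteslabs.catalog import Blob, StorageType, properties as p

        # Job results are stored under StorageType.COMPUTE; job logs under StorageType.LOGS.
        result_blobs = Blob.search().filter(p.storage_type == StorageType.COMPUTE).collect()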

  • The new ComputeResult class can be used to wrap results from a Function, allowing the user to
    specify additional attributes for the result which will be stored in the Catalog Blob metadata for
    the result. This allows the function to specify properties such as geometry, description,
    expires, extra_attributes, writers and readers for the result Blob. The use of
    ComputeResult is not required.
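
    For example, a minimal sketch of a Function body wrapping its return value (the attribute values are
    illustrative, and ComputeResult is assumed to take the result value as its first argument):

        from descarteslabs.compute import ComputeResult

        def process(tile_key):
            value = {"tile_key": tile_key, "score": 0.5}  # the actual result payload
            return ComputeResult(
                value,
                geometry={"type": "Point", "coordinates": [-105.0, 40.0]},
                description="example result",
                extra_attributes={"tile_key": tile_key},
            )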

  • A Job can now be assigned arbitrary tags (strings), and searched based on them.

  • A Job can now be retried on errors, and jobs track error reasons, exit codes, and execution counts.

  • Function and Job objects can now be filtered by class attributes (ex.
    Job.search().filter(Job.status == JobStatus.PENDING).collect()).

  • The Job.cancel() method can now be used to cancel the execution of a job which is currently
    pending or running. Pending jobs will immediately transition to JobStatus.CANCELED status,
    while running jobs will pass through JobStatus.CANCEL (waiting for the cancelation to be
    signaled to the execution engine), JobStatus.CANCELING (waiting for the execution to terminate),
    and JobStatus.CANCELED (once the job is no longer executing). Cancelation of running jobs is
    not guaranteed; a job may terminate successfully, or with a failure or timeout, before it can
    be canceled.
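
    For example, a minimal sketch (the job id is hypothetical):

        from descarteslabs.compute import Job

        job = Job.get("some-job-id")
        job.cancel()
        # A pending job goes straight to CANCELED; a running job may pass through CANCEL
        # and CANCELING first, and cancelation of a running job is not guaranteed.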

  • The Job.result() method will raise an exception if the job does not have a status of
    JobStatus.SUCCESS. If Job.result() returns None, there was no result (i.e. the execution itself
    returned None).

  • The Job.result_blob() method will return the Catalog Storage Blob holding the result, if any.
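
    For example, a minimal sketch combining the two (the job id is hypothetical and the job is assumed to
    have already finished):

        from descarteslabs.compute import Job, JobStatus

        job = Job.get("some-job-id")
        if job.status == JobStatus.SUCCESS:
            value = job.result()      # None here means the function itself returned None
            blob = job.result_blob()  # the Catalog Storage Blob holding the result, if any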

  • The Job.delete() method will delete any job logs, but will not delete the job result unless
    the delete_results parameter is supplied.
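
    For example, a minimal sketch (the job id is hypothetical):

        from descarteslabs.compute import Job

        job = Job.get("some-job-id")
        job.delete(delete_results=True)  # remove the result blob as well as the job logs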

  • The Function object now has attributes namespace and owner.

  • The Function.wait_for_completion() and new Function.as_completed() methods provide a richer
    set of functionality for waiting on and handling job completion.
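
    For example, a minimal sketch of handling jobs as they finish (the function id is hypothetical, and
    as_completed() is assumed to accept an iterable of jobs and yield each one as it reaches a terminal state):

        from descarteslabs.compute import Function, JobStatus

        async_func = Function.get("some-function-id")
        jobs = async_func.map([(i,) for i in range(10)])  # argument shape is illustrative
        for job in async_func.as_completed(jobs):
            if job.status == JobStatus.SUCCESS:
                print(job.id, job.result())
            else:
                print(job.id, "finished with status", job.status)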

  • The Function.build_log() method now returns the log contents as a string, rather than printing
    the log contents.

  • The Job.log() method now returns the log contents as a list of strings, rather than printing the log
    contents. Because logs can be unbounded in size, there's also a new Job.iter_log() method which returns
    an iterator over the log lines.
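
    For example, a minimal sketch (the job id is hypothetical):

        from descarteslabs.compute import Job

        job = Job.get("some-job-id")
        lines = job.log()            # the full log as a list of strings
        for line in job.iter_log():  # or stream line by line for very large logs
            print(line)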

  • The requirements= parameter to Function objects now supports more pip magic, allowing the use
    of special pip controls such as -f. Also parsing of package versions has been loosened to allow
    some more unusual version designators.

  • The Function.map() method has changed: the iterargs parameter has been renamed to kwargs (the old
    name is still honored but deprecated), the documentation has been corrected, and the method now supports
    more general iterators and mappings, allowing for a more functional programming style.
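
    For example, a minimal sketch (the function id is hypothetical, and kwargs is assumed to take an
    iterable of keyword-argument mappings aligned with the positional arguments):

        from descarteslabs.compute import Function

        async_func = Function.get("some-function-id")
        args = [(1,), (2,), (3,)]
        kwargs = [{"scale": 10}, {"scale": 20}, {"scale": 30}]
        jobs = async_func.map(args, kwargs=kwargs)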

  • The compute package was restructured to make all the useful and relevant classes available at the top level.

Utils

  • Property filters can now be deserialized as well as serialized.

v2.1.0rc1

03 Oct 17:53
Pre-release

[2.1.0] - 2023-09-21

General

  • Following our lifecycle policy, client versions v1.11.0 and earlier are no longer supported. They may
    cease to work with the Platform at any time.

Catalog

  • The Catalog Blob class now has a get_data() method which can be used to retrieve the blob
    data directly given the id, without having to first retrieve the Blob metadata.

Compute

  • Breaking Change The status values for Function and Job objects have changed, to provide a
    better experience managing the flow of jobs. Please see the updated Compute guide for a full explanation.
    Because of the required changes to the back end, older clients (i.e. v2.0.3) are supported in a
    best effort manner. Upgrading to this new client release is strongly advised for all users of the
    Compute service.

  • Breaking Change The base images for Compute have been put on a diet. They are now themselves built
    from "slim" Python images, and they no longer include the wide variety of extra Python packages that were
    formerly included (e.g. TensorFlow, scikit-learn, PyTorch). This has reduced the base image size by
    an order of magnitude, making function build times and job startup overhead commensurately faster.
    Any functions which require such additional packages can add them in as needed via the requirements=
    parameter. While doing so will increase image size, it will generally still be much better than the prior
    "Everything and the kitchen sink" approach. Existing Functions with older images will continue
    to work as always, but any newly minted Function created with the new client will use one of the new
    slim images.

  • Base images are now available for Python 3.10 and Python 3.11, in addition to Python 3.8 and Python 3.9.

  • Job results and logs are now integrated with Catalog Storage, so that results and logs can be
    searched and retrieved directly using the Catalog client as well as using the methods in the Compute
    client.

  • The new ComputeResult class can be used to wrap results from a Function, allowing the user to
    specify additional attributes for the result which will be stored in the Catalog Blob metadata for
    the result. This allows the function to specify properties such as geometry, description,
    expires and extra_attributes for the result Blob. The use of ComputeResult is not required.

  • A Job can now be assigned arbitrary tags (strings), and searched based on them.

  • A Job can now be retried on errors, and jobs track error reasons, exit codes, and execution counts.

  • Function and Job objects can now be filtered by class attributes (ex.
    Job.search().filter(Job.status == JobStatus.PENDING).collect()).

  • The Job.cancel() method can now be used to cancel the execution of a job which is currently
    pending or running. Pending jobs will immediately transition to JobStatus.CANCELED status,
    while running jobs will pass through JobStatus.CANCEL (waiting for the cancelation to be
    signaled to the execution engine), JobStatus.CANCELING (waiting for the execution to terminate),
    and JobStatus.CANCELED (once the job is no longer executing). Cancelation of running jobs is
    not guaranteed; a job may terminate successfully, or with a failure or timeout, before it can
    be canceled.

  • The Job.result() method will raise an exception if the job does not have a status of
    JobStatus.SUCCESS. If Job.result() returns None, there was no result (i.e. the execution itself
    returned None).

  • The Job.result_blob() method will return the Catalog Storage Blob holding the result, if any.

  • The Function object now has attributes namespace and owner.

  • The Function.wait_for_completion() and new Function.as_completed() methods provide a richer
    set of functionality for waiting on and handling job completion.

  • The Function.build_log() method now returns the log contents as a string, rather than printing
    the log contents.

  • The Job.log() method now returns the log contents as a list of strings, rather than printing the log
    contents. Because logs can be unbounded in size, there's also a new Job.iter_log() method which returns
    an iterator over the log lines.

  • The requirements= parameter to Function objects now supports more pip magic, allowing the use
    of special pip controls such as -f. Also parsing of package versions has been loosened to allow
    some more unusual version designators.

  • The Function.map() method has changed: the iterargs parameter has been renamed to kwargs (the old
    name is still honored but deprecated), the documentation has been corrected, and the method now supports
    more general iterators and mappings, allowing for a more functional programming style.

  • The compute package was restructured to make all the useful and relevant classes available at the top level.

Utils

  • Property filters can now be deserialized as well as serialized.

v2.1.0rc0

20 Sep 16:17
Pre-release

General

  • Following our lifecycle policy, client versions v1.11.0 and earlier are no longer supported. They may
    cease to work with the Platform at any time.

Catalog

  • The Catalog Blob class now has a get_data() method which can be used to retrieve the blob
    data directly given the id, without having to first retrieve the Blob metadata.

Compute

  • Breaking Change The base images for Compute have been put on a diet. They are now themselves built
    from "slim" Python images, and they no longer include the wide variety of extra Python packages that were
    formerly included (e.g. TensorFlow, scikit-learn, PyTorch). This has reduced the base image size by
    an order of magnitude, making function build times and job startup overhead commensurately faster.
    Any functions which require such additional packages can add them in as needed via the requirements=
    parameter. While doing so will increase image size, it will generally still be much better than the prior
    "Everything and the kitchen sink" approach. Existing Functions with older images will continue
    to work as always, but any newly minted Function created with the new client will use one of the new
    slim images.

  • Base images are now available for Python 3.10 and Python 3.11, in addition to Python 3.8 and Python 3.9.

  • Job results and logs are now integrated with Catalog Storage, so that results and logs can be
    searched and retrieved directly using the Catalog client as well as using the methods in the Compute
    client.

  • The new ComputeResult class can be used to wrap results from a Function, allowing the user to
    specify additional attributes for the result which will be stored in the Catalog Blob metadata for
    the result. This allows the function to specify properties such as geometry, description,
    expires and extra_attributes for the result Blob. The use of ComputeResult is not required.

  • A Job can now be assigned arbitrary tags (strings), and searched based on them.

  • A Job can now be retried on errors, and jobs track error reasons, exit codes, and execution counts.

  • Function and Job objects can now be filtered by class attributes (ex. Job.search().filter(Job.status == JobStatus.PENDING).collect()).

  • The requirements= parameter to Function objects now supports more pip magic, allowing the use
    of special pip controls such as -f. Also parsing of package versions has been loosened to allow
    some more unusual version designators.

  • The Function.map() method has changed: the iterargs parameter has been renamed to kwargs (the old
    name is still honored but deprecated), the documentation has been corrected, and the method now supports
    more general iterators and mappings, allowing for a more functional programming style.

  • The compute package was restructured to make all the useful and relevant classes available at the top level.

Utils

  • Property filters can now be deserialized as well as serialized.

v2.0.3

13 Jul 23:10

Compute

  • Allow deletion of Function objects.
    • Deleting a Function will delete all associated Jobs.
  • Allow deletion of Job objects.
    • Deleting a Job will delete all associated resources (logs, results, etc.).
  • Added attribute filter to Function and Job objects.
    • Attributes marked filterable=True can be used to filter objects on the Compute backend API.
    • Minor optimization to Job.iter_results which now uses backend filters to load successful jobs.
  • Function bundling has been enhanced.
    • New include_modules and include_data parameters allow for multiple other modules, non-code data files, etc. to be added to the code bundle.
    • The requirements parameter has been improved to allow a user to pass a path to their own requirements.txt file instead of a list of strings.
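
    For example, a minimal sketch of bundling extra modules and data files (the module, file, and image
    names are illustrative):

        from descarteslabs.compute import Function

        def handler(scene_id):
            from mypackage import helpers  # bundled via include_modules
            return helpers.process(scene_id)

        async_func = Function(
            handler,
            name="bundled-function",
            image="python3.9:latest",  # illustrative image tag
            include_modules=["mypackage.helpers"],
            include_data=["data/lookup.json"],
            requirements="requirements.txt",  # a path to a requirements file now works too
        )
        async_func.save()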

v2.0.2

26 Jun 22:12

Catalog

  • Allow data type int32 in GeoTIFF downloads.
  • BlobCollection now importable from descarteslabs.catalog.

Documentation

  • Added API documentation for Dynamic Compute and Vector.

v2.0.1

14 Jun 16:12

Raster

  • Due to recent changes in urllib3, rastering operations were failing to retry certain errors which ought to be retried, causing more failures to propagate to the user than was desirable. This is now fixed.

v2.0.0

12 Jun 19:59

(Release notes from all the 2.0.0 release candidates are summarized here for completeness.)

Supported platforms

  • Deprecated support for Python 3.7 (it reaches end of life in July).
  • Added support for Python 3.10 and Python 3.11.
  • AWS-only client. For the time being, the AWS client can be used to communicate with the legacy GCP platform (e.g. DESCARTESLABS_ENV=gcp-production) but only supports those services that are supported on AWS (catalog and scenes). This support may break at any point in the future, so it is strictly transitional.

Dependencies

  • Removed many dependencies no longer required due to the removal of GCP-only features.
  • Added support for Shapely 2.X. Note that user code may also be affected by breaking changes in
    Shapely 2.X. Use of Shapely 1.8 is still supported.
  • Updated requirements to avoid urllib3>=2.0.0 which breaks all kinds of things.

Configuration

  • Major overhaul of the internals of the config process. To support other clients using namespaced packages within the descarteslabs package, the top level has been cleaned up, and almost all of the real code now lives inside descarteslabs.core. End users should never have to import anything from descarteslabs.core. With no more magic packages, pylint now works well with code using descarteslabs.
  • Configuration no longer depends upon the authorized user.

Catalog

  • Added support for data storage. The Blob class provides mechanism to upload, index, share, and retrieve arbitrary byte sequences (e.g. files). Blobs can be searched by namespace and name, geospatial coordinates (points, polygons, etc.), and tags. Blobs can be downloaded to a local file, or retrieved directly as a Python bytes object. Blobs support the same sharing mechanisms as Products, with owners, writers, and readers attributes.
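
    For example, a minimal sketch of uploading, retrieving, and downloading a blob (names and tags are
    illustrative, and the method for reading the data back as bytes is an assumption based on the
    description above):

        from descarteslabs.catalog import Blob

        blob = Blob(name="model-weights", tags=["demo"])
        blob.upload("weights.bin")           # upload from a local file

        same = Blob.get(blob.id)
        same.download("weights-copy.bin")    # download to a local file
        raw = same.data()                    # read back as bytes (method name assumed)
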
  • Added support to Property for prefix filtering.
  • The default geocontext for image objects no longer specifies a resolution but rather a shape, to ensure
    that default rastering preserves the original data and alignment (i.e. no warping of the source image).
  • As with resolution, you can now pass a crs parameter to the rastering methods (e.g. Image.ndarray,
    ImageCollection.stack, etc.) to override the crs of the default geocontext.
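
    For example, a minimal sketch of overriding the defaults when rastering (the image id, bands, resolution,
    and CRS are illustrative):

        from descarteslabs.catalog import Image

        image = Image.get("some-product:some-image-id")
        arr = image.ndarray(
            "red green blue",
            resolution=30.0,   # overrides the shape of the default geocontext
            crs="EPSG:32613",  # overrides the crs of the default geocontext
        )
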
  • A bug in the code handling the default context for image collections when working with a product with a CRS based on degrees rather than meters has been fixed. Resolutions should always be specified in the units used by the CRS.

Compute

  • Added support for managed batch compute under the compute module.

Raster Client

  • Fixed a bug in the handling of small blocks (less than 512 x 512) that caused rasterio to generate bad download files (the desired image block would appear as a smaller sub-block rather than filling the resulting raster).

Geo

  • The defaulting of align_pixels has changed slightly for the AOI class. Previously it always defaulted to
    True. Now the default is True if resolution is set, False otherwise. This ensures that when specifying
    a shape and a bounds rather than a resolution, the shape is actually honored.
  • When assigning a resolution to an AOI, any existing shape attribute is automatically unset, since the
    two attributes are mutually exclusive.
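
    For example, a minimal sketch illustrating the two preceding items (the bounds and CRS values are
    illustrative, and assign() is assumed to return a modified copy of the AOI):

        from descarteslabs.geo import AOI

        # No resolution given, so align_pixels defaults to False and the requested
        # 512 x 512 shape is honored exactly.
        aoi = AOI(bounds=(-105.3, 39.9, -105.1, 40.1), crs="EPSG:4326", shape=(512, 512))

        # Assigning a resolution automatically unsets shape; align_pixels now defaults to True.
        aoi = aoi.assign(resolution=30.0, crs="EPSG:32613")
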
  • The validation of bounds for a geographic CRS has been slightly modified to account for some of the
    irregularities of whole-globe image products, correcting unintended failures in the past.
  • Fixed problem handling MultiPolygon and GeometryCollection when using Shapely 2.0.

v2.0.0rc5

01 Jun 21:17

Catalog

  • Loosened the restrictions on the allowed alphabet for Blob names. Now almost any printable
    character is accepted, save for newlines and commas.
  • Added new storage types for Blobs: StorageType.COMPUTE (for Compute job results) and
    StorageType.DYNCOMP (for saved dynamic-compute operations).

Compute

  • Added testing of the client.

v2.0.0rc4

17 May 20:08

Catalog

  • The defaulting of the namespace value for Blobs has changed slightly. If no namespace is specified,
    it will default to <org>:<hash> with the user's org name and unique user hash. Otherwise, as before, any
    other value will be prefixed with the user's org name if it isn't already.
  • Blob.get no longer requires a full id. Alternatively, you can give it a name and optionally a namespace
    and a storage_type, and it will retrieve the Blob.
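
    For example, a minimal sketch (the name, namespace, and storage type are illustrative):

        from descarteslabs.catalog import Blob, StorageType

        blob = Blob.get(name="my-blob", namespace="myorg:userhash", storage_type=StorageType.DATA)
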
  • Fixed a bug causing summaries of Blob searches to fail.

Compute

  • Function.map and Function.rerun now save the created Jobs before returning.
  • Fixed the Job.get return values and removed an extraneous debug print.

General

  • Updated requirements to avoid urllib3>=2.0.0 which breaks all kinds of things.