# v0.17.0

## Updating

```sh
$ npm update gcloud
```
## Breaking Changes
### Pub/Sub upgraded to v1 (#699, #711)

Upgrading to v1 of the official JSON API brought one change to our library, best described in a before & after:
```js
// Before:
subscription.setAckDeadline({ ackId: 123 }, function(err) {});

// After:
subscription.setAckDeadline({ ackIds: [123] }, function(err) {});
```
### No more `nextQuery` by default! (#640, #692)

Previously, many API interactions forced you to manually page through the results:
```js
// Before:
var callback = function(err, buckets, nextQuery, apiResponse) {
  if (nextQuery) {
    gcs.getBuckets(nextQuery, callback);
  }
};

gcs.getBuckets(callback);
```
From a performance and cost perspective, manual paging is ideal. From a usability perspective, it was not. Now, pagination is automatically handled for you:
```js
// After:
gcs.getBuckets(function(err, buckets) {
  // `buckets` is *all of your buckets*.
  // No pagination necessary.
});
```
To enable the old behavior, supply a configuration object with `autoPaginate` set to `false`:

```js
gcs.getBuckets({ autoPaginate: false }, function(err, buckets, nextQuery, apiResponse) {});
```
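Under the hood, automatic pagination just keeps fetching pages and concatenating results until there is no `nextQuery` left. Here is a minimal sketch of that accumulation pattern, with `fakeGetBuckets` as a stand-in for a paged API call such as `gcs.getBuckets` (the page data and helper names are hypothetical, not the library's internals):

```javascript
// Stand-in for a paged API; each "page" carries items plus a nextQuery.
var PAGES = [
  { items: ['bucket-a', 'bucket-b'], nextQuery: { pageToken: 1 } },
  { items: ['bucket-c'], nextQuery: { pageToken: 2 } },
  { items: ['bucket-d'], nextQuery: null }
];

function fakeGetBuckets(query, callback) {
  var page = PAGES[query.pageToken || 0];
  callback(null, page.items, page.nextQuery);
}

// The accumulation loop: fetch a page, keep its items, and repeat with
// nextQuery until it is null, then hand the full list to the callback.
function getAllBuckets(callback) {
  var all = [];
  (function fetchPage(query) {
    fakeGetBuckets(query, function(err, items, nextQuery) {
      if (err) return callback(err);
      all = all.concat(items);
      if (nextQuery) return fetchPage(nextQuery);
      callback(null, all);
    });
  })({});
}

getAllBuckets(function(err, buckets) {
  console.log(buckets); // all four buckets, across three pages
});
```

With `autoPaginate: false`, you are back to driving the `fetchPage`-style loop yourself, which lets you stop early and avoid paying for pages you don't need.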
### Topics must be manually created (#696, #742)

Our API previously supported `autoCreate` when referencing a topic:

```js
var topic = pubsub.topic('my-topic', { autoCreate: true });
```
This was added primarily to support publishing a message to a topic that may not exist yet. However, a topic that was just created likely has no subscribers yet, and a message published to a topic with no subscribers is dropped. Instead of carrying around the overhead of supporting an error-prone use case, we have removed this option.

To help ease the transition, we have added `topic.getMetadata` so you can check whether a topic exists before publishing to it.
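The check-then-publish flow might look like the sketch below. To keep it self-contained, `topic` here is a stub standing in for `pubsub.topic('my-topic')`; the real `getMetadata` passes an error to its callback (for example, a 404) when the topic does not exist. The error shape and `publishIfTopicExists` helper are assumptions for illustration:

```javascript
// Stub standing in for pubsub.topic('my-topic'). Flip `topicExists` to
// simulate the topic being created out of band.
var topicExists = false;

var topic = {
  getMetadata: function(callback) {
    if (!topicExists) {
      callback({ code: 404, message: 'Topic not found' });
    } else {
      callback(null, { name: 'projects/my-project/topics/my-topic' });
    }
  },
  publish: function(message, callback) {
    callback(null);
  }
};

// Check that the topic exists before publishing; otherwise surface the
// error so the caller can create the topic (and its subscribers!) first,
// since messages published to a topic with no subscribers are dropped.
function publishIfTopicExists(message, callback) {
  topic.getMetadata(function(err) {
    if (err) return callback(err);
    topic.publish(message, callback);
  });
}
```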
### ConfigStore upgraded to v1 (#729, #730)

ConfigStore is a dependency used behind the scenes to support resumable uploads to your GCS buckets. When you enable resumable uploads, or when you use `bucket.upload` with files larger than 5 MB, a hidden config file is written with metadata that enables resumable uploads.

Previously, this file was YAML; it is now JSON. This is only a breaking change if you are in the middle of a resumable upload while upgrading gcloud, as the old data from the YAML file is not carried over to the new file.
## Features

- Core (#673, #692): Support streams across all APIs that involve `nextQuery`.
- Core (#701, #705): Use automatic HTTP request retry logic with exponential backoff for readable streams.
- Pub/Sub (#649, #650): Add a `maxInProgress` option to limit the number of messages consumed simultaneously.
- Pub/Sub (#650): Add convenience methods to returned messages: `message.ack` and `message.skip`.
- Pub/Sub (#646, #742): Add `topic.getMetadata`.
- Storage (#420, #700): Support reading a negative byte offset from a file.
- Storage (#680, #698): Add `deleteFiles` to recursively get and delete files from a bucket.
- Storage (#751, #752, #753): Support automatic gzip compression for GCS uploads.
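To make the `maxInProgress` feature concrete: the idea is that no more than a fixed number of pulled messages are handed to your handler at once, with the rest queued until a slot frees up. The sketch below shows that throttling pattern in isolation; the names (`makeThrottledConsumer`, `dispatch`) are hypothetical, not the library's internals:

```javascript
// Cap concurrent message handling at `max`; extra messages wait in line.
function makeThrottledConsumer(max, handler) {
  var inProgress = 0;
  var waiting = [];

  function dispatch(message) {
    if (inProgress >= max) {
      waiting.push(message);
      return;
    }
    inProgress++;
    handler(message, function done() {
      inProgress--;
      if (waiting.length) dispatch(waiting.shift());
    });
  }

  return dispatch;
}

// Usage: at most 2 messages are "in progress" at any moment. The handler
// stashes its `done` callback so we can finish messages on demand.
var peak = 0;
var active = 0;
var finishers = [];
var consume = makeThrottledConsumer(2, function(message, done) {
  active++;
  peak = Math.max(peak, active);
  finishers.push(function() { active--; done(); });
});

['m1', 'm2', 'm3', 'm4'].forEach(consume);
// Only two handlers have started; m3 and m4 wait their turn.
```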
## Fixes

- Core (#741): Return the `apiResponse` callback argument for all failed API operations.
- BigQuery (#737): Return the `apiResponse` callback argument from a failed `dataset.setMetadata` operation in the correct place.
- Pub/Sub (#754): Prevent multiple automatic pulling processes.
- Storage (#654, #745): Previously, files with gzip encoding were not passing our validation check, as we were computing hashes on the decoded data. We now run our data validation on the raw HTTP response data.
## Thank you!
All of these changes wouldn't have been possible without the help of many from our great community. Before recognizing them, there's a quick introduction to make. Please look out for gcloud-node's latest addition, Dave Gramlich (@callmehiphop). In a couple of short weeks, we've already seen great contributions from him, ranging from major doc enhancements to library-wide code quality improvements. Thanks, Dave! Keep it up 💯
And last but not least, these folks have opened issues, sent PRs, and made all of our codebases that much better for it:
- @beshkenadze
- @javorosas
- @akashkrishnan
- @jacobsa
- @macvean
- @leibale
- @abelino
- @mziccard
Thanks for all of the contributions - we would love to see even more! :)