Releases: suredone/qdone
1.7.0
New Features
Added `--deduplication-id` option for `enqueue` (#40)
`qdone` has always set a deduplication ID (using a UUID v1) when sending enqueue calls, but it looks like the AWS SDK does not have adequate retry defaults set. This option lets a qdone user retry enqueue operations without creating duplicate messages. For more information, please see the AWS docs for Message Deduplication ID.
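For illustration, a hedged sketch of a retried enqueue follows; the `enqueue <queue> <command>` argument shape, the queue name, and the ID value are assumptions for illustration, not taken from these notes:

```bash
# Assumed shape: qdone enqueue <queue> <command>
# Supplying the same --deduplication-id on a retry lets SQS drop the duplicate
# within its deduplication window, so re-running a timed-out call is safe.
qdone enqueue --fifo --deduplication-id order-1234 jobs "php process.php 1234"

# Retry after a network error: identical ID, no duplicate message.
qdone enqueue --fifo --deduplication-id order-1234 jobs "php process.php 1234"
```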
Under the hood
- Updated aws-sdk.
- Updated locked dependencies.
1.6.0
New Features
Caching for SQS `GetQueueAttributes` calls (#41)
After switching our infrastructure to `--active-only` on jobs that have a large number of dynamic queues, we noticed that we spend a lot of money on `GetQueueAttributes` calls. However, the state of the active queues is very cacheable, especially if queues tend to have large backlogs, as ours do.
We added the following options to the `idle-queues` and `worker` commands, to be used in conjunction with `--active-only`:
- `--cache-url` that takes a `redis://...` or a `redis-cluster://` URL [no default]
- `--cache-ttl-seconds` that takes a number of seconds [default `10`]
- `--cache-prefix` that defines a cache key prefix [default `qdone:`]
The presence of the `--cache-url` option will cause the worker to cache `GetQueueAttributes` for each queue for the specified TTL.
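For example, a worker invocation with caching enabled might look like the sketch below; the wildcard queue argument and the Redis endpoint are illustrative, not prescribed by these notes:

```bash
# Poll only active queues, and cache each queue's GetQueueAttributes
# result in Redis for 30 seconds under keys prefixed with qdone:
qdone worker '*' --active-only \
  --cache-url redis://localhost:6379 \
  --cache-ttl-seconds 30 \
  --cache-prefix qdone:
```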
1.6.0-beta1
How to install pre-releases
npm install qdone@next
New Features
Caching for SQS `GetQueueAttributes` calls (#41)
After switching our infrastructure to `--active-only` on jobs that have a large number of dynamic queues, we noticed that we spend a lot of money on `GetQueueAttributes` calls. However, the state of the active queues is very cacheable, especially if queues tend to have large backlogs, as ours do.
We added the following options to the `idle-queues` and `worker` commands, to be used in conjunction with `--active-only`:
- `--cache-url` that takes a `redis://...` or a `redis-cluster://` URL [no default]
- `--cache-ttl-seconds` that takes a number of seconds [default `10`]
- `--cache-prefix` that defines a cache key prefix [default `qdone:`]
The presence of the `--cache-url` option will cause the worker to cache `GetQueueAttributes` for each queue for the specified TTL.
1.5.0
New Features
Added `--group-id-per-message` option for `enqueue-batch` (#33)
This option creates a new Group ID for every message in a batch, for when you want exactly once delivery, but don't care about message order.
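A hedged sketch of how this could be used; the batch file contents are hypothetical, and the one-queue/command-pair-per-line format is assumed from qdone's usual `enqueue-batch` input rather than stated in these notes:

```bash
# jobs.txt (hypothetical) — one queue/command pair per line:
#   orders "php process_order.php 1"
#   orders "php process_order.php 2"

# Each message gets its own Group ID, so FIFO deduplication still applies
# but messages may be worked in parallel and in any order.
qdone enqueue-batch --fifo --group-id-per-message jobs.txt
```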
Bug Fixes
- Fixed (#35) by making `idle-queues` pairing behavior work for FIFO queues as well as normal queues.
1.5.0-beta2
How to install pre-releases
npm install qdone@next
New Features
Added `--group-id-per-message` option for `enqueue-batch` (#33)
This option creates a new Group ID for every message in a batch, for when you want exactly once delivery, but don't care about message order.
Bug Fixes
- Fixed (#35) by making `idle-queues` pairing behavior work for FIFO queues as well as normal queues.
v1.5.0-beta
How to install pre-releases
npm install qdone@next
New Features
Added `--group-id-per-message` option for `enqueue-batch` (#33)
This option creates a new Group ID for every message in a batch, for when you want exactly once delivery, but don't care about message order.
v1.4.0
v1.4.0-beta
How to install pre-releases
npm install qdone@next
Bug Fixes
- Fixed (#25) bug on Linux in `worker` where child processes were not getting killed after the `--kill-after` timer was reached (see the sketch below).
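For context, a hedged example of the timer in question; the queue name and timeout value are illustrative, and the unit is assumed to be seconds:

```bash
# Kill a job's child process if it is still running 300 seconds after it starts.
# Before this fix, the child could survive past the timer on Linux.
qdone worker myqueue --kill-after 300
```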
v1.3.0
New Features
FIFO Option (#18)
Added a `--fifo` and `--group-id <string>` option to `enqueue` and `enqueue-batch` (see the example after this list)
- Causes any new queues to be created as FIFO queues
- Causes the `.fifo` suffix to be appended to any queue names that do not explicitly have it
- Causes failed queues to take the form `${name}_failed.fifo`
- Any commands with the same `--group-id` will be worked on in the order they were received by SQS (see FIFO docs)
- If you don't set `--group-id`, it defaults to a unique ID per call to `qdone`, so messages sent by `enqueue-batch` will always be ordered as you sent them.
- There is NO option to set the group ID per message in `enqueue-batch`. Adding this feature in the future will change the format of the batch input file.
- There is NO support right now for Content Deduplication; however, a unique Message Deduplication ID is generated for each command, so retry-able errors should not result in duplicate messages.
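A hedged sketch of the enqueue side; the `enqueue <queue> <command>` shape, queue name, and commands are assumptions for illustration:

```bash
# Creates/uses orders.fifo; failed commands would land in orders_failed.fifo.
# Both commands share a group id, so SQS hands them to one worker in order.
qdone enqueue --fifo --group-id order-42 orders "php charge.php 42"
qdone enqueue --fifo --group-id order-42 orders "php ship.php 42"
```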
Added a `--fifo` option to `worker` (see the example after this list)
- Causes the `.fifo` suffix to be appended to any queue names that do not explicitly have it
- When wildcard names are specified (e.g. `test_*` or `*`), the worker only listens to queues with a `.fifo` suffix.
- Failed queues are still only included if `--include-failed` is set.
- Regardless of how many workers you have, FIFO commands with the same `--group-id` will only be executed by one worker at a time.
- There is NO support right now for only-once processing using the Receive Request Attempt ID.
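And a hedged sketch of the worker side; the wildcard is illustrative:

```bash
# Listens only to FIFO queues matching test_*.fifo; failed queues are skipped
# unless --include-failed is also given.
qdone worker --fifo 'test_*'
```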
Only Listen To Active Queues with `--active-only`
We encountered an occasional production problem where aggressively deleting idle queues can cause the loss of a message that was sent between the idle check and the delete operation. We were using `qdone idle-queues --delete --idle-for 10`, which is much more aggressive than the default of 60 minutes.
To address this, we are adding an alternate mode of operation to the worker: the new `--active-only` flag, for use with wildcard (`*`) queues, does a cheap SQS API call to check whether a queue currently has waiting messages. If so, the queue is put into the list of queues for the current listening round. This should have the net effect of reducing the number of queues workers have to listen to (similar to aggressive usage of `qdone idle-queues --delete`) without exposing messages to the delete race condition. For cases where idle queues still must be deleted, we recommend using a longer timeout.
Bug Fixes
- Fixed (#29) bug in `enqueue-batch` where SQS batches whose command lines added up to > 256 KB would not be split correctly and would loop
Under the hood
v1.3.0-beta
How to install pre-releases
npm install qdone@next
New Features
FIFO Option (#18)
Added a `--fifo` and `--group-id <string>` option to `enqueue` and `enqueue-batch`
- Causes any new queues to be created as FIFO queues
- Causes the `.fifo` suffix to be appended to any queue names that do not explicitly have it
- Causes failed queues to take the form `${name}_failed.fifo`
- Any commands with the same `--group-id` will be worked on in the order they were received by SQS (see FIFO docs)
- If you don't set `--group-id`, it defaults to a unique ID per call to `qdone`, so messages sent by `enqueue-batch` will always be ordered as you sent them.
- There is NO option to set the group ID per message in `enqueue-batch`. Adding this feature in the future will change the format of the batch input file.
- There is NO support right now for Content Deduplication; however, a unique Message Deduplication ID is generated for each command, so retry-able errors should not result in duplicate messages.
Added a `--fifo` option to `worker`
- Causes the `.fifo` suffix to be appended to any queue names that do not explicitly have it
- When wildcard names are specified (e.g. `test_*` or `*`), the worker only listens to queues with a `.fifo` suffix.
- Failed queues are still only included if `--include-failed` is set.
- Regardless of how many workers you have, FIFO commands with the same `--group-id` will only be executed by one worker at a time.
- There is NO support right now for only-once processing using the Receive Request Attempt ID.
Only Listen To Active Queues with `--active-only`
We encountered an occasional production problem where aggressively deleting idle queues can cause the loss of a message that was sent between the idle check and the delete operation. We were using `qdone idle-queues --delete --idle-for 10`, which is much more aggressive than the default of 60 minutes.
To address this, we are adding an alternate mode of operation to the worker: the new `--active-only` flag, for use with wildcard (`*`) queues, does a cheap SQS API call to check whether a queue currently has waiting messages. If so, the queue is put into the list of queues for the current listening round. This should have the net effect of reducing the number of queues workers have to listen to (similar to aggressive usage of `qdone idle-queues --delete`) without exposing messages to the delete race condition. For cases where idle queues still must be deleted, we recommend using a longer timeout.
Bug Fixes
- Fixed (#29) bug in `enqueue-batch` where SQS batches whose command lines added up to > 256 KB would not be split correctly and would loop