Releases: Blizzard/node-rdkafka
v2.1.0
Minor release
- Upgraded librdkafka to 0.11.1
Consumer Changes
- Partition EOF will no longer stop a batch from completing when consuming.
v2.0.0
Major release because there are breaking changes!
Breaking Changes
- Keys are now returned as buffers in delivery reports
- Keys are now produced as buffers. If you pass a key in as a string, it will be converted.
- Topic objects have been removed. You should use topic name strings to create topics.
- The new librdkafka produce methods do not support topic objects, which are in the process of being removed.
- You should use topic configuration to configure topics, and separate producers if special cases are needed. Producers are cheap!
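Since v2.0.0 hands keys back as buffers in delivery reports, existing handlers that expect strings need a small conversion. The sketch below shows one way to normalize a key for logging or routing; the report object here is an illustrative stand-in, not produced by the library.

```javascript
// Sketch: handling the v2.0.0 key format change in a delivery-report
// handler. The report object below is a stand-in for illustration;
// its field names mirror the library's delivery reports.
function normalizeKey(key) {
  // Keys arrive as Buffers in v2.0.0+; convert for display or routing.
  if (Buffer.isBuffer(key)) {
    return key.toString('utf8');
  }
  return key; // pre-2.0.0 code paths may still hand back strings
}

const report = { topic: 'events', partition: 0, offset: 42, key: Buffer.from('user-1') };
console.log(normalizeKey(report.key)); // 'user-1'
```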
v1.0.6
New release version v1.0.6.
Root Object Changes
- Added `librdkafkaVersion` and `features` properties to the root object.
Consumer Changes
- `assign` and `unassign` in the rebalance callback now check whether the consumer is connected before throwing.
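A typical custom rebalance callback calls `assign` or `unassign` depending on the error code it receives. A minimal sketch, assuming the numeric codes below mirror librdkafka's assign/revoke codes; the consumer object is a stand-in defined here for illustration, not the library's consumer.

```javascript
// Illustrative error codes, mirroring librdkafka's rebalance codes.
const ERR__ASSIGN_PARTITIONS = -175;
const ERR__REVOKE_PARTITIONS = -174;

function rebalanceCb(consumer, err, assignment) {
  // v1.0.6 makes assign/unassign check the connection before throwing;
  // guarding here keeps the callback safe on older versions too.
  if (!consumer.connected) return;
  if (err.code === ERR__ASSIGN_PARTITIONS) {
    consumer.assign(assignment);
  } else if (err.code === ERR__REVOKE_PARTITIONS) {
    consumer.unassign();
  }
}

// Stand-in consumer used only to exercise the callback.
const fakeConsumer = {
  connected: true,
  assigned: [],
  assign(parts) { this.assigned = parts; },
  unassign() { this.assigned = []; },
};
rebalanceCb(fakeConsumer, { code: ERR__ASSIGN_PARTITIONS }, [{ topic: 'events', partition: 0 }]);
console.log(fakeConsumer.assigned.length); // 1
```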
v1.0.5
New release version v1.0.5. This release note includes changes from v1.0.2 - v1.0.5
Consumer Changes
- Bug fix for custom rebalance callbacks where assignments were not being set. (v1.0.5)
- Kafka read stream now supports a `streamAsBatch` option when running in objectMode. This pushes arrays of messages to the stream instead of individual messages. (v1.0.4)
- Fixed callback "leak" in read stream under high message volume (v1.0.3)
v1.0.1
New release version v1.0.1
Producer Changes
- Passing a buffer as a key will not convert it to a string before sending it to Kafka
v1.0.0
New release version v1.0.0
Producer API Changes
- Producer write stream is now its own class that has its own producer. Producer methods can be accessed through the member variable.
- Create one by using `Producer.createWriteStream()`.
Consumer API Changes
- Added `offset_commit_cb`
- The rebalance callback's first parameter is now an error object, which can be checked to see whether the event is an assignment or an unassignment.
- Consumer write stream is now its own class that has its own consumer. Consumer methods can be accessed through the member variable.
- Create one by using `Consumer.createReadStream()`.
- Added `query_watermark_offsets` support
- Added support for the seek method; it is not currently supported in librdkafka 0.9.5, but will likely be in the next release.
v0.10.2
New release version v0.10.2
Producer API Changes
- Producer flush method is now asynchronous and must be provided a callback.
v0.10.0
New release version v0.10.0
API Changes
- The `error` event is renamed to `event.error` to show it corresponds with errors reported by the librdkafka internals. Only streams emit events named `error` now, when there are stream-related errors.
- Added new error codes for librdkafka.
Consumer API Changes
- `commit` asynchronous methods no longer take a callback, and instead map directly to the librdkafka async commit variants.
- Internal queue timeouts are no longer considered error-worthy for consume methods.
v0.9.0
New release version v0.9.0
Consumer API Changes
- `commit` synchronous methods now `throw` errors in a similar pattern to the other throwable methods. They will be full librdkafka error objects with error codes, etc. Asynchronous methods are unchanged, as they return error objects in the callback.
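With v0.9.0, synchronous commits need a try/catch rather than an error-return check. A sketch of that pattern, using a stand-in consumer defined here for illustration; the error's `code` value is illustrative, though the shape (an `Error` carrying a librdkafka-style code) follows the release note above.

```javascript
// Sketch of the v0.9.0 synchronous commit pattern: commitSync throws
// a librdkafka-style error object instead of returning one.
function commitOrLog(consumer, toppar) {
  try {
    consumer.commitSync(toppar);
    return true;
  } catch (err) {
    // err carries a librdkafka-style error code alongside the message
    console.error('commit failed:', err.code, err.message);
    return false;
  }
}

// Stand-in consumer whose commitSync always throws, for illustration.
const fakeConsumer = {
  commitSync() {
    const err = new Error('Local: Unknown partition');
    err.code = -190; // illustrative code, not authoritative
    throw err;
  },
};
console.log(commitOrLog(fakeConsumer, { topic: 't', partition: 99, offset: 1 })); // false
```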
Bug fixes
- Bug with `this` binding in the producer write stream in a previously published (now unpublished) version has been fixed.
v0.8.2
New release version v0.8.2
Producer API Changes
- Added new producer method, `setPollInterval(interval)`. If you simply want to poll for events on an interval, you can pass it here without needing to manage connections or disconnections.
- Fixed some bugs related to manually connecting/disconnecting when using the producer stream.
- Producer stream now only uses topic objects if topic options are provided.
Consumer API Changes
- Added new consumer methods, `commitMessage` and `commitMessageSync`. When committing a message instead of a topic partition, these methods commit the proper offsets, avoiding the off-by-one issue seen previously.
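The off-by-one arises because Kafka commits the offset of the *next* message to read, not the offset of the message just processed. The sketch below shows only that arithmetic, which is what a message-based commit helper performs internally; the function name is hypothetical.

```javascript
// Illustration of the commit arithmetic: committing a consumed message
// means committing message.offset + 1 (the next offset to consume).
// topparFromMessage is a hypothetical helper, not a library API.
function topparFromMessage(message) {
  return {
    topic: message.topic,
    partition: message.partition,
    offset: message.offset + 1, // next offset to read, not the message's own
  };
}

const msg = { topic: 'events', partition: 3, offset: 41, value: Buffer.from('x') };
console.log(topparFromMessage(msg)); // { topic: 'events', partition: 3, offset: 42 }
```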