Lost lots of messages due to async produce #192
That sounds weird. What is the retention time of the topic? It's not log compacted, right?
Here comes the log retention policy:
And log compaction is disabled. I tested 20 million messages using sarama, and there are exactly 20 million messages after the test finished:
I will test goka soon and post the result here. Thank you.
Oops, there are only 11 million messages in the topic.
@XiaochenCui one error check could also help: the
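The comment above is truncated, but the check it points at is presumably the per-message result of the async emit: goka's Emitter hands back a promise-like handle per message, and if its error callback is never inspected, failed deliveries look exactly like silently lost messages. A minimal stand-in sketch of the pattern, with no real broker involved (fakeEmit, its failure rate, and the counts are invented for illustration):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// promise mimics the per-message handle an async producer returns.
type promise struct{ err error }

// Then invokes the callback with the delivery result.
func (p *promise) Then(cb func(error)) { cb(p.err) }

// fakeEmit stands in for an async emit call: every 3rd message "fails",
// the way a broker might reject messages without the caller noticing.
func fakeEmit(i int) *promise {
	if i%3 == 0 {
		return &promise{err: fmt.Errorf("message %d rejected", i)}
	}
	return &promise{}
}

// emitAll sends total messages and tallies the delivery results via the
// promise callbacks instead of dropping them on the floor.
func emitAll(total int) (delivered, failed int64) {
	var wg sync.WaitGroup
	for i := 0; i < total; i++ {
		wg.Add(1)
		fakeEmit(i).Then(func(err error) {
			defer wg.Done()
			if err != nil {
				atomic.AddInt64(&failed, 1)
			} else {
				atomic.AddInt64(&delivered, 1)
			}
		})
	}
	wg.Wait()
	return delivered, failed
}

func main() {
	delivered, failed := emitAll(30)
	fmt.Printf("delivered=%d failed=%d\n", delivered, failed)
}
```

With a real emitter the tally would reveal the gap between messages sent and messages actually delivered, which is the first thing to check before suspecting the broker.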
Yep, nice :)
@frairon |
Well, if Kafka says the messages are too big, then I guess they really are too big, one way or another. If your single messages really are only 1000 bytes max, then it has to do with the internal batching of the Kafka producer (which I doubt, but I wouldn't know where else to look). So have you tried sending with
@frairon |
@XiaochenCui yes I understand the performance issue here, but trying that could make sure that it's not sarama's internal batching that causes messages that are too big. |
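One way to see how ~1000-byte messages can still trip a "too big" error: an async producer packs many messages into one batch, and it is the batch that must fit under the broker's message.max.bytes limit. Back-of-envelope arithmetic, with the limit value taken as an assumption (roughly 1 MB, 1000012 bytes, in many Kafka releases):

```go
package main

import "fmt"

// maxPerBatch computes how many messages of a given size fit into one
// producer batch before it exceeds the broker's size limit.
func maxPerBatch(msgSize, brokerLimit int) int {
	return brokerLimit / msgSize
}

func main() {
	const msgSize = 1000        // ~1 KB per message, per the discussion above
	const brokerLimit = 1000012 // assumed default message.max.bytes (~1 MB)
	fmt.Printf("a batch can hold at most ~%d such messages\n",
		maxPerBatch(msgSize, brokerLimit))
}
```

So if the producer's flush settings allow more than about a thousand pending messages per batch, small individual messages can still produce an oversized batch; sending synchronously sidesteps that entirely.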
I'll close that for now, feel free to reopen if it's still an issue |
I recently wrote a test script which produces 20 million messages to Kafka with goka, but there
are only about 14 million messages after the produce completes.
There is enough disk space, and there are no error messages in the Kafka logs or the client logs.
The script and Kafka run in Docker, on the same subnet.
Test script:
Count messages using:
docker-compose exec kafka kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list kafka:9092 --topic test --time -1 --offsets 1
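GetOffsetShell prints one topic:partition:offset line per partition; with retention and compaction ruled out, the message count is just the sum of those end offsets. A small helper to total its output (the sample lines below are invented for a hypothetical 3-partition topic):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// sumOffsets totals the end offsets from GetOffsetShell output lines
// of the form "topic:partition:offset".
func sumOffsets(lines []string) (int64, error) {
	var total int64
	for _, ln := range lines {
		ln = strings.TrimSpace(ln)
		if ln == "" {
			continue
		}
		parts := strings.Split(ln, ":")
		if len(parts) < 3 {
			return 0, fmt.Errorf("unexpected line: %q", ln)
		}
		n, err := strconv.ParseInt(parts[len(parts)-1], 10, 64)
		if err != nil {
			return 0, err
		}
		total += n
	}
	return total, nil
}

func main() {
	// Invented sample output for a 3-partition topic.
	out := []string{"test:0:7000000", "test:1:7000000", "test:2:6000000"}
	total, err := sumOffsets(out)
	if err != nil {
		panic(err)
	}
	fmt.Println("total messages:", total)
}
```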