Re-start of logstash dies when no data were provided for some aggregate task_id patterns #62
Well, that's weird...
Question: do you reproduce the issue with aggregate plugin version 2.3.1?
Could the issue have to do with line 592? `@logger.debug("Aggregate remove_expired_maps call with '#{@task_id}' pattern and #{@@aggregate_maps[@task_id].length} maps")` What if the task_id does not exist in `aggregate_maps`?
Yes, but it should not be nil because of line 404.
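As context for this exchange, the kind of nil guard being discussed can be sketched in plain Ruby; the variable and key names below are illustrative assumptions, not the plugin's actual code:

```ruby
# Illustrative store of aggregate maps keyed by task_id pattern.
# (Hash name and keys are assumptions for this sketch.)
aggregate_maps = {}
task_id = "%{source}"

# Unguarded access crashes when the pattern has no entry:
#   aggregate_maps[task_id].length   # NoMethodError on nil

# Guarded access: ensure an (empty) entry exists before using it.
aggregate_maps[task_id] ||= {}
puts aggregate_maps[task_id].length
```

The crash described in this thread is consistent with the unguarded form being reached on some code path despite the guard elsewhere.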
We ran it with --debug logs. The last log entry is as follows: `{:timestamp=>"2017-03-09T17:32:00.766000+0000", :message=>"Flushing", :plugin=><LogStash::Filters::Aggregate task_id=>"%{source}", map_action=>"update", end_of_task=>true, timeout=>172800, periodic_flush=>false, push_map_as_event_on_timeout=>false, push_previous_map_as_event=>false>, :level=>:debug, :file=>"(eval)", :line=>"246", :method=>"initialize"}` The "source" task_id is expected to have 0 maps in this case.
The 2.3.1 aggregate plugin works fine with the same scenario.
@mrchang7
@mrchang7 Actually, I'm really surprised by this issue, because it should never happen, given the check at line 404. Could you provide a full Logstash configuration and sample data so that I can reproduce your issue?
Yeah, reproduction was not easy, even when building a simple example like the one below. Basically, we use Filebeat for data shipping. Three Logstash confs are put in a folder, and we provide the folder path when we run Logstash (with options -w 1 -f <folder_path>).
filebeat.yml
input.conf
second.conf
first.conf
Please note that "TaskPattern_2" is intentionally not defined/shipped from Filebeat for this test. FYI, below are the --debug logs from when I restarted Logstash with the generated .aggregate_maps file.
Interestingly, if I join the two Logstash confs into a single file, it does not die. But we have multiple confs for different data sets and prefer to keep this approach. Note: strangely, at times even the reproduced case stops dying at some point, which is very weird (e.g., do the Logstash conf file names matter?).
Wow... very weird indeed... Just a remark: I guess that in
Yes, you're right. I just commented out that line. This might be a corner case; if the appropriate data are provided, there will be no problem.
Do you mean that when you remove this line, you no longer reproduce the issue?
Hey, we just found that the merging of the conf files goes in alphabetical order. I tried changing the conf file names so that the one with data comes first, and then the problem was gone. So it seems this issue arises when task_id patterns without data are positioned ahead of those with data ("data" here meaning data matching the task_id pattern or value). Note: I updated the example above accordingly. Also, my tests say the order matters even with a single file joining all the conf files' content. So you could just try with this:
Thanks a lot for all the tests and explanations you gave!
In conclusion, to solve this issue, when I load
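The ordering effect reported above can be simulated in a few lines of plain Ruby; the patterns, file contents, and structure here are illustrative assumptions, not the plugin's actual code:

```ruby
# The persisted maps file only holds entries for patterns that received
# data: "TaskPattern_1" has one stored map, "TaskPattern_2" has none.
loaded_maps = { "TaskPattern_1" => { "value1" => { "count" => 3 } } }

# Filters register in the alphabetical order of the conf files. When a
# pattern without stored data is processed first, an unguarded lookup
# such as loaded_maps[pattern].length raises NoMethodError on nil.
["TaskPattern_2", "TaskPattern_1"].each do |pattern|
  maps = loaded_maps[pattern]
  if maps.nil?
    puts "#{pattern}: no stored maps (unguarded code would crash here)"
  else
    puts "#{pattern}: #{maps.length} stored map(s)"
  end
end
```

This matches the observation that renaming the conf files so that the pattern with data comes first makes the crash disappear.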
Good news: I managed to reproduce your issue!
Hope the fix is straightforward, without any other extra issues to handle. Any chance of releasing the fixed version?
Re-start of Logstash died when no data were provided in 'aggregate_maps_path' file for some aggregate task_id patterns
Nice news for you @mrchang7!
@fbaligand It WORKS without an issue now. I tested all the likely combinations using our code, and the restart worked as expected. Appreciate your swift care of this issue. Many thanks!
Great news!
Hi again,
Sorry for the many issues. I am testing aggregate 2.5.1 with Logstash 2.4.1.
This is a scenario I experience where the restart of Logstash dies immediately after reading the .aggregate maps file.
So, I suspect that the .aggregate file does not contain the map values for T2, and that this causes a conflict when the Logstash restart reads the file. I bet you will know exactly what the issue is.
Below is the Logstash error log for your information:
Thanks.
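The suspected scenario (a restart reading a serialized maps file that lacks entries for some patterns) can be sketched like this; the use of Marshal and all names here are assumptions for illustration, not the plugin's actual persistence code:

```ruby
require "tempfile"

# Write a maps file containing data for only one of two patterns:
# "T1" received data before shutdown, "T2" did not.
file = Tempfile.new("aggregate_maps")
File.open(file.path, "wb") { |f| Marshal.dump({ "T1" => { "k" => 1 } }, f) }

loaded = File.open(file.path, "rb") { |f| Marshal.load(f) }

# On restart, code assuming every configured pattern has an entry would
# hit nil for "T2"; defaulting to an empty map avoids the crash.
["T1", "T2"].each do |pattern|
  entry = (loaded[pattern] ||= {})
  puts "#{pattern}: #{entry.length} map(s) restored"
end
```

Under this reading, the fix direction is to initialize an empty entry for any configured pattern missing from the restored file, rather than assuming every pattern was persisted.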