
move requires from register to top for eager loading, cleanups #10

Merged

Conversation

colinsurprenant (Contributor)

  • remove lazy loading of required libraries from the register method (see the sketch below)
  • move digest class selection to initialization
  • use symbols to avoid string comparisons and improve performance
  • code cleanups

fixes elastic/logstash/issues/4419

[edited to add 2nd commit changes]
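A minimal sketch of the kind of change described above (the class and option names are illustrative, not the plugin's actual code): the require moves from the register method to file-load time, and the configured method is normalized to a symbol once so later comparisons avoid string matching.

```ruby
# Before: `require "openssl"` lived inside the register method (lazy loading).
# After: dependencies are required at file-load time (eager loading).
require "openssl"

class FingerprintFilterSketch
  def initialize(method)
    # normalize the configured method once; later checks compare symbols
    # (e.g. :SHA1) instead of strings
    @method = method.to_sym
  end
end
```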

@ph (Contributor)

ph commented Jan 6, 2016

LGTM. (Appending ?w=1 to the diff URL, which hides whitespace-only changes, is handy for reviewing this.)

@colinsurprenant (Contributor, Author)

👍 @ph thanks

@colinsurprenant (Contributor, Author)

will bump version and publish

@jordansissel (Contributor)

LGTM

Some history on deferred require calls in Logstash:

  1. Years ago, doing require "openssl" used to take over 60 seconds (!!!!) under JRuby. This is no longer the case today, so yay! (require "openssl" is fast under JRuby 1.7.2x)
  2. Historically, the logstash docs build process used MRI, which would not have certain jruby-specific gems available, so I solved this by moving the requires for most plugins to run-time (under the register method) instead of file-load time.

Both of the above historical issues are no longer problems, so in general, I am in favor of moving requires to the top of files in most cases :)

@colinsurprenant (Contributor, Author)

@jordansissel thanks for the context. We did have this discussion previously "somewhere" :P and agreed to move requires back to the top.

The bad news is that I just found out that this does not solve the problem here. I am still getting the same problem in my stress tests. Investigating.

@jordansissel (Contributor)

@colinsurprenant bummer that your change didn't solve the problem, but at least it improved style :P

@colinsurprenant (Contributor, Author)

@ph @jordansissel with this last commit, moving the digest class selection into register seems to fix the problem. I haven't been able to reproduce it after 500 runs. I will run another 1000 runs before going to bed; in the meantime, please re-review this so I can push it to @cdahlqvist and he can also test it in his environment.
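For illustration, a hedged sketch of the follow-up commit's idea (hypothetical class and names, assuming the stdlib OpenSSL digest classes): the digest class is resolved once in register, so per-event processing never performs the constant lookup.

```ruby
require "openssl"

class FingerprintFilterSketch
  def initialize(method)
    @method = method.to_sym
  end

  def register
    # resolve the digest class once, at plugin registration time,
    # instead of looking up the constant lazily on the first event
    @digest_class = case @method
                    when :SHA1   then OpenSSL::Digest::SHA1
                    when :SHA256 then OpenSSL::Digest::SHA256
                    when :MD5    then OpenSSL::Digest::MD5
                    else raise ArgumentError, "unknown digest method: #{@method}"
                    end
  end

  def fingerprint(data)
    # per-event work only uses the pre-resolved class
    @digest_class.hexdigest(data)
  end
end

# usage:
#   f = FingerprintFilterSketch.new("SHA256")
#   f.register
#   f.fingerprint("hello")  # => hex digest string
```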

@colinsurprenant (Contributor, Author)

The overnight stress run did not show any exception, so I will call this issue "solved" without fully understanding the original cause. It hurts my OCD to walk away without finding the exact root cause, but the intermittent nature makes it really hard to track. I have also tried to reproduce it, using the same openssl initialization + call logic with threads but outside the context of logstash, without success.

The good news is that this problem always surfaced while processing the first event, never at an arbitrary point later in the logstash runtime. I believe this is a concurrency/threading + constant visibility issue: placing a rescue block around the code that produced the NameError exception and doing a sleep(0.5) && retry made it work. In that context, it makes sense that this refactor solves the issue.
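A sketch of the diagnostic workaround described above (illustrative only, not the shipped code): rescuing the NameError, sleeping briefly, and retrying made the call succeed, which is what points to a first-use constant visibility race rather than a persistent failure.

```ruby
require "openssl"

# Hypothetical wrapper around the digest call that produced the NameError.
def fingerprint_with_retry(data)
  OpenSSL::Digest::SHA256.hexdigest(data)
rescue NameError
  # intermittent constant-visibility failure: back off briefly and try again
  sleep(0.5)
  retry
end
```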

@ph (Contributor)

ph commented Jan 7, 2016

I am OK with that; I think we should keep this problem in mind when working with plugins. Like you, I am not sure what makes OpenSSL so special.

@ph (Contributor)

ph commented Jan 7, 2016

LGTM

@colinsurprenant (Contributor, Author)

@ph thanks. I really don't know, and AFAICT OpenSSL does not do anything special, no autoload etc. The only thing "different" is that the OpenSSL module is created by the Java ext. Maybe I'll try to add this to my reproduction stress test to see if this is the cause.

colinsurprenant merged commit 14bb274 into logstash-plugins:master on Jan 7, 2016
@colinsurprenant (Contributor, Author)

version 2.0.3 pushed

Successfully merging this pull request may close these issues.

Fingerprint filter randomly failing