
client.close() does not release elasticsearch connection until maxKeepAliveTime elapses #40

Closed
aaronyo opened this issue Feb 6, 2014 · 13 comments

Comments

aaronyo commented Feb 6, 2014

Even when there are no pending requests, client.close() does not release an elasticsearch connection until maxKeepAliveTime elapses (which defaults to 5 minutes).

Expected: the connection closes immediately so that, for example, your node process can exit right away.

I'm using version 1.4.0.

jmonster commented Feb 6, 2014

I'm encountering a very similar issue. I have a high-traffic server that makes 1-2 ES requests per incoming HTTP request, and it is guaranteed to eventually crash with EMFILE errors.

I replaced ES with a barebones HTTP client to do the same work, and the issue no longer occurs.

Additionally, I have test scripts where the process never exits because of this; again, replacing node-elasticsearch with a simple HTTP client avoids the problem.

aaronyo (Author) commented Feb 6, 2014

jmonster, the easiest workaround I've found to get the elasticsearch module to close quickly is to pass "maxKeepAliveTime: 0" (or something low like 100 -- it's milliseconds) in the options when you create elasticsearch.Client. YMMV under load.
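
A minimal sketch of that workaround (the host value here is illustrative):

```js
var elasticsearch = require('elasticsearch');

// maxKeepAliveTime is in milliseconds; 0 (or something low like 100)
// keeps the client from holding sockets open after close() is called.
var client = new elasticsearch.Client({
  host: 'localhost:9200',
  maxKeepAliveTime: 0
});

// ... issue requests ...

client.close(); // the process should now be able to exit promptly
```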

Not sure if this will help your EMFILE issue. You can poke around src/lib/connectors/http.js and agentkeepalive to see some of the default options for the HTTP connection. Fortunately, most seem to be under your control, as the top-level Client config just gets passed through.

Curious -- have you tried lsof or the like to see how many ES connections are being opened?

spalger (Contributor) commented Feb 6, 2014

This is certainly a drawback to using a "forever" socket pool. I'm not seeing close()-type functions in agentkeepalive or other keep-alive agent modules. I'll investigate how request handles this and see if we can do the same thing. client.close() should certainly close those sockets once they are free.

@jmonster the number of sockets is certainly limited within the client, so I'm not sure how EMFILE errors are "guaranteed". Can you elaborate?

jmonster commented Feb 6, 2014

I wish I could be more precise and accurate in my assessment.

The 10,000 ft view is that using node-elasticsearch caused more memory overhead (around 30-40 MB higher, I believe), and somewhere/somehow it's leaking file descriptors for us. @lxe reduced one of the timeouts down to 2 seconds and it helped, but after a day or two of the single node process running continuously, it finally encountered an EMFILE error again. We've played with lsof and changed ulimits but haven't determined a specific culprit.

Sorry if I'm crashing the wrong issue -- I actually came to the issues section to create a new one and then saw this and thought it seemed a little too familiar.

spalger (Contributor) commented Feb 6, 2014

@jmonster Are you regularly creating and closing clients, or just using one across all requests to a process? I really want to be certain this doesn't happen, so any other implementation details you can share would be great.

jmonster commented Feb 6, 2014

client

This client is created when the server initializes and is then used for all ES interactions in the app:

```js
var elasticsearch = require('elasticsearch')
  , es = new elasticsearch.Client({
      host: argv['es-host'] + ':' + argv['es-port'],
      log: loggers,
      requestTimeout: 2000
    });
```

requests

```js
es.create({
  index: idx,
  type:  'generic',
  body:  payload
}, function (err, response) {
  err && console.error(err);
  // console.info(response);
});
```

spalger (Contributor) commented Feb 6, 2014

What are you using to log?

spalger (Contributor) commented Feb 6, 2014

Also, I did a bit of a test writing 100,000 records, taking heap snapshots every 10,000 records, and a memory leak did not manifest. Could you come up with a script that replicates this behavior?
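
For reference, a rough sketch of that kind of stress test (it assumes the third-party heapdump module for taking snapshots; the index name and host are illustrative):

```js
var elasticsearch = require('elasticsearch');
var heapdump = require('heapdump'); // npm install heapdump

var client = new elasticsearch.Client({ host: 'localhost:9200' });
var total = 100000;
var written = 0;

function writeOne() {
  client.index({
    index: 'leak-test',
    type: 'generic',
    body: { n: written }
  }, function (err) {
    if (err) console.error(err);
    written++;
    if (written % 10000 === 0) {
      heapdump.writeSnapshot(); // diff successive snapshots for growth
    }
    if (written < total) {
      writeOne();
    } else {
      client.close();
    }
  });
}

writeOne();
```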

spalger (Contributor) commented Feb 7, 2014

@aaronyo Looks like the issue was a mixture of timeouts not being tracked and cleared, and the http Agent not offering a way to clear out the sockets it creates. Version 1.5.1 fixes the issue and includes a test to be sure. Thank you for the report!
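
With that fix, a script like this minimal sketch (host illustrative) should exit promptly once close() is called:

```js
var elasticsearch = require('elasticsearch');

var client = new elasticsearch.Client({ host: 'localhost:9200' });

client.ping({}, function (err) {
  if (err) console.error(err);
  // As of 1.5.1, close() clears pending timeouts and frees the
  // keep-alive sockets, so the process can terminate immediately.
  client.close();
});
```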

spalger closed this as completed Feb 7, 2014

spalger (Contributor) commented Feb 7, 2014

@jmonster I'd like to solve the issue es-js caused in your environment. If you get a chance to replicate the behavior with a shareable script, I'd appreciate it if you'd open another ticket. Thanks!

@jmonster

It's looking like our problems were in the framework we built the app with, but they were somehow simply triggered sooner when using ES.

I'll be sure to follow up if I can recreate it. I haven't fully put this client back into the production server yet, so it may still creep up.

cdituri pushed a commit to cdituri/elasticsearch-js that referenced this issue Jun 14, 2017
Github issue elastic#39 - recover elasticsearch connection after temp ping error
@rash805115

I am using the latest version, ~0.7.5, and I am not able to close the connection using the client.close() method. I am assuming that the keepAlive option is forcing the client to stay open forever, but for testing locally and in the pipeline I need the connection to be closed after the test ends. How should I go about solving this problem?
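
One possible workaround for test runs, as a sketch only -- it assumes the legacy client's documented keepAlive config option (default true), and the host is illustrative:

```js
var elasticsearch = require('elasticsearch');

var client = new elasticsearch.Client({
  host: 'localhost:9200',
  keepAlive: false // don't hold sockets open between requests
});

// after the test suite finishes:
client.close();
```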

@vanthome

A close() method does not exist in the most recent version, 15.2.0, but I think it's needed.
