Limited to 1024 concurrent connections, and looking for suggestions. #300
Comments
Does this server only handle the socket application?
My guess is: your code may be doing something really weird. Try to profile your application in order to see where it's overwriting the adjacent memory. Worst case scenario: bring more nodes into your application and use shared memory (a.k.a. memcached, Riak, etc.) to share the state between the nodes in order to scale (very last option, IMHO) |
Yeah, I thought that seemed low too. The application is a simple chat application, and it's all that's running on the server. When the concurrent connections are under 1024 (where php craps out) the server doesn't get strained. |
Are you storing the messages in memory somehow? May be related. |
Not for this round of tests, just incrementing and decrementing a connection counter. |
Can you profile (xhprof, whatever) and share the reports? I'm interested to see the results! |
Limited to about 50 concurrent connections, and looking for suggestions. Run command:
echo "show info" | socat /tmp/haproxy.sock stdio
Result: |
I found that my init.d/haproxy script was starting haproxy with different options. I changed it to this and it works:
haproxy_start()
{
    $HAPROXY -f "$CONFIG" -D -p "$PIDFILE"
    return 0
} |
@rmmoul I'm having the exact same issue; my chat server fails at 1019 connections. I increased the allowed open files and compiled PHP with the necessary configurations, but it seems that PHP is still not detecting the changed amount. Any luck fixing this? |
@rmmoul I ran into the 1024 limit and did NOT want to compile PHP and run my own version. So to get around the 1024 limit, I ran multiple instances on several ports and had an HTTP endpoint that round-robined the ports to the client. |
@benconnito That's what I've been considering doing as well. I was hoping to keep them all on the same port, though, to avoid needing to set up a redis pub/sub server to keep all of the clients connected. I think you've found the easiest / best solution. @hsvikum I haven't found a way to get around this through recompiling or changing up my server's config options. @benconnito's solution is probably the way to go, though I haven't tried what @lokielse suggested to up the memory limit (his connection limit was super small though). |
Try to Increase Open Files Limit https://rtcamp.com/tutorials/linux/increase-open-files-limit/ |
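To make the linked steps concrete, here is a minimal sketch of checking and raising the per-process open-file limit on Linux. The limits.conf values are illustrative, and note that raising ulimit alone will not push a select()-based loop past 1024:

```shell
# Show the current soft and hard limits on open file descriptors
ulimit -Sn
ulimit -Hn

# Raise the soft limit for this shell only (cannot exceed the hard limit):
#   ulimit -n 10000

# To persist the change, add lines like these (illustrative values) to
# /etc/security/limits.conf, then log in again:
#   www-data  soft  nofile  10000
#   www-data  hard  nofile  10000
```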
I did everything mentioned here on Linux, and my application is limited to 254 connections no matter what. I have optimised PHP and Apache, but still no luck. I also tested on my Windows dev environment; same result. My application is a multi-room chat server... very simple. Any suggestions? |
@Mecanik use an event loop other than the default, which is limited to 1024 open file descriptors: https://github.com/reactphp/event-loop#loop-implementations |
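As a sketch of what that swap involves in practice (the package names are assumptions for Debian-style systems; adjust for your distro): install a libevent-backed extension, and ReactPHP's loop factory will pick it up automatically on the next start instead of falling back to StreamSelectLoop.

```shell
# Install build prerequisites and the libevent headers (assumed package names)
sudo apt-get install -y php-pear php-dev libevent-dev

# Build and install the "event" extension from PECL
sudo pecl install event

# Confirm PHP now loads it; once present, ReactPHP's Factory::create()
# selects ExtEventLoop instead of the select()-based StreamSelectLoop
php -m | grep -i '^event$'
```

Depending on the distro you may still need to add `extension=event.so` to php.ini by hand before the extension shows up.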
Well I do not quite understand how that works ... but I will try. This is how I have it now:
|
Replace that |
Well this messes up my understanding of websocket server :) Thank you anyway, I will see what I can manage. |
@WyriHaximus No luck. The server tries to start, but it automatically stops without any error :( Could please give me a small example ? I will take it from there and try and understand the "loop"... |
@WyriHaximus I still cannot replace my event loop, I tried a lot... can you please give me an example ? I don't know what to do... |
@Mecanik take a look at https://github.com/reactphp/event-loop/blob/master/travis-init.sh; it is used by the event loop component to install loops on Travis for testing. |
@WyriHaximus If you are talking about "pecl event", I already installed it, and it made only a small difference: from 254 connections to 1010, and it gets stuck at 1010. I honestly do not know what to do, and I need this in a production environment. I am using this "chat" example: https://github.com/pmill/php-chat I also increased every limit possible on the server and in PHP. I am using PHP-FPM 7.1 |
I managed to "debug" things, it appears that the extension "ev" is not being detected at all, thus the loop factory is using "StreamSelectLoop". I am trying to install now "event" on Centos 7 with PHP 7.1 but failing so far. |
@Mecanik did you load the .so as an extension in php.ini? When it is loaded in php |
@WyriHaximus Of course I did; still it's not detected :/ Is it because PHP is 7.1? .. |
@WyriHaximus I installed (somehow) libevent-devel and then pecl install event, and now Ratchet starts with ExtEventLoop(). Hopefully my "limit" is gone now 👯‍♂️ |
@WyriHaximus Final result: I have passed the 1024 with "event" because "ev" is NOT detected in PHP 7.1 by Ratchet lib. |
@Mecanik glad to hear 👍 |
@WyriHaximus |
@ChojinDSL I've stopped paying attention to that after a couple thousand |
@kelunik sorry for my stupid questions, I'm new to this. I have php_sockets.so onboard. Which extension will solve my issue? |
Either ev or event from pecl. |
Okay, I've found that my PHP sockets server accepts more than 1024 sockets when I run it as root. I'm starting it using this command.
/etc/php/7.0/fpm/php-fpm.conf
/etc/php/7.0/fpm/pool.d/www.conf
/etc/security/limits.conf
Any suggestions? |
@i3bitcoin In that case your problem is probably |
su www-data --shell /bin/bash --command "ulimit -n" shows it's also increased |
Are there any limits for the sudo command? |
You cannot override the 1024-connection limitation on Linux systems without recompiling the kernel. See this: https://access.redhat.com/solutions/488623 It's far more complex than the ulimit change some comments mentioned. The ulimit command doesn't take effect for select() on sockets when you set values > 1024. The problem is FD_SETSIZE in libc, which is impossible to override; when you try, the process will probably hang. It's undefined behaviour, actually. Some systems such as Solaris accept FD_SETSIZE up to 65536, but vanilla Linux doesn't. The recommendation is to use epoll and libevent alternatives. Further reading on the C10k problem: Hope this helps, |
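To illustrate the FD_SETSIZE point: the constant is baked into glibc at compile time, so no runtime ulimit can lift select()'s cap. A quick inspection on a typical glibc system (header locations vary):

```shell
# Print glibc's compiled-in FD_SETSIZE (typically 1024); grep -s stays
# quiet if the headers live elsewhere, and || true tolerates a miss
grep -rhs 'define __FD_SETSIZE' /usr/include/ | head -n 1 || true

# For comparison, count the descriptors this shell already has open
ls /proc/$$/fd | wc -l
```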
for rhel/centos users see reactphp/event-loop#152 |
Hi, I'm using the socket for a web app. Now that the connections are increasing, the socket hangs more and more often when the 1024 connections are reached; all the others are in the CLOSE_WAIT state. I've already tried ulimit, but nothing has changed. Do you know if there is a parameter to change? |
Possible solution is here |
My solution was to use HHVM instead of PHP for sockets. It doesn't have the 1024-connection limit. |
@i3bitcoin how many connections did you achieve? |
More than 3k connections right now. It's the only solution that worked for me. I believe it's limited only by rlimit. |
@i3bitcoin is there anyway i can contact you in personal about setting HHVM with ratchet chat? Im stuck with same 1024 connection limit. |
@josephmiller2000, possibly this post may help you: #328 (comment) The main quick solution is to use any other event loop library instead of the default |
@inri13666 Well, I'm using "event.so" and tested with this method. "Ev" is not detected by PHP, so right now I'm using "event.so" instead of StreamSelectLoop. I increased all server-side limits and php-fpm limits, but still can't achieve more than 1024 at my peak time. Users are in CLOSE_WAIT (socket) state when they are connected to the chat. So I planned to move to HHVM instead of basic PHP. |
Easiest solution to this is to install |
@WyriHaximus Thanks for the comment. I can successfully install "event", but cannot install "ext-uv". I end up getting this error, |
OK, could you please share the result?
for my configuration it's
|
Here you go @inri13666
|
@josephmiller2000, I'm running the socket server behind nginx. nginx.conf:
default.conf:
|
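For readers setting up a similar front end: a minimal nginx location block for proxying WebSocket traffic to a local Ratchet instance looks roughly like this (the port and path are assumptions, not taken from the posts above):

```nginx
location /ws/ {
    proxy_pass http://127.0.0.1:8080;        # assumed Ratchet bind address
    proxy_http_version 1.1;                  # required for the Upgrade handshake
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;                # keep idle sockets open longer
}
```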
Did you check |
With a CentOS box with 2GB RAM, uv installed, and a Node socket client sending connections from another machine at my company, we are reaching 20k connections. Node client => https://github.com/jupitern/node-socket-client |
Just got hit with this and was able to eventually work around it. I wanted to share what I went through in case it helps someone else down the line, because it took me two frustrating days with angry clients to resolve completely.

For reference, we're running Ratchet with an Apache 2.4 reverse proxy on PHP 7.0, all on Ubuntu 16.04. The Ratchet script is kept running by a supervisor task, ensuring that it restarts if it ever crashes. The script is pretty straightforward; it interacts with an API on connection or when receiving certain messages, and contains a timer to hit the API for some data to send to specific clients (maintained by a user -> client map). Ratchet was maxing out at around 500 connections when we started.

First thing we noticed was Apache redlining both cores of the server. Ideally we'd move to better server software like nginx, but our app currently prevents that. We also have to use a reverse proxy for SSL. We tried to use the underlying React library to run a WSS server directly without needing Apache/nginx, but weren't able to get it working correctly. Bumping the server up to 4 cores gave enough resources to run Apache comfortably.

From there we noticed that we'd still get 500 errors periodically, and some investigation into Apache revealed that it was tuned poorly and would cap out at a few hundred concurrent connections. Since the websockets count as connections, they would quickly eat up available threads and prevent Apache from serving other traffic (other PHP scripts and static content). We were already using mpm_event, and updated our config to the following:
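The poster's actual values are not shown above; purely as a generic illustration (numbers are hypothetical, and should be sized to your RAM and traffic), an Apache mpm_event tuning block looks like:

```apache
# /etc/apache2/mods-available/mpm_event.conf (illustrative values)
<IfModule mpm_event_module>
    ServerLimit             16
    StartServers            4
    ThreadLimit             64
    ThreadsPerChild         64
    MaxRequestWorkers       1024   # ServerLimit * ThreadsPerChild
    MaxConnectionsPerChild  0
</IfModule>
```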
Stress testing the server after this showed we could comfortably maintain thousands of requests a minute without any issue, which is well over what we needed to serve. From there, we noticed that while Apache was running fine, the Ratchet script was now redlining with only a few hundred connections. Various searching led to the well-documented
Running a second instance of the Ratchet script that would initialize and then execute
We attempted to connect directly to the Ratchet script from the server itself (i.e., bypassing Apache) to see if we could connect.
This would also hang and then fail. When we restarted the Ratchet script, we could use the above to connect immediately, but once it started redlining we could not. This indicated that Apache was fine, and the limit was on the Ratchet script. We updated the script to output the number of connected clients on tick and restarted, which would get to
and saw that it was soft limited to 1024 soft / 4096 hard max open files. Updating this with
and checking the log verified that once these limits were raised, we were able to handle an additional several thousand connections, after which we could still connect via a browser to our app or via the cURL request above with no issue. We figured this was a user-limit issue (the script does not run as the webuser) and updated
Restarting verified that the limits were maintained on the Ratchet script. The Ratchet script is now handling ~2,500 connections and using about 10% of one core, with small spikes here and there (mainly on client connection, as we have to decrypt connection data). I imagine that the redlining occurs when Ratchet basically deadlocks waiting on a file handle that can't be created, but I haven't been able to verify this yet. It would explain the vast performance decrease once those connections are able to be properly created and maintained. |
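The limit checks described in this comment can be reproduced along these lines (the current shell's PID stands in for the Ratchet script's PID; the prlimit and curl commands are left commented since they need root or a running server):

```shell
# Inspect the soft/hard open-file limits of a running process
grep 'Max open files' /proc/$$/limits

# Raise the limits of a live process without restarting it
# (util-linux prlimit; PID and values are illustrative, usually needs root):
#   sudo prlimit --pid <ratchet-pid> --nofile=10000:10000

# Probe the Ratchet process directly, bypassing the proxy; a healthy server
# answers "101 Switching Protocols" (host, port, and key are assumptions):
#   curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" \
#        -H "Sec-WebSocket-Version: 13" \
#        -H "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" \
#        http://127.0.0.1:8080/
```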
I had an experience which may help somebody. |
I had that problem with ReactPHP; the cause goes deeper. Its nature lies in PHP's methods of servicing socket events. The code was rewritten in C++ using epoll instead of select. |
I started a project using ratchet, and wanted to test the number of connections that could be handled at one time on our server (Digital Ocean Ubuntu 14.04, 2 cores, 4GB ram running php 5.6.7 and apache2 2.4.7).
I followed some of the suggestions here on the deploy page http://socketo.me/docs/deploy to help increase the number of connections that could be handled, and used ulimit and such to raise the number of open files to 10,000.
I started running tests today using thor (https://github.com/observing/thor):
I got a php error when the number of connections exceeded 1024:
I was actually using php 5.5.9 at the time, so I followed some old instructions from http://ubuntuforums.org/archive/index.php/t-2130554.html and increased the FD_SETSIZE value to 10000 in the following two files and then downloaded and compiled php 5.6.7.
That coupled with using this command to run the server through supervisor:
Seems to have allowed the number of connections to go beyond 1024, but now it causes a buffer overflow within php, showing this error in the log file before restarting the process:
I'm curious how other users are getting beyond 1024 concurrent connections, whether some of you have never hit this limit at all (could you share your environment details), or made certain changes to get beyond it (could you share what changes you've made)?
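Since supervisor comes up several times in this thread: a supervisord program section for keeping a Ratchet script alive generally looks like this (all paths and names are hypothetical):

```ini
[program:ratchet]
; hypothetical paths; adjust to your installation
command=/usr/bin/php /var/www/bin/chat-server.php
user=www-data
autostart=true
autorestart=true
stdout_logfile=/var/log/ratchet.out.log
stderr_logfile=/var/log/ratchet.err.log
```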