Hello everyone,

I recently changed the OS and am stuck in an odd situation.

On the old one (Ubuntu 10.04) I never saw more than one index.cgi running (the website still sometimes started lagging with many clients, but the CPU never went over 10-20%). After a few adjustments to apache2.conf it started working correctly.

Now the OS is CentOS (6 final, I think). The website started lagging at some point, so as usual I went to httpd.conf and made a few adjustments to the server settings, turned KeepAlive off, and applied other suggestions I picked up from the forum. However, all of that fixed the issue for only 1-2 hours. In a matter of seconds the CPU goes from 20% to 90%, and index.cgi builds up to over 160 processes (~half of them defunct, but still eating CPU).

Has anyone been stuck in a similar situation? And apart from httpd.conf, are there other tweaks to be done?
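For anyone hitting the same pile-up, here is a quick diagnostic sketch (assuming a standard Linux `ps`) to count the running index.cgi processes and the defunct ones:

```shell
# Count running index.cgi processes.
# The [i] in the pattern keeps grep from matching its own process line.
ps aux | grep -c '[i]ndex\.cgi'

# Count defunct (zombie) processes: state code Z in the STAT column.
ps -eo stat,comm | awk '$1 ~ /^Z/' | wc -l
```

Running these in a loop (e.g. with `watch`) shows whether the build-up is gradual or happens in bursts.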
XFileSharing Pro - Ubuntu != CentOS
- Posts: 52
- Joined: Nov 26, 2012 9:58 am
If you run into a problem you can't fix, a reboot is always something you should try; otherwise the problem may never get fixed. Besides, what's a few minutes of downtime for a reboot, or a few hours for a reinstall? Just have a backup web server take its place while the OS is reinstalling.

Stefanzm wrote: Reinstalling the OS is out of the question. That would mean a few hours of downtime. As for a hard reboot, I don't see the reason for it; there are ways to restart Apache or any service without actually rebooting the system.
Yes, but it does help: if it's a small problem with Perl, he'll know how to fix it himself instead of hiring someone else to fix it. And if he knows the language, he can skim through the code to see whether it's a bug, then fix it or work around it.

chrda wrote: You don't need Perl skills to run the script out of the box. In most cases it works fine out of the box.
With around 4000 visitors/hour, even a few minutes of downtime is unacceptable. The only time I allow downtime is when moving the entire web server between countries (e.g. a DNS change), and even then the downtime is ~15-20 minutes.

So far your idea was to reboot and reinstall the OS, and chrda's idea was to hire a part-time administrator. Since I don't need an administrator for one issue only, I'm going to skip that suggestion. I'll skip the reinstall too, unless I decide to change the entire server machine.

So my question stands: has anyone had an issue where index.cgi builds up to 100+ running processes?

The web server runs Plesk, and FastCGI support is enabled there (not sure that matters at all, but still).
So you're saying you won't even try the simplest way of fixing a problem? Facebook was like you, until they realized it's impossible not to try every possible way to fix a problem.

Stefanzm wrote: With around 4000 visitors/hour, even a few minutes of downtime is unacceptable. The only time I allow downtime is when moving the entire web server between countries (e.g. a DNS change), and even then the downtime is ~15-20 minutes. So far your idea was to reboot and reinstall the OS, and chrda's idea was to hire a part-time administrator. Since I don't need an administrator for one issue only, I'm going to skip that suggestion. I'll skip the reinstall too, unless I decide to change the entire server machine. So my question stands: has anyone had an issue where index.cgi builds up to 100+ running processes? The web server runs Plesk, and FastCGI support is enabled there (not sure that matters at all, but still).

Besides, what's the point in anyone even trying to help you if you're going to throw the possible solutions back in their faces? Seriously, if you don't want it fixed, or won't try fixing it, at least have the decency to say so.
Hello ufkabakan,

Here is the atop screenshot:
http://postimage.org/image/utbs914gn/full/

As you can see, SQL itself is only at 2%; the entire load on the server comes from the CGIs.

I have tried nginx, since it's built into Plesk 11, but it's the same story. Actually, with nginx I get a 500 error on the server, with the following message:

2012/12/10 16:10:37 [alert] 44697#0: *46963097 socket() failed (24: Too many open files) while connecting to upstream, client: ******, server: example.org, request: "$
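That "Too many open files" alert means the nginx process hit its file-descriptor limit. One way to confirm is to inspect the limit the running process actually has under /proc (the PID 44697 below is taken from the log line above; substitute your own worker's PID):

```shell
# Show the open-file limit of a running process
grep 'open files' /proc/44697/limits

# Count how many descriptors it currently holds
ls /proc/44697/fd | wc -l
```

If the descriptor count is close to the "Max open files" value, the limit is the bottleneck rather than nginx itself.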
You have IO blocking somewhere; maybe MySQL isn't responding fast enough, hence all the open index_dl processes.

Otherwise, check that you're not hitting the open-file limit on your OS:

ulimit -Hn
ulimit -Sn

One of my most recent boxes with CentOS on it has a limit of 1024, which is very low. You'll have to increase this in the OS (the system-wide cap via sysctl, the per-process limit via /etc/security/limits.conf) and in the web server you're using.
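To make that concrete, here is a sketch of checking and raising the limits on CentOS (the 65536 values are illustrative, not a recommendation):

```shell
# Current per-process limits (what Apache/nginx inherit from their environment)
ulimit -Hn
ulimit -Sn

# System-wide ceiling on open files (this is what sysctl's fs.file-max controls)
cat /proc/sys/fs/file-max

# Persistent per-user increase: add lines like these to /etc/security/limits.conf
#   *  soft  nofile  65536
#   *  hard  nofile  65536

# nginx can also raise its own limit in nginx.conf:
#   worker_rlimit_nofile 65536;
```

Changes to limits.conf take effect on the next login session (via pam_limits), so services need a restart to pick them up.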