XFileSharing Pro - NGINX: High disk usage

nova
Posts: 42
Joined: Nov 21, 2011 7:07 pm

NGINX: High disk usage

#1 Post by nova » Apr 17, 2012 4:13 pm

I have a server on a 1 Gbit port that cannot surpass 400 Mbps due to high disk usage by NGINX.

The server has multiple disks in RAID5 and 8 GB of RAM, so I don't think it is a hardware problem.

Do you have any suggestions for NGINX (and system) configuration to fully use a 1 Gbps connection? Thanks.

atop results:
PRC | sys 0.85s | | user 0.11s | | | | #proc 244 | | #zombie 0 | | clones 0 | | | | #exit 0 |
CPU | sys 8% | | user 1% | irq 3% | | | | idle 431% | | wait 357% | | | steal 0% | | guest 0% |
cpu | sys 6% | | user 0% | irq 0% | | | | idle 0% | | cpu001 w 94% | | | steal 0% | | guest 0% |
cpu | sys 1% | | user 0% | irq 0% | | | | idle 0% | | cpu006 w 99% | | | steal 0% | | guest 0% |
cpu | sys 2% | | user 1% | irq 3% | | | | idle 12% | | cpu003 w 83% | | | steal 0% | | guest 0% |
CPL | avg1 13.84 | | avg5 13.86 | | avg15 12.86 | | | | csw 11707 | | intr 49033 | | | | numcpu 8 |
MEM | tot 7.7G | | free 153.0M | cache 6.7G | | dirty 680.2M | buff 3.1M | | slab 100.9M | | | | | | |
SWP | tot 2.0G | | free 2.0G | | | | | | | | | | vmcom 760.9M | | vmlim 5.8G |
PAG | scan 69024 | | | stall 0 | | | | | | | swin 0 | | | | swout 0 |
DSK | sdb | | busy 99% | read 1028 | | write 6 | KiB/r 192 | | KiB/w 256 | MBr/s 19.35 | | MBw/s 0.15 | avq 153.47 | | avio 9.67 ms |
DSK | sda | | busy 9% | read 0 | | write 11 | KiB/r 0 | | KiB/w 5 | MBr/s 0.00 | | MBw/s 0.01 | avq 1.49 | | avio 84.2 ms |
NET | transport | tcpi 106141 | | tcpo 32025 | udpi 0 | udpo 0 | tcpao 0 | | tcppo 133 | tcprs 1578 | tcpie 0 | tcpor 1 | | udpnp 0 | udpip 0 |
NET | network | | ipi 106140 | ipo 33603 | | ipfrw 0 | deliv 106140 | | | | | | icmpi 0 | | icmpo 0 |
NET | eth1 25% | pcki 106140 | | pcko 212103 | si 5260 Kbps | | so 251 Mbps | coll 0 | mlti 0 | | erri 0 | erro 0 | | drpi 0 | drpo 0 |

PID RUID EUID THR SYSCPU USRCPU VGROW RGROW RDDSK WRDSK ST EXC S CPUNR CPU CMD 1/1
16295 nobody nobody 1 0.07s 0.01s 0K 0K 15828K 0K -- - S 3 1% nginx
16287 nobody nobody 1 0.07s 0.01s 0K 0K 18936K 0K -- - D 1 1% nginx
98 root root 1 0.07s 0.00s 0K 0K 0K 58156K -- - S 6 1% kswapd0
16296 nobody nobody 1 0.05s 0.01s 0K 0K 10872K 0K -- - D 3 1% nginx
16292 nobody nobody 1 0.05s 0.01s 0K 0K 12912K 0K -- - R 3 1% nginx
16286 nobody nobody 1 0.06s 0.00s 0K 0K 16128K 0K -- - D 1 1% nginx
16285 nobody nobody 1 0.05s 0.01s 0K 0K 10440K 0K -- - S 3 1% nginx
16283 nobody nobody 1 0.05s 0.01s 0K 0K 19032K 0K -- - S 3 1% nginx
16290 nobody nobody 1 0.05s 0.01s 0K 0K 9856K 0K -- - D 1 1% nginx
16294 nobody nobody 1 0.04s 0.01s 0K 0K 11392K 4K -- - S 3 0% nginx
16297 nobody nobody 1 0.05s 0.00s 0K 0K 14848K 0K -- - D 1 0% nginx
16288 nobody nobody 1 0.05s 0.00s 0K 0K 14340K 0K -- - D 1 0% nginx
16291 nobody nobody 1 0.05s 0.00s 0K 0K 10856K 0K -- - S 3 0% nginx
16289 nobody nobody 1 0.04s 0.01s 0K 0K 8192K 0K -- - S 3 0% nginx
16298 nobody nobody 1 0.04s 0.00s 0K 0K 15464K 0K -- - D 3 0% nginx
16293 nobody nobody 1 0.04s 0.00s -1024K -896K 9088K 0K -- - D 3 0% nginx
17245 root root 1 0.02s 0.02s 0K 0K 0K 0K -- - R 1 0% atop
1559 root root 1 0.00s 0.00s 0K 0K 0K 0K -- - S 7 0% snmpd
1388 root root 1 0.00s 0.00s 0K 0K 0K 0K -- - S 4 0% irqbalance
408 root root 1 0.00s 0.00s 0K 0K 0K 4K -- - D 3 0% jbd2/sda4-8
1006 root root 1 0.00s 0.00s 0K 0K 0K 0K -- - D 5 0% jbd2/sdb1-8


hostlife
Posts: 194
Joined: Aug 13, 2011 12:34 pm

#2 Post by hostlife » Apr 17, 2012 5:58 pm

I am facing this issue during peak hours.

How many drives do you have in your server?

hdmagic
Posts: 14
Joined: Feb 14, 2012 6:14 pm

#3 Post by hdmagic » Apr 17, 2012 6:00 pm

Have you checked the disks' SMART status? A failing drive can kill throughput long before it dies.
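
Something like this shows it (smartmontools; /dev/sdb here is just an example):

smartctl -a /dev/sdb
# look at Reallocated_Sector_Ct and Current_Pending_Sector; rising raw
# values mean the drive is failing and will drag your throughput down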

nova
Posts: 42
Joined: Nov 21, 2011 7:07 pm

#4 Post by nova » Apr 17, 2012 6:26 pm

In this particular server I have 3x 3 TB HDDs and I can't surpass 400 Mbps.

But I've noticed lately, during peak hours, that I'm seeing > 90% disk usage on another, more powerful server with 12x 1 TB HDDs. That server used to max out a 1 Gbps connection; now it can't reach 900 Mbps. Could this be caused by disk fragmentation? How can I defragment on Linux (ext4)?
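
For what it's worth, this is what I found so far for checking it (assuming e2fsprogs 1.42+, which ships e4defrag; the paths are just examples):

filefrag -v /files/somebigfile.bin   # extent count: more extents = more seeks
e4defrag -c /dev/sdb1                # only report a fragmentation score
e4defrag /files                      # online defrag of a directory tree on ext4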

chrda
Posts: 296
Joined: Sep 14, 2009 7:16 pm

#5 Post by chrda » Apr 18, 2012 12:25 am

Random I/O is your enemy.

RAID5 with 3 disks isn't much.

You can try putting more memory into the server and tuning nginx to use the memory better; it might give you a few extra Mbit.
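
Just a sketch of the relevant nginx directives; the numbers are guesses and have to be tuned for your own files and traffic:

sendfile        on;      # kernel copies file -> socket, no userspace buffering
tcp_nopush      on;      # send full packets
output_buffers  1 512k;  # bigger reads mean fewer disk requests per download
directio        4m;      # files over 4 MB bypass the page cache
read_ahead      512k;    # ask the kernel to prefetch ahead of each read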

I had one server with 12x 2 TB in RAID10 with 24 GB RAM.
It handled around 900+ Mbit at peak, with a lot of small files (a lot of random I/O).

But every file host is different: small files, big files, video streaming, mixed, etc.

ufkabakan
Posts: 332
Joined: Apr 13, 2011 9:37 pm

#6 Post by ufkabakan » Apr 18, 2012 5:36 pm

Can you give me more info about your system and hardware?
Do you have hardware RAID5? What is your RAID controller?

I have 12x 2 TB with hardware RAID5 + 2x E5630 CPUs + 16 GB RAM, and I can handle a full 1 Gbit in peak hours.

But RAID5 is not a good solution; I lost one disk and my server was as slow as a turtle during the rebuild.

Jaychew
Posts: 28
Joined: Feb 25, 2012 1:27 am

#7 Post by Jaychew » Apr 18, 2012 7:14 pm

ufkabakan wrote: I have 12x 2 TB with hardware RAID5 + 2x E5630 CPUs + 16 GB RAM, and I can handle a full 1 Gbit in peak hours.
Hello ufkabakan, where can I buy this server?

ufkabakan
Posts: 332
Joined: Apr 13, 2011 9:37 pm

#8 Post by ufkabakan » Apr 19, 2012 1:06 pm

Jaychew wrote:
ufkabakan wrote: I have 12x 2 TB with hardware RAID5 + 2x E5630 CPUs + 16 GB RAM, and I can handle a full 1 Gbit in peak hours.
Hello ufkabakan, where can I buy this server?
From a Leaseweb reseller, with our own hardware. You need 800-900€ for setup (the price can change depending on hardware stock) and 1100-1200€ per month.

sherayusuf3
Posts: 94
Joined: Jan 18, 2009 4:29 am

#9 Post by sherayusuf3 » May 06, 2012 6:31 pm

I'm sure this is an I/O problem. Please check iowait and TPS per device: calculate the IOPS your hard drives can deliver and compare that with the output of the iostat command (tps).
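
For example (sysstat's iostat; sdb here is just an example):

iostat -dxk sdb 5   # extended per-device stats, refreshed every 5 seconds
# compare r/s + w/s with what the spindles can deliver: a 7200 rpm drive
# does roughly 75-150 random IOPS, so a 3-disk array cannot come close
# to the ~1000 reads/s shown in the atop output above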

Sequential access is fine; random access is the big enemy for a hard drive.

I'm using tmpfs / ramfs / a ramdisk to handle 3000-4000 simultaneous connections, around 1.5 Gbps of traffic.
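
Roughly like this (the size and path are examples, not my real setup):

mount -t tmpfs -o size=4g tmpfs /var/www/hot
# copy the most-downloaded files here and let nginx serve them from RAM;
# tmpfs is wiped on reboot, so treat it as a cache, not as storage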

benster
Posts: 7
Joined: Jan 17, 2012 4:47 pm

#10 Post by benster » May 08, 2012 11:07 am

The problem is really simple: you have too many requests to the disk.
DSK | sdb | | busy 99% | read 1028 | | write 6 | KiB/r 192 | | KiB/w 256 | MBr/s 19.35 | | MBw/s 0.15 | avq 153.47 | | avio 9.67 ms |
That's 1028 read requests every second; the disks can't handle that many requests (with your configuration), so you get lower throughput from the disk. That's also why your avq (average queue length) is so high.
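
As a rough sanity check with your own numbers:

1028 reads/s x 9.67 ms avio ≈ 9.9 s of disk service time demanded per second

So you would need roughly ten independent spindles to keep up, and a 3-disk RAID5 gives you at most three. The excess requests just pile up in the queue, which is why avq sits around 153.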