Post
by NexusBender » 23 Aug 2018, 14:17
I am testing / have largely switched to NZBGet from Sabnzbd, partly because I was seeking better performance. When I first switched, I didn't really see any difference in throughput, although CPU usage was lower. I was still capping out at around 50MB/s. My setup includes a high-speed storage caching tier, so all recent writes and reads effectively go through an SSD cache, so I didn't feel I was storage limited at first.
Eventually, I came to realize 2 things were happening in my case:
1) Initially, storage performance wasn't the problem. More or less any SSD can handle a single source of 100MB/s traffic. But what I realized was that I'd see a slowdown when the decode operations started, because now I had multiple parallel ops hitting storage.
2) I was using network storage, with everything virtualized. Even though it was all connected through enterprise-grade managed switches, I realized I was sharing a NIC with other VMs, and that one NIC was also handling both storage and regular communications. The same can be true on a physical machine, unless you've gone to the effort of setting up multiple NICs, which I assume most non-IT, non-network people probably don't do.
Fortunately, there are mitigations for both of those. First, and this one is easy: I configured NZBGet to pause downloads during unpacks. This prevents disk contention issues, which I would strongly suggest you check into even if you aren't hitting memory/CPU limits. The thing is, for write ops the cache isn't going to help as much, and you are probably running into IOPS limitations in storage when you have combined read/write operations happening, especially if you are running hard disks. As it turns out, things often end up more performant if you can do high-speed serial operations instead of trying to do everything in parallel.
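In case it's useful, here's a quick way to confirm that setting without opening the web UI. This is only a minimal sketch: it assumes NZBGet's JSON-RPC API on the default port 6789 with the default control credentials, and that the option behind this behaviour is named UnpackPauseQueue; verify the option name and credentials against your own install (or just flip it under the unpack settings in the web UI):

```python
# Minimal sketch: read the current unpack-pause setting over NZBGet's JSON-RPC API.
# Assumptions (verify for your install): default port 6789, default control
# credentials nzbget/tegbzn6789, and the option name "UnpackPauseQueue".
import requests

RPC_URL = "http://nzbget:tegbzn6789@localhost:6789/jsonrpc"

# The "config" method returns the loaded options as a list of {Name, Value} entries.
resp = requests.post(RPC_URL, json={"method": "config", "params": [], "id": 1})
options = {opt["Name"]: opt["Value"] for opt in resp.json()["result"]}

# "yes" means the download queue pauses while an archive is being unpacked,
# so storage only has to service one heavy operation at a time.
print("UnpackPauseQueue =", options.get("UnpackPauseQueue"))
```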
Second, in my case, I moved communications to the NAS storage onto an internal virtual network so that it crossed 10Gbps adapters (these are actually virtualized in-memory transfers). But anyway, the point is: if you have NAS storage, moving storage data transfers to a different network adapter may make a huge difference. Think about what is happening (and this is the parallel thing again):
1) You start a download... it comes into NZBGet at, say, 900Mbit. No problem so far. It has to save to the NAS, so you're writing 900Mbit out the same network adapter. Also no problem, generally; you've typically got gigabit in both directions.
2) The download completes and the next one starts, trying to pull 900Mbit... here's where your problems start. It might also be trying to unpack the first transfer, so now you are reading from and writing to the NAS as well. That means you potentially have 2 read and 2 write ops happening as fast as they can (read from the usenet server, read from the NAS for the unpack, write the usenet download to the NAS, and write the unpacked files back to the NAS). Now you've saturated your gigabit connection: two things are trying to do gigabit reads and gigabit writes, but you only have 1Gbit in each direction. That usually turns into splitting the traffic roughly equally, so you end up with something like 50MB/s from the news server and 50MB/s to the NAS (see the rough math after this list). So if you can put storage traffic on another NIC entirely, it may help a ton. (This largely only applies if NZBGet isn't running on your NAS, but even then, you could be saturating your disks, especially if you only have 1 or 2 spindles.)
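To put rough numbers on that split, here's the back-of-envelope math (illustrative only; it ignores protocol overhead and assumes the two flows share the link roughly evenly):

```python
# Back-of-envelope math for two flows sharing one gigabit NIC in the same direction.
LINE_RATE_MBPS = 1000 / 8  # 1 Gbit/s is roughly 125 MB/s of payload, before overhead

# Flow 1: the new usenet download being written out to the NAS.
# Flow 2: the unpack of the previous download, read from and written back to the NAS.
flows_per_direction = 2

per_flow = LINE_RATE_MBPS / flows_per_direction
print(f"Each flow gets at best ~{per_flow:.0f} MB/s")  # ~62 MB/s theoretical ceiling
print("With protocol overhead and contention you land closer to the ~50 MB/s described above.")
```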
When I moved my storage traffic to a separate network and stopped hitting my storage system with both ops at once, I started being able to hit 80-90MB/sec on my transfers. I think the separate network for storage traffic made the larger difference, but I haven't turned pause-for-unpack off again to check, since I rather like having the unpacks happen as fast as possible too; I'd rather pause the download for a few seconds so the unpack ops finish much faster.
Hope that helps!