Performing speed tests

Discuss newly added features or request new features.
hugbug
Developer & Admin
Posts: 7645
Joined: 09 Sep 2008, 11:58
Location: Germany

Performing speed tests

Post by hugbug » 28 Oct 2016, 20:18

One of the new things in v18 is a built-in NNTP server (I call it NServ), primarily aimed at assisting with functional testing during development.

This NNTP server is, however, also suitable for download speed tests independent of connection line speed, for example to determine hardware capabilities.

Here's how it works: you start NZBGet in a special mode as an NNTP server, on the same or another computer. In NZBGet on the testing computer you register a new news server in the settings, with the server address set to localhost or the IP of the computer running the NNTP server. When you put files into the data directory of the NNTP server, it generates an nzb-file which you add to NZBGet for downloading. Then you observe the download speed in NZBGet.
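The server-side part of the steps above can be sketched as follows (a hedged example; the flags shown appear later in this thread, but the directory names and segment size are illustrative, so check the wiki guide for the authoritative options):

```shell
# Start NZBGet in NServ mode; files placed in content-dir/ are served as test articles.
# -d  data directory served as articles    -c  cache directory for generated article data
# -z  article (segment) size in bytes      -p  NNTP port to listen on
# -v 1  reduced logging (heavy terminal output can itself cost CPU)
./nzbget --nserv -v 1 -d content-dir/ -c cache-dir/ -z 500000 -p 1190
```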

Please see the detailed guide in the wiki - Performing speed tests.

Your test results or any questions are welcome in this thread.

sanderj
Posts: 184
Joined: 10 Feb 2014, 21:46

Re: Performing speed tests

Post by sanderj » 22 Dec 2016, 20:37

On a low-spec laptop (Celeron N2840, eMMC 'disk') running Linux: 400MB takes 23 seconds, so 17.4 MB/s
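The reported rate is just the transfer size divided by the elapsed time; a quick sanity check (assuming awk is available, as on any Linux box):

```shell
# 400 MB transferred in 23 seconds; print the rate to one decimal place.
awk 'BEGIN { printf "%.1f MB/s\n", 400 / 23 }'   # prints "17.4 MB/s"
```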

Commands used:

Code: Select all

cd nzbget18/nzbget/
mkdir content-dir
mkdir cache-dir
fallocate -l 400M content-dir/400M-bin.bin
./nzbget --nserv -d content-dir/ -c cache-dir/ -z 500000 -p 1190

NZBGet client with 12 connections. I let the client download twice (to create cache info in the cache-dir) and only measured the second download.
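The matching client-side news-server entry might look like this (a sketch only; the option names follow NZBGet's usual Server1.* scheme, and the host, port, and connection count are taken from the test above, so adjust them to your own setup):

```
# News-server entry on the testing machine (Settings -> News-Servers, or nzbget.conf)
Server1.Host=localhost       # or the IP of the machine running NServ
Server1.Port=1190            # matches "-p 1190" in the NServ command above
Server1.Connections=12       # connection count used in this test
Server1.Encryption=no
```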
Last edited by sanderj on 23 Dec 2016, 19:13, edited 1 time in total.

hugbug
Developer & Admin
Posts: 7645
Joined: 09 Sep 2008, 11:58
Location: Germany

Re: Performing speed tests

Post by hugbug » 22 Dec 2016, 23:12

Thanks for testing.

Was the CPU fully used or was the disk the bottleneck?

A few tips:
- use the option "-v 1" with NServ; it limits logging. With default logging too much text is printed, and that may increase the CPU usage of the terminal;
- one 400 MB file isn't that realistic: on the one hand it's too big for a single rar-file, and on the other hand it's too small as a whole job, since it may fit entirely into the article cache (if enabled). Better to use multiple files of 100 MB each, preferably several GB in total, if you have enough disk space. I recommend using real files. A sample filled with nulls isn't good; for example, SSD controllers use compression. That probably doesn't apply to your eMMC, merely a general tip for others.

sanderj
Posts: 184
Joined: 10 Feb 2014, 21:46

Re: Performing speed tests

Post by sanderj » 23 Dec 2016, 19:58

New measurement:

Core i3 laptop (an old one, from 2011) with 4 GB RAM and an SSD, running Linux
10 files of 100 MB of random bytes each, created with the script below

Result: 1024 MB in 19 seconds, so 54 MB/s (on the second run, so with the cache-dir filled)

CPU cores and threads were at 80%

Code: Select all

#!/bin/sh
# Create 10 files (sample--1000.txt .. sample--1009.txt) of ~100 MB random data each

a=1000

while [ $a -lt 1010 ]
do
   echo $a
   openssl rand -out sample--$a.txt 105000111
   a=$((a + 1))
done
