nzbget 14.0-testing-r1103



Post by hugbug » 22 Aug 2014, 19:44


Changes since nzbget 14.0-testing-r1070
  • improvements in quick par verification:
    • damaged (partially downloaded) files are now also verified quickly;
    • disabled block-by-block scan during par verification because: 1) it could cause incorrect verification results for certain kinds of damaged files; 2) after implementing quick scan for damaged files, the block-by-block scan was no longer necessary;
    • when quick par verification is active, repaired files are not verified again, to save time; the only remaining cause of incorrect files after repair would be hardware errors (memory, disk), which is not something NZBGet should handle;
  • integrated par2-module (libpar2) into NZBGet’s source code tree:
    • the par2-module is now built automatically when building NZBGet;
    • this eliminates the dependency on external libpar2 and libsigc++...
    • ...making it much easier for users to compile NZBGet without patching libpar2;
    • for more info see forum topic [New Feature] Integrated par2-module;
  • added support for detection of bad downloads (fakes, etc.):
    • queue-scripts are now called after every downloaded file of an nzb;
    • new events "FILE_DOWNLOADED" and "NZB_DOWNLOADED" of parameter "NZBNA_EVENT";
    • FILE_DOWNLOADED is fired after each downloaded file (part of nzb);
    • NZB_DOWNLOADED is fired after all files of nzb are downloaded (before unpack);
    • the execution of queue-scripts is serialized: only one script runs at a time while other scripts wait in a script-queue; the script-queue is compressed so that the same script for the same event is not queued more than once; this reduces the number of script calls when files are downloaded faster than queue-scripts can process them; a call for event "NZB_DOWNLOADED" is always performed, even if previous calls for "FILE_DOWNLOADED" events were skipped;
    • queue-scripts have a chance to detect bad downloads when the download is in progress and cancel bad downloads by printing a special command;
    • downloads marked as bad receive the status "FAILURE/BAD" and are treated by the program as failures (triggering duplicate handling);
    • scripts executed thereafter see the new status and can react accordingly (inform an indexer or a third-party automation tool);
    • new env. var "NZBNA_DIRECTORY" passed to queue scripts;
    • when a script marks an nzb as bad, the nzb is deleted from the queue; no further internal post-processing (par, unrar, etc.) is performed for the nzb, but all post-processing scripts are still executed;
    • if option "DeleteCleanupDisk" is active the already downloaded files are deleted;
    • new status "BAD" for field "DeleteStatus" of nzb-item in RPC-method "history";
    • queue-scripts can set post-processing parameters by printing a special command, just like post-processing-scripts do;
    • this simplifies transferring small amounts of information between queue-scripts and post-processing-scripts;
    • scripts supporting two modes (post-processing-mode and queue-mode) are now executed if selected in post-processing parameters: either via options "PostScript" and "CategoryX.PostScript" or manually on the "Postprocess" page of the download details dialog in the web-interface;
    • it is not necessary to select dual-mode scripts in option "QueueScript"; this provides more flexibility: the scripts can be selected per category or activated/deactivated for each nzb individually;
    • added option "EventInterval", which allows reducing the number of queue-script calls; this can be useful on slow systems;
    • For more info see forum topics:
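    • the event handling above can be sketched as a minimal queue-script; the environment variables "NZBNA_EVENT" and "NZBNA_DIRECTORY" are the ones described above, but the exact "[NZB] ..." command syntax printed to stdout is an assumption modelled on NZBGet's pp-script conventions, and looks_fake is a hypothetical helper:

```python
#!/usr/bin/env python
# Minimal queue-script sketch. NZBNA_EVENT and NZBNA_DIRECTORY are the
# documented environment variables; the "[NZB] ..." commands printed to
# stdout are assumptions based on NZBGet's pp-script command conventions.
import os


def looks_fake(directory):
    # Placeholder heuristic; real detection logic (e.g. inspecting rar
    # headers of the files downloaded so far) goes here.
    return False


def main():
    event = os.environ.get('NZBNA_EVENT', '')
    directory = os.environ.get('NZBNA_DIRECTORY', '')

    if event == 'FILE_DOWNLOADED':
        # Fired after each downloaded file; a chance to cancel bad downloads
        # while the download is still in progress.
        if looks_fake(directory):
            # Assumed special command to mark the nzb as bad (FAILURE/BAD).
            print('[NZB] MARK=BAD')
    elif event == 'NZB_DOWNLOADED':
        # Fired once after all files are downloaded (before unpack).
        # Assumed syntax for passing a pp-parameter to pp-scripts.
        print('[NZB] NZBPR_MYCHECK=done')


if __name__ == '__main__':
    main()
```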
  • the list of scripts (pp-scripts, queue-scripts, etc.) is now read once on program start instead of every time a script is executed:
    • this eliminates unnecessary disk access;
    • the settings page of the web-interface loads the available scripts every time the page is shown;
    • this allows configuring newly added scripts without restarting the program first (just as before); a restart is still required to apply the settings (also just as before);
    • RPC-method "configtemplates" has a new parameter "loadFromDisk";
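    • a sketch of how the new parameter might be passed over the JSON-RPC interface; the positional-boolean encoding and the endpoint details are assumptions:

```python
import json


def configtemplates_request(load_from_disk):
    """Build a JSON-RPC request body for the "configtemplates" method.

    The new "loadFromDisk" parameter is assumed to be passed as a single
    positional boolean (true = re-read the script list from disk).
    """
    return json.dumps({
        'method': 'configtemplates',
        'params': [load_from_disk],
        'id': 1,
    })


# The request body would be POSTed to the web-interface's jsonrpc endpoint,
# e.g. http://localhost:6789/jsonrpc (host, port and credentials depend on
# your setup and are placeholders here).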
  • options "ParIgnoreExt" and "ExtCleanupDisk" are now respected by par-check (in addition to being respected by par-rename): if all damaged or missing files are covered by these options, no par-repair is performed and the download is considered successful;
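    • an illustrative nzbget.conf fragment (the extension lists are examples, not recommendations):

```
# Files with these extensions do not trigger par-repair if damaged or missing:
ParIgnoreExt=.sfv, .nzb, .nfo
# Files with these extensions are deleted during cleanup:
ExtCleanupDisk=.par2, .sfv
```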
  • added new search field "dupestatus" for use in rss filters:
    • the search is performed through the download queue and history, testing items with the same dupekey or title as the current rss item;
    • the field contains a comma-separated list of the statuses found among duplicates: QUEUED, DOWNLOADING, SUCCESS, WARNING, FAILURE; it is an empty string if no matching items were found;
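    • a hypothetical feed filter rule using the new field; the exact rule syntax is assumed from NZBGet's feed filter conventions and should be treated as a sketch:

```
# Reject an rss item if a duplicate of it was already downloaded successfully:
Reject: dupestatus:SUCCESS
```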
  • updated configure-script to not require gcrypt for newer GnuTLS versions (when gcrypt is not needed);
  • fixed: cleanup may leave some files undeleted (Mac OSX only);
  • fixed: renaming of active downloads was broken (bug introduced in r1070);
  • fixed: when rotating log-files, option "TimeCorrection" was not respected when building the new file name; the filename could have a wrong date stamp (bug introduced in r1059);
  • fixed: malformed articles could crash the program (bug introduced in v14);
  • fixed: not all statistic fields were reset when using command "Download again" (bug introduced in v14);
  • fixed: compiler error if configured using parameter "--disable-gzip";
  • fixed: one log-message was printed only to global log but not to nzb-item pp-log.
Other changes since 13.0
  • added article cache:
    • new option "ArticleCache" defines memory limit to use for cache;
    • when the cache is active, articles are written into the cache first and then flushed to disk into the destination file;
    • the article cache reduces disk IO and may reduce file fragmentation, improving post-processing speed (unpack);
    • it works with both writing modes (direct write on and off);
    • when option "DirectWrite" is disabled, the cache should (for best performance) be big enough to accommodate all articles of one file (sometimes up to 500 MB) in order to avoid writing articles into temporary files; otherwise temporary files are used for the articles which do not fit into the cache;
    • when used in combination with DirectWrite there is no such limitation, and even a small cache (100 MB or less) can be used effectively; when the cache becomes full it is flushed automatically (directly into the destination file), providing room for new articles;
    • new row in the "statistics and status dialog" in web-interface indicates the amount of memory used for cache;
    • new fields "ArticleCacheLo", "ArticleCacheHi" and "ArticleCacheMB" returned by RPC-method "status";
    • see forum topic [New Feature] Article memory cache for more info;
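    • the Lo/Hi pair presumably follows the usual NZBGet RPC convention of splitting a 64-bit byte counter into two 32-bit halves (an assumption consistent with the other SizeLo/SizeHi fields); a sketch of consuming the new fields:

```python
def combine_lo_hi(lo, hi):
    """Combine the 32-bit halves of a 64-bit byte counter, as assumed for
    fields like "ArticleCacheLo"/"ArticleCacheHi" of RPC-method "status"."""
    return (hi << 32) | lo


# Example: a hypothetical 3 GB cache usage reported by the "status" method.
cache_bytes = combine_lo_hi(0xC0000000, 0)
cache_mb = cache_bytes // (1024 * 1024)
```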
  • renamed option "WriteBufferSize" to "WriteBuffer":
    • changed the unit - the option is now set in kilobytes instead of bytes;
    • old name and value are automatically converted;
    • if the size of an article is below the value defined by the option, the buffer is allocated with the article's size (to not waste memory);
    • therefore the special value "-1" is no longer required; during conversion "-1" is replaced with "1024" (1 megabyte), but this can of course be changed manually to any other value later;
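    • the conversion rule described above can be sketched as follows (the rounding behaviour is illustrative, not taken from the source):

```python
def convert_write_buffer(old_bytes_value):
    """Sketch of the described migration from WriteBufferSize (bytes)
    to WriteBuffer (kilobytes)."""
    if old_bytes_value == -1:
        # Old special value: replaced with 1024 KB (1 MB), per the changelog.
        return 1024
    # Illustrative rounding; the actual conversion may round differently.
    return old_bytes_value // 1024
```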
  • added quick file verification during par-check/repair:
    • if par-repair is required for download the files downloaded without errors are verified quickly by comparing their checksums against the checksums stored in the par2-file;
    • this makes the verification of undamaged files almost instant;
    • damaged files are verified as usual;
    • new option "ParQuick" (active by default);
    • added support for block-by-block scan of files during verification, which improves scan speed of damaged files;
    • see forum topic [New Feature] Quick par verification for more info;
  • added log file rotation:
    • options "CreateLog" and "ResetLog" replaced with new option "WriteLog (none, append, reset, rotate)";
    • new option "RotateLog" defines rotation period;
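    • an illustrative configuration fragment (the value, and the assumption that the rotation period is given in days, are not from the source):

```
# Keep the log file and rotate it periodically:
WriteLog=rotate
# Rotation period (assumed to be in days):
RotateLog=3
```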
  • improved joining of split files:
    • instead of performing par-repair, the files are now joined by the unpacker, which is much faster;
    • files split before creation of the par-sets are now joined as well (they were not joined in v13 because par-repair had nothing to repair in this case);
    • the unpacker can detect missing fragments and request par-check if necessary;
  • added per-nzb time and size statistics:
    • total time, download, verify, repair and unpack times, downloaded size and average speed; shown in the history details dialog by clicking on the row with the total size in the statistics block;
    • RPC-methods "listgroups" and "history" return new fields: "DownloadedSizeLo", "DownloadedSizeHi", "DownloadedSizeMB", "DownloadTimeSec", "PostTotalTimeSec", "ParTimeSec", "RepairTimeSec", "UnpackTimeSec";
    • see forum topic [New Feature] Per-nzb time statistics for screenshots and more info;
  • pp-script "EMail.py" now supports mail server relays (thanks to l2g for the patch);
  • when compiled in debug mode, a new field "process id" is printed to the file log for each row (processes are easier to identify than threads);
  • if an nzb has only a few failed articles, its completion may show as 100%; it is now shown as 99.9% to indicate that not everything was downloaded successfully;
Download link
