nzbget 13.0-testing-r979

hugbug
Developer & Admin
Posts: 7471
Joined: 09 Sep 2008, 11:58
Location: Germany


Post by hugbug » 06 Apr 2014, 21:04


Changes since 13.0-testing-r963
  • reworking queue (continued): merged url queue into main download queue:
    • urls added to queue are now immediately shown in web-interface;
    • urls can be reordered and deleted;
    • when urls are fetched, the downloaded nzb-files are put into the queue at the positions of their urls;
    • this solves the problem of fetched nzb-files being ordered differently than their urls when the fetching of upper (position-wise) urls completed after the lower ones;
    • removed options "ReloadUrlQueue" and "ReloadPostQueue" since there are no separate url- and post-queues anymore;
    • nzb-files added via urls have new field "URL" which can be accessed via RPC-methods "listgroups" and "history";
    • new env. var. "NZBNP_URL", "NZBNA_URL" and "NZBPP_URL" passed to NzbProcess, NzbAddedProcess and PostProcess-scripts;
    • removed remote command "--list U", urls are now shown as groups by command "--list G";
    • RPC-method "urlqueue" is still supported for compatibility but should not be used, since the urls are now returned by method "listgroups"; the entries have a new field "Kind", which can be "NZB" or "URL";
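Since urls now appear in "listgroups" alongside nzb entries, a client can tell them apart via the new "Kind" field. A minimal sketch (only the "Kind", "URL" and "NZBID" field names come from this release; the sample data is invented for illustration):

```python
def split_queue(groups):
    """Split a listgroups-style result into real nzb downloads and queued urls."""
    nzbs = [g for g in groups if g.get("Kind") == "NZB"]
    urls = [g for g in groups if g.get("Kind") == "URL"]
    return nzbs, urls

# invented sample data resembling a listgroups result
groups_sample = [
    {"NZBID": 10, "Kind": "NZB", "NZBName": "linux.iso", "URL": ""},
    {"NZBID": 11, "Kind": "URL", "NZBName": "pending", "URL": "http://indexer.example/get/42"},
]

nzbs, urls = split_queue(groups_sample)
```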
  • added collecting of download volume statistics data per news server:
    • in web-interface the data is shown as a chart in "Statistics and Status" dialog;
    • new RPC-method "servervolumes" returns the collected data;
    • new RPC-method "resetservervolume" to reset the custom counter;
    • for screenshots see topic [New Feature] Downloaded volume statistics
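A client could aggregate the collected data itself, for example to total the downloaded volume per server. This is only a sketch: the field names "ServerID" and "BytesPerDays" are assumptions for illustration, not a definitive description of the "servervolumes" response format:

```python
def total_per_server(volumes):
    """Sum the per-day byte counters for each news server (assumed field names)."""
    return {v["ServerID"]: sum(v["BytesPerDays"]) for v in volumes}

# invented sample: two servers with per-day byte counters
volumes_sample = [
    {"ServerID": 1, "BytesPerDays": [1024, 2048]},
    {"ServerID": 2, "BytesPerDays": [4096]},
]

totals = total_per_server(volumes_sample)
```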
  • added new option "PropagationDelay", which sets the minimum post age for download; newer posts are kept on hold in the download queue until they become older than the defined delay, after which they are downloaded;
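The hold rule described above amounts to a simple age comparison; a minimal sketch (an illustration of the described behavior, not the actual implementation):

```python
import datetime as dt

def on_hold(post_time, delay_minutes, now):
    """True while a post is younger than the configured propagation delay."""
    return (now - post_time) < dt.timedelta(minutes=delay_minutes)

now = dt.datetime(2014, 4, 6, 12, 0)
fresh = dt.datetime(2014, 4, 6, 11, 50)  # 10 minutes old: held back
old = dt.datetime(2014, 4, 6, 10, 0)     # 2 hours old: downloaded
```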
  • column "age" in web-interface now shows minutes for recent posts (instead of "0 h");
  • remote command "-B dump" can now also be used in release (non-debug) versions and prints useful debug data as "INFO" instead of "DEBUG";
  • the current time zone is now determined via system function "localtime" once on program start and again only if a clock adjustment is detected; "localtime" is no longer called constantly by the scheduler; this should solve the hibernation problem on Synology NAS, even when the task scheduler is used;
  • updated all links to go to new domain (nzbget.net);
  • fixed: RSS feed preview dialog displayed slightly incorrect post ages because of a wrong time zone conversion;
  • fixed: sometimes urls were removed too early from the feed history, causing them to be detected as "new" and fetched again; if duplicate check was not active, the same nzb-files could be downloaded again;
Other changes since nzbget 12.0
  • reworked download queue:
    • queue now holds nzb-jobs instead of individual files (contained within nzbs);
    • this drastically improves performance of operations such as pause/unpause/move when the queue contains big nzb-files;
    • tested with a queue of 30 nzb-files, each 40-100GB in size (total queue size 1.5TB) - queue management is fast even on a slow device;
    • limitation: individual files (contained within nzbs) can no longer be moved beyond nzb borders (in older versions it was possible to move individual files freely and mix files from different nzbs, although this feature was not supported in web-interface and was therefore little known);
    • this change opens doors for further speed optimizations and integration of download queue with post-processing queue and possibly url-queue;
    • current download data such as remaining size or size of paused files is now updated internally and automatically on related events (download of an article completed, queue edited, etc.);
    • this eliminates the need to calculate this data upon each RPC-request (from web-interface) and greatly decreases the CPU load of processing RPC-requests with a large download queue (and/or large nzb-files in queue);
    • field "Priority" was removed from individual files;
    • instead nzb-files (collections) now have field "Priority";
    • nzb-files now also have new fields "MinTime" and "MaxTime", which are set when nzb-file is parsed and then kept;
    • this eliminates the need to recalculate file statistics (min and max priority, min and max time);
    • removed action "FileSetPriority" from RPC-command "editqueue";
    • removed action "I" from remote command "--edit/-E" for individual files (now it is allowed for groups only);
    • removed a few (no longer necessary) checks from duplicate manager;
    • merged post-processing queue into main download queue;
    • changing the order of (pp-queued) items in the download queue now also means changing the order of post-processing jobs;
    • priorities of downloads are now respected when picking the next queued post-processing job;
    • the moving of download items in web-interface is now allowed for downloads queued for post-processing;
    • removed the actions of remote command "--edit/-E" and of RPC-method "editqueue" that moved post-processing jobs in the post-processing queue (moving the download items should be used instead);
    • remote command "-E/--edit" and RPC-method "editqueue" now use NZBIDs of groups to edit groups (instead of using ID of any file in the group as in older versions);
    • remote command "-L/--list" for groups (G) and group-view in curses-frontend now print NZBIDs instead of "FirstID-LastID";
    • RPC-method "listgroups" returns NZBIDs in fields "FirstID" and "LastID", which are usually used as arguments to "editqueue" (for compatibility with existing third-party software);
    • items queued for post-processing and not having any remaining files can now be edited (to cancel post-processing), which was not possible before due to the lack of "LastID" in empty groups;
    • edit commands for the download queue and the post-processing queue now both use the same IDs (NZBIDs);
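With groups now addressed by NZBID, an "editqueue" request can be built as below. A hedged sketch: the parameter order (command, offset, edit-text, id-list) and the "GroupMoveTop" action name follow the nzbget API as described for this era, but treat them as assumptions and consult the API documentation; the request is only built here, not sent:

```python
import json

def move_group_top_request(nzbid):
    """Build (but do not send) a JSON-RPC request that moves a group, by NZBID, to the queue top."""
    return json.dumps({
        "method": "editqueue",
        "params": ["GroupMoveTop", 0, "", [nzbid]],  # command, offset, edit-text, NZBIDs
        "id": 1,
    })

req = json.loads(move_group_top_request(42))
```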
  • improved fast par-renamer: it now automatically detects and renames misnamed (obfuscated) par2-files;
  • for downloads not having any (obviously named) par2-files, the critical health is assumed to be 85% instead of the 100% that the absence of par2-files would suggest;
    • this avoids possible false triggering of the health-check action (delete or pause) for downloads having misnamed (obfuscated) par2-files;
    • combined with improved fast par-renamer this provides proper processing of downloads with misnamed (obfuscated) par2-files;
  • improved par-check for damaged collections with multiple par-sets and having missing files:
    • only orphaned files (not belonging to any par-set) are scanned when looking for missing files;
    • this greatly decreases the par-check time for big collections;
  • eliminated the distinction between manual pause and soft-pause;
    • there is only one pause register now;
    • options "ParPauseQueue", "UnpackPauseQueue" and "ScriptPauseQueue" do not change the state of the pause but instead are respected directly;
    • RPC-methods "pausedownload2" and "resumedownload2" are aliases to "pausedownload" and "resumedownload" (kept for compatibility);
    • field "Download2Paused" of RPC-method "status" is an alias to "DownloadPaused" (kept for compatibility);
    • action "D2" of remote commands "--pause/-P" and "--unpause/-U" is not supported anymore;
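With soft-pause gone, the "...2" methods are pure aliases; client code can collapse them to their canonical names, e.g.:

```python
# Aliases kept for compatibility per this release; both names behave identically.
PAUSE_ALIASES = {
    "pausedownload2": "pausedownload",
    "resumedownload2": "resumedownload",
}

def canonical_method(name):
    """Map a compatibility alias to its canonical RPC method name."""
    return PAUSE_ALIASES.get(name, name)
```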
  • avoiding unnecessary calls to system function "localtime" from the scheduler if no tasks are defined; this solves hibernation issues on Synology NAS (provided the scheduler is not used);
  • adjusted modules initialization to avoid possible bugs due to delayed thread starts;
  • reorganized source code directory structure: created directory 'daemon' with several subdirectories and put all source code files there;
  • improved error reporting if unpacker or par-renamer fail to move files;
  • fixed: strange (damaged?) par2-files could cause a crash during par-renaming;
  • fixed: damaged nzb-files containing multiple par-sets and not having enough par-blocks could cause a crash during par-check;
  • fixed: if the extra par-files downloaded during par-repair were damaged and the repair terminated with failure status, the post-processing scripts were sometimes executed twice;
  • fixed: post-processing scripts were not executed in standalone mode ("nzbget /path/to/file.nzb").
Download link
