Discuss newly added features or request new features.
-
get39678
- Posts: 222
- Joined: 09 Jun 2014, 10:49
Post
by get39678 » 30 Jan 2018, 18:14
Is it possible to only download the actual NZB file if and when NZBGet needs it?
For example, an RSS feed fetch may return 20 matches with only, say, 3 unique dupekeys, but NZBGet currently downloads all 20 NZB files instead of just the 3 files with the highest dupescore per key.
If NZBGet reached a queue item whose NZB file is not available locally, could it download the file at that point, instead of downloading all of them up front and never using a large proportion of them?
If you have an indexer with limited NZB downloads this would be a great feature and would stop NZBGet using those resources when not required. It would also make RSS/queue processing much quicker, since it wouldn't be downloading NZB files it didn't need.
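The lazy-fetch idea can be sketched roughly like this (an illustrative Python sketch, not NZBGet code; `ensure_nzb` and its parameters are hypothetical names introduced only for this example):

```python
import os

def ensure_nzb(url, local_path, fetch):
    """Fetch an NZB file only when it is not already cached locally.

    `fetch` is any callable taking a URL and returning the file's bytes;
    inside NZBGet this would be its internal URL downloader. Both names
    are illustrative, not part of NZBGet's real API.
    """
    if os.path.exists(local_path):
        # Already on disk: no indexer API hit is spent.
        return local_path
    with open(local_path, "wb") as f:
        f.write(fetch(url))
    return local_path
```

A queue item would call this only when it actually becomes active, so duplicates that never leave the history never cost an indexer download.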
TIA
-
get39678
- Posts: 222
- Joined: 09 Jun 2014, 10:49
Post
by get39678 » 22 May 2018, 20:46
hugbug, I can see you seem to be giving new features a break at the moment (well deserved!), but do you see any way of achieving this with the current setup? If it's not possible, I hope it can be considered for the future.
-
get39678
- Posts: 222
- Joined: 09 Jun 2014, 10:49
Post
by get39678 » 27 May 2018, 11:25
Thanks hugbug. As you noted, dupestatus does help in other circumstances, but it doesn't help in this case: downloading the NZBs first is simply the way it currently works.
Currently, if the RSS feed finds the following five matches, NZBGet downloads all 5 NZB files, the queue is processed, duplicates are appropriately removed using the dupekey, and test1 with dupescore 100, test2 and test3 are downloaded. In the past (I'm not sure if it still does this), with a big list to process from the RSS it might also start downloading a dupekey with a lower score if it hadn't yet come across the higher dupescore, though it would eventually sort itself out.
dupekey=test1 dupescore=100
dupekey=test1 dupescore=50
dupekey=test1 dupescore=25
dupekey=test2 dupescore=50
dupekey=test3 dupescore=25
This proposed change would result in only 3 NZB files being downloaded and processed: test1 with dupescore 100, test2 and test3. test1/50 and test1/25 would be added immediately to the history as duplicates, but *without* their NZBs being downloaded. You can think of it as a preprocessing stage: before the RSS matches are added to the queue, their dupekeys are checked, and each item either goes to the queue to have its NZB downloaded and processed, or straight to the history without the NZB being downloaded. If an action is later taken on test1/50 or test1/25 in the history, NZBGet checks whether the NZB exists locally and downloads it if it doesn't. The name, category, age and size shown on the history screen were already found when the RSS was processed.
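The preprocessing stage described above can be sketched roughly like this (a plain-Python illustration, not NZBGet's actual C++ implementation; the `dupekey`/`dupescore` field names simply mirror the example values above):

```python
from collections import defaultdict

def select_best_per_dupekey(items):
    """Split RSS matches into the one best item per dupekey (to be
    downloaded) and the rest (to go straight to history as duplicates)."""
    groups = defaultdict(list)
    for item in items:
        groups[item["dupekey"]].append(item)

    to_download, to_history = [], []
    for matches in groups.values():
        # Highest dupescore wins; everything else is a duplicate whose
        # NZB file never needs to be fetched from the indexer.
        matches.sort(key=lambda m: m["dupescore"], reverse=True)
        to_download.append(matches[0])
        to_history.extend(matches[1:])
    return to_download, to_history

# The five matches from the example above: only 3 NZBs get fetched.
feed = [
    {"dupekey": "test1", "dupescore": 100},
    {"dupekey": "test1", "dupescore": 50},
    {"dupekey": "test1", "dupescore": 25},
    {"dupekey": "test2", "dupescore": 50},
    {"dupekey": "test3", "dupescore": 25},
]
best, dupes = select_best_per_dupekey(feed)
```

Here `best` holds the three winners (test1/100, test2/50, test3/25) and `dupes` the two lower-scored test1 entries that would be filed into history untouched.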
-
hugbug
- Developer & Admin
- Posts: 7645
- Joined: 09 Sep 2008, 11:58
- Location: Germany
Post
by hugbug » 07 Jul 2018, 16:56
Your request (from the first post of this topic) has been implemented. Currently it is in branch 541-url-dupe. The feature required many changes and needs thorough testing. I'm going to test it myself for a while before merging into develop branch.
It would be nice if you could test it too. If you compile nzbget yourself, please compile from the new branch. If you use the nzbget installer, please tell me which platform/CPU you run it on and I'll provide you with an installer.
-
get39678
- Posts: 222
- Joined: 09 Jun 2014, 10:49
Post
by get39678 » 20 Jul 2018, 14:15
Fantastic, I will have a look over the weekend. Sorry I am a bit late; is everything in the normal develop branch now? Thanks
-
get39678
- Posts: 222
- Joined: 09 Jun 2014, 10:49
Post
by get39678 » 21 Jul 2018, 10:35
First, it was a great surprise to return from vacation to see this feature. Thanks.
I've only given it a quick look so far (I will keep checking and update this thread with anything new), but it seems to be working great. After an RSS feed is processed it no longer downloads all NZB files; it downloads only the best match for each key. Mark as Good and Mark as Bad also seem to be processed as expected.
One oddity in the logs: I got a URL with a DELETED status (log extract below). I'm guessing this was the first item found in the RSS, and as soon as the next item, a better match, was processed, this item was DELETED from the queue? It all happened quickly enough that the DELETED URL was never actually downloaded.
When you Mark as Good, the URL/DELETED item is not removed from the History. Perhaps related to issue #308.
Also, not sure if this points to a possible inconsistency, but the entries Moving C1 to History and Deleting active URL C1 are repeated in the log. C1 was the first item in the RSS.
Code:
Sat Jul 21 06:15:04 2018 INFO RSS has 8 new item(s)
Sat Jul 21 06:15:04 2018 INFO Reordering files for C1
Sat Jul 21 06:15:04 2018 INFO Moving collection C1 with lower duplicate score to history
Sat Jul 21 06:15:04 2018 INFO Deleting active URL C1
Sat Jul 21 06:15:04 2018 INFO Moving collection C1 with lower duplicate score to history
Sat Jul 21 06:15:04 2018 INFO Deleting active URL C1
Sat Jul 21 06:15:04 2018 INFO Reordering files for C2
Sat Jul 21 06:15:04 2018 DETAIL Download C1 @ snip cancelled
Sat Jul 21 06:15:04 2018 INFO Collection C3 is a duplicate to C1
Sat Jul 21 06:15:04 2018 INFO Collection C3 added to history
Sat Jul 21 06:15:04 2018 DETAIL Downloading C2 @ snip
Sat Jul 21 06:15:04 2018 INFO Collection C1 added to history
-
hugbug
- Developer & Admin
- Posts: 7645
- Joined: 09 Sep 2008, 11:58
- Location: Germany
Post
by hugbug » 21 Jul 2018, 22:31
When you Mark as Good the URL/DELETED item is not removed from the History.
Only items with status DUPE are removed. That's OK.
What isn't OK however is that the item removed from queue by duplicate check got status DELETED. It should have status DUPE.
I'll look into this.
the entries Moving C1 to History and Deleting active URL C1 are repeated in the log.
Not sure about this. Maybe C2 (if it was of the same dupekey) has attempted to delete C1 as well?
-
get39678
- Posts: 222
- Joined: 09 Jun 2014, 10:49
Post
by get39678 » 22 Jul 2018, 15:17
What isn't OK however is that the item removed from queue by duplicate check got status DELETED. It should have status DUPE. I'll look into this.
Just ran another RSS feed and, out of 12 items, 3 were SUCCESS (perfect), 5 were URL/DUPE and 4 were URL/DELETED.
the entries Moving C1 to History and Deleting active URL C1 are repeated in the log.
Not sure about this. Maybe C2 (if it was of the same dupekey) has attempted to delete C1 as well?
C1, C2 and C3 were the same dupe key where dupescore: C2 > C1 > C3.
Not sure if the number of groups of repeated entries in the log is related to the number of URL/DELETED entries: this time there were 4 groups of repeated entries and 4 URL/DELETED; last time, 1 group of repeated entries and 1 URL/DELETED. Not enough data to say this for sure yet; it could just be coincidence. I use UrlConnections=2.
Code:
Sun Jul 22 06:15:04 2018 INFO Moving collection C1a with lower duplicate score to history
Sun Jul 22 06:15:04 2018 INFO Deleting active URL C1a
Sun Jul 22 06:15:04 2018 INFO Moving collection C1a with lower duplicate score to history
Sun Jul 22 06:15:04 2018 INFO Deleting active URL C1a
....
Sun Jul 22 06:15:04 2018 INFO Moving collection C2a with lower duplicate score to history
Sun Jul 22 06:15:04 2018 INFO Deleting active URL C2a
Sun Jul 22 06:15:04 2018 INFO Moving collection C2a with lower duplicate score to history
Sun Jul 22 06:15:04 2018 INFO Deleting active URL C2a
.....
Sun Jul 22 06:15:05 2018 INFO Moving collection C3a with lower duplicate score to history
Sun Jul 22 06:15:05 2018 INFO Deleting active URL C3a
Sun Jul 22 06:15:05 2018 INFO Moving collection C3a with lower duplicate score to history
Sun Jul 22 06:15:05 2018 INFO Deleting active URL C3a
.....
Sun Jul 22 06:15:05 2018 INFO Moving collection C4a with lower duplicate score to history
Sun Jul 22 06:15:05 2018 INFO Deleting active URL C4a
Sun Jul 22 06:15:05 2018 INFO Moving collection C4a with lower duplicate score to history
Sun Jul 22 06:15:05 2018 INFO Deleting active URL C4a
-
hugbug
- Developer & Admin
- Posts: 7645
- Joined: 09 Sep 2008, 11:58
- Location: Germany
Post
by hugbug » 24 Jul 2018, 15:39
Fixed. The items should now have status DUPE instead of DELETED.