2017-01-26, 09:31  #1 
Jun 2016
10000_{2} Posts 
filtering wants 1000000 more relations
hello, my log always says "filtering wants 1000000 more relations". it has reached 106%, any idea how much more it will need to complete?
Code:
Found 60439089 relations, 106.0% of the estimated minimum (57029236).
> ./msieve -s /root/ggnfs2/example.dat -l /root/ggnfs2/example.log -i /root/ggnfs2/example.ini -nf /root/ggnfs2/example.fb -t 6 -nc1
Msieve v. 1.52 (SVN Nicht versioniertes Verzeichnis)
random seeds: cb5f51c8 ad2d4743
factoring 9410261707528438928682935893154405638512521601548766718693569524548541134957386922116578082219341332079709851929749166485431038792833629490066386450067973 (154 digits)
no P-1/P+1/ECM available, skipping
commencing number field sieve (154-digit input)
R0: 2569588102508675989901913592056
R1: 59967557186607037
A0: 68567737430015540290801154102634743946145
A1: 2387684910178899757671107843137621
A2: 227045675501690108768510143
A3: 811554831654864911
A4: 40211154894
A5: 84
skew 76359603.48, size 4.749e-15, alpha -6.976, combined = 2.791e-12 rroots = 5
commencing relation filtering
estimated available RAM is 8192.0 MB
commencing duplicate removal, pass 1
found 23505935 hash collisions in 60439088 relations
added 5 free relations
commencing duplicate removal, pass 2
found 33829455 duplicates and 26609638 unique relations
memory use: 362.4 MB
reading ideals above 31916032
commencing singleton removal, initial pass
memory use: 753.0 MB
reading all ideals from disk
memory use: 460.5 MB
commencing in-memory singleton removal
begin with 26609638 relations and 33013473 unique ideals
reduce to 2004622 relations and 1022642 ideals in 26 passes
max relations containing the same ideal: 9
reading ideals above 100000
commencing singleton removal, initial pass
memory use: 94.1 MB
reading all ideals from disk
memory use: 84.3 MB
commencing in-memory singleton removal
begin with 2004648 relations and 4327199 unique ideals
reduce to 70 relations and 11 ideals in 5 passes
max relations containing the same ideal: 2
filtering wants 1000000 more relations
elapsed time 00:11:51
LatSieveTime: 20723.4
> making sieve job for q = 32000000 in 32000000 .. 32025000 as file /root/ggnfs2/example.job.T0
> making sieve job for q = 32000000 in 32000000 .. 32050000 as file /root/ggnfs2/example.job.T1
> making sieve job for q = 31950000 in 31950000 .. 31962500 as file /root/ggnfs2/example.job.T2
> making sieve job for q = 31950000 in 31950000 .. 31975000 as file /root/ggnfs2/example.job.T3
> making sieve job for q = 31950000 in 31950000 .. 31987500 as file /root/ggnfs2/example.job.T4
> making sieve job for q = 31950000 in 31950000 .. 32000000 as file /root/ggnfs2/example.job.T5
> Lattice sieving algebraic q from 31950000 to 32050000.
> gnfs-lasieve4I14e -k -o spairs.out.T0 -v -n0 -a /root/ggnfs2/example.job.T0
> gnfs-lasieve4I14e -k -o spairs.out.T1 -v -n1 -a /root/ggnfs2/example.job.T1
> gnfs-lasieve4I14e -k -o spairs.out.T2 -v -n2 -a /root/ggnfs2/example.job.T2
> gnfs-lasieve4I14e -k -o spairs.out.T3 -v -n3 -a /root/ggnfs2/example.job.T3
> gnfs-lasieve4I14e -k -o spairs.out.T4 -v -n4 -a /root/ggnfs2/example.job.T4
> gnfs-lasieve4I14e -k -o spairs.out.T5 -v -n5 -a /root/ggnfs2/example.job.T5
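As a sanity check, the percentages in that log follow from simple arithmetic on the reported counts (a quick sketch, using only numbers copied from the log above):

```python
# Figures copied straight from the msieve log above.
found = 60_439_089          # relations found
estimated_min = 57_029_236  # msieve's estimated minimum
duplicates = 33_829_455
unique = 26_609_638

# Matches the "106.0% of the estimated minimum" line.
pct = 100 * found / estimated_min
print(f"{pct:.1f}% of estimated minimum")  # -> 106.0%

# Well over half the relations are duplicates -- far above the
# roughly 20% a healthy sieving run would produce.
dup_rate = 100 * duplicates / (duplicates + unique)
print(f"duplicate rate: {dup_rate:.0f}%")  # -> 56%
```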
2017-01-26, 11:01  #2  
Jun 2003
2^{3}·5·11^{2} Posts 
Something has gone wrong with your run. The expected number of duplicates is on the order of 20% of the total, so for the 60M relations collected, about 10-15M duplicates wouldn't be unusual. But in your case it is 34M, and you have only 26M unique. You'll probably need about 40M unique relations to build the matrix.
Keep going, and keep an eye on the unique relations count. Eventually you'll succeed. EDIT: Quote:
T2, T3, T4 & T5 are sieving the same (overlapping) range. No wonder you're getting all these duplicates. Someone more familiar with the factorization script will have to guide you on how to escape from this SNAFU. Last fiddled with by axn on 2017-01-26 at 11:05 
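To make the overlap concrete, here is a small sketch that checks the six q-ranges from the "making sieve job" lines against each other (the ranges are copied from the log; T0..T5 are just the job-file suffixes):

```python
# q-ranges taken from the "making sieve job" lines in the log above.
jobs = {
    "T0": (32_000_000, 32_025_000),
    "T1": (32_000_000, 32_050_000),
    "T2": (31_950_000, 31_962_500),
    "T3": (31_950_000, 31_975_000),
    "T4": (31_950_000, 31_987_500),
    "T5": (31_950_000, 32_000_000),
}

# For each job, list every other job whose half-open range intersects it.
for name, (lo, hi) in sorted(jobs.items()):
    overlaps = sorted(n for n, (l, h) in jobs.items()
                      if n != name and l < hi and lo < h)
    print(f"{name}: {lo}..{hi} overlaps {overlaps}")
```

Every job overlaps at least one other (T2 through T5 all start at the same q, and T1 swallows T0 entirely), so much of the sieving effort is being spent re-finding the same relations.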

2017-01-26, 16:58  #3 
"Curtis"
Feb 2005
Riverside, CA
2^{2}×1,151 Posts 
OP restarted the script with a different number of cores assigned than in the original run. The script does not adjust for this, so since the restart the tasks have been duplicating effort (as axn showed you).
The cure is to rewrite the checkpoint file, called something like resume.job, by hand. Stop the script, then edit the file so that each of the 6 q-ranges is separate but together they cover the same total range of q (it looks like T1 and T5, with 50k blocks, cover the entire region the script intended to sieve, so the script works in 100k of q per pass?). I believe you originally started it with two threads and then changed to 6, but that doesn't really matter now. Msieve filtering doesn't estimate how many more relations are needed; when filtering fails, the message you got ("filtering wants 1000000 more relations") is simply what gets printed. Given your mistake, you're probably about two-thirds of the way through the factorization. 
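For illustration only, a non-overlapping split of the intended region could be computed like this (a sketch of the arithmetic only; it does not show the actual resume.job format, which the script defines):

```python
def split_q_range(lo: int, hi: int, n: int):
    """Split [lo, hi) into n contiguous, non-overlapping blocks."""
    step, rem = divmod(hi - lo, n)
    blocks, start = [], lo
    for i in range(n):
        end = start + step + (1 if i < rem else 0)  # spread the remainder
        blocks.append((start, end))
        start = end
    return blocks

# The full region the script intended to sieve, per the log above.
for i, (lo, hi) in enumerate(split_q_range(31_950_000, 32_050_000, 6)):
    print(f"T{i}: q from {lo} to {hi}")
```

The blocks tile the region exactly, so the six clients never sieve the same q twice.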
2017-01-26, 21:32  #4 
Jun 2016
2^{4} Posts 
hello, many thanks to both of you for replying.
yes, I started with 2 cores and changed to 6. would it be better to let it continue, or to make the changes like you said? 
2017-01-26, 21:35  #5 
I moo ablest echo power!
May 2013
1741_{10} Posts 
Make the changes. Otherwise, you're doing the computational equivalent of spinning your tires in mud.

2017-01-30, 16:52  #6 
Sep 2009
1,973 Posts 
Also you could remove most of the duplicates by:
Code:
sort -ur example.dat > example.sorted
mv example.dat example.dat.original
mv example.sorted example.dat
Chris 
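A caveat: sort's `-u` flag only drops byte-identical lines, which is exactly what these overlapping sieve jobs produce. For the curious, the same first-copy-wins deduplication can be sketched in Python (the sample relation strings below are made up for illustration):

```python
def dedup_lines(lines):
    """Keep the first copy of each line; drop byte-identical repeats."""
    seen = set()
    for line in lines:
        if line not in seen:
            seen.add(line)
            yield line

# Example: three relations, one re-found by an overlapping sieve job.
rels = ["a,b:1:2", "c,d:3:4", "a,b:1:2"]
print(list(dedup_lines(rels)))  # -> ['a,b:1:2', 'c,d:3:4']
```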
2017-01-30, 23:58  #7 
"Ed Hall"
Dec 2009
Adirondack Mtns
3,527 Posts 
There's also a program called remdups4, which I use, that removes duplicates and bad relations, but I no longer know where to find the source.

Similar Threads  
Thread  Thread Starter  Forum  Replies  Last Post 
NFS filtering error...  Stargate38  YAFU  4  2016-04-20 16:53 
The big filtering bug strikes again (I think)  Dubslow  Msieve  20  2016-02-05 14:00 
Filtering  Sleepy  Msieve  25  2011-08-04 15:05 
Filtering  R.D. Silverman  Cunningham Tables  14  2010-08-05 08:30 
More relations mean many more relations wanted  fivemack  Factoring  7  2007-08-04 17:32 