2013-08-21, 10:45  #12 
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
2×2,269 Posts 
It seems irrelevant to you that you are wasting energy that is not yours, and you didn't understand my point. Take the 3-4M file out of your big sieve file and keep sieving. Anyway, that's my opinion as an energy-efficiency analyst, but as far as I'm concerned you can keep sieving whatever ranges you want.
I only care about my computers' efficiency and electricity bills.
Carlos
Last fiddled with by pinhodecarlos on 2013-08-21 at 10:49 
2013-08-21, 10:58  #13  
"Lennart"
Jun 2007
2^{5}·5·7 Posts 
Quote:
If the sieve time for a range were linear in the number of candidates, then yes, we would have done smaller ranges. But that is not the case, so we will continue to sieve as is. If the 3M-6M file takes x seconds, that does not mean the 5M-6M file takes x/3 seconds. Do some tests and you will see for yourself.
Lennart
Last fiddled with by Lennart on 2013-08-21 at 10:58 
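Lennart's point about non-linear sieve time can be illustrated with the square-root rule of thumb that Curtis cites later in this thread for the srsieve family. A minimal sketch, assuming that model holds (it is a rule of thumb, not a measurement of this sieve code):

```python
import math

# Rule-of-thumb model (an assumption, not srsieve itself): sieve throughput
# scales roughly with the square root of the n-range width, so sieving a
# range one third the width is only ~sqrt(3) times faster, not 3 times.
full_width = 3.0  # 3M-6M, in units of 1M of n
part_width = 1.0  # 5M-6M
speedup = math.sqrt(full_width / part_width)
print(f"5M-6M alone sieves ~{speedup:.2f}x faster than 3M-6M, not 3x")
```

Under this model the narrower file costs nearly as much per unit of sieve depth while finding only a third of the factors, which is why the ranges are kept together.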

2013-08-21, 11:00  #14 
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
2·2,269 Posts 
I don't want to discuss something that you don't understand. Let's stop the discussion, ok?
Carlos 
2013-08-21, 11:15  #15 
"Lennart"
Jun 2007
2^{5}×5×7 Posts 

2013-08-21, 17:02  #16  
"Curtis"
Feb 2005
Riverside, CA
2·7^{2}·41 Posts 
Quote:
If I read this correctly, you're saying the file has no difference all the way from 3000 to 3950? That sounds like a pretty big error in the sieve process. Or do you mean that just 3910 to 3950 for k=5 alone has no change? That would be part of the random nature of sieving. If you mean the latter, I have no idea how you compute that the file is oversieved. Could you explain your calculation? 

2013-08-21, 20:24  #17 
"Lennart"
Jun 2007
10001100000_{2} Posts 
RSP4M
rsp4M_20130819.abcd  (141,781,643 terms)  p~303P
rsp4M_20130708.abcd  (141,922,841 terms)  p~291P
rsp4M_20130531.abcd  (142,068,066 terms)  p~280P

RSP5M
rsp5M_20130819.abcd  (141,803,092 terms)  p~303P
rsp5M_20130708.abcd  (141,944,107 terms)  p~291P
rsp5M_20130531.abcd  (142,088,967 terms)  p~280P

RSP6M
rsp6M_20130819.abcd  (141,768,546 terms)  p~303P
rsp6M_20130708.abcd  (141,909,836 terms)  p~291P
rsp6M_20130531.abcd  (142,054,353 terms)  p~280P

Here are the latest files. I have not compared any specific k.
Lennart 
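As a quick arithmetic check (file names and term counts are taken from the list above; nothing else is measured), the removal rate works out to roughly 12,000-13,000 candidates per 1P of sieve depth per 1M of n:

```python
# Candidates removed per 1P of sieve depth, from the posted file sizes.
# Snapshots are (depth in P, remaining terms), oldest first.
files = {
    "rsp4M": [(280, 142_068_066), (291, 141_922_841), (303, 141_781_643)],
    "rsp5M": [(280, 142_088_967), (291, 141_944_107), (303, 141_803_092)],
    "rsp6M": [(280, 142_054_353), (291, 141_909_836), (303, 141_768_546)],
}
for name, snaps in files.items():
    for (p0, c0), (p1, c1) in zip(snaps, snaps[1:]):
        removed = c0 - c1
        print(f"{name}: {removed:,} removed over {p1 - p0}P "
              f"(~{removed // (p1 - p0):,}/P)")
```

This matches the ~12,000 removals per P per 1M range that Curtis uses in his estimate below.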
2013-08-22, 03:18  #18 
"Curtis"
Feb 2005
Riverside, CA
111110110010_{2} Posts 
Lennart
Those numbers look pretty consistent for candidates removed per 1P sieved, mostly ruling out an error; though, as happened previously, it's possible there are errors in k=5. Since I like computing sieve efficiencies myself, can you give me an idea of the sieve rate in T/day (or whatever units you find convenient) and what card you sieve with? I don't mind doing the calcs myself instead of asking Carlos to explain it to me. 
2013-08-22, 03:27  #19 
Nov 2003
2·1,811 Posts 
I updated the test files for k=5, 15, and 17, based on the latest file of Aug. 19, in the respective threads.
I agree that the 3-4M range is excessively oversieved! The candidates can be removed faster by primality tests. Finally, regarding the difference in the k=5 test files between the previous sieve file (released in July) and this one: there are now 48 fewer candidates, and they appear to be almost uniformly distributed between 3027110 and 3980308. 
2013-08-22, 04:21  #20  
"Lennart"
Jun 2007
460_{16} Posts 
Quote:
Most sieving is done with GPUs. If you compare with CPU sieve times I can agree, but most sieving is done with GPUs. I will ask Jim if he can give more info about the removal rate. Here are some timings for a range on a GTX 480:

Sieve started: 303675222000000000 <= p < 303675231000000000
Thread 0 starting
Detected GPU 0: GeForce GTX 480
Detected compute capability: 2.0
Detected 15 multiprocessors.
Thread 0 completed
Sieve complete: 303675222000000000 <= p < 303675231000000000 count=223575636,sum=0xd1617f2096f164d2
Elapsed time: 1067.52 sec. (1.09 init + 1066.42 sieve) at 8439596 p/sec.
Processor time: 198.03 sec. (1.09 init + 196.94 sieve) at 45699162 p/sec.
Average processor utilization: 1.00 (init), 0.18 (sieve)
called boinc_finish

The fastest card will do this range in 600 sec. Normal speed on these ranges is 10 min to 30 min, depending on the card.
Lennart
Last fiddled with by Lennart on 2013-08-22 at 04:40 
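The logged figures are internally consistent. A quick sanity check of the throughput, plus a prime number theorem estimate of the prime count, using only the numbers copied from the log above:

```python
import math

# Figures copied from the GTX 480 log above.
lo = 303_675_222_000_000_000
hi = 303_675_231_000_000_000
sieve_seconds = 1066.42

rate = (hi - lo) / sieve_seconds       # log reports 8,439,596 p/sec
primes_est = (hi - lo) / math.log(lo)  # log counts 223,575,636 primes
print(f"~{rate:,.0f} p/sec, ~{primes_est:,.0f} primes expected")
```

Both come out within a fraction of a percent of the logged values, so the 9G range and the ~8.4M p/sec rate hang together.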

2013-08-22, 07:02  #21 
"Curtis"
Feb 2005
Riverside, CA
2×7^{2}×41 Posts 
Time for some estimating out loud:
A 9G range takes 20 minutes on this GTX 480. That's 27G/hr, or about 600G/day. According to Lennart's sieve-data post, each 1P sieved removes about 12,000 candidates from 3-4M, or 36,000 from the entire sieve. 1P / 600G = ~1,650 GPU-days to remove 36,000 candidates, or 22.5 candidates per GPU-day.

Recall that the srsieve series of programs scales with the square root of the n-range, so removing 3-4M and continuing to sieve 4-6M would not make the sieve run 50% faster; it would be more like 25% faster, while finding 66% of the factors, a net decrease in efficiency. If this GPU sieve scales the same way, sieving 4-6M might produce something like 18 factors per day. I did lots of calculations back in 2009-10 and found that a factor-of-two n-range was small enough that breaking off the lower end of the sieve would not produce a gain in efficiency; one should just run the entire sieve until the average LLR time per test matches the sieve. I don't know if this sieve code scales like srsieve...

My GPU (a 460M) can complete 3.5 CUDALLR tests per day at n=5M, the rough mean. The 480 Lennart cites might be 40-50% faster; let's say 4.5 tests/day as a guess. So the sieve is currently 5x more efficient than CUDALLR on the typical candidate. However, a large number of the k's in this sieve will never be tested. If fewer than 20% of the k's will ever be tested, then only 20% of the factors found via sieving "matter". 20% of 22.5 is 4.5 factors per day, exactly the rate LLR runs at.

Carlos has observed that CPUs are more power-efficient at LLR than GPUs, but I am assuming that those who run BOINC for this project would run their GPUs on something anyway, so I may as well compare CUDALLR to GPU sieving.

So one's opinion of the optimal sieve depth revolves around one's opinion of how much of this sieve will ever be tested. I believe 20% is optimistic for the fraction of k's that will be tested, so I think PrimeGrid has reached the optimal level at this time.... 
But I missed something: this sieve finds factors for Riesel and Proth numbers at the same time! So it's finding twice as many factors as assumed above, and thus only 10% of the k's need ever be tested to justify its current 300P level. So, how many k's do you think will ever be tested from 3-6M? Do we have any idea how many Proth k's will be tested? My arbitrary guess is that 15% sounds about right, making 450P the optimal depth for this sieve. RPS will use 5-300, NPLB 300-1000, Peter 1000-1300. That's 13% right there. I don't know a thing about the Proth side. 
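Curtis's arithmetic above can be replayed in a few lines. Every input here is one of his stated guesses (the ~600G/day rate, the 36,000 removals per P, the 4.5 CUDALLR tests/day), not a measurement:

```python
# Replaying the estimate: all inputs are Curtis's stated guesses.
g_per_day = 600                    # 9G/20min is ~648 G/day, rounded down
gpu_days_per_P = 1e6 / g_per_day   # 1P = 1,000,000 G -> ~1,667 GPU-days
removed_per_P = 36_000             # candidates removed per 1P across 3M-6M
sieve_rate = removed_per_P / gpu_days_per_P
llr_rate = 4.5                     # guessed CUDALLR tests/day on a GTX 480

print(f"sieve removes ~{sieve_rate:.1f} candidates per GPU-day")
# Riesel + Proth doubles the useful factors, so the break-even fraction
# of k's that must eventually be tested is:
print(f"break-even fraction: ~{llr_rate / (2 * sieve_rate):.0%}")
```

The sieve rate comes out near 22 candidates per GPU-day and the break-even fraction near 10%, matching the conclusion in the post.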
2013-08-22, 07:11  #22 
"Curtis"
Feb 2005
Riverside, CA
2×7^{2}×41 Posts 
Lennart
Do you know what the limits of the software are? Max p-value? If a 3M n-range is much more efficient than a 2M one, would a 4M or 5M range be yet more efficient? Since 6-9M has barely started, it may be wiser to do 6-10M or even 6-12M for greater efficiency. Is there a reason not to run a bigger n-range? One more idea: it may make sense to stop this sieve now, run 6-9M until 300P, and then add 5-6M to the 6-9M sieve and continue with 5-9M up to 800P or more. I thought this should have been done with 2-3M added in to 3-6M from 100P to 150P.
Curtis 