Prime Search Projects > Conjectures 'R Us

Old 2009-08-09, 07:46   #56
gd_barnes

Originally Posted by Mini-Geek:
Does anyone know how PrimeGrid runs automatic sieves?

Obviously, a major problem with any automatic distributed sieve, at least when run with something like sr2sieve, is distributing the sieve file.

Why would it be difficult to have an optimal sieve depth calculator? It seems like a very exact scientific process to me, at least when certain data is available.
For small files where no ranges are broken off, it can be fairly exact. But if the file is huge and ranges need to be broken off, it is anything but exact. You have to know how many ranges you plan to break off, approximately how many CPU years the entire file will take to primality test, etc. Most of that cannot be figured out scientifically. For instance, the optimal depth for a huge file, say 100 k's at n=1M-5M, might be P=10P (10000T) if you do not allow for increases in computer speed in the future. But sieving to 10P now would be a big waste of current computer resources for a job that is likely to take 5 CPU years. You are far better off sieving to perhaps P=200T, breaking off n=1M-1.5M, primality testing that, sieving perhaps P=200T-500T a year from now, breaking off some more n-ranges, and so on. By the time you are near the end, sieving the final range of P=5P-10P for n=4M-5M, computer capacity/speed has increased perhaps 10-fold, so a 5P sieving range costs about what a 500T range costs today. You get a nice, orderly progression of searching and sieving instead of consuming all of today's more expensive resources to save far cheaper resources in the future.

From my personal observation, over-sieving wastes far more current resources than under-sieving; the entire Riesel Sieve project and k=5 at RPS are excellent examples. The point is that you don't sieve monster jobs assuming static computer speed and capacity. When I did the calculation for k=2000-3000, n=50K-1M at NPLB, I used only the n=50K-650K portion of the file, because that is all I felt we could do within a year. Otherwise the optimum would have been in the 60T range, the sieve would have taken twice as long, and we would have had to wait 1-2 months longer to start the drive. As it is, a year from now, with computer capacity likely nearly 50% higher, sieving P=30T-60T should take about 2/3 as much of the capacity available at that time, so long-term efficiency is gained. Sieving to the higher depth now would have eliminated only an additional 2-3% of the k/n pairs up to n=650K.
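As a back-of-the-envelope check on that last figure (my illustration, not necessarily how the 2-3% was originally computed): by Mertens' third theorem, the fraction of candidates surviving a sieve to depth p shrinks roughly like 1/ln(p), so deepening a sieve from p1 to p2 removes about 1 - ln(p1)/ln(p2) of the survivors:

```python
import math

# Mertens' third theorem: the proportion of candidates surviving trial
# factoring to depth p shrinks like 1/ln(p).  Deepening a sieve from
# p1 to p2 therefore removes roughly 1 - ln(p1)/ln(p2) of the survivors.
def extra_removal(p1, p2):
    return 1 - math.log(p1) / math.log(p2)

print(f"{extra_removal(30e12, 60e12):.1%}")  # → 2.2%
```

which agrees with the quoted 2-3% for doubling the depth from 30T to 60T.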

It is scenarios such as this that make computing optimal sieve depths so difficult.
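The "exact science" part that Mini-Geek asked about can be sketched as a break-even rule: keep sieving while removing a candidate by sieving is cheaper than removing it by a primality test. Here is a minimal sketch under a deliberately simplified model (constant sieve speed across the p-range, Mertens-style factor density); every numeric input is hypothetical:

```python
import math

# Simplified break-even model for a fixed-size sieve job:
#   N           candidates remaining in the file,
#   sieve_speed p-range covered per second of sieving,
#   test_secs   seconds for one primality (LLR) test.
# By Mertens, sieving [p, p+dp] removes about N*dp/(p*ln(p)**2)
# candidates, so the cost of removing one candidate at depth p is
# p*ln(p)**2/(N*sieve_speed) seconds.  The optimum depth is where
# that cost reaches test_secs.
def seconds_per_factor(p, n_candidates, sieve_speed):
    return p * math.log(p) ** 2 / (n_candidates * sieve_speed)

def optimal_depth(n_candidates, sieve_speed, test_secs):
    lo, hi = 1e6, 1e20
    for _ in range(200):  # bisect on the monotone cost curve
        mid = math.sqrt(lo * hi)
        if seconds_per_factor(mid, n_candidates, sieve_speed) < test_secs:
            lo = mid
        else:
            hi = mid
    return lo

# e.g. 500,000 candidates, 2e6 p-range/second, 3-hour tests:
print(f"{optimal_depth(5e5, 2e6, 10800):.3g}")  # ~1.2e13, i.e. about 12T
```

The hard part Gary describes is that N, the sieve speed, and the test time are all moving targets once ranges are broken off and hardware improves over the years the job runs, so the model's answer is only a starting point, not the optimum.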


Old 2009-08-12, 09:59   #57
henryzz

Does anyone have a working script to do the removal of those ks?

Old 2009-08-12, 11:17   #58
kar_bon

Originally Posted by henryzz:
Does anyone have a working script to do the removal of those ks?
I've done a script for that, but it hasn't been checked by anyone yet, and I haven't changed it since Jan. 2009.

You can download it at

These scripts are set up for testing Riesel base 3, but you can change them for any other base (a future version should take the base as a parameter at invocation):

- replace "b3_n0.txt" with your list of remaining k's in pfgw format (see the original file for the format) -> for base 15 it should be named "b15_n0.txt"

- replace a line in "MOB_get.awk"
from "if (($0 % 3)==0)" to "if (($0 % 15)==0)"

- in "MOB_do.bat", replace every "b3_n0.txt" with your "b15_n0.txt"

- the included pfgw.exe is a patched version: all screen output is disabled, which speeds up the automated low-n pass (I haven't tested any newer version yet).
The patches I made are listed in the "ReadMe.txt" in

- call MOB_do.bat

- "b15_n0_org.txt" contains your k's from before the script ran
- "b15_n0.txt" contains the k's left afterwards
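For anyone who wants the MOB step without the batch/awk tooling, here is a standalone sketch of what that one awk condition does (my reconstruction of the rationale: k's divisible by the base are redundant, because k = m*b just restates a smaller k at a shifted n):

```python
# MOB ("multiple of base") reduction: drop every k divisible by the
# base b, since (m*b)*b^n +/- 1 = m*b^(n+1) +/- 1 duplicates the
# smaller k = m at a shifted n.  This mirrors the awk test
# "if (($0 % 15)==0)" from the instructions above.
def mob_reduce(ks, base):
    return [k for k in ks if k % base != 0]

print(mob_reduce([4, 15, 22, 30, 44, 91], base=15))  # → [4, 22, 44, 91]
```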

BTW: your posted list of remaining k's agrees with mine: 216 k's left before the MOB reduction,
200 k's left after!

Old 2009-08-12, 12:16   #59
rogue

Originally Posted by henryzz:
Would it be possible for PFGW to have an option that runs PRP tests and, if a PRP is found, automatically primality-tests it with -t, -tp, or -tc?
Possibly. I've thought about it, but the structure of PFGW would make that fairly difficult.
Old 2009-08-13, 07:47   #60
henryzz

Thanks kar_bon!!
Once I had managed to transfer my files to Windows, it worked like a charm.
You wrote brilliant instructions.
The only thing missing was replacing b3_n0_org.txt with b15_n0_org.txt in do.bat.

I got the same results as you.
Now to work out PRPNet.

