2006-12-02, 14:58  #45  
"Sander"
Oct 2002
52.345322,5.52471
1189_{10} Posts 
Quote:
If you need help, I do have a PC available that can do some sieving.

Last fiddled with by smh on 2006-12-02 at 15:03 

2006-12-03, 17:23  #46 
Mar 2004
3×127 Posts 
Well observed.
I already performed some tests on that range. I assume that Daniel Heuer did not miss any prime. He was searching for -1 Riesel primes and was not sieving for twins. I did sieve for twins and then tested the +1 side first (I still like the Fermat factor idea). That's why I have fewer primes per range, even if I have to confess that I was very lucky (5 primes in k = 2-7M); at the moment I expect 1 prime per 10M. I searched the range up to k=20M.

I agree that it is a good idea to distribute the sieving, if the twin prime search community agrees on this exponent. At the moment I am removing around 80 candidates per minute (even if my P4 3.4 GHz is not optimal for sieving) at 160T. One primality test takes 267 seconds on the same machine. We can expect a sieving depth of 55-60P (60 million G), maybe even more, depending on how much faster AMD is. (On my P4 that depth needs 24 CPU years of sieving.)

Last fiddled with by biwema on 2006-12-03 at 17:30 Reason: typos 
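The 55-60P figure can be sanity-checked: each prime p removes a candidate with probability roughly proportional to 1/p, so the cost of finding one more factor grows roughly linearly with the sieve depth, and the optimal depth is where that cost reaches the cost of one primality test. A minimal sketch using the numbers quoted above (the linear model and the function name are illustrative, not from the post):

```python
# Rough break-even estimate for the sieve depth, assuming the cost of
# finding one factor grows linearly with the depth p. The inputs are the
# figures quoted in the post for a P4 3.4 GHz.

def break_even_depth(depth_now, removals_per_min, test_seconds):
    """Depth at which removing one more candidate by sieving costs as
    much as one primality test, under the linear-cost model above."""
    seconds_per_factor_now = 60.0 / removals_per_min
    return depth_now * (test_seconds / seconds_per_factor_now)

depth = break_even_depth(160e12, 80, 267)  # 160T, 80 removals/min, 267 s/test
print(f"{depth / 1e15:.0f}P")              # ~57P, inside the 55-60P estimate
```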
2006-12-03, 17:38  #47 
Mar 2005
Internet; Ukraine, Kiev
11·37 Posts 
I have an Athlon and a Celeron sieving for n=195000. Will it slow down sieving if I merge both k ranges?

2006-12-03, 18:12  #48 
Mar 2004
17D_{16} Posts 
If the program supports that, I think it will not slow down. On the other hand, I don't know if the overall performance gets better.
The most important thing to consider is memory. My range is 4 times larger, and NewPGen uses more than 300 MBytes while running. Maybe there is a little slowdown because the two ranges (0-25G and 0-100G) do not match. Regarding the range of sieved p, you can start at the point you have reached with n=195000. The range in between can be sieved by someone else at a later time and merged afterwards. 
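The merge step described here is cheap: a k survives the combined sieve only if it survived both p ranges, so merging two survivor files is just an intersection. A toy sketch (in-memory lists stand in for the sieve files):

```python
# Merging two sieve runs over the same k range: keep exactly the k values
# that survived both runs.

def merge_sieve_files(survivors_a, survivors_b):
    return sorted(set(survivors_a) & set(survivors_b))

run1 = [3, 9, 15, 21, 33, 45]   # survived sieving p in the first range
run2 = [3, 15, 21, 27, 45, 51]  # survived sieving p in the second range
print(merge_sieve_files(run1, run2))   # [3, 15, 21, 45]
```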
2006-12-15, 15:08  #49 
Mar 2005
Internet; Ukraine, Kiev
11·37 Posts 
Range [1700e6; 25e9] is at p=1033.9T, 8,171,379 k's left.

2006-12-27, 19:35  #50 
Mar 2004
575_{8} Posts 
I normally use NewPGen for sieving. I don't know how well it is optimized for the new architectures; it has not been maintained for more than a year. If we compare its speed in vertical sieving to the tool used in sobsieve etc., we can assume that there is still some potential for improvement.
While implementing a tool that is specialised for horizontal sieving, specific optimizations are possible. Sieving constellations of more than one prime (twin, triplet, various chains...) has the problem that the initial range/number of candidates is very large and drops very quickly. The first part is done with an array in which the numbers are crossed out. NewPGen uses a bitmap for sieving up to 1 billion. After that point the number of candidates has dropped so much that several subranges can be grouped together. The feature of reduced screen output below 1 G is a bit flawed, with the opposite effect that every divisor below 1 G is printed. The extensive screen output slows down the whole process.

Optimizations below 1 G:
- Adapt the bitmap to the characteristics of the constellation: k must be divisible by 3 (so that k*2^n-1 and k*2^n+1 can both be prime), so the bitmap only needs to contain these candidates. Hence the same amount of memory can hold a 3 times larger range.
- Instead of splitting the range into pieces (0..2G, 2..4G, 4..6G), one can split into modulo groups (1 mod 35, 2 mod 35, 3 mod 35, 4 mod 35, 6 mod 35, 8 mod 35, ...). That needs just 24 instead of 35 groups, and the slow sieving by 5 and 7 can be avoided.
- Define the bitmap size as a product of small primes (11*13*17*19*23*29*31 = 955049953). These primes can be sieved out once; after that, the presieved range can be copied each time and sieving continued from 37. Maybe this step is not necessary if the second point is implemented.
- While sieving small primes, it is not necessary to count the sieved candidates. Just delete the appropriate bit without checking whether that candidate was already sieved earlier. The number of candidates can be counted when they are saved.

When I sieved a 100G range at n=333333, I was quite lucky that the merged file after 1G was less than 2 GByte.

Optimizations beyond 1 G:
- Use a hash table (NewPGen does that already). It is faster than a list or tree.
- Finding candidate divisors: they don't need to be prime.
If they are composite, they simply don't sieve out any candidate. So the candidate divisors can themselves be sieved (some of the optimizations above can be applied here, too) up to maybe 1000 (the optimal threshold has to be found), then a quick Fermat test (or similar) is run on these numbers, and the remaining divisors are used for trial division. Trial dividing by some pseudoprimes is faster than proving the primality of the candidates. (Prime95 uses that strategy already; there even a Fermat test is too slow, so that part is omitted.)

Multithreading (architecture specific):
- Splitting up the divisor generator and the siever into different threads (maybe also one thread each for the -1 sieving and the +1 sieving). To go further: is one thread for 6n-1 divisors and one thread for 6n+1 divisors even faster, or could it be that they drift apart from each other?

Any ideas? 
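The modulo-group idea can be checked by enumeration: for a fixed n, a residue class k mod 35 is dead as soon as k*2^n is congruent to +1 or -1 modulo 5 or modulo 7, because then one side of the twin is divisible by 5 or 7. A small sketch (for n = 333333 this enumeration leaves 15 of the 35 classes; the exact surviving set depends on 2^n mod 35):

```python
# Enumerate the residue classes k mod 35 that can still hold twin
# candidates k*2^n - 1 / k*2^n + 1, i.e. where neither side is forced
# to be divisible by 5 or 7. These are the only classes a sieve bitmap
# would need to store.

def surviving_residues(n, modulus=35):
    alive = []
    for r in range(modulus):
        t = (r * pow(2, n, modulus)) % modulus
        # t ≡ ±1 (mod 5 or mod 7) would make k*2^n ∓ 1 divisible by 5 or 7
        if all((t - s) % q != 0 for q in (5, 7) for s in (1, q - 1)):
            alive.append(r)
    return alive

alive = surviving_residues(333333)
print(len(alive), "of 35 residue classes survive")   # 15 of 35 for this n
```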
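The "divisors don't need to be prime" observation is safe because divisibility of k*2^n-1 or k*2^n+1 by p is tested directly; a composite p that survives a Fermat test merely wastes one useless trial division. A sketch of such a divisor generator (function names and bounds are illustrative):

```python
# Generate candidate divisors with a cheap pre-sieve plus a base-2 Fermat
# test instead of a primality proof. The smallest base-2 Fermat
# pseudoprime is 341, so composites are rare among the survivors, and a
# composite that slips through can never wrongly remove a k.

def candidate_divisors(start, stop):
    for p in range(start | 1, stop, 2):        # odd numbers only
        if p % 3 and p % 5 and p % 7:          # cheap pre-sieve
            if pow(2, p - 1, p) == 1:          # Fermat test, base 2
                yield p

def divides_twin_side(p, k, n):
    t = (k * pow(2, n, p)) % p
    return t == 1 or t == p - 1                # p | k*2^n - 1  or  p | k*2^n + 1

ps = list(candidate_divisors(100, 200))
print(ps)   # the primes between 100 and 200 (no pseudoprimes this small)
```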
2006-12-27, 21:23  #51 
"Robert Gerbicz"
Oct 2005
Hungary
2^{2}·367 Posts 
See, biwema:
With a hash table the sieve wouldn't be faster. In NewPGen everything is in memory. NewPGen uses the sieve of Eratosthenes or Atkin's fast method to find all prime numbers in an interval, and this is much faster than checking whether p is a divisor of some k*2^n+1 or not when n is large, and in this project n is large. So NewPGen isn't sieving by composite numbers, as far as I know. Sobsieve uses a complicated multiple-k sieve; a twin sieve isn't such a problem.

The math of the twin prime sieving is very easy. I think this is the method: by powmod, determine A = ((p+1)/2)^333333 mod p (here p is prime). It requires 18 modular squarings and 7 multiplications by (p+1)/2, which is really fast, because that is only a bit shift or an add plus a bit shift. If A or (-A) mod p is in your range, then you can cancel that k value, because k*2^333333+/-1 will be divisible by p, and if p is small then by sieving you can also cancel k+p, k+2*p, k+3*p and so on.

There is also another method: compute B = 2^333333 mod p by the same powmod, but then do A = modinv(B, p). I think this is faster, because it requires only about 13 modular squarings + 4 bit shifts, but also one modular inverse operation, which isn't so cheap; still, the total cost will be less, and I think NewPGen uses this method.

Therefore your multithreading idea is wrong: NewPGen has almost the same speed for a single k*2^n+1 sieve and for a twin k*2^n+/-1 sieve.

About memory: it is possible to write a sieve that uses, say, less than 1M of memory: store the remaining k numbers in a sieve file and keep another, say, 210000 files. For example, c_129589.txt will contain the k numbers where k*2^n+1 has a prime divisor that the sieve has found (here 129589*10^6 <= k < 129590*10^6). If you've done some work, say you've sieved from 1000G to 1001G, then you can open the sieve files and merge the k factors. After this, delete all 210000 files and continue sieving. In theory it wouldn't be slow; it is very rare that you have to merge the lists.

Last fiddled with by R. Gerbicz on 2006-12-27 at 21:27 Reason: adding some notes 
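The algebra behind both methods is easy to check in a few lines. This is a sketch, not NewPGen's actual code; the prime 1000003 is an arbitrary example, and Python's three-argument pow with exponent -1 stands in for modinv:

```python
# For an odd prime p, (p+1)//2 is the inverse of 2 mod p, so
#   A = ((p+1)//2)^n mod p  =  (2^n)^(-1) mod p.
# Then k ≡ A (mod p) gives p | k*2^n - 1, and k ≡ -A gives p | k*2^n + 1.

n = 333333
p = 1000003                          # example prime, not from the post

A = pow((p + 1) // 2, n, p)          # method 1: powmod of (p+1)/2
B = pow(2, n, p)
A2 = pow(B, -1, p)                   # method 2: powmod of 2, then modinv
assert A == A2

k_minus = A                          # cancels the -1 side
k_plus = p - A                       # cancels the +1 side
assert (k_minus * pow(2, n, p) - 1) % p == 0
assert (k_plus * pow(2, n, p) + 1) % p == 0
print("both methods agree; k ≡ ±A (mod p) are the sieved residues")
```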
2006-12-28, 01:34  #52  
Mar 2004
3×127 Posts 
I think there are a few misunderstandings...
Quote:
On the other hand: if we are sieving a 100G range at 1P, only every 10000th p divides a k in [0..100G], so lookups are quite rare (a few thousand per second). Maybe offline sieving could be an option when we are working on really large candidates (300000k or 1M digits). Quote:
(cascaded sieves: the outer sieve just sieves the divisors of the inner sieve.) Quote:
Quote:
How about the other way round? Which part of the twin sieving takes more time?
- The generation of the p which are used for sieving.
- Testing whether a p divides a twin candidate.
If the first part takes much more time, generating more (but also composite) p in much less time could make sense. Maybe setting the threshold of the part-1 sieve Quote:
New idea (adapted from R. Gerbicz's approach): don't hold the list of still-remaining twin candidates in memory. Just sieve, and keep all removed values in the sieve range (here 0..100G) in memory. They appear randomly in the range, therefore build up a tree from which you can read them all in sorted order. After the memory reserved for the sieving application is used up, merge them with the twin-candidate file on disk (that process replaces the intermediate save). The whole thing is only efficient after reaching an adequate level of sieving (some T).

Last fiddled with by biwema on 2006-12-28 at 01:35 Reason: a 'quote' tag was not correct 
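The idea above can be sketched as: collect only the removed values in a sorted in-memory structure and periodically fold them into the candidate file with one linear pass. A toy version (plain lists stand in for the tree and the disk file):

```python
# Keep removed k values sorted in memory; when memory fills up, merge
# them against the on-disk candidate list in one pass and start over.

import bisect

removed = []                          # sorted container standing in for the tree

def record_removed(k):
    bisect.insort(removed, k)         # a real tree/skip list would avoid
                                      # the O(n) shift insort does here

def merge_into_candidates(candidates):
    """One linear pass: drop every candidate found in `removed`."""
    dead = set(removed)
    survivors = [k for k in candidates if k not in dead]
    removed.clear()                   # start collecting the next batch
    return survivors

for k in (42, 7, 99):
    record_removed(k)
print(merge_into_candidates([3, 7, 15, 42, 64, 99, 105]))   # [3, 15, 64, 105]
```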

2006-12-28, 22:09  #53 
"Jason Goatcher"
Mar 2005
3507_{10} Posts 
I run newpgen on Linux and I have a question.
While I'm always well informed by the program of what the sieving value is at, I only seem to see updates of how many k are left, or the removal rate, when a k is actually removed. This could be a problem if a siever wants to sieve until the time per factor is more than about 2 minutes, but wants to stop the sieve manually. Am I just not patient enough, or is this a problem that hasn't been considered? 
2006-12-29, 10:49  #54 
Dec 2006
21_{16} Posts 
Where is the problem? Just wait 2 minutes. You are not watching the program 24/7, are you? So it's no big deal, even if one sieves a day longer.

2006-12-30, 23:20  #55 
"Jason Goatcher"
Mar 2005
3×7×167 Posts 
Question: If you set it to log factors, will it log factors as soon as it gets them, or go by the update interval? The reason I ask is that I set the update interval to 10,000 minutes, thinking that only applied to the main file. I'm hoping the factors will update more frequently, and if there's a power outage I can simply update the starting point with vi.
Is this the correct way to do things? 