#12
Jun 2003
11000101110₂ Posts
Quote:
As for what I mean: the answer is simple. If PSP ever wants to go above 20M, we should sieve now; otherwise not. What I don't know is whether PSP will ever have enough computing power to go above 20M. I just don't want to waste the time of the SoB sievers if PSP is never going to go beyond 20M. What does everyone else involved in the PSP project think? Citrix
#13 |
Jun 2003
2×7×113 Posts |
Just had another idea!
Since the time for running proth_sieve over a range depends on the square root of the range, it would be more efficient to make one dat covering the PSP range 400K to 50M and the SoB range 20M to 50M. That would be slower than just doing both PSP and SoB from 20M to 50M, but faster than doing PSP and SoB from 20M to 50M plus PSP from 400K to 20M separately. What do you all think about this? Do the sievers at SoB agree to this format? Citrix
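The cost comparison in this post can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is only a rough model under the post's own assumption (work per prime scales with the square root of the n-range width); the range bounds are the ones Citrix names, and the `cost` helper is illustrative, not part of proth_sieve:

```python
from math import sqrt

def cost(n_lo, n_hi):
    # Relative per-prime sieve cost, assuming cost ∝ sqrt(n-range width).
    # Units are arbitrary; only the ordering matters.
    return sqrt(n_hi - n_lo)

both_high_only = cost(20e6, 50e6)                       # PSP+SoB, 20M..50M only
combined_dat   = cost(0.4e6, 50e6)                      # one dat: 400K..50M
separate_runs  = cost(20e6, 50e6) + cost(0.4e6, 20e6)   # two separate dats

print(both_high_only < combined_dat < separate_runs)    # True
```

Under this model the combined dat is indeed slower than sieving only 20M-50M but faster than running the two dats separately, which is exactly the ordering the post claims.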
#14 |
Jun 2005
373 Posts |
Of course it would be faster to continue sieving from 20M up for SoB, but the point of restarting from 991 was the secondpass. So far, we have found enough missed factors in these early ranges to justify the secondpass idea. One day, with increasing p, it will perhaps become obsolete, but until now it has made sense.
Others can give you further information. H. |
#15
Aug 2002
20D₁₆ Posts
Quote:
The first step in the process would be to create a dat for PSP from 20M(?) to 50M and sieve it to 3T. If it's still too large, then we continue on to 10T or more (possibly 50T) until it's a reasonable size. Then we can combine it with the current PSP dat and see what sieving would need to be done. Then we could consider combining it with the SB dat. It's a game of "catch up". The SB dat has been completely sieved for 991<n<50M and p<50T. Most of the range 50T<p<100T is currently being worked on. Additionally, the primary sieving effort has sieved 1M<n<20M for p<700T and many ranges beyond that. I could send you a png file graphing all this progress if you want, or you could look on this page
#16 |
Aug 2002
3×5²×7 Posts
And here is the progress for 100T<p<1P
#17 |
Jun 2003
2·7·113 Posts |
I think if you guys help us we should be able to catch up.
Btw, here is the (baby-step giant-step) algorithm proth_sieve uses:

for( int b = 0; b < nmax; b += sqrt(nmax) )
    hash_store( 2^-b, b );

for( int c = 0; c < sqrt(nmax); ++c ) {
    int b = hash_lookup( k*2^c );
    if( b != -1 ) {
        int a = c + b;   // yay! p divides k*2^a - 1
    }
}

So till where do you want us to sieve for the 0-20M dat, and where for the 0-50M dat? We will get started as soon as possible.
Citrix
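To make the pseudocode in this post concrete, here is a hedged, self-contained Python sketch of the same baby-step giant-step idea. The function and variable names are illustrative, not proth_sieve's actual code: given a prime p, a multiplier k, and an exponent bound nmax, it searches for an a with k·2^a ≡ 1 (mod p), i.e. p divides k·2^a − 1.

```python
from math import isqrt

def find_divisible_exponent(k, p, nmax):
    """Return some a with k*2^a ≡ 1 (mod p), so p divides k*2^a - 1,
    or None if the search below ~nmax finds nothing.
    Illustrative sketch only -- not proth_sieve's implementation."""
    m = isqrt(nmax) + 1
    inv2 = pow(2, p - 2, p)        # 2^(-1) mod p by Fermat (p must be prime)
    giant = pow(inv2, m, p)        # 2^(-m) mod p
    # Giant steps: remember 2^(-i*m) mod p  ->  exponent i*m
    table = {}
    val = 1
    for i in range(m + 1):
        table.setdefault(val, i * m)
        val = val * giant % p
    # Baby steps: look up k*2^c mod p; a hit at 2^(-b) means k*2^(b+c) ≡ 1
    cur = k % p
    for c in range(m):
        b = table.get(cur)
        if b is not None:
            return b + c
        cur = cur * 2 % p
    return None
```

For example, 13 divides 5·2^3 − 1 = 39, and the sketch finds an exponent a with 5·2^a ≡ 1 (mod 13); note it returns *some* valid exponent below the bound, not necessarily the smallest. The O(sqrt(nmax)) table plus O(sqrt(nmax)) lookups per prime is why the post above says the cost scales with the square root of the range.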
#18
Aug 2002
1000001101₂ Posts
Quote:
#19 |
Jul 2004
Potsdam, Germany
3×277 Posts |
If someone builds a dat file, I would join sieving.
#20 |
Jun 2003
2×7×113 Posts |
Lars will as soon as he gets back.
#21 |
Apr 2003
2²×193 Posts
I have already started newpgen presieving. I'll keep you all informed.
If I can manage it, I will sieve the file to, let's say, 100G before making it public, to reduce the size a little bit. I will start with a 1006 to 50M file to see how many lost factors we have. If there are not that many, I will change to a 20M-50M file for public release. Hope to have it all done by Saturday.
@Citrix: Can you create the new reservation page and a sieve history page? It will not be easily possible to recreate the different sievers with the correct ranges from the database with this new sieving effort.
Lars
Last fiddled with by Citrix on 2005-07-12 at 03:45
#22 |
Dec 2004
13·23 Posts |
The other major point here is to have an online submission form that accepts factors up to 50M. Actually, 100M.
factrange.txt actually produces 15% unique factors above the limit of the dat. Sure, it sounds silly to collect factors up to 100M, but it's even more silly to just ignore them in an effort to save a few MB of disk space or one burned CD.
If you guys can get this done to a decent level, I'll combine the two dats for a few T around p=50T. The major factor is getting a few people together who can actually sieve those low p quickly. If someone asks for 2T-3T, make sure they can do it in a month's time or less; otherwise you're stuck waiting for that high-density range. Also, first-time sievers will really be shocked at both how slow it is and how many factors there are below 1T. A couple 100G below 1T.
____________________________
Thommy has a very valid point: how far do you want to push PSP? Will it ever get to 20M? SoB certainly looks like it will, but will PSP?
The other valid point was that we could make a dat with n = 1 to infinity and b = 2 to infinity, +/- 1. That would be the most efficient from a sieving standpoint. Practical... no. Useful... no. Possible... not currently.
Sieving SoB to n=50M for now has been useful, it is interesting, and it will probably be useful for the future.