Sieving idea!
We have tried this before and it did not work. The reason I suggest it again is that I think the switch to the 50M dat makes it reasonable to do so.
The idea is to make a dat covering 1.5-1.6M up to 2M and start sieving it immediately on some slow computers. I estimate that each slow computer would find about 2-3 factors per day. (Candidates already PRPed would not be in this dat, only the candidates not yet PRPed.) Few factors, but I think it will speed up the PRP process, and each factor saves double-check work as well. Note: the range will have to be sieved again with the 50M dat anyway. What do you all think? Maybe P-1 would be a better option? ltd, can we have a dat for a trial?
Citrix
Based on the stats, 2 k's have already been PRPed to 2M and most k's are at 1.6M. So a dat with the 12 k's from 1.6M to 2M would be needed.
Citrix
I think a small P-1 effort would be more beneficial. The sieving will get done in time, but a P-1 effort would pick up additional factors that normal sieving would never find.
As there is so little PRP work done yet, it wouldn't require much CPU power to run all the k/n pairs at B1=40000, B2=500000. They only need to be done at the same speed as the PRP (or slightly faster, to keep ahead). Just my 2 cents.
Regards, Foots
P-1 may not be very efficient for such small numbers. It depends on how deep the sieve has gone, but at SoB, where the sieve has been done to 2^49, P-1 is still not useful even though they are up to numbers as large as 2^9M. The returns from P-1 are very, very small for small numbers. Try putting a Pminus1 entry in the worktodo file for Prime95 and you will see what I mean.
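For anyone who wants to try that experiment: a Pminus1 line in Prime95's worktodo file describes a k*b^n+c candidate followed by the B1 and B2 bounds. The values in angle brackets below are placeholders, not real project candidates, and the B1/B2 values are the ones Foots suggested above; check the undoc.txt shipped with your Prime95 version for the exact syntax, as it varies between versions.

```
Pminus1=<k>,2,<n>,1,40000,500000
```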
Also, I do not think the 1.5-2M dat is a good idea. Note that sieve speed scales roughly with the inverse square root of the dat's n-range. So this dat would sieve about 6 times faster than the current 20M dat, but give only about 1/40th the number of factors. It would be very inefficient!
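To make the tradeoff concrete, here is a rough back-of-the-envelope sketch (my own numbers, assuming sieve speed scales with the inverse square root of the n-range width and factor yield scales linearly with it, as described above):

```python
import math

full_range = 20.0e6 - 0.3e6   # current dat: n from 300k to 20M
small_range = 2.0e6 - 1.5e6   # proposed dat: n from 1.5M to 2M

# Sieve speed scales roughly with 1/sqrt(range width),
# so the narrow dat sieves faster by this ratio:
speed_gain = math.sqrt(full_range / small_range)   # about 6.3x

# Expected factors scale roughly linearly with range width:
factor_share = small_range / full_range            # about 1/40

# Relative efficiency: factors found per unit of CPU time
efficiency = speed_gain * factor_share             # about 0.16

print(round(speed_gain, 1), round(factor_share, 3), round(efficiency, 2))
```

So even though the small dat runs about six times faster, it finds factors at roughly one sixth the rate per CPU-hour, which is the inefficiency being pointed out.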
I am doing some P-1 from time to time on ranges before giving them to the public, and I must say it is only for the fun of finding large factors. It does not make the project faster at the moment.
For the small-factorbase idea I will make some tests over the weekend and come up with results, hopefully on Monday.
Lars
Could you make a dat for me? I wanted to run some tests of my own, and possibly use it for P-1 as well.
Citrix
I will send you a copy by mail tomorrow at the latest.
Lars
OK, I will wait. BTW, what software do you use for P-1?
IMHO, the priority should be to push the new sieve as far as possible, as soon as possible, without taking on new tasks. The psp-sob.dat will hardly get smaller than 10 MB, according to some quick calculations, and will thus still be larger than the SoB sob.dat, currently about 8 MB. This is normal, too, as PSP has 14 k's left.
So PSP should, I think (and I say that from a PSP standpoint), prove that it is not going to take lopsided advantage of an eventual joint sieve effort, and sieve quickly up to, say, 15G, at which point I would start some joint sieve tests. The current sieve point is far from satisfactory. But well, we just started, hey. After sieving to a reasonable level, the priority will switch to PRP, of course, to find some more primes. One or two should be possible in the near future. Well, I have already said too much.
CU, H.
One more thing speaking against the idea is the actual density of factors.
In the last 60 days we had the following numbers (all with the old 20M dat file):
Factors found in the range 1.5M to 2.0M: 56
Factors found in 300k to 20M: 1838
That shows, in my opinion, that a special sieving effort on the short range would not make sense at all.
Lars
@Citrix: I use Prime95 v24.13 for the P-1 tests.
How do you get Prime95 to work on k*2^n+1? I can only get it to work on Mersenne numbers!
Thanks, Citrix