#1
Feb 2007
211 Posts
Thirty days ago, sieving for TPS n=333333 seemed doomed, or even abandoned. People were slowly losing interest. Then jmblazek came marching in with PG, and Puzzle-Peter with his arsenal of cores, and revived the sieving. With luck, in 3 months we will have finished all the LLR tests and found a new monster twin prime.

It's time to look to the future and pick our next n. But how do we decide which n should be our next candidate? A few points to keep in mind:

1) n should be large enough that any primes found make the Top 5000 list.
2) n shouldn't be so large that we need to LLR 1000G of candidates to have a decent probability of finding a twin.
3) Alternatively, we could continue n=333333 from 100G onward until we find a twin, but that seems unwise: primes found at this n will not make the Top 5000 list, and contributors will lose interest.

Please recommend a next n, or suggest how to pick one, along with reasons supporting the choice. I suggest n=999999: the LLR time is approximately 1 hour per candidate, and any prime found at this n would enter the Top 5000 list in the 300-5000 range, assuring it stays on the list for the next 10 years. Can someone calculate how many G we need to sieve to have a 90% chance of finding a twin, and how many primes (not twins) we can expect in the 0-100G range?

thanks
cipher
#2
Apprentice Crank
Mar 2006
706₈ Posts
Quote:
http://www.mersenneforum.org/showpos...&postcount=146

For n=999999, you need to sieve 827G to have a 90% chance of finding a twin, and the optimal sieve depth will be over 1000P. There will be about 2500-3000 primes in the 0-100G range at n=999999.
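To see how that 90% figure rescales to other targets, here is a minimal Python sketch, assuming twins occur as a Poisson process with a constant expected rate per G of k; the rate itself is pinned down by the 827G/90% figure above, and everything else follows from 1 - exp(-rate*G):

```python
import math

# P(at least one twin in G gigs) = 1 - exp(-rate * G).
# The 827G / 90% figure quoted above pins down the rate.
rate = -math.log(1 - 0.90) / 827   # expected twins per G, ~0.0028

for target in (0.50, 0.90, 0.99):
    G = -math.log(1 - target) / rate
    print(f"{target:.0%} chance of a twin: sieve/LLR ~{G:.0f}G")
# -> ~249G for 50%, 827G for 90%, ~1654G for 99%
```

Treating the rate as flat across the range is a fair approximation here, since the size of k·2^999999 (and hence the prime density) barely changes as k runs from 0 to a few thousand G.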
#3
Jun 2009
2²×5²×7 Posts
Quote:
Thanks, Peter
#4
A Sunny Moo
Aug 2007
USA (GMT-5)
1100001101001₂ Posts
A while back there was a lot of discussion in this thread about doing a fixed *range* of n over a smaller range of k. Somewhere in there, an analysis was linked that showed mathematically that this is more efficient than searching with fixed n and variable k.

Any thoughts on utilizing such an approach instead of just picking another n?

Last fiddled with by mdettweiler on 2009-06-20 at 15:57
#5
Feb 2007
D31₁₆ Posts
I am all for efficiency, but what utilities would we need? I'm guessing we would have to write a new program (which is very time consuming). Can existing tools like NewPGen, sr2sieve, tpsieve, etc. handle this?
#6
A Sunny Moo
Aug 2007
USA (GMT-5)
3×2,083 Posts
As was discussed in the thread I linked to earlier, the current de facto way of sieving such a range is to use NewPGen's fixed-n mode with the increment counter turned on. Possibly tpsieve or sr2sieve could be used in a similar way with the help of a batch file/shell script. (We'd want to try all three sieving programs to see which one runs the fastest, since all of them could theoretically work for this type of range.)
#7
Mar 2003
New Zealand
13×89 Posts
Quote:
However, you need to start the sieve with NewPGen sieving each n separately until p > k1 at least, and then merge the resulting files for input to tpsieve (just delete the header from all except the first file and then concatenate the results).

Edit: For testing purposes you can sieve without an existing sieve file; just start tpsieve with all of the -k -K -n -N -p -P switches. This will find all factors in the range without checking whether they have already been removed from the (non-existent) sieve file.

Last fiddled with by geoff on 2009-06-22 at 02:41
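To make the merge step concrete, here is a minimal Python sketch of the recipe above; the file names are hypothetical, and it assumes NewPGen writes exactly one header line at the top of each output file:

```python
# Hedged sketch: merge per-n NewPGen output files into one tpsieve input.
# Keeps the header line of the first file, drops it from the rest.
# File names below are hypothetical examples, one sieve file per n.
files = ["t333331.txt", "t333333.txt", "t333335.txt"]

with open("merged.txt", "w") as out:
    for i, name in enumerate(files):
        with open(name) as f:
            lines = f.readlines()
        if i > 0:
            lines = lines[1:]  # delete the one-line NewPGen header
        out.writelines(lines)
```

For the quick-test route in geoff's edit, the exact value syntax of tpsieve's -k -K -n -N -p -P switches depends on the build, so check its usage text before scripting it.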
#8
Mar 2004
3·127 Posts
The advantage of using many N is the smaller FFT size, if k is not too large. The influence is proportional to log(k), so the relative difference between k=3 and k=9 is the same as between k=10^6 and k=10^12: squaring k doubles log(k) in both cases.

That means we need a large number of N values so that the k range gets reasonably smaller. For example, if we use only 10 values of N, we have to sieve 100G each instead of 1T; this improvement makes little difference. But the larger the number of N values, the higher the sieving time, which reduces the advantage. I think some research is necessary to find out how big k can get (for a certain N) before an FFT size change occurs (for the most common CPU architecture).

If we use, for example, 100 values of N that are close to each other, there is still the possibility to sieve up to 20 N together: at the beginning, sieve each N separately up to (for example) p=1 billion. At this level the number of remaining candidates is so small that it is possible to use hashtables instead of arrays. Then we can group 10-20 N together: use the smallest exponent, multiply the candidates k whose exponents are N+1, N+2, N+3, ... by 2, 4, 8, ..., and merge these k. It depends on the limits of the siever how large a k it can handle.
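A minimal sketch of that grouping step, assuming the candidate lists are held as a map from exponent n to a set of k values; the identity being used is k·2^(N+j) = (k·2^j)·2^N:

```python
def merge_to_smallest_n(candidates):
    """candidates: dict mapping n -> set of k values.
    Rewrites each k*2^n as (k * 2^(n-N)) * 2^N, where N is the
    smallest exponent, so all candidates share a single exponent."""
    N = min(candidates)
    merged = set()
    for n, ks in candidates.items():
        for k in ks:
            merged.add(k << (n - N))  # k doubles for each step above N
    return N, merged

# Example: k=5 at n=333335 becomes k=20 at n=333333,
# since 5*2^333335 - 1 == 20*2^333333 - 1.
N, ks = merge_to_smallest_n({333333: {3, 7}, 333334: {5}, 333335: {5, 9}})
print(N, sorted(ks))  # -> 333333 [3, 7, 10, 20, 36]
```

This also makes the siever's k limit concrete: merging 20 consecutive N values multiplies the largest k by up to 2^19.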
#9
Jun 2009
2²×5²×7 Posts
Any new ideas or insights on this? I'm lacking the knowledge to be useful at this stage, so I'll stick to being curious.
Last fiddled with by Puzzle-Peter on 2009-06-27 at 06:15 |
#10
"Gary"
May 2007
Overland Park, KS
11847₁₀ Posts
If you're going to go after a record twin, keep it reasonable. Make it a moderate-sized range of k and n around n=205K-225K. Don't just stick with one n. Keep the k's somewhat low by going after a range of 100 or 500 n values, something like that. Testing times are very fast at this level, so sieving efficiency is much less important. Besides, the much longer testing times for large k's likely completely offset any sieving advantage of a single n. Don't let an out-of-date piece of sieving software (NewPGen) dictate your project goals.

With the current talk and at the current rate, a single individual could easily come up with a record twin long before a huge DC effort does. IMHO, talk of going after a twin at n=500K, 666666, 666777, or 999999 is nonsense. Making a big splash with a monster twin seems cool, but the risk of not finding one for 5 years or more is too large, and it encourages people to go off and do more manageable searches individually. Make it a sure thing and work your way up, and you'll get more participants in the long run. A twin at n=220K or 225K would bring many more participants to future efforts at n=250K, 275K, or 300K.

Gary
#11
I quite division it
"Chris"
Feb 2005
England
31×67 Posts
Quote:
*Specifically k=2001-993277, 13 values of n at a time, using NewPGen with its maximum 485 MB of RAM. (I emailed Paul Jobling to see if the maximum RAM could be increased, but Windows prevents it.)

IIRC, there is a bit of chat about different methods in the "List of small twins ..." thread. (My method is referred to in post #143.) Maybe it's faster using tpsieve now anyway.