mersenneforum.org  

Prime Search Projects > Twin Prime Search
2009-06-19, 05:16   #1
cipher, Feb 2007, 211 Posts

Thanks PG for Sieving, now time to pick a new n?

Thirty days ago, sieving for TPS n=333333 seemed doomed, or even abandoned. People were slowly losing interest; then jmblazek came marching in with PG, and Puzzle-Peter with his arsenal of cores, and they revived the sieving. Hopefully in three months we will have finished all the LLR and, with luck, found a new monster twin prime.

It's time to look to the future and pick our next n. But how do we decide which n should be our next candidate? A few things to keep in mind when picking the next n:
1) n should be large enough that any primes found would make the Top 5000 list.
2) n shouldn't be so large that we need to LLR 1000G to have a decent probability of finding a twin.
3) Or continue n=333333 from 100G onward until we find a twin (unlikely to work out, since primes found at this n will not make the Top 5000 list and contributors will lose interest).

Please recommend a next n, or suggest how we should pick the next n, with reasons supporting your choice.

I suggest n=999999: the LLR time is approximately 1 hour, and any prime found at this n would enter the Top 5000 list in the 300-5000 range, ensuring it stays on the list for the next 10 years.

Could someone calculate how many G we need to sieve to have a 90% chance of finding a twin, and how many primes (not twins) we can expect in the 0-100G range?

thanks
cipher
2009-06-19, 06:36   #2
MooooMoo (Apprentice Crank), Mar 2006, 2×227 Posts

Quote: Originally Posted by cipher
I suggest n=999999: the LLR time is approximately 1 hour, and any prime found at this n would enter the Top 5000 list in the 300-5000 range, ensuring it stays on the list for the next 10 years.

Could someone calculate how many G we need to sieve to have a 90% chance of finding a twin, and how many primes (not twins) we can expect in the 0-100G range?
This was discussed a bit in a post a few years ago:

http://www.mersenneforum.org/showpos...&postcount=146

For n=999999, you need to sieve 827G to have a 90% chance of finding a twin, and the optimal sieve depth will be over 1000P. There will be about 2500-3000 primes in a 0-100G range at n=999999.
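The 827G figure can be reproduced, at least roughly, with the standard Hardy-Littlewood twin-prime heuristic. The sketch below is only an illustration of that heuristic (not necessarily MooooMoo's actual calculation): it models each odd k as giving a twin pair k*2^n-1, k*2^n+1 with probability 4*C2/ln^2(k*2^n) and treats the number of twins in a k-range as Poisson distributed.

Code:
import math

n = 999999
lnN = n * math.log(2)      # log of k*2^n; the ln(k) term is negligible here
C2 = 0.6601618158          # Hardy-Littlewood constant: product over odd primes q of q*(q-2)/(q-1)^2

# Heuristic probability that a random odd k gives a twin pair k*2^n-1, k*2^n+1:
# both members are automatically odd (a factor of 2 each), and the shared
# divisibility constraints at odd primes contribute the constant C2.
p_twin = 4 * C2 / lnN**2

def expected_twins(k_max):
    """Expected number of twins among odd k in [0, k_max)."""
    return (k_max / 2) * p_twin

mu = expected_twins(827e9)
print(f"expected twins for k up to 827G: {mu:.2f}")                     # about 2.3
print(f"P(at least one twin), Poisson model: {1 - math.exp(-mu):.2f}")  # about 0.90

Under these assumptions a 90% chance needs an expected count of ln(10), roughly 2.3 twins, which a k-range of about 830G delivers at n=999999, close to the 827G quoted above. The primes-only count and the optimal sieve depth depend additionally on the sieving depth (a Mertens-type surviving-fraction factor) and on LLR timings, which this sketch does not attempt to model.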
2009-06-19, 10:13   #3
Puzzle-Peter, Jun 2009, 683 Posts

Quote: Originally Posted by MooooMoo
This was discussed a bit in a post a few years ago:

http://www.mersenneforum.org/showpos...&postcount=146

For n=999999, you need to sieve 827G to have a 90% chance of finding a twin, and the optimal sieve depth will be over 1000P. There will be about 2500-3000 primes in a 0-100G range at n=999999.
How do you calculate these numbers? There must be some formulas behind this, but I have no knowledge of them.

Thanks,
Peter
2009-06-20, 15:56   #4
mdettweiler (A Sunny Moo), Aug 2007, USA (GMT-5), 3×2,083 Posts

A while back there was a lot of discussion in this thread about doing a fixed *range* of n over a smaller range of k. Somewhere in there, an analysis was linked that mathematically proved this to be more efficient than searching by fixed n, variable k.

Any thoughts on utilizing such an approach instead of just picking another n?

Last fiddled with by mdettweiler on 2009-06-20 at 15:57
2009-06-21, 07:54   #5
cipher, Feb 2007, 211 Posts

I am all for efficiency, but what utilities would we need to use? I am guessing we would have to write a new program (which is very time consuming). Can we use tools like NewPGen, sr2sieve, tpsieve, etc. for this?
2009-06-21, 17:17   #6
mdettweiler (A Sunny Moo), Aug 2007, USA (GMT-5), 3×2,083 Posts

Quote: Originally Posted by cipher
I am all for efficiency, but what utilities would we need to use? I am guessing we would have to write a new program (which is very time consuming). Can we use tools like NewPGen, sr2sieve, tpsieve, etc. for this?
As was discussed in the thread I linked to earlier, the de facto way of sieving such a range is to use NewPGen's fixed-n mode with the increment counter turned on. Possibly tpsieve or sr2sieve could be used in a similar way with the help of a batch file/shell script. (We'd want to try all three sieving programs to see which one operates fastest, since all of them could theoretically work for this type of range.)
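The looping part of such a batch driver is the easy bit; the sketch below only illustrates the structure of running one fixed-n sieve per n in the chosen box. The command template and the n-range are placeholders of my own, not a real NewPGen or tpsieve invocation; the actual program name and switches would have to come from the respective documentation.

Code:
import subprocess

DRY_RUN = True                       # set False to actually launch the sieves
N_LO, N_HI = 333333, 333432          # an illustrative 100-n box, not a proposal
# Placeholder command template -- NOT a real NewPGen/tpsieve command line;
# substitute the actual program name and switches from its documentation.
SIEVE_CMD = "./fixed_n_sieve --n {n} --kmax 1000000000 --out cand_n{n}.txt"

for n in range(N_LO, N_HI + 1):
    cmd = SIEVE_CMD.format(n=n)
    print("would run:" if DRY_RUN else "running:", cmd)
    if not DRY_RUN:
        subprocess.run(cmd.split(), check=True)   # one fixed-n sieve per n

The per-n output files would then be merged into a single input file for tpsieve, as geoff describes later in the thread.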
2009-06-22, 02:29   #7
geoff, Mar 2003, New Zealand, 13×89 Posts

Quote: Originally Posted by cipher
I am all for efficiency, but what utilities would we need to use? I am guessing we would have to write a new program (which is very time consuming). Can we use tools like NewPGen, sr2sieve, tpsieve, etc. for this?
tpsieve can sieve a range of n, that is, a range of (k, n) pairs with k odd, k0 <= k <= k1 and n0 <= n <= n1.

However, you need to start the sieve with NewPGen, sieving each n separately until p > k1 at least, and then merge the resulting files for input to tpsieve (just delete the header from all except the first file and then concatenate the results).

Edit: For testing purposes you can sieve without an existing sieve file; just start tpsieve with all of the -k -K -n -N -p -P switches. This will find all factors in the range without checking whether they have already been removed from the (non-existent) sieve file.

Last fiddled with by geoff on 2009-06-22 at 02:41
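The merge step geoff describes is simple enough to sketch. The code below is my own illustration and assumes each per-n NewPGen output file is plain text with exactly one header line followed by one candidate per line (an assumption about the file layout, to be checked against real NewPGen output); the script name and file names are hypothetical.

Code:
import sys

def merge_sieve_files(filenames, out_name):
    """Concatenate NewPGen output files, keeping only the first file's header line."""
    with open(out_name, "w") as out:
        for i, name in enumerate(filenames):
            with open(name) as f:
                lines = f.readlines()
            if i > 0:
                lines = lines[1:]        # drop the header from all but the first file
            out.writelines(lines)

if __name__ == "__main__" and len(sys.argv) > 2:
    # usage: python merge_sieve.py merged.txt cand_n333333.txt cand_n333334.txt ...
    merge_sieve_files(sys.argv[2:], sys.argv[1])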
2009-06-22, 10:18   #8
biwema, Mar 2004, 101111101₂ Posts

The advantage of using many N is the smaller FFT size, provided k is not too large. The influence is proportional to log(k), so the difference between k=3 and k=9 is the same as between k=10^6 and k=10^12.

That means we need a large number of N so that k gets reasonably small. For example, if we only use 10 N, we have to sieve 100G each instead of 1T; that improvement does not make much difference. On the other hand, the larger the number of N gets, the higher the sieving time, which reduces the advantage.

I think some research is needed to find out how big k can get (for a given N) before an FFT size change occurs (on the most common CPU architectures).

If we use, for example, 100 N that are close to each other, there is still the possibility of sieving up to 20 N together: at the beginning, sieve all N separately up to (for example) p = 1 billion. At this level the number of remaining candidates is small enough that hash tables can be used instead of arrays. Then we can group 10-20 N together: use the smallest exponent, multiply all candidates k (where the exponent is N+1, N+2, N+3, ...) by 2, 4, 8, ..., and merge these k (see the sketch below). How large a k can be handled depends on the limits of the siever.
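The regrouping step rests on the identity k*2^(N+j) ± 1 = (k*2^j)*2^N ± 1: a candidate at exponent N+j is the same number as the candidate k*2^j at the base exponent N. A minimal sketch of the folding (my illustration, not biwema's code; the function name and example values are made up):

Code:
def fold_to_base_exponent(candidates, base_n):
    """candidates: iterable of (k, n) pairs with n >= base_n.
    Returns sorted k' values, all interpreted at exponent base_n."""
    folded = set()
    for k, n in candidates:
        assert n >= base_n
        folded.add(k << (n - base_n))   # k * 2^(n - base_n)
    return sorted(folded)

# Example: candidates from three neighbouring exponents folded down to n=333333.
per_n = [(15, 333335), (21, 333334), (33, 333333)]
print(fold_to_base_exponent(per_n, 333333))   # -> [33, 42, 60]

The folded k values grow by a factor of 2^j, which is exactly why the number of exponents that can be grouped together is limited by how large a k the siever can handle.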
2009-06-27, 06:13   #9
Puzzle-Peter, Jun 2009, 1253₈ Posts

Any new ideas or insights on this? I'm lacking the knowledge to be useful at this stage, so I'll stick to being curious.

Last fiddled with by Puzzle-Peter on 2009-06-27 at 06:15
2009-07-06, 04:32   #10
gd_barnes, May 2007, Kansas; USA, 13×797 Posts

If you're going to go after a record twin, keep it reasonable. Make it a moderate-sized range of k and n around n=205K-225K; don't just stick with one n. Keep the k's somewhat low by going after a range of 100 or 500 values of n, or something like that. Testing times are very fast at this level, so sieving efficiency is much less important. Besides, the much longer testing times for large k's likely completely offset any sieving advantage of a single n. Don't let an out-of-date piece of sieving software (NewPGen) dictate your project goals.

With the current talk and at the current rate, a single individual could easily come up with a record twin long before a huge DC effort does. IMHO, talk of going after a twin at n=500K, 666666, 666777, or 999999 is nonsense. Making a big splash with a monster twin seems cool, but the risk of not finding one for 5 years or more is too large, and that pushes people toward doing more manageable searches individually. Make it a sure thing, work your way up, and you'll get more participants in the long run. A twin at n=220K or 225K would bring many more participants to future efforts at n=250K, 275K, or 300K.


Gary
2009-07-06, 11:01   #11
Flatlander (I quite division it), "Chris", Feb 2005, England, 4035₈ Posts

Quote: Originally Posted by gd_barnes
If you're going to go after a record twin, keep it reasonable. Make it a moderate-sized range of k and n around n=205K-225K.
Sounds like a good idea. The Top 5000 entry bar is moving up too fast for twin searching, IMHO. BTW, I am still searching below 206K*, so your searching above that would be good.

*Specifically k=2001-993277, 13 n's at a time, using NewPGen with its maximum of 485 MB of RAM. (I emailed Paul Jobling to ask whether the maximum RAM could be increased, but a Windows limitation prevents it.)
IIRC, there is a bit of chat about different methods in the "List of small twins ..." thread. (My method is referred to in post #143.)

Maybe it's faster using tpsieve now anyway.