mersenneforum.org (https://www.mersenneforum.org/index.php)
-   3*2^n-1 Search (https://www.mersenneforum.org/forumdisplay.php?f=14)
-   -   What is the status of the search and another link (https://www.mersenneforum.org/showthread.php?t=503)

 garo 2003-04-08 04:05

What is the status of the search and another link

Since the Announcement message was locked, I thought I'd start this new post.

LLR can also be found at http://mersenne.org/gimps/llr.zip
for those of you who are not Yahoo members and do not want the hassle of joining a new group.

Paul, it would be a good idea if you gave us a simple explanation of what you are trying to achieve, what the steps are - e.g. sieving - and what the current status is. I know you answered several of these questions in your original post - which I moved to the Psearch forum - Will/Mike, can you move that post here? - but it would be a good idea to restate all of that, preferably in the announcement thread.
Thanks

 paulunderwood 2003-04-08 04:30

Re: What is the status of the search and another link

We are building on work done on primes of the form k*2^n-1 by Wilfrid Keller et al:

http://www.prothsearch.net/riesel2.html

Therefore the known values of 'n' that make 3*2^n-1 prime are: 1, 2, 3, 4, 6, 7, 11, 18, 34, 38, 43, 55, 64, 76, 94, 103, 143, 206, 216, 306, 324, 391, 458, 470, 827, 1274, 3276, 4204, 5134, 7559, 12676, 14898, 18123, 18819, 25690, 26459, 41628, 51387, 71783, 80330, 85687, 88171, 97063, 123630, 155930, 164987
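The small end of this list is easy to spot-check. A quick sketch using SymPy's `isprime` (the choice of SymPy is mine, not part of the search's toolchain; any probable-prime test works at these tiny sizes):

```python
from sympy import isprime

# Recompute the exponents n <= 100 for which 3*2^n - 1 is prime.
found = [n for n in range(1, 101) if isprime(3 * 2**n - 1)]
print(found)  # [1, 2, 3, 4, 6, 7, 11, 18, 34, 38, 43, 55, 64, 76, 94]
```

The output matches the first fifteen entries of the published list.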

The domain for 'n' in this search is initially 191600-1000000. I started sieving with Paul Jobling's NewPGen on the 2nd of this month using a 1GHz Athlon. Eleven blocks of work (191600-300000) became available on the 5th. Of those, one was completed in a couple of days using a P4 and LLR. Three of us are currently processing five blocks. That leaves five blocks, and the sieve is already approaching the point where I will release more for testing.
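A fixed-k sieve like NewPGen's rests on the observation that, for each small prime p, the exponents n with p dividing 3*2^n - 1 form a single arithmetic progression (if any). A toy sketch of that idea (`sieve_3_2n_minus_1` and its parameters are my own illustration, not NewPGen's interface; it assumes every candidate 3*2^n - 1 exceeds the sieve primes, which holds for the large n used here):

```python
def small_primes(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = [False] * len(flags[i * i :: i])
    return [i for i, f in enumerate(flags) if f]

def sieve_3_2n_minus_1(n_lo, n_hi, p_limit):
    """Drop n in [n_lo, n_hi) for which some prime p <= p_limit divides 3*2^n - 1."""
    survivors = set(range(n_lo, n_hi))
    for p in small_primes(p_limit):
        if p <= 3:
            continue  # 2 and 3 never divide 3*2^n - 1
        inv3 = pow(3, p - 2, p)  # 3^-1 mod p, via Fermat's little theorem
        # p | 3*2^n - 1  <=>  2^n == inv3 (mod p)  <=>  n == r (mod ord_p(2))
        t, r, order = 1, None, None
        for j in range(1, p):
            t = t * 2 % p
            if t == inv3 and r is None:
                r = j
            if t == 1:
                order = j
                break
        if r is None:
            continue  # inv3 is not a power of 2 mod p, so p divides no candidate
        # first n >= n_lo in the progression r, r + order, r + 2*order, ...
        first = r + max(0, -((r - n_lo) // order)) * order
        for n in range(first, n_hi, order):
            survivors.discard(n)
    return sorted(survivors)

print(sieve_3_2n_minus_1(10, 40, 100))
```

The real sieve runs p into the hundreds of billions, which demands far better algorithms than this linear walk, but the arithmetic-progression structure is the same.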

Unlike Mersenne primes, these numbers don't need to have a prime binary length. They look like this in binary:

101
1011
10111
101111
10111111
etc
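Indeed, 3*2^n is "11" followed by n zero bits, so subtracting 1 gives "10" followed by n ones. A quick check (my own illustration, not part of the search tools):

```python
# 3*2^n is "11" followed by n zeros; subtracting 1 turns those zeros into ones.
for n in range(1, 8):
    assert bin(3 * 2**n - 1)[2:] == "10" + "1" * n
print(bin(3 * 2**6 - 1)[2:])  # 10111111 -- the n = 6 line in the list above
```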

 ET_ 2003-04-08 07:12

I have a 1 GHz Athlon available and not connected to the Internet for a while.

Luigi

 paulunderwood 2003-04-08 23:25

Luigi, I have mailed you a block.

We are now five people with over 10 GHz of combined computing power, and there are three blocks available for anyone to test -- 'n' less than 300,000.

I am sieving 'n' ( 300,000 - 1,000,000 ). I just checked NewPGen and it is knocking out one candidate every 4:12 minutes. The target for releasing the blocks from 300,000 to 400,000 for testing is over 5 minutes per elimination. NewPGen's 'p', the current divisor being checked, is 366.2 billion.

 garo 2003-04-09 01:10

Paul,
Regarding sieving, have you looked at the discussions on the Seventeen or Bust forum? They have some very interesting ideas, and the efficiency of the sievers has gone up about 100 times since NewPGen. Paul Jobling, who wrote NewPGen, wrote a specialized version of it called SoBSieve, and it was much faster. Then Phil Carmody wrote NbeGone, which works for non-x86 computers as well. They also had some discussions about the algorithms behind the sieving.

[Edit: Corrections to programs attributed to different people.]

 smh 2003-04-09 07:45

[quote]They have some very interesting ideas and the efficiency of the sievers has gone up about 100 times since Newpgen. Phil Carmody who wrote newpgen - I think - wrote a specialized version of newpgen called nbegone and it was much faster. Then Paul jobling also wrote sobsieve which is even faster.[/quote]

NewPGen was written by Paul Jobling, and many of the optimizations made for SoBSieve are now also in NewPGen. Also, a big part of the 'efficiency' is due to the fact that 12 k values are sieved at the same time, which is faster than sieving 12 k's separately.

Paul:

If you are planning to search higher than N=1M, it's much more efficient to sieve the complete N range at once, since sieving time is proportional to the square root of the range.

 paulunderwood 2003-04-09 18:46

When to stop sieving and start using LLR, PRP or PFGW is difficult to establish. There are many things to take into account:
* it is nice to have some non-sieve testing happening on smaller 'n'
* the time taken to test '2*n' is just over four times that for 'n'
* sieving is proportional to the square root of the range
* judging NewPGen is tricky -- its elimination timing goes up and down
* LLR/PRP/PFGW timings vary, but not exactly in proportion to size
* different computer architectures
* demand
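One rough way to combine the second and fifth points (my own illustrative model, with assumed timings, not the author's method): each factor the sieve finds saves one primality test, so sieving pays while one elimination costs less than the average test in the block, with test time growing roughly as n^2 (doubling n costs about 4x).

```python
def prp_seconds(n, ref_n, ref_secs):
    """PRP time scaled from one measured point, assuming time ~ n^2."""
    return ref_secs * (n / ref_n) ** 2

# Assumed reference timing: 3,600 s at n = 470,000 on a 1 GHz Athlon.
# Coarse average test cost over a prospective 400,000-500,000 block:
avg = sum(prp_seconds(n, 470_000, 3_600) for n in range(400_000, 500_001, 10_000)) / 11
print(round(avg))  # keep sieving while one elimination is cheaper than this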

Originally, I was going to sieve in 100,000 'n' blocks. I have decided that the work involved to complete 'n' to 1 million is enough work to organise for now.

Of course Paul Jobling's NewPGen is to be attributed when submitting primes to the top 5000 primes. Phil Carmody helped Paul with some maths to get a fixed-k sieve running faster.

After my last message NewPGen dropped to 3:20 minutes, but now reads 5:48 minutes. So by rough calculation the block 300,000-400,000 can be released for testing soon. There are two blocks left from 200,000-300,000 free for testing ( email me ). How I break up 300,000-400,000 depends on feedback from participants, but two GHz-weeks seems about the right size for a block.

 paulunderwood 2003-04-12 00:31

We now number seven. I have just taken 300,000 to 400,000 out of the sieve, leaving 42423 candidates in there -- it had sieved up to NewPGen's p=553,420,892,818. I timed a PrP test on my Athlon 1GHz for 3*2^47000-1 at 3,600 seconds, or exactly an hour, meaning that NewPGen's new cut-off point for 'n' between 400,000 and 500,000 is 10-plus minutes. Meanwhile I have started to issue blocks of 5000 from 300,000 to 400,000; there are nineteen blocks left for anyone to help test -- please email me at paulunderwood@mindless.com to get a block.

[Edit: should be 3*2^470000-1]
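The "PrP test" timed above is, in essence, a Fermat probable-prime test. A minimal sketch of the idea (LLR itself runs the stronger Lucas-Lehmer-Riesel test with large-FFT arithmetic; this plain Python version would be far too slow at n = 470,000):

```python
def is_prp(k, n, base=3):
    """Fermat probable-prime test for N = k*2^n - 1: check base^(N-1) == 1 (mod N)."""
    N = k * 2**n - 1
    return pow(base, N - 1, N) == 1

print(is_prp(3, 18))  # True: n = 18 is on the known-prime list
print(is_prp(3, 5))   # False: 3*2^5 - 1 = 95 = 5 * 19
```

A "True" here only means probably prime; composites that pass (Fermat pseudoprimes) exist, which is why a passing candidate still gets a rigorous proof.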

 smh 2003-04-12 11:25

[quote]I timed a PrP test on my Athlon 1GHz for 3*2^47000-1 at 3,600 seconds or exactly a hour meaning that NewPGen's new cut off point for 'n' between 400,000 and 500,000 is 10 plus minutes. [/quote]

I don't agree on this one; I think you should be sieving much deeper, close to 1 hour per candidate removal, especially when lower tests are still available to assign. Sieving from 400K-1M is only 10% slower than sieving from 500K-1M.
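The 10% figure follows from the square-root rule quoted earlier in the thread; a quick check under that assumption:

```python
# Sieving time ~ sqrt(width of the n range), per the rule of thumb cited above.
ratio = (600_000 / 500_000) ** 0.5  # 400K-1M (width 600K) vs 500K-1M (width 500K)
print(round((ratio - 1) * 100))     # 10 (percent slower)
```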

 paulunderwood 2003-04-12 18:24

Oops! Thanks for making this clearer to me, smh. I have now upped the sieving throughput by using NewPGen's service: the sieving has been split across three Athlons (one Thunderbird and two XPs). Each is doing a range of 600,000,000,000 (0.6 trillion). I will have to wait for the slowest to finish before merging the files. I guess this is going to take a week, but it takes the maximum divisor from 0.6 trillion to 2.4 trillion. I will then review the rates of candidate elimination.

Meanwhile we have two new members and there are seventeen blocks available for team members to test.

 paulunderwood 2003-04-14 01:48

I don't want people to be put off joining the search because of the mistake I made in sieving. I have lost about 120 GHz-hours in a project that is going to take maybe 120,000 GHz-hours to complete. We still need your help. ;) So please write to me ( paulunderwood@mindless.com ) if you want to join in.
