
2009-09-06, 17:53   #298
henryzz
Just call me Henry

"David"
Sep 2007
Cambridge (GMT/BST)

5722₁₀ Posts

Is there any chance of a number-of-factors-found counter + a seconds-per-factor counter at some point?
2009-09-06, 17:53   #299
mdettweiler
A Sunny Moo

Aug 2007
USA (GMT-5)

3·2,083 Posts

Quote:
 Originally Posted by henryzz is there any chance of a number of factors found counter + seconds per factor counter at some point?
Indeed, that would be very useful in determining optimal depth. Once you have the removal rate and an estimated yield, it's a simple matter of plugging stuff into a formula to project the target depth.
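The "formula" mdettweiler alludes to can be sketched like this (my own formulation, not his exact method): by Mertens' theorem, deepening a sieve from depth p to depth q removes roughly N·(1 − ln p / ln q) of the N surviving candidates, and sieving stays worthwhile while removing one more candidate costs less than one LLR test.

```python
import math

def expected_removals(candidates, p_current, p_target):
    """Mertens-style estimate of how many candidates are removed by
    deepening the sieve from p_current to p_target."""
    return candidates * (1 - math.log(p_current) / math.log(p_target))

def worth_deepening(candidates, p_current, p_target,
                    sieve_seconds, llr_seconds_per_candidate):
    """True while the next sieving chunk removes candidates more cheaply
    than LLR-testing them would."""
    removed = expected_removals(candidates, p_current, p_target)
    return sieve_seconds / removed < llr_seconds_per_candidate
```

For example, taking ~18M candidates from 300T to 3P would be projected to remove a bit over a million candidates; whether that chunk is worth running depends entirely on the machine's sieve throughput versus its LLR time per candidate.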

Last fiddled with by mdettweiler on 2009-09-06 at 17:56

2009-09-06, 18:50   #300
Ken_g6

Jan 2005
Caught in a sieve

2×197 Posts

I've never been able to figure out the math for a factors-per-second counter. Even NewPGen isn't quite right, sometimes claiming it's slower when it finds two factors quickly in a row. Plus, that wouldn't tell you *unique* factors per second anyway, since the sieve file never gets updated.

axn, I noticed your 2GHz comment, so I re-benchmarked at 2GHz and got 17.5/14.25M for x64/SSE2 respectively.
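One way around the artifact Ken_g6 describes (two quick factors making the sieve look slower) is to smooth the inter-factor gap with an exponential moving average rather than reporting the raw last gap. A hypothetical sketch, not anything NewPGen or tpsieve actually implements:

```python
class FactorRateCounter:
    """Smoothed seconds-per-factor counter using an exponential moving
    average, so one lucky pair of back-to-back factors doesn't swing
    the displayed rate."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha      # smoothing weight for the newest gap
        self.avg_gap = None     # smoothed seconds between factors

    def record(self, seconds_since_last_factor):
        if self.avg_gap is None:
            self.avg_gap = seconds_since_last_factor
        else:
            self.avg_gap = ((1 - self.alpha) * self.avg_gap
                            + self.alpha * seconds_since_last_factor)

    def seconds_per_factor(self):
        return self.avg_gap
```

It still can't count *unique* factors, of course; that would require checking each factor against the current sieve file, which (as noted) never gets updated mid-run.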
2009-09-07, 03:54   #301
axn

Jun 2003

37×127 Posts

Quote:
 Originally Posted by mdettweiler Indeed, that would be very useful in determining optimal depth. Once you have the removal rate and an estimated yield, it's a simple matter of plugging stuff into a formula to project the target depth.
Much more accurate estimates of factor removal rates can be obtained from the sieve-speed parameter. The counter feature is not at all reliable for a project of this magnitude. That said, it can be a useful feature for individuals running their own projects.
Quote:
 Originally Posted by Ken_g6 axn, I noticed your 2GHz comment, so I re-benchmarked at 2GHz and got 17.5/14.25M for x64/SSE2 respectively.
Holy crap, batman! That means we can take my earlier estimate of 300T and straight out multiply that by 10 -- so 3P. It also means that your original calculation of 5P is indeed in the right ball park -- maybe more, if you factor in doublechecks.
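The near-linear scaling axn applies here (10× the throughput, so ~10× the depth) drops out of a simple model: near depth p, the factor-finding rate behaves like throughput / (p·ln p), and the optimal depth is where that rate falls to the LLR testing rate. A rough sketch under that assumed rate model (the model is my illustration, not from the thread):

```python
import math

def optimal_depth(throughput, llr_rate, p_lo=1e12, p_hi=1e18):
    """Geometric bisection for the depth p at which the factor-finding
    rate throughput / (p * ln p) drops to the LLR testing rate."""
    for _ in range(200):
        mid = math.sqrt(p_lo * p_hi)
        if throughput / (mid * math.log(mid)) > llr_rate:
            p_lo = mid   # still removing candidates faster than LLR would
        else:
            p_hi = mid
    return p_lo
```

Because of the ln p in the denominator, a 10× throughput gain pushes the optimal depth up by slightly less than 10× (the deeper range is a little "thinner" in factors), which is consistent with 300T scaling to roughly 3P rather than exactly.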

2009-09-07, 04:31   #302
geoff

Mar 2003
New Zealand

13·89 Posts

The qmax=10e6 option in the default tpconfig.txt file should probably be removed/commented out, or at least made much larger, as the 10e6 value was intended for single-n sieving and will slow down the current project once p > 100e12.
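Concretely, the change geoff suggests would leave the shipped default looking something like this (hypothetical file excerpt; only the `qmax=10e6` setting itself appears in the post):

```
# qmax was tuned for single-n sieving; it slows the current multi-n project
# once p > 100e12. Leave it commented out, or set it much larger.
# qmax=10e6
```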
2009-09-07, 04:37   #303
axn

Jun 2003

37·127 Posts

Quote:
 Originally Posted by Ken_g6 As for BOINC...each sieve instance takes about 1GB of RAM. If BOINC could run one sieve instance with as many threads as needed (which seems to work for me at least), it might be plausible; but 1GB is still a lot. On the other hand, the file could be broken up more, and it wouldn't cost too much in overhead - at least, not yet.
I envisioned the whole thing like this:

* Manual sievers take the sieve blocks to ~100T.
* Then feed into BOINC sieve, 1000n at a time.
* As each block reaches optimal depth, it gets put into LLR (manual/BOINC/whatever).

Once one group is done with their "work unit", they move on to the next batch. The project doesn't have to end at any particular N -- this thing scales very well.
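The handoff axn describes can be pictured as a three-stage state machine. A toy sketch; the ~100T handoff is from the post, while the 3P optimal depth is an assumed placeholder:

```python
MANUAL_TARGET = 100e12   # manual sievers take each block to ~100T (from the post)
OPTIMAL_DEPTH = 3e15     # assumed optimal depth (~3P) before LLR takes over

def next_stage(depth):
    """Which stage a sieve block belongs in, given its current depth p."""
    if depth < MANUAL_TARGET:
        return "manual-sieve"
    if depth < OPTIMAL_DEPTH:
        return "boinc-sieve"
    return "llr"
```

The scalability axn points to falls out of this structure: blocks in different stages never block one another, so the project never has to end at any particular n.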

Last fiddled with by axn on 2009-09-07 at 04:40

2010-05-29, 08:12   #304
Oddball

May 2010

499 Posts

I've spent the past half hour or so browsing through this thread, and two things come to mind.

First, the sieve files for n=500000 seem to have vanished into thin air, as nothing ever gets uploaded. From what I've seen, at least three people had the files at some point: pacionet, cipher, and MooooMoo. Let's start from the beginning. At first, pacionet started sieving 0-50G. From: http://www.mersenneforum.org/showpos...5&postcount=90

Quote:
 OK, I'll sieve 0-50G because I have not enough RAM.
Next come some status updates, the latest of which is here: http://www.mersenneforum.org/showpos...&postcount=134

Quote:
 n=500,000 range= 0-50G sieving depth= 89.0 T candidates= 20,176,923 rate= 1 k every 1.2 seconds
A change in the siever then occurs: http://www.mersenneforum.org/showpos...&postcount=135

Quote:
 From today, cipher replaces me in sieving range 0-50G, n=500,000. I sent him the current output files. He can run sieve with more powerful hardware than mine.
No sieve file is uploaded, so I'm assuming they exchanged info via email. cipher later says that he intends to upload the file: http://www.mersenneforum.org/showpos...&postcount=143

Quote:
 I will upload the Data for n=500000 at 100T 200T and so on

But the next status update arrives with no file attached:

Quote:
 n=500,000 range= 0-50G sieving depth= 228.8 T candidates= 19,042,103 Avg K per 1M = 381
and neither does this one: http://www.mersenneforum.org/showpos...&postcount=149

Quote:
 n=500,000 range= 0-50G sieving depth= 300 T candidates= 18,732,832 Avg K per 1M = 374
A request to upload the file then follows: http://www.mersenneforum.org/showpos...&postcount=153
Quote:
 Cipher, Could you post the 1-50G file when it gets to 500T? I'm going to make it available for distributed sieving then
and the request is acknowledged...: http://www.mersenneforum.org/showpos...&postcount=154

Quote:
 Allright MooooMOO you got it i will post it when it reaches 500T.
...but no file is posted! : http://www.mersenneforum.org/showpos...&postcount=157

Quote:
 n=500,000 range= 0-50G sieving depth= 520.7 T candidates= 18,128,550 Avg K per 1M = 363
A discussion of sieving to 3P comes up. Carlos (em99010pepe) indicates an interest in joining the sieve effort: http://www.mersenneforum.org/showpos...&postcount=174

Quote:
 I can help but I don't know what to do. Please send me a PM with details.
and PMs were probably exchanged: http://www.mersenneforum.org/showpos...&postcount=175

Quote:
 If you still want to help out with 500,000, PM cipher for the 1-50G file. You can sieve the last part of the range (maybe 2500T-3000T?), while cipher can finish the first part.
Any sieving done past that point is a mystery. The last known update is this one: http://www.mersenneforum.org/showpos...&postcount=177
Quote:
 n=500,000 range= 0-50G sieving depth= 2500T candidates= 16,554,793 Avg K per 1M = 331
What happened? Did Carlos complete 2500T-3000T, as MooooMoo suggested? That range is never mentioned again in that thread, and only resurfaces in the forum nearly two years later, which leaves even more confusion: http://www.mersenneforum.org/showthread.php?t=12038

Quote:
 cipher: does anyone have a backup for that n? I know joshua2 was helping me out
 joshua2: I don't remember that
Now let's get back to the 50G-208G part of the sieve. Unfortunately, information on this is even harder to find than information on the 0-50G part. First, we have the initialization from MooooMoo: http://www.mersenneforum.org/showpos...6&postcount=95
Quote:
 I've started sieving 50G-208G. My progress is at 26T.
followed by further progress: http://www.mersenneforum.org/showpos...&postcount=136

Quote:
 50G-208G is now complete to 1200T
and an offer to upload the file: http://www.mersenneforum.org/showpos...&postcount=166
Quote:
 OK, I'll upload the 50G-208G files two weeks or so from today.
But once again, no file is uploaded! A year after that, there's another request to upload the file, but there is no mention of the n=500000 effort: http://www.mersenneforum.org/showpos...&postcount=191
Quote:
 Joshua2: how is seiving for next n coming? Are you going to upload it and make it public soon?
 MooooMoo: the next value after n=333,333 may be a range of n, not a single n-value.
Does anyone know what became of this n? If the sieve file is found, it could possibly replace the undersieved n=390000 effort or at least make testing n=500000 from k=1-10M much easier (for the variable n project).

2010-05-29, 08:33   #305
Oddball

May 2010

499 Posts

Quote:
 Originally Posted by Oddball I've spent the past half hour or so browsing through this thread, and two things come to mind. First...
...and here's the second part. Both philmoore and thommy provided links to the primeform user group. Two interesting posts (by David Broadhurst and David Underbakke) are here:

http://tech.groups.yahoo.com/group/p...m/message/8342
http://tech.groups.yahoo.com/group/p...m/message/8344

Let's look at the first message, by Mr. Underbakke:
Quote:
 I have not updated the public copy of TwinGen for some time, but have continued development. (NO public release means no need for really good user interfaces). Many of the problems with very wide sieves were optimized in the development. The hugh cache misses in array mode were resolved, more than tripling the speed of the seive.
Does anyone have the most recent copy of TwinGen or the secret version that's not for public release yet? If so, could you upload it and run some benchmarks? We're doing a quad sieve for the first part of the hybrid Operation Megabit Twin sieve, so this info would be useful. Just compare the times to get to p=10G with a range of k=1-10G for both NewPGen and TwinGen.

Quote:
 In the 172 kbit work, I sieved from k=1 to 1,300,000,000,000 using a quad sieve from p=2 to around 486,000,000,000,000. (k=1 to 1300T, p=2 to 486T) There is some effort to get the sieve off the ground (combination of automated bitmap sieve, and array mode sieves; details available upon request) It is very manageable. The automation worked with 100T k ranges until p=10T was reached. Then everything was merged into a master file and run in parallel with different p ranges. All steps ran fine on computers with only 256 MB of total RAM.
First of all, shouldn't it be k=1 to 1.3T, not k=1 to 1300T? And should "The automation worked with 100T k ranges" be "The automation worked with 100G k ranges"? Oh well, that's a minor issue. The more important one is the part where he says that only 256 MB of RAM is required. Can someone compare the RAM requirements for sieving a million candidates using NewPGen, a million candidates using tpsieve, and a million candidates using TwinGen?

Here's the message from Mr. Broadhurst:
Quote:
 Jean tells me that LLR slowdown (not, I repeat not, "unreliability") begins at k = 2^53 = 9P, due to limitations on George's gwnums.
I've done some tests that indicate the slowdown is at 750T, not at 9P. Could someone confirm this for me? Just check the iteration times for k*2^1000000-1 for k=740T and k=760T.
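For reference, the 9P figure Broadhurst quotes is exactly 2^53, the largest power of two below which every integer is exactly representable in an IEEE-754 double (the type George's gwnum library works in). A quick check of that threshold (my illustration, not from the thread):

```python
# 2^53 is the exact-integer limit of an IEEE-754 double: below it every
# integer is representable; at and above it, consecutive integers collide.
limit = 2 ** 53
print(limit)                              # 9007199254740992, i.e. ~9.007e15 = ~9P
print(float(limit) == float(limit + 1))   # True: k and k+1 become indistinguishable
print(float(limit - 1) == float(limit))   # False: below 2^53 they stay distinct
```

Note this only explains where exactness is *lost*; it doesn't rule out a performance knee at a lower k like the 750T that my tests suggest, which is why a direct timing comparison would settle it.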

2010-05-30, 07:42   #306
Oddball

May 2010

499 Posts

Quote:
 Originally Posted by Oddball Does anyone know what became of this n?
Well, it seems that part of the file has been found. Here's an email I've just received:

Dear Oddball,

I happened to have come across your post this morning, and I would like to tell you that the file was not completely lost. MooooMoo was a real life friend of mine, and he sent me this file for safekeeping, which I have zipped and uploaded. The bad news, as you'll soon discover, is that only the first part of the range is available; the rest was either sent to someone else or is on an old computer I no longer own. I hope this is of some use to you, and even if it isn't, I am happy to have done my part to help close an unsolved chapter in the project's history.

Warm regards,

(name withheld)

n=500,000
range= 50-100G
sieving depth= 135.9 T
candidates= 19,651,092
Avg K per 1M = 393

Odds that a random candidate in the file will yield a twin: 1 in 36 million
Odds that a random candidate in the file will yield a prime: 1 in 6000
Estimated number of (single) primes in the file: 3300
Probability that one of the candidates in the file will yield a twin: 42%
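The quoted statistics are internally consistent: with N candidates and per-candidate twin odds p, the chance of at least one twin is 1 − (1 − p)^N ≈ 1 − e^(−Np). A quick check (my arithmetic, using only the numbers listed above):

```python
# Sanity-check the quoted file statistics.
candidates = 19_651_092
p_twin = 1 / 36e6       # "1 in 36 million" per candidate
p_prime = 1 / 6000      # "1 in 6000" per candidate

prob_twin = 1 - (1 - p_twin) ** candidates
expected_primes = candidates * p_prime

print(f"P(at least one twin) = {prob_twin:.0%}")          # ~42%, as quoted
print(f"Expected single primes = {expected_primes:.0f}")  # ~3275, quoted as ~3300
```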

2010-05-30, 10:08   #307
Oddball

May 2010

499 Posts

Quote:
 Originally Posted by Oddball Well, it seems that part of the file has been found.
And here's another part:

http://www.sendspace.com/file/8l2wci

I was emailed literally a few minutes ago by someone else (not the person who sent me the 50G-100G file) who left me a message:

Got this a few winters ago. You might want it. Enjoy.

n=500,000
range= 100G-208G
sieving depth= 78.36 T
candidates= 43,914,579
Avg K per 1M = 407

I suppose this provides some closure to the mystery files. If you have some cores to spare, I'd prefer that you work on the variable n range project instead of LLRing the n=500000 candidates or sieving either of these files further. Right now, the focus is on getting the variable n range to the optimal sieve depth, and we could use all the help we can get.

2010-09-30, 15:33   #308
Ken_g6

Jan 2005
Caught in a sieve

110001010₂ Posts

FYI, I have a new version of TPSieve out, based on, and in the same archive as, the newer PPSieve, v0.3.10 (source). Despite being newer, this version is unfortunately a little slower in many cases, but it's likely to be faster for people with AMD processors. If anyone finds it more than 10% slower than the old version on their machine, let me know; I plan to work on speedups later. By the way, if you're looking for the SSE2 version, it's rolled into the regular 32-bit version and used automatically.

Last fiddled with by Ken_g6 on 2010-09-30 at 15:33

