Ah, yes ... I have measured the distribution of the primes in the two relation files, and indeed Batalov's run on R30-31M has many more relations with algebraic primes in the 30-125M range than xyzzy's, and if I account for that then it does seem that xyzzy is using -15e. The problem is that a relation with one prime in the 30-125M range and two large primes would be found with Batalov's parameters and not with xyzzy's.
If you are running a chunk on the R side from 27.93 to 27.94 million, rlim should be 27930000 and alim should be 125000000; so you need to have a different .poly file for each chunk you run.
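The measurement described above - counting relations whose algebraic side carries a prime in the 30-125M range - can be sketched in a few lines. This is a hypothetical helper, not part of GGNFS; it assumes the common `a,b:rational_primes:algebraic_primes` relation format with comma-separated hex primes, which may differ from the actual files in this run.

```python
# Hypothetical sketch: tally relations whose algebraic side contains a
# prime in [30M, 125M).  Assumed line format (an assumption, not verified
# against these files):  a,b:rat_prime,rat_prime,...:alg_prime,...  in hex.

def count_large_alg(path, lo=30_000_000, hi=125_000_000):
    """Return (relations with an algebraic prime in [lo, hi), total relations)."""
    hits = total = 0
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue                      # skip blanks and comments
            parts = line.split(':')
            if len(parts) != 3:
                continue                      # skip malformed lines
            total += 1
            alg = [int(p, 16) for p in parts[2].split(',') if p]
            if any(lo <= p < hi for p in alg):
                hits += 1
    return hits, total
```

Comparing the two ratios this produces for the two relation files would show the -15e vs. other-siever discrepancy directly.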
[quote]If you are running a chunk on the R side from 27.93 to 27.94 million, rlim should be 27930000 and alim should be 125000000; so you need to have a different .poly file for each chunk you run.[/quote]Okay. (See our edited post above.)
We have run the "R" side semi-properly, we think. On the "A" side we took the poly file from the "R" side and edited it to fix the "A" side without fixing the "R" side back to the right value. Doh! If the chunk is 31M to 32M, and you are working the "R" side, would you raise the rlim to 31M or would a generic 30M be okay? We used 30M for the rlim for the whole 30-35M "R" chunk. What parts do we need to redo? :smile:
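The per-chunk rule quoted above (on the R side, rlim equals the chunk's starting q; alim stays at the project bound of 125M) can be automated rather than hand-editing .poly files for each chunk. A minimal sketch, assuming a typical `key: value` .poly layout; the function name and layout are illustrative, not the project's actual tooling:

```python
# Hypothetical sketch of stamping per-chunk limits into a .poly file.
# Rule from the thread: for an R-side chunk starting at q0, set rlim = q0
# and keep alim at the project bound (125M here).  The "key: value" line
# layout is an assumption about a typical GGNFS job file.

ALIM = 125_000_000  # project-wide algebraic bound (from the thread)

def chunk_poly(template_lines, q0):
    """Return .poly lines with rlim/alim rewritten for an R-side chunk at q0."""
    out = []
    for line in template_lines:
        key = line.split(':')[0].strip()
        if key == 'rlim':
            out.append(f'rlim: {q0}\n')
        elif key == 'alim':
            out.append(f'alim: {ALIM}\n')
        else:
            out.append(line)
    return out
```

For the 27.93M example above, `chunk_poly(lines, 27_930_000)` would yield `rlim: 27930000` and `alim: 125000000`, matching the quoted advice.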
The test results are in.
[code]-rw-r--r-- 1 m m 1031656 2009-01-11 14:09 a
-rw-r--r-- 1 m m  573416 2009-01-11 13:56 ax
-rw-r--r-- 1 m m  972593 2009-01-11 14:07 r
-rw-r--r-- 1 m m  599596 2009-01-11 13:50 rx[/code]
[LIST]
[*]"A" side sieving 25,000,000 to 25,004,999
[LIST]
[*]a = alim = 25M & rlim = 100M
[*]ax = alim = 25M & rlim = 25M
[/LIST]
[*]"R" side sieving 25,000,000 to 25,004,999
[LIST]
[*]r = alim = 125M & rlim = 25M
[*]rx = alim = 25M & rlim = 25M
[/LIST]
[/LIST]
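Byte counts are only a proxy for yield; counting relations directly is more telling when comparing the four parameter choices above. A minimal sketch, assuming one relation per line with `#`-prefixed comment lines (an assumption about the siever's output format):

```python
# Hypothetical sketch: compare yields by counting relations rather than
# file sizes.  Assumes one relation per non-empty, non-comment line.

def relation_count(path):
    """Count non-comment, non-empty lines (one relation per line assumed)."""
    with open(path) as f:
        return sum(1 for line in f
                   if line.strip() and not line.lstrip().startswith('#'))
```

Running this over `a`, `ax`, `r`, and `rx` would show how many relations each limit choice actually sacrifices, not just how many bytes.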
Ay, there's the rub: do not lower both limits.
My posted Opteron binary is 15e (even though it is a bit old and doesn't have the -R option). I've just checked - I sieve with exactly the same binary. One of its benefits is that you don't have to manipulate the *.poly file [I]ever[/I] (it lowers the necessary limit itself, internally).
Do not lower [I]both[/I] limits - when using any binary. Using this particular binary (and GGNFS-built after 332) - do not lower [I]any[/I] limits. However, the bottom line is that no intervals are not worth redoing (they have enough valuable data as they are; the relations are compatible, if fewer; and you will produce more relations by doing another interval - this goes for me, too). I'll take 65-70M, both sides.
[quote]Do not lower [I]both[/I] limits - when using any binary.
Using this particular binary (and GGNFS-built after 332) - do not lower [I]any[/I] limits.[/quote]Thanks!
We just uploaded the "R" side for 30-34M. You can tell our files because we use lower-case letters. The file sizes look close to the others', so we expect we haven't mangled them up too badly.
We are currently (and properly) working the "A" and "R" sides of 34-35M and of 25-30M. The "A" side of 30-34M that we uploaded earlier is obviously suboptimal, but we gather that it is a workable situation. Thanks for letting us participate!
[quote=Batalov;158173]However, the bottom line is that no intervals are [strike]not[/strike] worth redoing ...(if they are >50% done) [/quote]
My semantics had already been noted to be abstruse (see above). Well, I am glad that you saw through it. Yes, those r30-34 look quite good, so don't [URL="http://lyricwiki.org/Baz_Luhrmann:Everybody's_Free_(To_Wear_Sunscreen)"]berate yourself[/URL] too hard. It's all good.
_____
[SIZE=1]"...sometimes you're ahead, sometimes you're behind.[/SIZE]
[SIZE=1]The race is long and, in the end, it's only with yourself.[/SIZE]
[SIZE=1]Remember compliments you receive.[/SIZE]
[SIZE=1]Forget the insults; if you succeed in doing this, tell me how." (c) Mary Schmich[/SIZE]
Res. 75-85.
Reserving 90-100.
We can understand (and dream of) having a lot of horsepower to throw at a job, but how do you manage to keep track of everything?
[quote=Xyzzy;159612]We can understand (and dream of) having a lot of horsepower to throw at a job, but how do you manage to keep track of everything?[/quote]
I know you're not referring to me here, but from my POV the answer is "scripts". I'm not sure what arrangement of computers J.F. has access to, but for me the cluster uses a queuing system, so all jobs are facilitated by qsub. Once a set of scripts is in place to divide up a range and kick off jobs to qsub, which then distributes them to worker nodes, the management of files goes pretty easily.
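A minimal sketch of that scripts-plus-qsub arrangement: split a special-q range into fixed-size chunks and emit one submittable job script per chunk. The siever name, flags, and PBS header below are illustrative assumptions, not a tested recipe:

```python
# Hypothetical sketch of range-splitting for a queuing system.
# The siever invocation (binary name, -a/-r side flag, -f start, -c count,
# -o output) and the PBS header line are illustrative assumptions.

def split_range(q0, q1, step):
    """Yield (start, length) chunks covering the special-q range [q0, q1)."""
    q = q0
    while q < q1:
        n = min(step, q1 - q)
        yield q, n
        q += n

def job_script(q, n, side='a'):
    """Return a one-chunk job script suitable for submission via qsub."""
    return (
        "#!/bin/sh\n"
        "#PBS -l nodes=1\n"
        f"./gnfs-lasieve4I15e -{side} job.poly -f {q} -c {n} "
        f"-o rels.{q}.{side}\n"
    )
```

One would write each script to a file and pass it to qsub; the queuing system then handles distribution to the worker nodes, and collecting the `rels.*` files afterward is a simple glob.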