mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Aliquot Sequences (https://www.mersenneforum.org/forumdisplay.php?f=90)
-   -   Reserved for MF - Sequence 4788 (https://www.mersenneforum.org/showthread.php?t=11615)

Dubslow 2012-04-12 02:03

How does the sec/rel rate compare across different-size jobs? (On a C125 I'm doing, I'm getting about 0.0077 sec/rel.)

And while I'm making the post, how do you compare polys? That 'e' value, which I think I've seen referred to as 'Murphy <something-or-other>'? There isn't exactly a beginner's guide to running (without completely understanding) NFS that I've found yet :razz:

(Or, how do you estimate rels required for a given size number?)

Batalov 2012-04-12 02:17

The speed will be what it will be (and on different nodes it will be different but proportional to their 'strength'). It is gauged by a small sieve with a command line like
[CODE]gnfs-lasieve4I13e -a t1.poly -f 9000000 -c 2000
gnfs-lasieve4I13e -a t2.poly -f 9000000 -c 2000
# and then repeat with different staggered -f ("from")[/CODE]

Murphy E is a rough estimation function, but a head-to-head run like the above will help decide between a few top E-rated contenders. It only makes sense to bother with this for gnfs > 150 or snfs > 220, roughly; otherwise just take the top poly by E.
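For anyone wanting to automate that head-to-head comparison, here is a minimal Python sketch that ranks candidates by the sec/rel figure in the siever's summary line. The line format is the one quoted later in this thread; the t2.poly numbers are invented for illustration, and this assumes both test sieves used the same -f/-c range.

```python
import re

# Parse the summary line the lattice siever prints after a test range,
# e.g. "total yield: 2905, q=9002003 (0.01983 sec/rel)", and rank
# candidate polynomials by measured sec/rel (lower is better).
LINE_RE = re.compile(r"total yield:\s*(\d+),\s*q=(\d+)\s*\(([\d.]+) sec/rel\)")

def parse_summary(line):
    """Return (relations, last_q, sec_per_rel) from a summary line."""
    m = LINE_RE.search(line)
    if not m:
        raise ValueError("unrecognized summary line: " + line)
    return int(m.group(1)), int(m.group(2)), float(m.group(3))

def best_poly(summaries):
    """Pick the candidate with the lowest sec/rel from {name: line}."""
    return min(summaries, key=lambda name: parse_summary(summaries[name])[2])

summaries = {
    "t1.poly": "total yield: 2905, q=9002003 (0.01983 sec/rel)",
    "t2.poly": "total yield: 2650, q=9002003 (0.02141 sec/rel)",
}
print(best_poly(summaries))  # t1.poly
```

Note that raw sec/rel over a short range ignores duplicate rates, so it is only a tiebreaker between already-good polys, not a substitute for the full comparison.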

For the third question, you may want to search the forum and read; there are volumes written about this. Roughly, you want a minimum of 22M non-redundant rels for a job with lpbr/a 28, 46M for lpbr/a 29, and 92M for lpbr/a 30. More is initially better, then becomes a waste (the time spent on extra sieving is not repaid by the shorter linear algebra).
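Those minimums roughly double with each extra large-prime bit, so they can be sketched as a one-line rule of thumb, anchored here at the 46M figure for lpbr/a 29 (the 22M figure above comes out as 23M under pure doubling, so treat this as ballpark only):

```python
def min_relations(lpb):
    """Very rough minimum of non-redundant relations for a given lpbr/a,
    doubling per bit from the 46M-at-29 anchor quoted above."""
    base_lpb, base_rels = 29, 46_000_000
    return round(base_rels * 2.0 ** (lpb - base_lpb))

for lpb in (28, 29, 30):
    print(lpb, min_relations(lpb))
```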
_____________

EDIT: musing on Jayson's correct message below (a primitive explanation): the norm makes this one faster, but less productive per unit of work range (due to alpha), so the range does need correction. Oh, wait, you shrank it /I would have expected it to need extending/. Anyway, Ben will adjust as needed.

Good call B[sup]2[/sup]! Too small for RSALS, and even if they did take it, the job would sit first in the queue for a day.

jrk 2012-04-12 02:37

Batalov's poly looks good to start sieving, but the specialQ range needs to be changed from the one in the previous poly it was copied from.

I calculated a specialQ range of 5M to 24M for Batalov's poly, using siever 13e.

bsquared 2012-04-12 02:39

[QUOTE=jrk;296196]Batalov's poly looks good to start sieving, but the specialQ range needs to be changed from the one in the previous poly it was copied from.

I calculated a specialQ range of 5M to 24M for Batalov's poly, using siever 13e.[/QUOTE]

I'll do it.

Dubslow 2012-04-12 02:50

[QUOTE=Batalov;296194]The speed will be what it will be (and on different nodes it will be different but proportional to their 'strength'). It is gauged by a small sieve with a command line like
[CODE]gnfs-lasieve4I13e -a t1.poly -f 9000000 -c 2000
gnfs-lasieve4I13e -a t2.poly -f 9000000 -c 2000
# and then repeat with different staggered -f ("from")[/CODE]

Murphy E is a rough estimation function, but a head-to-head run like the above will help decide between a few top E-rated contenders. It only makes sense to bother with this for gnfs > 150 or snfs > 220, roughly; otherwise just take the top poly by E.

For the third question, you may want to search the forum and read; there are volumes written about this. Roughly, you want a minimum of 22M non-redundant rels for a job with lpbr/a 28, 46M for lpbr/a 29, and 92M for lpbr/a 30. More is initially better, then becomes a waste (the time spent on extra sieving is not repaid by the shorter linear algebra).[/QUOTE]Ok, thanks, it's a good start. Now, the (really) stupid question: What's lpbr/lpba (and the other similar lines in the poly file)?
Edit: Is mfb* something to do with 'factor-base'? I've heard that term, but have only a small idea of how it applies to NFS... is mfb* always 2*lpb*?


(And is there a reference for how to use lasieve? There's no -h... I can guess that -a is the job file, -f is the starting q, and -c is how far to go?)

I just tried your poly (and your command), and got this:
total yield: 2905, q=9002003 (0.01983 sec/rel)
That would mean I could do the whole job (with four cores) in just over 1.5 days... but I don't really want to put all four cores on it. Maybe a C130 or 135? Oh well. [strike]RSALS![/strike]
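The wall-clock arithmetic behind an estimate like that is just relations x sec/rel / cores. A sketch using the 0.01983 sec/rel result above and Batalov's ~22M-relation figure for an lpbr/a-28 job (duplicates and node-to-node variation are ignored, so this is a floor):

```python
# Back-of-the-envelope sieving time from a test-sieve rate. The 22M
# relation target is Batalov's lpbr/a-28 minimum from earlier in the
# thread; 0.01983 sec/rel is the test-sieve result above.
def sieve_days(relations, sec_per_rel, cores):
    return relations * sec_per_rel / cores / 86_400  # 86400 s per day

print(round(sieve_days(22_000_000, 0.01983, 4), 2))  # about 1.26 days
```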

Edit: Whoops, late to the party. That's what I get for going on a drink-run with suitemates halfway through writing a post :P

jrk 2012-04-12 03:07

[QUOTE=Batalov;296194]EDIT: musing on Jayson's correct message below (a primitive explanation): the norm makes this one faster, but less productive per unit of work range (due to alpha), so the range does need correction. Oh, wait, you shrank it /I would have expected it to need extending/. Anyway, Ben will adjust as needed.[/QUOTE]
What I did is sample a small number of specialQ roots in each 1M range of Q, and then multiply the number of relations found in each sample by the ratio of the number of specialQ roots in the range over the number of roots in its sample.

If the number of samples is large enough, this leads to a good estimate for the number of raw relations that will be found. This still doesn't tell us how many raw relations will be needed, since the duplication rate will depend on the number of specialQ sieved, but I took a guess that 29M raw relations would be more than adequate.
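A Python sketch of that estimator, with the root counting and test sieving abstracted away: each block is just (specialQ roots in the 1M range, roots actually sampled, relations the sample produced), and the block numbers below are invented for illustration.

```python
# jrk's estimator: test-sieve a handful of specialQ in each 1M block of
# Q, then scale each sample's relation count by the ratio of specialQ
# roots in the block to roots actually sampled.
def estimate_relations(blocks):
    """blocks: iterable of (roots_in_block, roots_sampled, rels_in_sample)."""
    return sum(rels * (roots / sampled) for roots, sampled, rels in blocks)

# Three invented 1M blocks: (roots in block, roots sampled, rels found)
blocks = [(120_000, 60, 900), (118_000, 60, 840), (115_000, 60, 790)]
print(round(estimate_relations(blocks)))  # ~5M raw relations projected
```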

bsquared 2012-04-12 03:10

[QUOTE=jrk;296202]What I did is sample a small number of specialQ roots in each 1M range of Q, and then multiply the number of relations found in each sample by the ratio of the number of specialQ roots in the range over the number of roots in its sample.

If the number of samples is large enough, this leads to a good estimate for the number of raw relations that will be found. This still doesn't tell us how many raw relations will be needed, since the duplication rate will depend on the number of specialQ sieved, but I took a guess that 29M raw relations would be more than adequate.[/QUOTE]

Do you have a script that does that? If so I'd love to see it.

jrk 2012-04-12 03:22

[QUOTE=Dubslow;296201]Ok, thanks, it's a good start. Now, the (really) stupid question: What's lpbr/lpba (and the other similar lines in the poly file)?
Edit: Is mfb* something to do with 'factor-base'? I've heard that term, but have only a small idea of how it applies to NFS... is mfb* always 2*lpb*?[/QUOTE]
lpbr/a are the limits (in bits) on the size of large primes which will be allowed in the relation values. mfbr/a are the limits (in bits) on the size of (trial-factored) cofactors which will be split by MPQS after sieving.

[QUOTE=Dubslow;296201](And is there a reference for how to you lasieve? There's no -h... I can guess that -a is the job file, -f is the starting q, and -c is how far to go?)[/QUOTE]
-a is the flag to tell the siever to choose specialQ on the algebraic side of the relation. In some jobs you would use -r instead to sieve specialQ on the rational side.

bsquared 2012-04-12 03:29

[QUOTE=Dubslow;296201]Ok, thanks, it's a good start. Now, the (really) stupid question: What's lpbr/lpba (and the other similar lines in the poly file)?
Edit: Is mfb* something to do with 'factor-base'? I've heard that term, but have only a small idea of how it applies to NFS... is mfb* always 2*lpb*?


(And is there a reference for how to use lasieve? There's no -h... I can guess that -a is the job file, -f is the starting q, and -c is how far to go?)

I just tried your poly (and your command), and got this:
total yield: 2905, q=9002003 (0.01983 sec/rel)
That would mean I could do the whole job (with four cores) in just over 1.5 days... but I don't really want to put all four cores on it. Maybe a C130 or 135? Oh well. [strike]RSALS![/strike]

Edit: Whoops, late to the party. That's what I get for going on a drink-run with suitemates halfway through writing a post :P[/QUOTE]

If you've fetched the ggnfs SVN repository, then the file /src/lasieve4/INSTALL.and.USE may answer some of your questions. Then again, it may only create more ;)

In brief, lpbr/a specifies how big so-called "large primes" are allowed to be in each relation. NFS factorizations get almost all of their relations from combinations of relations with large primes. The bigger this value is, the more relations are needed to get the required number of combinations. This works the same way as in QS, so google "QS large prime variation" or "NFS large prime variation" to learn more about it.

mfbr/a specifies the maximum size of composite that the siever will attempt to split into large primes for each potential relation, after sieving has removed all primes in the factor base.
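As a toy illustration of the two cuts, with naive trial division standing in for the siever's MPQS step and deliberately small bounds (real jobs use e.g. lpb 28-30 and mfb near 2*lpb):

```python
def factor(n):
    """Naive trial division of an odd n (a stand-in for the MPQS split)."""
    fs, f = [], 3
    while f * f <= n:
        while n % f == 0:
            fs.append(f)
            n //= f
        f += 2
    if n > 1:
        fs.append(n)
    return fs

def accepts(cofactor, lpb, mfb):
    """Keep a relation only if the post-sieving cofactor fits in mfb bits
    and every large prime it splits into fits in lpb bits."""
    if cofactor.bit_length() > mfb:   # too big to bother splitting
        return False
    return all(p.bit_length() <= lpb for p in factor(cofactor))

# A product of two 19-bit primes passes lpb=20/mfb=40; a lone 31-bit
# prime fits under mfb but its factor busts lpb.
print(accepts(524287 * 524287, 20, 40))  # True
print(accepts(2**31 - 1, 20, 40))        # False
```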

But you should be doing more drink runs instead of reading this :)

Dubslow 2012-04-12 03:41

[QUOTE=bsquared;296210]If you've fetched the ggnfs SVN repository, then the file /src/lasieve4/INSTALL.and.USE may answer some of your questions. Then again, it may only create more ;)
[/quote]Nope, just asked somebody for links to lasieve.x so I could let yafu do all the hard work ;)
[QUOTE=bsquared;296210]
But you should be doing more drink runs instead of reading this :)[/QUOTE]
But I still got plenty of water!
[QUOTE=jrk;296207]lpbr/a are the limits (in bits) on the size of large primes which will be allowed in the relation values. mfbr/a are the limits (in bits) on the size of (trial-factored) cofactors which will be split by MPQS after sieving.
[/QUOTE]

[QUOTE=bsquared;296210]
In brief, lpbr/a specifies how big so-called "large primes" are allowed to be in each relation. NFS factorizations get almost all of their relations from combinations of relations with large primes. The bigger this value is, the more relations are needed to get the required number of combinations. This works the same way as in QS, so google "QS large prime variation" or "NFS large prime variation" to learn more about it.

mfbr/a specifies the maximum size of composite that the siever will attempt to split into large primes for each potential relation, after sieving has removed all primes in the factor base.
[/quote]Okay, what about *lim and *lambda? (Or Y0 and Y1? Are those some sort of intercepts?)

bsquared 2012-04-12 03:55

[QUOTE=Dubslow;296211]
Okay, what about *lim and *lambda? (Or Y0 and Y1? Are those some sort of intercepts?)[/QUOTE]

*lim are the factor base bounds on the rational and algebraic sides of the number field. *lambda is:

[QUOTE=INSTALL.and.USE]
# All sieve reports for which the sieve value is at least
# log(abs(polynomial value))-lambda*log(factor base bound)
# will be investigated by the trial division sieve.
[/QUOTE]

Y0/1 are the rational polynomial coefficients.
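In arithmetic terms, the quoted rule says a sieve report survives when its accumulated log is at least log|F(a,b)| - lambda*log(lim), so a larger *lambda lowers the bar and sends more reports to trial division. A sketch with invented numbers:

```python
import math

def report_survives(sieve_value, poly_value, lam, fb_bound):
    """The cut quoted from INSTALL.and.USE above (natural logs)."""
    threshold = math.log(abs(poly_value)) - lam * math.log(fb_bound)
    return sieve_value >= threshold

# e.g. a norm around 10^40 with rlim 2,000,000 and rlambda 2.5:
poly_value = 10**40
threshold = math.log(poly_value) - 2.5 * math.log(2_000_000)
print(round(threshold, 1))  # about 55.8
print(report_survives(60.0, poly_value, 2.5, 2_000_000))  # True
```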

If none of that makes sense to you, I'd recommend you go read a bit about the basics of NFS and sieving (start with QS if you need to).

