mersenneforum.org  

Old 2012-05-02, 20:43   #23
Dubslow

Quote:
Originally Posted by bsquared
This is the issue - I'm embarrassed I didn't see it before when this discussion originally came up. The problem is that YAFU doesn't know about the new qintsize, so it keeps telling ggnfs to sieve much smaller ranges of special-q. When ggnfs actually goes off and sieves large ranges of special-q, you are in fact sieving the same regions multiple times. Hence the huge number of duplicates. Apologies again for the inefficient qintsize ranges that yafu currently uses - it is an easy fix, but one I won't be able to get to for a couple more days. Until then, you'll just have to put up with the slightly more inefficient filtering schedule (or use a different tool...).

- b.
Would it be possible for me to manually run some sieving? I'm sure it is possible, but the problem is I wouldn't know how to modify the YAFU save files to tell it what's going on after sieving is done.

For my own use:
"ggnfs-13e -a nfs.job -f (q) -c (rangesize) -o rels0.dat"
Old 2012-05-02, 21:02   #24
Batalov
 

Something like that, yes. There are many things that could be done, but I find it dangerous to give (too much) advice to a person who is already changing a tire - the end result could be that the car falls on the person under it.

Whenever you experiment, make a copy ("cp -r dir dir2; cd dir2") and play around there. It is very painful when you type a few lines and suddenly the whole data file is gone and a new factorization session is started; I've been there, it hurts.
Anyway, with a backup, in another directory, you can try
mv nfs.dat nfs0.dat
remdups4 199 < nfs0.dat > nfs.dat
and when it does what you'd expect, remove the qintsize line from the .poly file (for now) and try again - it will not filter ad nauseam because you will now have a small yet solid set of relations. The remdups4 code is here
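
(A rough sketch of that sequence as shell commands, assuming remdups4 has been compiled and is on your PATH; the directory names are just the placeholders used above:)
Code:
# work on a copy, never the live directory
cp -r dir dir2 && cd dir2
# keep the original relation file and write a de-duplicated set back as nfs.dat
mv nfs.dat nfs0.dat
remdups4 199 < nfs0.dat > nfs.dat
# then remove the qintsize line from the .poly (for now) and let yafu resume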
Old 2012-05-02, 21:10   #25
bsquared
 

Quote:
Originally Posted by Dubslow
Would it be possible for me to manually run some sieving? I'm sure it is possible, but the problem is I wouldn't know how to modify the YAFU save files to tell it what's going on after sieving is done.

For my own use:
"ggnfs-13e -a nfs.job -f (q) -c (rangesize) -o rels0.dat"
Syntax looks good, assuming you've renamed your sievers from the standard "ggnfs-lasieveI13e".

If you then do "cat rels0.dat >> nfs.dat" to append the data to the current data set, then when yafu restarts it doesn't need to know or care about the behind-the-scenes sieving. On restart it will stomp on any rels*.dat file, so appending to nfs.dat is necessary (and you need not use rels0.dat as the temporary output - it could be anything). On restart, yafu will simply count the relations in nfs.dat and decide whether to proceed to filtering based on that count and the minimum required.
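
(A minimal sketch of that workflow, assuming the siever binary is named gnfs-lasieve4I13e as in the invocations later in this thread; the starting q and range size are just example values you would choose yourself:)
Code:
# sieve a chosen special-q range into a temporary file
START_Q=5288520
RANGE=40000
./gnfs-lasieve4I13e -a nfs.job -f $START_Q -c $RANGE -o rels0.dat
# append the new relations to the file yafu actually reads, then drop the temp file
cat rels0.dat >> nfs.dat && rm rels0.dat
# restart yafu; it recounts nfs.dat and decides whether to filter or keep sieving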
Old 2012-05-02, 21:32   #26
Dubslow

Quote:
Originally Posted by Batalov
Something like that, yes. There are many things that could be done, but I find it dangerous to give (too much) advice to a person who is already changing a tire - the end result could be that the car falls on the person under it.

Whenever you experiment, make a copy ("cp -r dir dir2; cd dir2") and play around there. It is very painful when you type a few lines and suddenly the whole data file is gone and a new factorization session is started; I've been there, it hurts.
Anyway, with a backup, in another directory, you can try
mv nfs.dat nfs0.dat
remdups4 199 < nfs0.dat > nfs.dat
and when it does what you'd expect, remove the qintsize line from the .poly file (for now) and try again - it will not filter ad nauseam because you will now have a small yet solid set of relations. The remdups4 code is here
Quote:
Originally Posted by bsquared
Syntax looks good, assuming you've renamed your sievers from the standard "ggnfs-lasieveI13e".

If you then do "cat rels0.dat >> nfs.dat" to append the data to the current data set, then when yafu restarts it doesn't need to know or care about the behind-the-scenes sieving. On restart it will stomp on any rels*.dat file, so appending to nfs.dat is necessary (and you need not use rels0.dat as the temporary output - it could be anything). On restart, yafu will simply count the relations in nfs.dat and decide whether to proceed to filtering based on that count and the minimum required.
Cool. So as I understand it: remdups4 removes the duplicates from nfs.dat, which is where yafu stores relations; I then run ggnfs for ~5 million rels, append those to nfs.dat, and let yafu restart. Last question: is it safe to assume that rels/q is roughly constant over a similar range of q? (What's a good estimate for "similar enough"?) I got ~2.75 rels/q in the last q range of 4848517 - 4888517.
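
(Back-of-the-envelope, assuming the ~2.75 rels/q figure above keeps holding over the next ranges:)
Code:
# special-q range needed for roughly 5M more raw relations at ~2.75 rels/q
echo "5000000/2.75" | bc    # about 1.8M worth of special-q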

(@B2: I copied the command from my system monitor because there's no help file/switch for these sievers, so I have so far learned by imitation :P. The file name was just shorthand.)

Edit: There are a few nfs.dat.* files. What are those?

Edit2:
Code:
bill@Gravemind:~/yafu/derf∰∂ remdups4 199 < nfs0.dat > nfs.dat 
Found 5184959 unique, 11233589 duplicate, and 3 bad relations.
Largest dimension used: 39 of 199
Average dimension used: 15.8 of 199
Fortunately this agrees with what yafu/msieve reported at the last filtering run (there were even exactly three errors to go with the bad rel count).
Code:
found 11233589 duplicates and 5184959 unique relations
It seems to me that it would be simple enough just to overwrite the nfs.dat with the dups removed, then let YAFU automate the rest; OTOH it would be fun to try my hand at manual NFS...

Last fiddled with by Dubslow on 2012-05-02 at 22:07
Old 2012-05-03, 00:50   #27
Dubslow

Quote:
Originally Posted by Dubslow
Edit: There are a few nfs.dat.* files. What are those?


It seems to me that it would be simple enough just to overwrite the nfs.dat with the dups removed, then let YAFU automate the rest
Well, while it was sieving, I overwrote nfs.dat with the dups-removed version produced by Batalov/Greg's program; however, I'm pretty sure the total rel count is still kept in some of those other files, because of this:
Code:
commencing relation filtering
estimated available RAM is 12019.7 MB
commencing duplicate removal, pass 1
found 271288 hash collisions in 5797077 relations
added 107 free relations
commencing duplicate removal, pass 2
found 41092 duplicates and 5756092 unique relations
memory use: 16.3 MB
reading ideals above 100000
commencing singleton removal, initial pass
memory use: 188.2 MB
reading all ideals from disk
memory use: 211.2 MB
keeping 8852912 ideals with weight <= 200, target excess is 29692
commencing in-memory singleton removal
begin with 5756092 relations and 8852912 unique ideals
reduce to 100 relations and 0 ideals in 7 passes
max relations containing the same ideal: 0
nfs: commencing algebraic side lattice sieving over range: 5208517 - 5248517
total yield: 106230, q=5248519 (0.01106 sec/rel) 
found 17174848 relations, need at least 9036311, proceeding with filtering ...
So even though filtering actually ran on only ~5,797K relations, the total count is still north of 17,175K. Which files hold what?

Edit: All but .p are binary, and .p appears to be just a list of polys left over from poly select. Okay, at this point I'll run sieving manually, unless I can modify the rel count; the C130 required ~10.3M rels, whose matrix got 30 deps, so I'll take a stab and guess that 10M rels would be sufficient here. I will also assume that rels/q is almost (but not quite) constant. How high on the q can I go?

Edit2: Actually, I noticed I'm only getting around two unique rels for every five total rels at this point in the sieving (max q is ~5.289M). Is that a function of going too high with the q? Is there something I can do on the "rational" side? (I have no idea what they are or what the differences are, but I know that -a and -r are separate things, and can possibly be used in tandem?) Probably dumb question: rels from different polys can't be combined, right?

Edit3: A "back of the napkin" (as it were) calculation suggests that continuing as I have been is not efficient; however, per my questions above, I have no idea what to do about it ATM, so I have provisionally started this:
Code:
gnfs-lasieve4I13e -a nfs.job -f 5288520 -c 3000000 -o rels0.dat
Note the range is 3 million, which probably still won't give me anywhere near enough unique rels.

Last fiddled with by Dubslow on 2012-05-03 at 01:10
Old 2012-05-03, 03:52   #28
bsquared
 

Quote:
Originally Posted by Dubslow
Well, while it was sieving, I overwrote nfs.dat with the dups-removed version produced by Batalov/Greg's program; however, I'm pretty sure the total rel count is still kept in some of those other files, because of this:
If you did it while it was still sieving, then the total is held inside yafu (i.e., in RAM), not in a file. A stop/restart should cause it to learn the new total, because the first thing it will do is count the relations in the .dat file. While it is running, it keeps a running total and just adds the new relations found rather than recounting the entire nfs.dat file every time. It doesn't expect the file to be mucked with while it is busy.
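
(A quick way to see roughly what yafu will count on restart - just a sketch, assuming the usual GGNFS format of one relation per line with '#' comment lines:)
Code:
# approximate the relation count yafu will see in nfs.dat
grep -vc '^#' nfs.dat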

Quote:
Originally Posted by Dubslow
Edit: All but .p are binary, and .p appears to be just a list of polys left over from poly select. Okay, at this point I'll run sieving manually, unless I can modify the rel count; the C130 required ~10.3M rels, whose matrix got 30 deps, so I'll take a stab and guess that 10M rels would be sufficient here. I will also assume that rels/q is almost (but not quite) constant. How high on the q can I go?
You are still within the factor base (i.e., current q is less than rlim/alim in the .job file), so there is a long way to go before yield will significantly drop. You'll be done long before then.


Quote:
Originally Posted by Dubslow
Edit2: Actually, I noticed I'm only getting around two unique rels for every five total rels at this point in the sieving (max q is ~5.289M). Is that a function of going too high with the q? Is there something I can do on the "rational" side? (I have no idea what they are or what the differences are, but I know that -a and -r are separate things, and can possibly be used in tandem?) Probably dumb question: rels from different polys can't be combined, right?

Edit3: A "back of the napkin" (as it were) calculation suggests that continuing as I have been is not efficient; however, per my questions above, I have no idea what to do about it ATM, so I have provisionally started this:
Code:
gnfs-lasieve4I13e -a nfs.job -f 5288520 -c 3000000 -o rels0.dat
Note the range is 3 million, which probably still won't give me anywhere near enough unique rels.
Based on this:
Code:
nfs: commencing algebraic side lattice sieving over range: 5208517 - 5248517
total yield: 106230, q=5248519 (0.01106 sec/rel)
you are getting over 2.5 rels per unit special-q range, so with a range of 3M, you can expect to get 7.5M relations or so. That should be plenty.
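
(The arithmetic, spelled out from the numbers quoted above, just as a sanity check:)
Code:
# yield per special-q over the last 40000-wide range, then projected over a 3M range
echo "scale=2; 106230/40000" | bc    # ~2.65 rels per special-q
echo "2.5*3000000" | bc              # ~7.5M raw relations, before duplicate removal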

One can use -r and -a in tandem, but -r will have a much lower yield and won't be necessary here. And no, you can't combine relations from two different polynomials.
Old 2012-05-03, 05:10   #29
Dubslow

Quote:
Originally Posted by bsquared
<snip>
Sweet! Perfect, that was a very informative post for a n00b like me.

Re: q range, the rlim and alim in nfs.job are both at 5.9M, which is quite a bit smaller than the 8.2M I'm targeting right now.
Code:
n: 182406447322336820309560461032830305181092209591614813564635719276441105985665782458025218176998282272069123864041991524112937073
skew: 591710.38
c0: 1528599633851371776329622734285
c1: 295884009843159380424326151
c2: -1406474204227932139749
c3: 458226357690021
c4: 3564267496
c5: 1860
Y0: -9961042320215041441491972
Y1: 39670776483097
rlim: 5900000
alim: 5900000
lpbr: 27
lpba: 27
mfbr: 54
mfba: 54
rlambda: 2.500000
alambda: 2.500000
Also, just a guess, are the *lambdas the bounds for rels/q at which to switch sides? I.e., the effectiveness-crossover point, so to speak? And the memory thing didn't occur to me, but of course it's obvious now (I guess I just need more experience in the comp sci world for that to be second nature :P)

Edit: The reason I went with such a large range is that, despite getting 100K rels in the q range you quoted, filtering revealed that 60K were duplicates. However, I just ran a remdups on this most recent batch, and I got a very reasonable error rate, so I guess I'll just call it quits around 10M total.

Last fiddled with by Dubslow on 2012-05-03 at 05:19
Old 2012-05-03, 18:35   #30
Dubslow

Okay, did some sieving; now I've got slightly confusing output from yafu (I don't think it's a bug, though):
Code:
commencing relation filtering
estimated available RAM is 12019.7 MB
commencing duplicate removal, pass 1
error -15 reading relation 9990977
read 10M relations
found 419567 hash collisions in 10275539 relations
added 24 free relations
commencing duplicate removal, pass 2
found 53795 duplicates and 10221768 unique relations
memory use: 31.6 MB
reading ideals above 720000
commencing singleton removal, initial pass
memory use: 344.5 MB
reading all ideals from disk
memory use: 303.2 MB
commencing in-memory singleton removal
begin with 10221768 relations and 11294448 unique ideals
reduce to 3805227 relations <?> and 3786399 ideals in 26 passes
max relations containing the same ideal: 72
nfs: commencing algebraic side lattice sieving over range: 7919993 - 7959993
Umm... what's the "reduced to" rel count? Surely it doesn't need more relations? Would it be worth a shot to try and build the matrix manually?

Last fiddled with by Dubslow on 2012-05-03 at 18:36 Reason: s/out/output
Old 2012-05-03, 18:50   #31
Batalov
 

This is filtering. You are almost there. (Very roughly speaking) the filtered set needs a larger excess of rels over ideals to make do.
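
(To put a number on it from the log quoted above - an illustration only: the surplus after singleton removal is what's being judged here.)
Code:
# excess after in-memory singleton removal, from the log above
echo $((3805227 - 3786399))    # 18828 - evidently too thin, so yafu goes back to sieving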
Old 2012-05-03, 18:52   #32
Dubslow

What are ideals then?
Old 2012-05-04, 02:53   #33
LaurV

Quote:
Originally Posted by Dubslow
What are ideals then?
Delete it! Fast! Mr Silverman will be all on your head when he wakes up!

P.S.: ideals.