#23
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3·29·83 Posts

Quote:
For my own use: "ggnfs-13e -a nfs.job -f (q) -c (rangesize) -o rels0.dat"
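With the placeholders filled in, that pattern looks something like this (illustrative values only; the q-range is borrowed from ranges that appear later in the thread):

Code:
gnfs-lasieve4I13e -a nfs.job -f 5208517 -c 40000 -o rels0.dat

Here -a sieves the algebraic side of the polynomial in nfs.job, -f gives the first special-q, -c the number of q to cover, and -o the relation output file.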
#24
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
41·229 Posts
Something like that, yes. There are many things that could be done, but I find it dangerous to give (too much) advice to a person who is already changing a tire - the end result could be that the car falls on the person under it.

Whenever you experiment, make a "cp -r dir dir2; cd dir2" and play around in there. It is very painful when you type a few lines and suddenly the whole data file is gone and a new factorization session has started; I've been there, it hurts. Anyway, with a backup in another directory, you can try "mv nfs.dat nfs0.dat; remdups4 199 < nfs0.dat > nfs.dat", and when it does what you'd expect, remove the qintsize line from .poly (for now) and try again - it will not filter ad nauseam, because you will now have a small yet solid set of relations. remdups4 code is here.
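Spelled out as a full sequence, with "dir" standing in for the actual job directory:

Code:
cp -r dir dir2                      # experiment on a throwaway copy, not the live job
cd dir2
mv nfs.dat nfs0.dat                 # keep the original relation file intact as nfs0.dat
remdups4 199 < nfs0.dat > nfs.dat   # write only the unique relations back to nfs.dat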
#25
"Ben"
Feb 2007
2²×23×37 Posts
If you then did "cat rels0.dat >> nfs.dat" to append the data to the current dataset, then when yafu restarts it doesn't know or care about the behind-the-scenes sieving. When it restarts it will stomp on any rels*.dat file, so appending to nfs.dat is necessary (and you need not use rels0.dat as the temporary output - it could be anything). Yafu on restart will just count the relations in nfs.dat and decide whether to proceed to filtering or not based on the count and the minimum required.
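In shell terms, the workflow described here amounts to something like the following sketch (the q-range values are illustrative):

Code:
# with yafu stopped, sieve a q-range by hand
gnfs-lasieve4I13e -a nfs.job -f 5248520 -c 40000 -o rels0.dat
# fold the new relations into the dataset yafu actually reads
cat rels0.dat >> nfs.dat
rm rels0.dat   # yafu would stomp on rels*.dat at restart anyway
# on restart, yafu counts the relations in nfs.dat and picks up from there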
#26
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3×29×83 Posts
(@B2: I copied the command from my system monitor, because there's no help file or switch for these sievers, so I have so far learned by imitation :P. The file name was just shorthand.)

Edit: There are a few nfs.dat.* files. What are those?

Edit2:

Code:
bill@Gravemind:~/yafu/derf$ remdups4 199 < nfs0.dat > nfs.dat
Found 5184959 unique, 11233589 duplicate, and 3 bad relations.
Largest dimension used: 39 of 199
Average dimension used: 15.8 of 199

For comparison, yafu's own count of the same dataset:

Code:
found 11233589 duplicates and 5184959 unique relations

Last fiddled with by Dubslow on 2012-05-02 at 22:07
#27
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3×29×83 Posts

Code:
commencing relation filtering
estimated available RAM is 12019.7 MB
commencing duplicate removal, pass 1
found 271288 hash collisions in 5797077 relations
added 107 free relations
commencing duplicate removal, pass 2
found 41092 duplicates and 5756092 unique relations
memory use: 16.3 MB
reading ideals above 100000
commencing singleton removal, initial pass
memory use: 188.2 MB
reading all ideals from disk
memory use: 211.2 MB
keeping 8852912 ideals with weight <= 200, target excess is 29692
commencing in-memory singleton removal
begin with 5756092 relations and 8852912 unique ideals
reduce to 100 relations and 0 ideals in 7 passes
max relations containing the same ideal: 0
nfs: commencing algebraic side lattice sieving over range: 5208517 - 5248517
total yield: 106230, q=5248519 (0.01106 sec/rel)
found 17174848 relations, need at least 9036311, proceeding with filtering
...

Edit: All but .p are binary, and .p appears to be just a list of polys left over from poly select. Okay, at this point I'll run sieving manually, unless I can modify the rel count; the C130 required ~10.3M rels, whose matrix got 30 deps, so I'll stab a guess that 10M rels would be sufficient here. I will also assume that rels/q is almost (but not quite) constant. How high on the q can I go?

Edit2: Actually, I noticed I'm only getting around 2/5 of a unique rel per total rel at this point in sieving (max q is ~5.289M). Is that a function of going too high with the q? Is there something I can do on the "rational" side? (I have no idea what they are or what the differences are, but I know that -a and -r are separate things, and can possibly be used in tandem?) Probably dumb question: rels from different polys can't be combined, right?
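For what it's worth, making that estimate concrete with the numbers from the log above (and ignoring the growing duplicate rate noted in Edit2, so this is optimistic):

Code:
yield:  106230 rels over a q-range of 40000  ->  ~2.66 rels/q
needed: ~10.0M - ~5.76M unique               ->  ~4.24M more rels
range:  ~4.24M / 2.66                        ->  ~1.6M more q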
Edit3: A "back of the napkin" (as it were) calculation suggests that continuing as I have been is not efficient; however, per my questions above, I have no idea what to do about it ATM, so I have provisionally started this:

Code:
gnfs-lasieve4I13e -a nfs.job -f 5288520 -c 3000000 -o rels0.dat

Last fiddled with by Dubslow on 2012-05-03 at 01:10
#28
"Ben"
Feb 2007
2²×23×37 Posts

Code:
nfs: commencing algebraic side lattice sieving over range: 5208517 - 5248517
total yield: 106230, q=5248519 (0.01106 sec/rel)

One can use -r and -a in tandem, but -r will have a much lower yield and won't be necessary here. And no, you can't combine relations from two different polynomials.
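For completeness: the rational-side run is the same invocation with -r in place of -a, something like this (q-range and output name made up for illustration):

Code:
gnfs-lasieve4I13e -r nfs.job -f 5900000 -c 40000 -o rels_r.dat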
#29
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3×29×83 Posts
Sweet! Perfect, that was a very informative post for a n00b like me.

Re: q range, the rlim and alim in nfs.job are both at 5.9M, which is quite a bit smaller than the 8.2M I'm targeting right now.
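A quick gloss of the knobs in the job file below (my annotations, not part of the file format, and hedged since conventions vary a little between tools):

Code:
rlim/alim:        factor-base bounds, rational/algebraic side
lpbr/lpba:        large-prime bounds per side, in bits (2^27 here)
mfbr/mfba:        max bits of unfactored cofactor per side (54 = two 27-bit large primes)
rlambda/alambda:  sieve-report threshold multipliers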
Code:
n: 182406447322336820309560461032830305181092209591614813564635719276441105985665782458025218176998282272069123864041991524112937073
skew: 591710.38
c0: 1528599633851371776329622734285
c1: 295884009843159380424326151
c2: -1406474204227932139749
c3: 458226357690021
c4: 3564267496
c5: 1860
Y0: -9961042320215041441491972
Y1: 39670776483097
rlim: 5900000
alim: 5900000
lpbr: 27
lpba: 27
mfbr: 54
mfba: 54
rlambda: 2.500000
alambda: 2.500000

Edit: The reason I went with such a large range is that despite getting 100K rels in the q range you quoted, filtering revealed that 60K of them were duplicates. However, I just ran remdups on this most recent batch and got a very reasonable error rate, so I guess I'll just call it quits around 10M total.

Last fiddled with by Dubslow on 2012-05-03 at 05:19
#30
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
1110000110101₂ Posts
Okay, I did some sieving, and now I've got slightly confusing output from yafu (I don't think it's a bug, though):
Code:
commencing relation filtering
estimated available RAM is 12019.7 MB
commencing duplicate removal, pass 1
error -15 reading relation 9990977
read 10M relations
found 419567 hash collisions in 10275539 relations
added 24 free relations
commencing duplicate removal, pass 2
found 53795 duplicates and 10221768 unique relations
memory use: 31.6 MB
reading ideals above 720000
commencing singleton removal, initial pass
memory use: 344.5 MB
reading all ideals from disk
memory use: 303.2 MB
commencing in-memory singleton removal
begin with 10221768 relations and 11294448 unique ideals
reduce to 3805227 relations <?> and 3786399 ideals in 26 passes
max relations containing the same ideal: 72
nfs: commencing algebraic side lattice sieving over range: 7919993 - 7959993

Last fiddled with by Dubslow on 2012-05-03 at 18:36. Reason: s/out/output
#31
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
10010010101101₂ Posts
This is filtering. You are almost there. (Very roughly speaking) the filtered set needs more of an excess of rels over ideals to make do.
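Reading that off the log in the previous post (a rough picture only):

Code:
3805227 relations - 3786399 ideals = 18828 excess
(positive, but thin - hence yafu went straight back to sieving;
 for comparison, the earlier filtering run printed "target excess is 29692")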
#32
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3·29·83 Posts
What are ideals then?