[QUOTE=LaurV;418162]
b. you can not "optimize the parameters" of a poly you already have (I didn't look into the files, but it seems to be a poly, from the extension of the file names). However, you may find a better poly, if you search longer.[/QUOTE] :huh: ...uh, yeah you can. The siever, factor base bounds, large prime sizes, large prime counts, whatever those lambda things are... there's a whole bunch of things to tweak with a given polynomial. |
[QUOTE=Alfred;418116]Since sieving at the c163 of 30450:808 took a while, I would like to get some help on optimizing the parameters.
Thank you in advance. PS: The siever was gnfs-lasieve4i15e Alfred[/QUOTE] Alfred-

The .poly file does not list the E-score for a polynomial by default, but your msieve log file will list the score. If you could post that, I can help you determine how much of your "took a while" was due to an insufficient poly-search phase, and how much was due to parameters being suboptimal.

I've been posting recently about how a number this size is likely faster with lpbr and lpba at 32 rather than 31 when running with the 14e siever. Going up a siever rather than up a large-prime-bound (lpb) also results in improved yield, at the cost of higher time per relation (sec/rel). 15e/30 isn't that weird, but I think 15e/31 or 14e/32 are faster by something like 10-20% of project length.

I would have run this with the 14e siever, lpbr/lpba at 32, mfbr/mfba at the usual 2*lpbr/lpba (64, in my case, though 31 and 62 would be roughly equal in project length, I think), and alim = rlim = somewhere around 45 or 50M. I would sieve from q = 10M to something like 60-65M, and expect a matrix to build with the filtering flag target_density=90 (or 84 or 96 or 100, but not 70, because then the matrix is too big and takes too long to solve). Given use of the 15e siever, 31/62 bounds are probably faster than 32/64.

I don't think the lambdas need be messed with for individual-sized projects; I use 2.6 also, though for a C166 I may try 3.6 for alambda to see what happens.

My current C165 has a poly score of 6.67e-13, and Gimarel posted a poly for my next C166 at 6.46e-13. Scores usually drop 12-15% per digit, so I think a C163 should have a score above 8.0e-13. If your poly has a score down in the 6e-13 range, your factorization had the difficulty of a C165/166 with a good poly: 20-40% tougher than it should have been.

How many raw relations did it take to build a matrix? (At what target-density did you have it build?) I've never run a 30-bit project for a number this big; I'm curious how the relation count scales.
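To make the "scores drop 12-15% per digit" rule of thumb concrete, here's a rough Python sketch. This is my own back-of-envelope arithmetic, not output from msieve or any siever; the 13.5% per-digit drop is just the midpoint of the quoted 12-15% range.

```python
# Extrapolate a Murphy E-score across GNFS input sizes, assuming the
# heuristic ~12-15% score drop per extra digit mentioned in the post.
def expected_score(known_score, known_digits, target_digits, drop_per_digit=0.135):
    """Very rough heuristic: scores shrink geometrically with digit count."""
    return known_score * (1 - drop_per_digit) ** (target_digits - known_digits)

# From a C165 with score 6.67e-13, estimate what a C163 poly "should" score:
est = expected_score(6.67e-13, 165, 163)
print(f"estimated C163 score: {est:.2e}")  # comes out above 8.0e-13
```

This matches the claim in the post that a well-searched C163 poly should score above 8.0e-13.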
[QUOTE=Dubslow;418177]:huh:
...uh, yeah you can. The siever, factor base bounds, large prime sizes, large prime counts, whatever those lambda things are... there's a whole bunch of things to tweak with a given polynomial.[/QUOTE] This. GNFS just works without worrying much about parameters for projects in the 130-140 digit range, while parameter choice can start to save useful time in the 140-150 digit range. I took to heart Jason's advice to "turn the knobs" on a handful of ~140 digit factorizations before trying something in the 150s, and then did 3 in the 150s before my current dabble in the 160s. An iffy parameter choice at 140 digits might take 35 hours instead of 25, but it stinks when a C163 takes 6 quad-core-weeks instead of 4!

That really low alim Alfred has might be what caused it to "take a while." The skew is higher than I'd like, but I don't have a good idea of how that influences yield; I don't do poly searches with a1 under 10,000, to reduce the chance of a wonky poly that looks good on a quick test-sieve but yields lower than desired.

Alfred, can you also post what your yield was at a couple of different q's? Specifically, the number of relations found for a block of 10k or larger q, such as "at q = 10M, a 100k block of q yielded 240,000 relations."
[CODE]c. you can request a poly (better or not than the one you already have, depends on your luck, you may have hit a very good poly already) in the "request polys" thread, here around. Some people have "dedicated" hardware and are able to get very good polys very fast. You just specify the number, and where it is coming from (like "C163 from aliquot xxxxx") and someone may take up the challenge to find the best poly. The "where it comes from" is necessary because we don't try to factor encryption keys for gaming sites, etc.[/CODE]
@LaurV This is a very time-saving offer. Thank you. My knowledge (needed to find good polynomials) is poor, and my hardware is very poor. Alfred
2 Attachment(s)
@ VBCurtis
Thank you for your answer. [QUOTE]The .poly file does not list the E-score for a polynomial by default, but your msieve log file will list the score. If you could post that, I can help you determine how much of your "took a while" was due to an insufficient poly-search phase, and how much was due to parameters being suboptimal.[/QUOTE] Two log files are attached. Neither is the complete log file - I've lost that. But I think I needed ~88M relations (with duplicates).

[QUOTE]I've been posting recently about how a number this size is likely faster with lpbr and lpba at 32 rather than 31 when running with the 14e siever. Going up a siever rather than up a large-prime-bound (lpb) also results in improved yield, at the cost of higher time per relation (sec/rel). 15e/30 isn't that weird, but I think 15e/31 or 14e/32 are faster by something like 10-20% of project length.[/QUOTE] I sieved a short q-range with the 14e siever and the same poly file as for the 15e - not optimally. Thanks to your advice, next time I can use better parameters.

[QUOTE]How many raw relations did it take to build a matrix? (At what target-density did you have it build?) I've never run a 30-bit project for a number this big; I'm curious how the relation count scales.[/QUOTE] I needed nearly 88M raw relations to build a matrix with target_density 70.
[QUOTE]Originally posted by VBCurtis
Alfred, can you also post what your yield was at a couple of different q's? Specifically, the number of relations found for a block of 10k or larger q, such as "at q = 10M, a 100k block of q yielded 240,000 relations."[/QUOTE] I changed the parameters to the values you suggested:

[CODE]rlim:    staying 41500000
alim:    from 21049999 to 41500000
lpbr:    from 30 to 31
lpba:    from 30 to 31
mfbr:    from 30 to 62
mfba:    from 30 to 62
rlambda: staying 2.6
alambda: staying 2.6[/CODE]

Testing these parameters with the 15e siever via "gnfs-lasieve4i15e -R -v -f 30000000 -c 10000 -o 30450.out -a 30450.poly", I get ~0.0543 sec/rel, yielding more than 56k relations. My bad choice led to a doubling of the sieving time. Next time I'll ask before starting the sieve job.
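For anyone following along, a test-sieve result like the one above can be turned into a whole-project estimate with a little arithmetic. Here's a rough Python sketch; the helper name and the 150M raw-relation target for 31-bit large primes are my own assumptions, not anything the siever reports.

```python
# Turn one lasieve test-sieve block into a rough whole-project estimate.
# Inputs here are the numbers from the test run above: a 10k special-q
# block (-c 10000) yielding ~56k relations at ~0.0543 sec/rel.
def project_estimate(rels_in_block, block_size, sec_per_rel, rels_needed):
    yield_per_q = rels_in_block / block_size      # relations per special-q
    q_range_needed = rels_needed / yield_per_q    # width of q-range to sieve
    cpu_seconds = rels_needed * sec_per_rel       # single-threaded sieve time
    return yield_per_q, q_range_needed, cpu_seconds

# Assume ~150M raw relations are needed at 31-bit large primes:
y, qr, secs = project_estimate(56000, 10000, 0.0543, 150e6)
print(f"yield {y:.1f}, q-range ~{qr / 1e6:.0f}M, ~{secs / 86400:.0f} core-days")
```

This reproduces the yield of 5.6 and a q-range somewhere in the high 20s of millions.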
Alfred-
No, it's not a doubling of the project time, because the number of relations needed to build a matrix jumps by approximately 75% for each step up in lpba/lpbr. You needed 88M raw relations (raw = including duplicates) for 30LP; I believe you would need 150 to 155M for 31LP, but the relations come twice as quickly. So my suggested parameters might save you 13-15% of the time (1.75x as many relations, divided by 2x speed = 0.87ish). I give a range because we don't know precisely how many relations will be required to build a matrix.

The score I requested is the "combined" number at the end of the poly in the log. 9.4e-13 is not a bad poly.

The siever does not process q less than alim, so you'll see the message "reducing alim to 29999999" for the test-sieve line you wrote with q = 30M. This is not a permanent change; as q rises, the siever's treatment of alim also rises (though it only resets each time the siever is called, so if you sieved, say, a 5M block of q for a week, alim would remain fixed during that time).

Your yield with my params is 5.6 (56000 rels divided by the -c value of 10000), which is pretty high. That suggests it was 2.6 to 2.8 with your parameters, which is well within the "reasonable" range. I don't think your choices were very bad; a C163 just takes "a while" (as LaurV's response put it). Yields under 2.0 generally mean you should change parameters and start over, but 2.7 does *not* tell you that.

If your test-sieve is representative of the whole region, I think my parameter suggestions would lead to a q-range of (155M / 5.6) = 27M or 28M; say, q from 25M to 53M. If I'm estimating correctly, you needed an (88M / 2.7) = 32M or 33M range of special q. Again, not a huge difference, nor one to suggest you should regret your choices.
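The back-of-envelope from the first paragraph, written out as a tiny Python sketch. The 1.75x relation growth per large-prime bit and the 2x speedup are the estimates quoted above, not measured constants.

```python
# Tradeoff when raising lpbr/lpba by one bit, per the estimates above:
# ~1.75x as many raw relations needed, but relations arrive ~2x as fast.
rels_30lp = 88e6              # raw relations Alfred actually needed at 30LP
rels_31lp = rels_30lp * 1.75  # ~154M, matching the 150-155M estimate
relative_time = 1.75 / 2.0    # 0.875 -> roughly 12-13% time saved

print(f"31LP needs ~{rels_31lp / 1e6:.0f}M rels, relative time {relative_time:.3f}")
```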
@VBCurtis
My sieving range was q=21M to 56M. I've learned a lot. Thank you for your very detailed answer. |
You're welcome, sir. I see threads like this as a chance for me to put into words what I think I've learned from 2 years of knob-turning (tinkering with parameters and the factmsieve.py script). I enjoy doing so, not least because quite often one of the true experts corrects my beliefs, and everyone gains.
One more idea for you: I found better results by starting the sieve region at alim/3 rather than alim/2 (the default in the script). This change reduces the frequency of going up into q's where yield (and sec/rel) fall off badly. On some factorizations it makes no difference, but on poor-yielding polys where the sieve has to go well above alim, the range from alim/3 to alim/2 is usually more fertile than the range from (for instance) 1.3*alim to 1.5*alim. In your case, supposing alim = rlim = 42M, I would aim for 14M to 49M rather than 21M to 56M. I think 14M to 21M is more efficient than 49M to 56M, but this varies *a lot* from job to job. |
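Concretely, with Alfred's numbers, the alim/3 suggestion works out like this (a trivial Python sketch; the 42M alim and the 35M range width are taken from the example above, where Alfred sieved q = 21M to 56M):

```python
# Shift the sieve region to start at alim/3 instead of alim/2,
# keeping the same total q-range width as the original job.
alim = 42_000_000
range_width = 35_000_000       # width Alfred actually sieved (21M to 56M)

q_start = alim // 3            # 14M, the suggested lower bound
q_end = q_start + range_width  # 49M, instead of running up to 56M

print(f"sieve q from {q_start / 1e6:.0f}M to {q_end / 1e6:.0f}M")
```

The point of the shift is that the 14M-21M stretch usually yields better than the 49M-56M stretch it replaces.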
Releasing 236840 856530 933564 570996 93000
Releasing 761516 832314