This makes sense. Thanks for answering. So, in the end, it is still a matter of luck :smile: If filtering with many cores, it is possible that some cores go faster than the others, and a shorter matrix results. Well... it is a matter of luck from the beginning: if you step on the tail of the right snake, aaa... this... how is it called?... elliptic curve... then you may need no NFS at all... :razz:
QS, NFS and ECM are all probabilistic; there is a chance that any of them will never succeed in factoring a given number. You can choose the parameters when running each algorithm so that the probability of failure is arbitrarily small, but it's never zero.
It's like a reverse lottery :)
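The "arbitrarily small but never zero" point can be made concrete with one line of arithmetic. A minimal sketch, where the per-trial success rate is made up for illustration and is not a tuned ECM figure:

```python
# Sketch: if one independent attempt (e.g. one ECM curve) finds a factor
# with probability p, the chance that all n attempts fail is (1 - p)**n.
# The numbers below are illustrative, not real ECM parameters.

def failure_probability(p_success: float, trials: int) -> float:
    """Probability that every one of `trials` independent attempts fails."""
    return (1.0 - p_success) ** trials

# Even a modest per-trial success rate drives the failure probability
# toward zero quickly, but it never reaches exactly zero.
print(failure_probability(0.01, 100))   # ~0.366
print(failure_probability(0.01, 1000))  # ~4.3e-5
```

Raising the amount of work (more curves, more relations) shrinks the failure probability exponentially, which is why the parameters can make it "arbitrarily small."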
[QUOTE=jasonp;336634]QS, NFS and ECM are all probabilistic; there is a chance that any of them will never succeed in factoring a given number. You can choose the parameters when running each algorithm so that the probability of failure is arbitrarily small, but it's never zero.
It's like a reverse lottery :)[/QUOTE] I thought that QS was deterministic if you add a test checking whether the input was a square. The runtime obviously is probabilistic, but we know the difference of squares exists. At the very worst, using the basic poly x^2 - N, it becomes Fermat's method (finding x such that x^2 - N is a square) if you run it long enough.
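For illustration, the fallback being described is easy to sketch; this is a textbook version of Fermat's difference-of-squares method, not YAFU's implementation:

```python
from math import isqrt

def fermat_factor(n: int) -> tuple[int, int]:
    """Fermat's method: search for x with x*x - n a perfect square y*y,
    giving n = (x - y) * (x + y). Assumes n is odd and composite."""
    x = isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = isqrt(y2)
        if y * y == y2:
            return x - y, x + y
        x += 1

print(fermat_factor(5959))  # (59, 101)
```

QS can be seen as a way of combining many near-misses of this search (smooth values of x^2 - N) instead of waiting for a single exact hit.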
[QUOTE=henryzz;336635]I thought that QS was deterministic if you add a test checking whether the input was a square.[/QUOTE]Try factoring 1331 with QS.
8-) [spoiler]ITYM not a prime power[/spoiler]
[QUOTE=xilman;336645]Try factoring 1331 with QS.
8-) [spoiler]ITYM not a prime power[/spoiler][/QUOTE] Yes, I should have said prime power.
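A perfect-power pre-test of the kind being discussed can be sketched as follows; this is an illustrative version, not the check any particular QS implementation uses:

```python
# For an odd prime power n = p**k, any congruence x^2 = y^2 (mod n) with
# x, y coprime to p forces x = +/-y (mod n), so the gcd step in QS only
# ever yields trivial factors -- hence the pre-test before sieving.

def perfect_power_root(n: int):
    """Return (b, k) with b**k == n and k >= 2 if n is a perfect power,
    else None. E.g. 1331 = 11**3."""
    for k in range(2, n.bit_length() + 1):
        # integer k-th root via binary search for smallest m with m**k >= n
        lo, hi = 1, 1 << (n.bit_length() // k + 1)
        while lo < hi:
            mid = (lo + hi) // 2
            if mid ** k < n:
                lo = mid + 1
            else:
                hi = mid
        if lo ** k == n:
            return lo, k
    return None

print(perfect_power_root(1331))  # (11, 3)
```

With this test (and trial division for small primes) in front, QS only ever sees inputs on which the difference-of-squares approach can succeed.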
-psearch deep
I'm running nfs() on a C164 via GNFS on an i7 with 8 threads, and I chose to use the -psearch deep option. It's been running for over 10 days now without yet producing the nfs.job file.
How long should it take to complete? 300 hours? I'm running YAFU 1.33 on a Linux64 box. I did run this poly search on other machines with no flags and it only took a few days, presumably 300 hrs/number_of_threads. Thanks.
[QUOTE=swellman;337004]I'm running nfs() on a C164 via GNFS on an i7 with 8 threads, and I chose to use the -psearch deep option. It's been running for over 10 days now without yet producing the nfs.job file.[/QUOTE]
I did that on a C141 on my i5 with three threads. It took several days. I'm not surprised that it's taken more than ten for a C164.
[QUOTE=Mr. P-1;337214]I did that on a C141 on my i5 with three threads. It took several days. I'm not surprised that it's taken more than ten for a C164.[/QUOTE]
[url=http://www.mersenneforum.org/showpost.php?p=328879&postcount=133]This post[/url] lays out the poly search times used by msieve. Looks like 64 hours of deep poly search time for a C141. My C165 seems like it will take 12.5 days with the deep search flag, which may be a bit of overkill, but I'm interested in seeing how much better the ETA is over the poly found with a generic search (test sieving of which indicates an ETA of ~650 hours). I stuck with the generic block of 1-250 in the poly search. Not sure if shifting/widening this would have helped or diluted the effort.
My poly search indeed took 300 hours, ultimately producing an inferior poly to that found with a default search. The Murphy e score was smaller and sieving ETA was twice as long. Simple bad luck?
I'm thinking for the next C160ish composite GNFS it might be worth trying a wider range of blocks beyond the default value of 1-250.
[QUOTE=swellman;337463]My poly search indeed took 300 hours, ultimately producing an inferior poly to that found with a default search. The Murphy e score was smaller and sieving ETA was twice as long. Simple bad luck?
I'm thinking for the next C160ish composite GNFS it might be worth trying a wider range of blocks beyond the default value of 1-250.[/QUOTE] Possibly bad luck, but it's really hard to say for sure. Note that I coded the option to do a "deep" search because msieve allows it, but that doesn't mean it is in some sense better. I haven't heard one way or the other whether it is expected to produce polynomials similar to those from a breadth-first search, nor do I know whether the statistical properties of every leading-coefficient range are identical. An intermediate approach is the 'wide' search, which does N (threads) times the work on each coefficient range before moving to the next. So you go deeper into each range, but still look at many ranges.
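The difference between the three schedules can be sketched in a few lines. Everything here is illustrative: the block size, the work units, and the function name are made up for the sketch, not msieve's or YAFU's actual accounting:

```python
# Hypothetical sketch of the three poly-search schedules discussed above.
# Each leading-coefficient block (of default width 250) receives some
# number of abstract "work units"; only the distribution differs.

def schedule(mode: str, n_blocks: int, threads: int):
    """Return a list of (block_index, work_units) pairs."""
    if mode == "breadth":
        # touch every block lightly, moving on quickly
        return [(b, 1) for b in range(n_blocks)]
    if mode == "wide":
        # 'threads' times the work per block, still visiting many blocks
        return [(b, threads) for b in range(n_blocks)]
    if mode == "deep":
        # pour all the effort into the first block alone
        return [(0, threads * n_blocks)]
    raise ValueError(f"unknown mode: {mode}")

print(schedule("wide", 4, 8))  # [(0, 8), (1, 8), (2, 8), (3, 8)]
```

The total work is comparable in each case; what changes is whether luck gets many shallow chances (breadth), fewer but deeper chances (wide), or one exhaustively searched range (deep).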
There was a lot of uncertainty for large problems about whether to expect better polynomials when starting from the smaller leading coefficients; Kleinjung's algorithm makes it feasible to find polynomials no matter how small the leading coefficient is. However, now that we have GPU poly selection and can search ranges extremely thoroughly, the conclusion is that the polynomials found with extremely small leading coefficients (i.e. around 1) are not systematically better than polynomials whose high coefficient is larger; and there are far fewer of the former, so fewer chances for luck to turn up something good.
The problem is that the code has not been updated to reflect that, and will always start at 1.