mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Factoring (https://www.mersenneforum.org/forumdisplay.php?f=19)
-   -   who can factor 10^100+27? (https://www.mersenneforum.org/showthread.php?t=10821)

FactorEyes 2008-10-22 19:06

[QUOTE=bsquared;146167] Is the heuristic broken for jobs this tiny, or is msieve not handling it right somehow?
[/QUOTE]

I recently had problems in post-processing with a 128-digit snfs job. I've been meaning to send Jason the logs. Sounds as if that is a good idea.

jasonp 2008-10-22 19:34

Probably 2^25 isn't "asymptotic" enough for the heuristic to be accurate, but that's no excuse for the filtering to not work. It's surprising how difficult it is to make the filtering work in the presence of a huge amount of excess; so much effort goes into preserving the available excess that the code has a difficult time "letting go", especially for small problems.

The basic problem is that when there are loads of relations, there are loads of large primes and this makes the automatically chosen filtering bounds too large to be useful. The code attempts to correct its initial guess but the process appears to need more work (see my talk on the CADO workshop site for much more detail).
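For readers unfamiliar with the filtering jargon: a toy sketch (in Python, and emphatically not msieve's actual code) of the singleton-removal pass and the "excess" it tries to preserve might look like this. Here each relation is modeled simply as the set of large primes it contains.

```python
# Illustrative sketch (not msieve's actual algorithm) of singleton removal,
# the first pass of NFS filtering. "Excess" is the number of surviving
# relations minus the number of distinct large primes they contain; it must
# stay positive for the matrix step to produce null vectors.
from collections import Counter

def singleton_filter(relations):
    """Repeatedly drop relations containing a large prime that occurs in
    only one relation, until no singletons remain."""
    rels = list(relations)
    while True:
        counts = Counter(p for rel in rels for p in rel)
        kept = [rel for rel in rels if all(counts[p] > 1 for p in rel)]
        if len(kept) == len(rels):
            break  # stable: no singletons left
        rels = kept
    return rels

# Toy example: five relations over three large primes after filtering.
rels = [{101, 103}, {101, 103}, {107}, {101, 109}, {109, 103}]
surviving = singleton_filter(rels)          # {107} is a singleton and dies
excess = len(surviving) - len({p for r in surviving for p in r})
```

The difficulty jasonp describes is what to do when `excess` comes out enormously positive: the later clique-removal and merge phases must then discard most of the relations without accidentally destroying the matrix, which is where small oversieved jobs can trip up the bound-selection heuristics.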

bsquared 2008-10-22 20:00

[quote=jasonp;146172]Probably 2^25 isn't "asymptotic" enough for the heuristic to be accurate, but that's no excuse for the filtering to not work. It's surprising how difficult it is to make the filtering work in the presence of a huge amount of excess; so much effort goes into preserving the available excess that the code has a difficult time "letting go", especially for small problems.

The basic problem is that when there are loads of relations, there are loads of large primes and this makes the automatically chosen filtering bounds too large to be useful. The code attempts to correct its initial guess but the process appears to need more work (see my talk on the CADO workshop site for much more detail).[/quote]

Cool, thanks for the info.

By the way, I hope it didn't come across that I was bashing you for this not *just working*. I know I'm spoiled by exactly that being the case the vast majority of the time. I appreciate that there is a hard balance to achieve, and if anything, I would rather have difficulty with very small jobs like this, where it's easy to guess wrong and end up with 3x the relations I really need, than with large jobs. It's harder to end up with 3x the needed relations when one is gathering 200 million of them.

The workaround is trivial:
[code]head -n 1200000 data.dat > msieve.dat[/code]

jasonp 2008-10-22 22:20

No bashing was inferred. In fact, if the code could be made to work with an arbitrary amount of excess, then larger jobs could be massively oversieved so that the postprocessing fits on smaller machines.

10metreh 2008-11-10 18:40

aaa120, you're worse than me! Everyone knows PARI shouldn't be used for factoring!

xilman 2008-11-11 08:56

[QUOTE=10metreh;148679]aaa120, you're worse than me! Everyone knows PARI shouldn't be used for factoring![/QUOTE]On the contrary, I use PARI for factoring quite often.

When extending tables, for instance, ECM, P+1 and P-1 quite often find composite factors of 25-50 digits. My usual way of splitting them is to extract them from the output of the previously run program and feed them into PARI. A quick cut and paste later and I've a list of prime factors to feed into subsequent scripts.
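As a hedged illustration of the kind of job being handed to PARI here (in Python rather than gp, with helper names invented for this sketch, not taken from the thread): splitting a smallish composite cofactor into primes needs nothing fancier than Pollard's rho plus a primality test at these sizes. PARI itself uses a mix of rho, ECM and MPQS internally.

```python
# Hypothetical sketch: split a composite cofactor (the kind ECM/P+1/P-1
# leave behind) into its prime factors, as one would do by pasting it
# into gp's factor(). Pollard's rho suffices for toy sizes.
import math
import random

def pollard_rho(n):
    """Return a nontrivial factor of composite n (n must not be prime)."""
    if n % 2 == 0:
        return 2
    while True:
        x = y = random.randrange(2, n)
        c = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                       # d == n means a bad cycle; retry
            return d

def is_prime(n):
    """Deterministic Miller-Rabin for n below ~3.3e24."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def split(n):
    """Full prime factorization of n, smallest factors first."""
    if n == 1:
        return []
    if is_prime(n):
        return [n]
    d = pollard_rho(n)
    return sorted(split(d) + split(n // d))
```

So `split(10007 * 10009)` recovers `[10007, 10009]`, ready to be pasted into the next script, which is exactly the cut-and-paste workflow described above.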

I think you meant to say something like "PARI shouldn't be used to find [b]large[/b] factors".


Paul

10metreh 2008-11-13 19:23

[quote=xilman;148802]I think you meant to say something like "PARI shouldn't be used to find [B]large[/B] factors".[/quote]

Correct. PARI only uses MPQS, not SIQS.

