mersenneforum.org  

Old 2008-10-22, 19:06   #12
FactorEyes
 

Quote:
Originally Posted by bsquared View Post
Is the heuristic broken for jobs this tiny, or is msieve not handling it right somehow?
I recently had problems in post-processing with a 128-digit snfs job. I've been meaning to send Jason the logs. Sounds as if that is a good idea.
Old 2008-10-22, 19:34   #13
jasonp
Probably 2^25 isn't "asymptotic" enough for the heuristic to be accurate, but that's no excuse for the filtering to not work. It's surprising how difficult it is to make the filtering work in the presence of a huge amount of excess; so much effort goes into preserving the available excess that the code has a difficult time "letting go", especially for small problems.

The basic problem is that when there are loads of relations, there are loads of large primes, and this makes the automatically chosen filtering bounds too large to be useful. The code attempts to correct its initial guess, but the process appears to need more work (see my talk on the CADO workshop site for much more detail).
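The filtering stage jasonp describes has to throw away relations whose large primes occur nowhere else ("singletons"), and removing one relation can create new singletons, so the pruning iterates to a fixed point. As a toy illustration only (made-up relations represented as sets of their large primes — not msieve's actual data structures or bounds logic):

```python
from collections import Counter

def prune_singletons(relations):
    """Repeatedly drop relations containing a large prime that occurs
    in only one surviving relation, until no singletons remain."""
    rels = list(relations)
    while True:
        counts = Counter(p for r in rels for p in r)
        kept = [r for r in rels if all(counts[p] > 1 for p in r)]
        if len(kept) == len(rels):      # fixed point reached
            return kept
        rels = kept                     # removals may create new singletons

# Cascading removal: dropping {2, 3} orphans 3, then 5, emptying the set.
print(prune_singletons([{2, 3}, {3, 5}, {5}]))  # → []
```

This also shows why a huge excess is awkward: with many more relations than needed, far fewer primes are singletons, so this pass removes little and the later, more expensive merge phases inherit an oversized problem.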

Last fiddled with by jasonp on 2008-10-22 at 19:36
Old 2008-10-22, 20:00   #14
bsquared
 

Quote:
Originally Posted by jasonp View Post
Probably 2^25 isn't "asymptotic" enough for the heuristic to be accurate, but that's no excuse for the filtering to not work. It's surprising how difficult it is to make the filtering work in the presence of a huge amount of excess; so much effort goes into preserving the available excess that the code has a difficult time "letting go", especially for small problems.

The basic problem is that when there are loads of relations, there are loads of large primes, and this makes the automatically chosen filtering bounds too large to be useful. The code attempts to correct its initial guess, but the process appears to need more work (see my talk on the CADO workshop site for much more detail).
Cool, thanks for the info.

By the way, I hope it didn't come across that I was bashing you for this not *just working*. I know that I'm spoiled by exactly that being the case the vast majority of the time. I appreciate that there is a hard balance to strike, and if anything, I would rather have difficulty with very small jobs like this, where it's easy to guess wrong and end up with 3x the relations I really need, than with large jobs. It's harder to end up with 3x the needed relations when one is gathering 200 million of them.

The workaround is trivial:
head -n 1200000 data.dat > msieve.dat
Old 2008-10-22, 22:20   #15
jasonp

No bashing was inferred. In fact, if the code could be made to work with an arbitrary amount of excess, then perhaps larger jobs could be massively oversieved so that the postprocessing fits on smaller machines.
Old 2008-11-10, 18:40   #16
10metreh
 

aaa120, you're worse than me! Everyone knows PARI shouldn't be used for factoring!
Old 2008-11-11, 08:56   #17
xilman

Quote:
Originally Posted by 10metreh View Post
aaa120, you're worse than me! Everyone knows PARI shouldn't be used for factoring!
On the contrary, I use PARI for factoring quite often.

When extending tables, for instance, ECM, P+1 and P-1 quite often find composite factors of 25-50 digits. My usual way of splitting them is to extract them from the output of the previously run program and feed them into PARI. A quick cut and paste later and I've a list of prime factors to feed into subsequent scripts.

I think you meant to say something like "PARI shouldn't be used to find large factors".


Paul
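The step Paul hands to PARI — split a known-composite cofactor into its prime factors — can be sketched outside PARI as well. This is only an illustration of the idea, not PARI's internals (PARI's factor() combines several methods): a minimal Pollard rho plus a Miller-Rabin primality test, recursing until every piece is prime.

```python
import math
import random

def is_prime(n):
    """Deterministic Miller-Rabin for n < 3.3e24 (fixed witness set)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def pollard_rho(n):
    """Return one nontrivial factor of composite n (Floyd-cycle rho)."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                       # d == n means a bad cycle; retry
            return d

def factorize(n):
    """Prime factorization by recursively splitting with rho."""
    if n == 1:
        return []
    if is_prime(n):
        return [n]
    d = pollard_rho(n)
    return sorted(factorize(d) + factorize(n // d))

print(factorize(8051))  # → [83, 97]
```

For the 25-50 digit composites mentioned above, rho alone would be slow on large prime factors; PARI's one-call factor() (or a quick cut-and-paste into gp, as Paul describes) remains the practical route.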
Old 2008-11-13, 19:23   #18
10metreh
 

Quote:
Originally Posted by xilman View Post
I think you meant to say something like "PARI shouldn't be used to find large factors".
Correct. PARI only uses MPQS, not SIQS.