#12 | vomit_frame_pointer | Oct 2006 | 16816 Posts
#13 | Tribal Bullet | Oct 2004 | 6725₈ Posts
Probably 2^25 isn't "asymptotic" enough for the heuristic to be accurate, but that's no excuse for the filtering not to work. It's surprising how difficult it is to make the filtering work in the presence of a huge amount of excess; so much effort goes into preserving the available excess that the code has a hard time "letting go", especially for small problems.

The basic problem is that when there are loads of relations, there are loads of large primes, and this makes the automatically chosen filtering bounds too large to be useful. The code attempts to correct its initial guess, but the process appears to need more work (see my talk on the CADO workshop site for much more detail).

Last fiddled with by jasonp on 2008-10-22 at 19:36
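The "excess" being preserved here is, roughly, the number of relations minus the number of distinct large ideals appearing in them, and singleton removal (the first filtering pass) can only shrink that margin. A toy sketch — the "relations" below are just made-up sets of large primes, nothing like real NFS data — shows the mechanism:

```python
from collections import Counter

def singleton_filter(relations):
    """Repeatedly drop relations containing a 'singleton': a prime
    that appears in only one surviving relation."""
    rels = [frozenset(r) for r in relations]
    while True:
        counts = Counter(p for r in rels for p in r)
        survivors = [r for r in rels if all(counts[p] > 1 for p in r)]
        if len(survivors) == len(rels):
            return survivors
        rels = survivors

# Toy relations: each is the set of large primes it contains.
rels = [{101, 103}, {101, 103}, {107}, {101, 109}]
kept = singleton_filter(rels)
# 107 and 109 each occur once, so those two relations are removed.
excess = len(kept) - len({p for r in kept for p in r})
```

The point of the toy run is that apparent excess before filtering (4 relations over 4 primes) is not the excess that survives singleton removal, which is why an automatically chosen bound based on the raw counts can be badly off.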
#14 | "Ben" | Feb 2007 | 3×1,171 Posts
Quote:

By the way, I hope it didn't come across that I was bashing you for this not *just working*. I know that I'm spoiled by exactly that being the case the vast majority of the time. I appreciate that there is a hard balance to achieve, and if anything, I would rather have difficulty on very small jobs like this, where it's easy to guess wrong and end up with 3x the relations I really need, than have difficulty on large jobs. It's harder to end up with 3x the needed relations when one is gathering 200 million of them. The workaround is trivial: head -n 1200000 data.dat > msieve.dat
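That trim can be scripted a little defensively — count first, then cut. A throwaway sketch; the file names and counts here are stand-ins, not real sieve data:

```shell
# Stand-in for a relations file: the real data.dat holds one
# relation per line, typically millions of them.
seq 1 50 > data.dat

echo "have $(wc -l < data.dat) relations"

# Keep only the first 30 lines for the postprocessing run,
# the same head(1) trick as the workaround above.
head -n 30 data.dat > msieve.dat
echo "kept $(wc -l < msieve.dat) relations"
```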
#15 | Tribal Bullet | Oct 2004 | DD5₁₆ Posts
No bashing was inferred. In fact, if the code could be made to work with an arbitrary amount of excess, then perhaps larger jobs could be massively oversieved so that the postprocessing fits on smaller machines.
#16 | Nov 2008 | 2×3³×43 Posts
aaa120, you're worse than me! Everyone knows PARI shouldn't be used for factoring!
#17 | Bamboozled! | "πΊππ·π·π" | May 2003 | Down not across | 2A00₁₆ Posts
Quote:

Everyone knows PARI shouldn't be used for factoring!

When extending tables, for instance, ECM, P+1 and P-1 quite often find composite factors of 25-50 digits. My usual way of splitting them is to extract them from the output of the previously run program and feed them into PARI. A quick cut and paste later and I've a list of prime factors to feed into subsequent scripts.

I think you meant to say something like "PARI shouldn't be used to find large factors".

Paul
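What that PARI step does — splitting a moderate composite cofactor into its prime pieces — can be sketched in plain Python with Miller-Rabin plus Pollard's rho. This is only an illustration under made-up inputs; PARI's factor() brings far heavier machinery (ECM, MPQS) to bear on genuine 25-50 digit cofactors:

```python
import math
import random

def is_prime(n, rounds=20):
    """Miller-Rabin primality test; probabilistic, but fine for a sketch."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def pollard_rho(n):
    """Find some nontrivial factor of composite n (Floyd cycle-finding)."""
    while True:
        c = random.randrange(1, n)
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n
            y = (y * y + c) % n
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:          # d == n means this c failed; retry
            return d

def split(n):
    """Return the sorted prime factors of n, with multiplicity."""
    if n == 1:
        return []
    if is_prime(n):
        return [n]
    d = pollard_rho(n)
    return sorted(split(d) + split(n // d))

# e.g. a composite cofactor left over after earlier factoring stages:
factors = split(1009 * 10007 * 100003)
```

The recursion mirrors the cut-and-paste workflow: keep splitting whatever is still composite until only primes remain.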
#18 | Nov 2008 | 912₁₆ Posts
Similar Threads
Thread | Thread Starter | Forum | Replies | Last Post
A new factor of F11?! | siegert81 | FermatSearch | 2 | 2018-01-24 04:35
What a (TF) factor!!... | lycorn | PrimeNet | 11 | 2013-01-12 12:07
New factor for F17 | Buckle | Factoring | 15 | 2011-03-15 12:05
Bad Factor? | nfortino | Data | 6 | 2004-12-14 19:25
Shortest time to complete a 2^67 trial factor (no factor) | dsouza123 | Software | 12 | 2003-08-21 18:38