[QUOTE=jasonp;244121]Rho is there because it's very fast when you want 6-8 digit factors, faster than any of the other methods. One P+-1 curve is also much faster than one ECM curve, so if you're budgeting time in units of ECM curves, then P+-1 can be run with much larger B1 in the same time one ECM curve needs (the conventional wisdom uses a P-1 B1 of 10x the ECM B1 and a P+1 B1 of 5x the ECM B1). So for the same cost as an ECM curve you have a much better chance of finding factors that are susceptible to P+-1; it's extra 'resolving power' for free.
ECM will always be able to find factors that the other methods miss; the point is that you don't have to go to the trouble of using ECM when it is overkill. There's a minor additional point that when an input has factors of different sizes, using ECM with too large a B1 will find multiple factors at once. For example, if an input has a 6, 10 and 20 digit factor and you run ECM intended to find the 20-digit factor, you'll spend a lot of time on one curve and then probably find the product of all three of those factors. Which is fine, but if you want each factor individually you aren't done yet. You could have found the 6-digit factor in 1% of the ECM time and the 10-digit factor in maybe 5% of the ECM time, leaving the 20-digit factor for ECM to find; knowing the other factors also makes the ECM arithmetic faster.[/QUOTE] These are exactly the reasons why I like to progress through rho, pp1 and pm1 first. From the point of view of a large factorization, a negligible amount of time is spent doing this before ecm, qs, nfs...

As far as customizability of factor() goes, it's fairly easy to add more command line switches which could modify its behavior, but there are already 30+ switches and I fear things are getting too complicated even as is. That said, this is a good discussion to have. Maybe two slightly different factor() behaviors would be good enough: a "bare bones" preprocessing version which runs very light early stages of ecm and maybe skips pp1, pm1, rho, etc. entirely; this would be useful for inputs where some preprocessing is known to have been done. Plus the standard version for running completely unprocessed "general" inputs.
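For readers wondering why rho is so cheap at the small end, here is a minimal Python sketch of Pollard's rho (the Floyd cycle-finding variant; this is an illustration, not yafu's actual implementation):

```python
from math import gcd

def pollard_rho(n, c=1, max_iters=10**6):
    """Find a nontrivial factor of composite n, or None on failure.
    Expected ~O(p^(1/4)) iterations for the smallest factor p, which
    is why it excels at 6-8 digit factors."""
    if n % 2 == 0:
        return 2
    f = lambda v: (v * v + c) % n
    x = y = 2
    for _ in range(max_iters):
        x = f(x)          # tortoise: one step
        y = f(f(y))       # hare: two steps
        d = gcd(abs(x - y), n)
        if d == n:        # unlucky cycle; retry with a new constant
            return pollard_rho(n, c + 1, max_iters)
        if d > 1:
            return d
    return None

# 10403 = 101 * 103: found in a handful of iterations
print(pollard_rho(10403))
```

Each iteration costs only a few modular multiplications and a gcd, so hundreds of thousands of iterations are still far cheaper than a single ECM curve at typical bounds.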
[QUOTE=bsquared;244141]It works without quotes for me in the .ini file. In the code, I copy whatever string follows the equals sign and paste it directly in front of the lasieve binary name, so the quotes get in the way. No doubt I should do some smarter processing here. Relative path names also work in the .ini file, for me.
Good suggestions... I'll see what I can do to incorporate them.[/QUOTE] Confirmed, a relative path works. If the full path has spaces it doesn't work (e.g. Program Files), and since there's currently no way to quote the path, either a folder without spaces or a folder inside YAFU's executable directory will do. Good.
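To illustrate the problem, here is a hedged Python sketch of the string-concatenation approach described above; `build_cmd` is a hypothetical helper, not yafu's actual code:

```python
# Sketch of the quoting issue: building a command line by plain string
# concatenation breaks when ggnfs_dir contains spaces, unless the
# combined path is quoted before it reaches the shell.
def build_cmd(ggnfs_dir, binary, args):
    path = ggnfs_dir + binary
    if " " in path:              # quote only when actually needed
        path = '"' + path + '"'
    return path + " " + args

# a path with spaces gets quoted; a relative path passes through as-is
print(build_cmd("C:\\Program Files\\ggnfs\\", "gnfs-lasieve4I12e.exe", "-o out job"))
print(build_cmd("..\\ggnfs\\", "gnfs-lasieve4I12e.exe", "-o out job"))
```

Passing the program and its arguments as a list to the OS (rather than one concatenated string) sidesteps the quoting problem entirely, which is the "smarter processing" alluded to above.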
[QUOTE=Karl M Johnson;244151]Confirmed, relative path works.
If the full path has spaces it doesn't work (e.g. Program Files), and since there's currently no way to quote the path, either a folder without spaces or a folder inside YAFU's executable directory will do. Good.[/QUOTE] Just a thought, but would the older-style (8.3) name work for those folders with spaces? i.e. progra~1
[QUOTE=bsquared;244143]These are exactly the reasons why I like to progress through rho, pp1 and pm1 first. From the point of view of a large factorization, a negligible amount of time is spent doing this before ecm, qs, nfs...
As far as customizability of factor() goes, it's fairly easy to add more command line switches which could modify its behavior, but there are already 30+ switches and I fear things are getting too complicated even as is. That said, this is a good discussion to have. Maybe two slightly different factor() behaviors would be good enough: a "bare bones" preprocessing version which runs very light early stages of ecm and maybe skips pp1, pm1, rho, etc. entirely; this would be useful for inputs where some preprocessing is known to have been done. Plus the standard version for running completely unprocessed "general" inputs.[/QUOTE] Yeah, I think all the algorithms in factor() have good reason for being there, and a somewhat customizable factor() command, like you've described, would be appreciated.

As for something new entirely: I think a progressive ECM command would be nice, with its own settings for the number of curves and bounds, and possibly other behavioral variables (such as whether or not to raise the bounds when a factor is found). That's my two cents.
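A driver loop for the progressive ECM command described above could be sketched as follows. This is only an illustration: `run_curve` stands in for a real ECM curve (in practice a GMP-ECM call), and the trial-division stand-in exists only so the driver can be demonstrated.

```python
def progressive_ecm(n, stages, run_curve, raise_on_factor=False):
    """Run ECM in stages of increasing B1.

    stages          : list of (B1, curves) pairs, smallest B1 first
    run_curve       : callable (n, B1) -> factor or None (assumed to
                      wrap one real ECM curve, e.g. via GMP-ECM)
    raise_on_factor : if True, keep climbing to larger bounds after a
                      hit instead of stopping at the first factor
    Returns (factors_found, remaining_cofactor)."""
    factors = []
    for B1, curves in stages:
        for _ in range(curves):
            f = run_curve(n, B1)
            if f and 1 < f < n:
                factors.append(f)
                n //= f
                if not raise_on_factor:
                    return factors, n
                break  # raise bounds: move on to the next stage
    return factors, n

def toy_curve(n, B1):
    """Toy stand-in for one ECM curve: trial division up to B1.
    NOT real ECM; for demonstration only."""
    for p in range(2, min(B1, int(n**0.5)) + 1):
        if n % p == 0:
            return p
    return None

# 101 * 103 * 10007: the small factors fall in successive stages
factors, rem = progressive_ecm(101 * 103 * 10007,
                               [(200, 5), (20000, 5)],
                               toy_curve, raise_on_factor=True)
print(factors, rem)  # [101, 103] 10007
```

The `raise_on_factor` flag corresponds directly to the behavioral variable suggested above: with it set, a hit promotes the search to the next (larger) bound rather than ending the run.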
Why does YAFU terminate at nfs?
[CODE]01/03/11 10:06:00 v1.21 @ PERIMETROS, Finished 328 curves using Lenstra ECM method on C98 input, B1 = 1000000, B2 = 100000000
01/03/11 10:08:11 v1.21 @ PERIMETROS, Finished 3 curves using Lenstra ECM method on C98 input, B1 = 10000000, B2 = 1000000000
01/03/11 10:14:55 v1.21 @ PERIMETROS, Finished 1 curves using Lenstra ECM method on C98 input, B1 = 100000000, B2 = 4000000000
01/03/11 10:14:55 v1.21 @ PERIMETROS, nfs: commencing gnfs on c98: 28374004359061337391115649396543332869469592210583250853338019699208899006550276044080060346570227[/CODE]Those are the last lines; the program shut itself down after them. (You can also see the B2 limit of 4e9 being enforced.)
[QUOTE=lorgix;244407]Why does YAFU terminate at nfs?
[CODE]01/03/11 10:06:00 v1.21 @ PERIMETROS, Finished 328 curves using Lenstra ECM method on C98 input, B1 = 1000000, B2 = 100000000
01/03/11 10:08:11 v1.21 @ PERIMETROS, Finished 3 curves using Lenstra ECM method on C98 input, B1 = 10000000, B2 = 1000000000
01/03/11 10:14:55 v1.21 @ PERIMETROS, Finished 1 curves using Lenstra ECM method on C98 input, B1 = 100000000, B2 = 4000000000
01/03/11 10:14:55 v1.21 @ PERIMETROS, nfs: commencing gnfs on c98: 28374004359061337391115649396543332869469592210583250853338019699208899006550276044080060346570227[/CODE]Those are the last lines. The program shut itself down after those. (You can also see the B2 limit of 4e9 being enforced.)[/QUOTE] Possibly it can't find your lasieve binaries. That exit condition is reported to the screen, but not to the logfile. Are you running in a mode which prints status to the screen?

By the way... I should clarify a bit about the B2 value used during ECM. Basically you can ignore the reported B2 value unless you are specifying it directly. If you run factor(), or if you run ecm and only specify B1, then B2 is chosen by default by GMP-ECM. However, I report the value B2 = 100 * B1 for two reasons: one, because I'm lazy (that's the value the yafu ecm routine would use), and two, because GMP-ECM's library entry point provides no mechanism for returning the B2 value actually used during a computation. I will change the screen/log output text to reflect this fact in the next release...
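In other words, the B2 figures in the log above are just a reporting convention and can be reproduced with a one-liner. Note the 4e9 cap here is inferred from the C98 log in this thread, not confirmed from yafu's source:

```python
def reported_b2(b1, cap=4_000_000_000):
    """B2 as printed by yafu per the post above: always 100 * B1,
    apparently capped at 4e9 (cap inferred from the C98 log)."""
    return min(100 * b1, cap)

# matches the three ECM lines in the quoted log
print(reported_b2(1_000_000))    # 100000000
print(reported_b2(10_000_000))   # 1000000000
print(reported_b2(100_000_000))  # 4000000000
```

The B2 that GMP-ECM actually uses internally may differ from every one of these printed values.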
[QUOTE=bsquared;244416]Possibly it can't find your lasieve binaries. That exit condition is reported to the screen, but not to the logfile. Are you running in a mode which prints status to the screen?
By the way... I should clarify a bit about the B2 value used during ECM. Basically you can ignore the reported B2 value unless you are specifying it directly. If you run factor(), or if you run ecm and only specify B1, then B2 is chosen by default by GMP-ECM. However, I report the value B2 = 100 * B1 for two reasons. One, because I'm lazy - that's the value the yafu ecm routine would use. And two, because GMP-ECM's library entry point provides no mechanism for returning the B2 value actually used during a computation. I will change the screen/log output text to reflect this fact in the next release...[/QUOTE] I'm not sure what the 'lasieve binaries' are; maybe I don't have any. Not sure what different modes there are either. I run the .exe in Windows and use the window that shows up to communicate with YAFU. I didn't actually see when the above happened; I just noticed it had terminated, so I looked in the log. When I run nfs(*) the program closes instantly.

Oh, so YAFU [I]does[/I] use the standard values. Well, that's good. Any idea when the next release will be out? Just curious.

Btw, this feels like a basic question, but I simply don't know: is there a simple way of knowing how much memory YAFU and/or Prime95 will need to perform a certain ECM task optimally?
[QUOTE=lorgix;244421] I'm not sure what the 'lasieve binaries' are. Maybe I don't have any.
[/QUOTE] They are the executables which actually do the sieving for gnfs jobs, and are part of the GGNFS suite. You can get them [URL="http://gilchrist.ca/jeff/factoring/index.html"]here[/URL]. Unzip and put them in a folder somewhere, then point to them by adding a line something like this to your yafu.ini file (assuming you put the executables in a folder adjacent to the directory containing the yafu executable):

[CODE]ggnfs_dir=..\lasieve-bin\[/CODE]

[QUOTE=lorgix;244421] Not sure what different modes there are either. I run the .exe in Windows, and use the window that shows up for communicating with YAFU. I didn't actually see when the above happened. I just noticed it had terminated, so I looked in the log. When I run nfs(*) the program closes instantly. [/QUOTE] Well, for example, if you were running aliquot sequences, aliqueit.exe throws away screen print info by default (I think), so you might not have seen a message printed only to the screen. Or you could be running with the -silent flag.

[QUOTE=lorgix;244421] Oh, so YAFU [I]does[/I] use the standard values. Well that's good. Any idea when the next release will be out? Just curious. [/QUOTE] Not exactly, but likely in the next couple of days.

[QUOTE=lorgix;244421] Btw, and this feels like a basic question but I simply don't know; Is there a simple way of knowing how much memory YAFU and/or prime95 will need to perform a certain ECM task optimally? [/QUOTE] Simple way? Maybe... I'm not really sure either. Stage 1 of gmp-ecm has a fairly low memory footprint, but stage 2 can be pretty high; I don't have a good way to estimate what it would be. No idea about Prime95.
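A quick way to diagnose the silent exit is to check that the configured directory actually contains siever executables. This is an illustrative Python sketch; the binary names listed are an assumption and may not match every GGNFS package:

```python
import os

def find_lasieve(ggnfs_dir,
                 names=("gnfs-lasieve4I11e", "gnfs-lasieve4I12e")):
    """Return the paths of any GGNFS lattice siever binaries found in
    ggnfs_dir. Binary names are assumed; adjust to your package."""
    hits = []
    for name in names:
        for ext in ("", ".exe"):           # Linux and Windows builds
            p = os.path.join(ggnfs_dir, name + ext)
            if os.path.isfile(p):
                hits.append(p)
    return hits
```

If running this against the directory named in your `ggnfs_dir` line returns an empty list, yafu exiting at the nfs step without a logged error is the expected symptom.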
1 Attachment(s)
Hi Ben!
I have tried to factor a 96-digit number (c96) with the new Yafu-64k-Win32 v1.21 using nfs. The command was factor(). This is my yafu.ini file, and it works:

[CODE]B1pm1=100000
B1pp1=20000
B1ecm=11000
rhomax=1000
threads=4
ggnfs_dir=C:\Faktorisierung\tools\ggnfs\[/CODE]

But after lasieve.exe starts I get a message warning about lowering the FB_bound, and the linear algebra fails again and again.

[CODE]Mon Jan 03 18:41:38 2011 commencing number field sieve (96-digit input)
Mon Jan 03 18:41:38 2011 commencing number field sieve polynomial selection
Mon Jan 03 18:41:38 2011 time limit set to 0.23 hours
Mon Jan 03 18:41:38 2011 searching leading coefficients from 1 to 8713103
Mon Jan 03 18:57:08 2011 polynomial selection complete
Mon Jan 03 18:57:08 2011 R0: -145095765129037275116876
Mon Jan 03 18:57:08 2011 R1: 2931961006859
Mon Jan 03 18:57:08 2011 A0: 659989622761430109521353155
Mon Jan 03 18:57:08 2011 A1: 1337030203483287072670
Mon Jan 03 18:57:08 2011 A2: -2347696828552447
Mon Jan 03 18:57:08 2011 A3: -1378773698
Mon Jan 03 18:57:08 2011 A4: 432
Mon Jan 03 18:57:08 2011 skew 1561910.71, size 5.010e-013, alpha -4.883, combined = 2.464e-008 rroots = 4
Mon Jan 03 19:31:47 2011 
Mon Jan 03 19:31:47 2011 commencing relation filtering
Mon Jan 03 19:31:47 2011 estimated available RAM is 2048.0 MB
Mon Jan 03 19:31:47 2011 commencing duplicate removal, pass 1
Mon Jan 03 19:32:00 2011 found 58195 hash collisions in 1499389 relations
Mon Jan 03 19:32:24 2011 added 77005 free relations
Mon Jan 03 19:32:24 2011 commencing duplicate removal, pass 2
Mon Jan 03 19:32:25 2011 found 28232 duplicates and 1548162 unique relations
Mon Jan 03 19:32:25 2011 memory use: 6.1 MB
Mon Jan 03 19:32:25 2011 reading ideals above 30000
Mon Jan 03 19:32:25 2011 commencing singleton removal, initial pass
Mon Jan 03 19:32:42 2011 memory use: 47.1 MB
Mon Jan 03 19:32:42 2011 reading all ideals from disk
Mon Jan 03 19:32:48 2011 memory use: 52.5 MB
Mon Jan 03 19:32:48 2011 keeping 2380516 ideals with weight <= 200, target excess is 12793
Mon Jan 03 19:32:48 2011 commencing in-memory singleton removal
Mon Jan 03 19:32:48 2011 begin with 1548162 relations and 2380516 unique ideals
Mon Jan 03 19:32:49 2011 reduce to 131 relations and 0 ideals in 7 passes
Mon Jan 03 19:32:49 2011 max relations containing the same ideal: 0
Mon Jan 03 20:09:05 2011 
Mon Jan 03 20:09:05 2011 commencing relation filtering
Mon Jan 03 20:09:05 2011 estimated available RAM is 2048.0 MB
Mon Jan 03 20:09:05 2011 commencing duplicate removal, pass 1
Mon Jan 03 20:09:29 2011 found 220835 hash collisions in 3076416 relations
Mon Jan 03 20:09:36 2011 added 7296 free relations
Mon Jan 03 20:09:36 2011 commencing duplicate removal, pass 2
Mon Jan 03 20:09:38 2011 found 106240 duplicates and 2977472 unique relations
Mon Jan 03 20:09:38 2011 memory use: 12.3 MB
Mon Jan 03 20:09:38 2011 reading ideals above 100000
Mon Jan 03 20:09:38 2011 commencing singleton removal, initial pass
Mon Jan 03 20:10:29 2011 memory use: 86.1 MB
Mon Jan 03 20:10:29 2011 reading all ideals from disk
Mon Jan 03 20:10:35 2011 memory use: 90.0 MB
Mon Jan 03 20:10:36 2011 keeping 3111929 ideals with weight <= 200, target excess is 31490
Mon Jan 03 20:10:36 2011 commencing in-memory singleton removal
Mon Jan 03 20:10:37 2011 begin with 2977472 relations and 3111929 unique ideals
Mon Jan 03 20:10:41 2011 reduce to 1333900 relations and 1181980 ideals in 18 passes
Mon Jan 03 20:10:41 2011 max relations containing the same ideal: 115
Mon Jan 03 20:10:42 2011 removing 330625 relations and 272929 ideals in 57696 cliques
Mon Jan 03 20:10:42 2011 commencing in-memory singleton removal
Mon Jan 03 20:10:42 2011 begin with 1003275 relations and 1181980 unique ideals
Mon Jan 03 20:10:43 2011 reduce to 947719 relations and 850484 ideals in 10 passes
Mon Jan 03 20:10:43 2011 max relations containing the same ideal: 87
Mon Jan 03 20:10:44 2011 removing 246014 relations and 188318 ideals in 57696 cliques
Mon Jan 03 20:10:44 2011 commencing in-memory singleton removal
Mon Jan 03 20:10:44 2011 begin with 701705 relations and 850484 unique ideals
Mon Jan 03 20:10:44 2011 reduce to 654056 relations and 611508 ideals in 10 passes
Mon Jan 03 20:10:44 2011 max relations containing the same ideal: 66
Mon Jan 03 20:10:45 2011 relations with 0 large ideals: 592
Mon Jan 03 20:10:45 2011 relations with 1 large ideals: 3827
Mon Jan 03 20:10:45 2011 relations with 2 large ideals: 23516
Mon Jan 03 20:10:45 2011 relations with 3 large ideals: 83669
Mon Jan 03 20:10:45 2011 relations with 4 large ideals: 165734
Mon Jan 03 20:10:45 2011 relations with 5 large ideals: 196446
Mon Jan 03 20:10:45 2011 relations with 6 large ideals: 123892
Mon Jan 03 20:10:45 2011 relations with 7+ large ideals: 56380
Mon Jan 03 20:10:45 2011 commencing 2-way merge
Mon Jan 03 20:10:45 2011 reduce to 396601 relation sets and 354053 unique ideals
Mon Jan 03 20:10:45 2011 commencing full merge
Mon Jan 03 20:10:53 2011 memory use: 30.5 MB
Mon Jan 03 20:10:53 2011 found 176989 cycles, need 168253
Mon Jan 03 20:10:53 2011 weight of 168253 cycles is about 12020439 (71.44/cycle)
Mon Jan 03 20:10:53 2011 distribution of cycle lengths:
Mon Jan 03 20:10:53 2011 1 relations: 13750
Mon Jan 03 20:10:53 2011 2 relations: 14843
Mon Jan 03 20:10:53 2011 3 relations: 15715
Mon Jan 03 20:10:53 2011 4 relations: 15103
Mon Jan 03 20:10:53 2011 5 relations: 14689
Mon Jan 03 20:10:53 2011 6 relations: 13664
Mon Jan 03 20:10:53 2011 7 relations: 12392
Mon Jan 03 20:10:53 2011 8 relations: 11277
Mon Jan 03 20:10:53 2011 9 relations: 9878
Mon Jan 03 20:10:53 2011 10+ relations: 46942
Mon Jan 03 20:10:53 2011 heaviest cycle: 22 relations
Mon Jan 03 20:10:53 2011 commencing cycle optimization
Mon Jan 03 20:10:53 2011 start with 1188434 relations
Mon Jan 03 20:10:57 2011 pruned 36610 relations
Mon Jan 03 20:10:57 2011 memory use: 29.5 MB
Mon Jan 03 20:10:57 2011 distribution of cycle lengths:
Mon Jan 03 20:10:57 2011 1 relations: 13750
Mon Jan 03 20:10:57 2011 2 relations: 15215
Mon Jan 03 20:10:57 2011 3 relations: 16320
Mon Jan 03 20:10:57 2011 4 relations: 15610
Mon Jan 03 20:10:57 2011 5 relations: 15203
Mon Jan 03 20:10:57 2011 6 relations: 14145
Mon Jan 03 20:10:57 2011 7 relations: 12726
Mon Jan 03 20:10:57 2011 8 relations: 11424
Mon Jan 03 20:10:57 2011 9 relations: 9993
Mon Jan 03 20:10:57 2011 10+ relations: 43867
Mon Jan 03 20:10:57 2011 heaviest cycle: 22 relations
Mon Jan 03 20:10:57 2011 RelProcTime: 112
Mon Jan 03 20:10:57 2011 
Mon Jan 03 20:10:57 2011 commencing linear algebra
Mon Jan 03 20:10:57 2011 read 168253 cycles
Mon Jan 03 20:10:57 2011 cycles contain 589128 unique relations
Mon Jan 03 20:11:33 2011 read 589128 relations
Mon Jan 03 20:11:34 2011 using 20 quadratic characters above 33553002
Mon Jan 03 20:11:38 2011 building initial matrix
Mon Jan 03 20:11:47 2011 memory use: 65.1 MB
Mon Jan 03 20:11:48 2011 read 168253 cycles
Mon Jan 03 20:11:48 2011 matrix is 168057 x 168253 (48.0 MB) with weight 16082033 (95.58/col)
Mon Jan 03 20:11:48 2011 sparse part has weight 11409471 (67.81/col)
Mon Jan 03 20:11:50 2011 filtering completed in 2 passes
Mon Jan 03 20:11:50 2011 matrix is 167381 x 167576 (47.9 MB) with weight 16041370 (95.73/col)
Mon Jan 03 20:11:50 2011 sparse part has weight 11388987 (67.96/col)
Mon Jan 03 20:11:51 2011 matrix starts at (0, 0)
Mon Jan 03 20:11:51 2011 matrix is 167381 x 167576 (47.9 MB) with weight 16041370 (95.73/col)
Mon Jan 03 20:11:51 2011 sparse part has weight 11388987 (67.96/col)
Mon Jan 03 20:11:51 2011 saving the first 48 matrix rows for later
Mon Jan 03 20:11:51 2011 matrix includes 64 packed rows
Mon Jan 03 20:11:51 2011 matrix is 167333 x 167576 (45.9 MB) with weight 12699887 (75.79/col)
Mon Jan 03 20:11:51 2011 sparse part has weight 11016727 (65.74/col)
Mon Jan 03 20:11:51 2011 using block size 65536 for processor cache size 3072 kB
Mon Jan 03 20:11:53 2011 commencing Lanczos iteration
Mon Jan 03 20:11:53 2011 memory use: 36.4 MB
Mon Jan 03 20:11:53 2011 lanczos error: submatrix is not invertible
Mon Jan 03 20:11:53 2011 lanczos halted after 4 iterations (dim = 189)
Mon Jan 03 20:11:53 2011 linear algebra failed; retrying...
Mon Jan 03 20:11:53 2011 commencing Lanczos iteration
Mon Jan 03 20:11:53 2011 memory use: 36.4 MB
Mon Jan 03 20:11:54 2011 lanczos error: submatrix is not invertible
Mon Jan 03 20:11:54 2011 lanczos halted after 3 iterations (dim = 127)
Mon Jan 03 20:11:54 2011 linear algebra failed; retrying...
Mon Jan 03 20:11:54 2011 commencing Lanczos iteration
Mon Jan 03 20:11:54 2011 memory use: 36.4 MB
Mon Jan 03 20:11:54 2011 lanczos error: submatrix is not invertible
Mon Jan 03 20:11:54 2011 lanczos halted after 2 iterations (dim = 64)
Mon Jan 03 20:11:54 2011 linear algebra failed; retrying...
Mon Jan 03 20:11:54 2011 commencing Lanczos iteration
Mon Jan 03 20:11:54 2011 memory use: 36.4 MB
Mon Jan 03 20:11:54 2011 lanczos error: submatrix is not invertible
Mon Jan 03 20:11:54 2011 lanczos halted after 1 iterations (dim = 0)
Mon Jan 03 20:11:54 2011 linear algebra failed; retrying...
Mon Jan 03 20:11:54 2011 commencing Lanczos iteration
Mon Jan 03 20:11:54 2011 memory use: 36.4 MB
Mon Jan 03 20:11:54 2011 lanczos error: submatrix is not invertible
Mon Jan 03 20:11:54 2011 lanczos halted after 2 iterations (dim = 64)
Mon Jan 03 20:11:54 2011 linear algebra failed; retrying...
............................[/CODE]

Happy new year to all Mersenne-Forum members :)

Regards Andi_HB
Hi Andi_HB,
The "warning lowering the FB_bound" messages are harmless; it is a notification by the lattice siever that we are sieving a range below rlim or alim, but that is perfectly fine.

The failure to properly solve a matrix on Win32 is a known issue... I'm still working on it. It is discussed a little more over here: [url]http://www.mersenneforum.org/showthread.php?t=14482[/url]

regards,
- ben.
1 Attachment(s)
Oh, I forgot to mention that I get messages about cat.exe after the first sieving.
But after those messages the msieve filtering was running.