I've got another odd(!) one. The log file says
[CODE]factoring 21374706000067961931609331422841414635601

using pretesting plan: normal

div: primes less than 10000
fmt: 1000000 iterations
rho: x^2 + 1, starting 1000 iterations on C39
rho: x^2 + 3, starting 1000 iterations on C39
rho: x^2 + 2, starting 1000 iterations on C39
input must be odd[/CODE] and factor.log says [CODE]06/25/11 09:32:53 v1.27 @ JANE2, ****************************
06/25/11 09:32:53 v1.27 @ JANE2, Starting factorization of 21374706000067961931609331422841414635601
06/25/11 09:32:53 v1.27 @ JANE2, ****************************
06/25/11 09:32:53 v1.27 @ JANE2, rho: x^2 + 1, starting 1000 iterations on C39
06/25/11 09:32:53 v1.27 @ JANE2, rho: x^2 + 3, starting 1000 iterations on C39
06/25/11 09:32:53 v1.27 @ JANE2, rho: x^2 + 2, starting 1000 iterations on C39
06/25/11 09:32:53 v1.27 @ JANE2, starting MPQS on c(39): 196098220184109742491828728649921235189
06/25/11 09:32:53 v1.27 @ JANE2, n = 39 digits, 131 bits
06/25/11 09:32:53 v1.27 @ JANE2, ==== sieve params ====
06/25/11 09:32:53 v1.27 @ JANE2, factor base: 806 primes (max prime = 13007)
06/25/11 09:32:53 v1.27 @ JANE2, large prime cutoff: 390210 (30 * pmax)
06/25/11 09:32:53 v1.27 @ JANE2, sieve interval: 8 blocks of size 65536
06/25/11 09:32:53 v1.27 @ JANE2, multiplier is 13
06/25/11 09:32:53 v1.27 @ JANE2, trial factoring cutoff at 59 bits
06/25/11 09:32:53 v1.27 @ JANE2, ==== sieving started ====
06/25/11 09:32:54 v1.27 @ JANE2, 939 relations found: 480 full + 459 from 3098 partial, using 12 polys
06/25/11 09:32:54 v1.27 @ JANE2, prp1 = 5
06/25/11 09:32:54 v1.27 @ JANE2, prp21 = 369635504319950164003
06/25/11 09:32:54 v1.27 @ JANE2, prp18 = 530517815232301063
06/25/11 09:32:54 v1.27 @ JANE2, Gauss elapsed time = 0.1250 seconds.
06/25/11 09:32:54 v1.27 @ JANE2,
06/25/11 09:32:54 v1.27 @ JANE2,
[/CODE]
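For readers unfamiliar with the `rho: x^2 + c` lines in these logs: they are Pollard's rho stages, each trying a different polynomial constant c (1, 3, then 2). A minimal illustrative sketch in Python, using Floyd cycle detection; this is not yafu's actual implementation, just the textbook method:

```python
from math import gcd

def pollard_rho(n, c=1, max_iters=100000):
    """Pollard's rho with f(x) = x^2 + c and Floyd cycle detection.
    Returns a nontrivial factor of n, or None on failure for this c."""
    if n % 2 == 0:
        return 2                     # the method needs odd input
    x = y = 2
    for _ in range(max_iters):
        x = (x * x + c) % n          # tortoise: one step
        y = (y * y + c) % n          # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
        if 1 < d < n:
            return d
        if d == n:                   # cycle collapsed; retry with another c
            return None
    return None

def rho_stage(n):
    """Try the same sequence of polynomials the log shows: x^2+1, x^2+3, x^2+2."""
    for c in (1, 3, 2):
        f = pollard_rho(n, c)
        if f:
            return f
    return None
```

For example, `pollard_rho(8051)` finds the factor 97 of 8051 = 83 * 97 after three iterations.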
With yafu V1.26 I got:
[code]>> factor(21374706000067961931609331422841414635601)

factoring 21374706000067961931609331422841414635601

using pretesting plan: normal

div: primes less than 10000
fmt: 1000000 iterations
rho: x^2 + 1, starting 1000 iterations on C39
rho: x^2 + 3, starting 1000 iterations on C39
rho: x^2 + 2, starting 1000 iterations on C39

starting MPQS on c(39): 196098220184109742491828728649921235189

n = 39 digits, 131 bits
==== sieve params ====
factor base: 806 primes (max prime = 13007)
large prime cutoff: 390210 (30 * pmax)
sieve interval: 16 blocks of size 32768
multiplier is 13
trial factoring cutoff at 59 bits
==== sieving in progress ====
1029 relations found: 120 full + 909 from 6313 partial, using 44 polys
QS elapsed time = 0.2964 seconds.
starting block64 gaussian elimination
matrix exhausted
Gauss elapsed time = 0.9204 seconds.
ECM/SIQS ratio was = 0.122483

starting MPQS on c(39): 196098220184109742491828728649921235189

n = 39 digits, 131 bits
==== sieve params ====
factor base: 806 primes (max prime = 13007)
large prime cutoff: 390210 (30 * pmax)
sieve interval: 16 blocks of size 32768
...
[/code] and so on... no stopping!
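The endless restart in that log suggests the driver re-launches MPQS whenever post-processing fails ("matrix exhausted") without bounding the retries. A generic guard against that failure mode, sketched in Python; the names are hypothetical and this is purely an illustration of the pattern, not yafu's code:

```python
def factor_with_retries(n, run_qs, max_restarts=3):
    """Re-run a QS attempt when post-processing fails, but cap the
    restarts so a deterministic failure cannot loop forever.
    run_qs(n) is assumed to return a factor on success, None on failure."""
    for attempt in range(max_restarts):
        f = run_qs(n)
        if f is not None:
            return f
    raise RuntimeError("QS failed %d times on %d; giving up" % (max_restarts, n))
```

With a cap in place, a bug like the one above degrades into a clean error message instead of an infinite loop.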
1.27 works for me.
The SIQS speedup is amazing btw.
Thanks everyone for the testing.
It appears to only be an issue with windows 64k binaries, again. Maybe I should try to get siqs working a little lower in digit count rather than keep relying on my old and tired mpqs implementation for jobs in this size range. Seems to be lacking, a bit.
[QUOTE=bsquared;147041]Just curious, has anyone had a chance to confirm/deny the timings of my SIQS relative to msieve? I'm mostly interested in the performance on 64bit linux systems, core2 or otherwise.
Thanks, - ben.[/QUOTE] Before I make another mistake: you want performance figures for your new 64-bit version when factoring a number, or a set of numbers, under Linux? Is that right?
John
My 64-bit Windows binaries work fine.
Brian
Today, I've used the tune() function of yafu-64k-linux64 1.27 on an Athlon II X4 640 (3 GHz, 64 KB L1 cache, 512 KB L2 cache). The time for NFS and QS now seems to be severely underestimated. For example (Aliquot 563008, index 1431):
[code]$ ./yafu-64k-linux64 -v -v

06/26/11 20:48:47 v1.27 @ athlon2, System/Build Info:
Using GMP-ECM 6.3, Powered by GMP 5.0.1
detected AMD Athlon(tm) II X4 640 Processor
detected L1 = 65536 bytes, L2 = 524288 bytes, CL = 64 bytes
measured cpu frequency ~= 3000.303970

===============================================================
======= Welcome to YAFU (Yet Another Factoring Utility) =======
=======             bbuhrow@gmail.com                   =======
=======     Type help at any time, or quit to quit      =======
===============================================================
cached 78504 primes. pmax = 1000099

>> factor(26414109977872226362484078572433136447675836109909673081832686497082876522105573248107042081)

factoring 26414109977872226362484078572433136447675836109909673081832686497082876522105573248107042081

using pretesting plan: normal

div: primes less than 10000
fmt: 1000000 iterations
rho: x^2 + 1, starting 10000 iterations on C92
rho: x^2 + 3, starting 10000 iterations on C92
rho: x^2 + 2, starting 10000 iterations on C92
***** using tuning data for QS time estimation
[B]***** qs time estimate = 239.299 seconds[/B]
***** using tuning data for GNFS time estimation
***** gnfs time estimate = 1171.45 seconds
pp1: starting B1 = 20K, B2 = gmp-ecm default on C92
pp1: starting B1 = 20K, B2 = gmp-ecm default on C92
pp1: starting B1 = 20K, B2 = gmp-ecm default on C92
***** using tuning data for QS time estimation
***** qs time estimate = 239.299 seconds
***** using tuning data for GNFS time estimation
***** gnfs time estimate = 1171.45 seconds
pm1: starting B1 = 100K, B2 = gmp-ecm default on C92
***** using tuning data for QS time estimation
***** qs time estimate = 239.299 seconds
***** using tuning data for GNFS time estimation
***** gnfs time estimate = 1171.45 seconds
ecm: 25/25 curves on C92 input, at B1 = 2K, B2 = gmp-ecm default
***** using tuning data for QS time estimation
***** qs time estimate = 239.299 seconds
***** using tuning data for GNFS time estimation
***** gnfs time estimate = 1171.45 seconds
***** estimating 90 more curves can be run at 20 digit level
ecm: 92/92 curves on C92 input, at B1 = 11K, B2 = gmp-ecm default
***** using tuning data for QS time estimation
***** qs time estimate = 239.299 seconds
***** using tuning data for GNFS time estimation
***** gnfs time estimate = 1171.45 seconds
***** estimating 200 more curves can be run at 25 digit level
ecm: 200/200 curves on C92 input, at B1 = 50K, B2 = gmp-ecm default
***** using tuning data for QS time estimation
***** qs time estimate = 239.299 seconds
***** using tuning data for GNFS time estimation
***** gnfs time estimate = 1171.45 seconds
***** estimating 3 more curves can be run at p+1 level 2
pp1: starting B1 = 200K, B2 = gmp-ecm default on C92
pp1: starting B1 = 200K, B2 = gmp-ecm default on C92
pp1: starting B1 = 200K, B2 = gmp-ecm default on C92
***** using tuning data for QS time estimation
***** qs time estimate = 239.299 seconds
***** using tuning data for GNFS time estimation
***** gnfs time estimate = 1171.45 seconds
***** estimating 1 more curve can be run at p-1 level 2
pm1: starting B1 = 1M, B2 = gmp-ecm default on C92
***** using tuning data for QS time estimation
***** qs time estimate = 239.299 seconds
***** using tuning data for GNFS time estimation
***** gnfs time estimate = 1171.45 seconds
***** estimating 217 more curves can be run at 30 digit level
ecm: 220/220 curves on C92 input, at B1 = 250K, B2 = gmp-ecm default
***** using tuning data for QS time estimation
***** qs time estimate = 239.299 seconds
***** using tuning data for GNFS time estimation
***** gnfs time estimate = 1171.45 seconds
***** estimating 3 more curves can be run at p+1 level 3
pp1: starting B1 = 2M, B2 = gmp-ecm default on C92
pp1: starting B1 = 2M, B2 = gmp-ecm default on C92
pp1: starting B1 = 2M, B2 = gmp-ecm default on C92
***** using tuning data for QS time estimation
***** qs time estimate = 239.299 seconds
***** using tuning data for GNFS time estimation
***** gnfs time estimate = 1171.45 seconds
***** estimating 1 more curve can be run at p-1 level 3
pm1: starting B1 = 10M, B2 = gmp-ecm default on C92
***** using tuning data for QS time estimation
***** qs time estimate = 239.299 seconds
***** using tuning data for GNFS time estimation
***** gnfs time estimate = 1171.45 seconds
***** estimating 1 more curves can be run at 35 digit level
ecm: 1/1 curves on C92 input, at B1 = 1M, B2 = gmp-ecm default
***** using tuning data for QS time estimation
***** qs time estimate = 239.299 seconds
***** using tuning data for GNFS time estimation
***** gnfs time estimate = 1171.45 seconds

starting SIQS on c92: 26414109977872226362484078572433136447675836109909673081832686497082876522105573248107042081

fb bounds
 small: 1024
 13bit: 512
 14bit: 944
 15bit: 1728
 med: 3200
 large: 21792
 all: 78928
start primes
 13bit: 8219
 14bit: 16369
 15bit: 32749
 med: 65407
 large: 524113
==== sieve params ====
n = 92 digits, 304 bits
factor base: 78928 primes (max prime = 2116027)
single large prime cutoff: 253923240 (120 * pmax)
double large prime range from 44 to 51 bits
double large prime cutoff: 1344201611413435
allocating 13 large prime slices of factor base
buckets hold 2048 elements
sieve interval: 8 blocks of size 65536
polynomial A has ~ 12 factors
using multiplier of 1
using SPV correction of 19 bits, starting at offset 29
using SSE2 for trial division and x128 sieve scanning
using SSE2 for resieving 14-16 bit primes
trial factoring cutoff at 97 bits
==== sieving in progress ( 4 threads): 78992 relations needed ====
==== Press ctrl-c to abort and save state ====
79187 rels found: 22007 full + 57180 from 978447 partial, (986.45 rels/sec)

sieving required 681330 total polynomials
trial division touched 35614754 sieve locations out of 714426286080
[B]QS elapsed time = 1014.9164 seconds.[/B]
==== post processing stage (msieve-1.38) ====
begin with 1001957 relations
reduce to 190451 relations in 10 passes
attempting to read 190451 relations
recovered 190451 relations
recovered 166279 polynomials
attempting to build 79485 cycles
found 79485 cycles in 6 passes
distribution of cycle lengths:
length 1 : 22050
length 2 : 16341
length 3 : 14065
length 4 : 10331
length 5 : 7012
length 6 : 4360
length 7 : 2475
length 9+: 2851
largest cycle: 19 relations
matrix is 78928 x 79485 (20.4 MB) with weight 4698949 (59.12/col)
sparse part has weight 4698949 (59.12/col)
filtering completed in 3 passes
matrix is 73276 x 73340 (18.9 MB) with weight 4362307 (59.48/col)
sparse part has weight 4362307 (59.48/col)
saving the first 48 matrix rows for later
matrix is 73228 x 73340 (16.5 MB) with weight 3853727 (52.55/col)
sparse part has weight 3589758 (48.95/col)
matrix includes 64 packed rows
using block size 21845 for processor cache size 512 kB
commencing Lanczos iteration
memory use: 13.1 MB
lanczos halted after 1159 iterations (dim = 73222)
recovered 14 nontrivial dependencies
Lanczos elapsed time = 33.8700 seconds.
Sqrt elapsed time = 0.3700 seconds.
SIQS elapsed time = 1049.1718 seconds.
ECM/SIQS ratio was = 0.093964 [B](I raised the ecm_qs_ratio in yafu.ini to 0.40)[/B]

Total factoring time = 1147.9771 seconds

***factors found***

PRP60 = 476763437728058194119203006755177157024995248033145667758927
PRP32 = 55402969035848360950133428853903 (a fairly lackluster factor that may have been found by a bit more ECM work)

ans = 1

>> quit
[/code]I'll PM you the output from tune, as it's way over the post size limit.

EDIT: but it turns out the size limit for PMs is lower than that for posts, so that won't work :grin: I've sent an e-mail instead :smile:
[QUOTE=debrouxl;264715]Today, I've used the tune() function of yafu-64k-linux64 1.27 on an Athlon II X4 640 (3 GHz, 64 KB L1 cache, 512 KB L2 cache). The time for NFS and QS now seems to be severely underestimated. For example (Aliquot 563008, index 1431):
[/QUOTE] For the benefit of others who haven't seen our emails... the problem is that the QS/NFS time estimation code expects tune() to have been run with one thread. In the next version I'll add a check for this, but for now, be sure to run tune() with one thread.
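As background on where such estimates typically come from: QS running time is commonly modeled by the function L(n) = exp(sqrt(ln n * ln ln n)), so one timed tuning run can be extrapolated to other inputs by scaling. A hedged sketch of that idea in Python; the linear-in-L model and names are illustrative, not yafu's actual estimator, and (per the post above) the measurement and the prediction must come from the same configuration, including thread count:

```python
from math import exp, log, sqrt

def qs_L(n):
    """Heuristic QS complexity measure L(n) = exp(sqrt(ln n * ln ln n))."""
    ln_n = log(n)
    return exp(sqrt(ln_n * log(ln_n)))

def estimate_qs_seconds(n, tuned_n, tuned_seconds):
    """Extrapolate QS time for n from one tuning measurement (tuned_n,
    tuned_seconds), assuming time scales linearly with L(n).
    A crude but common model; real tuning may fit more parameters."""
    return tuned_seconds * qs_L(n) / qs_L(tuned_n)
```

If the tuning run used 4 threads but the estimator assumes 1, `tuned_seconds` is roughly 4x too small, which is exactly the kind of underestimate reported above.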
Sorry about this, but factoring 5192296858534354858656013527078721 (34 digits) causes yafu to crash. I ran this through 1.27, both yafu-32k-win32 and yafu-64k-win32 with the same result.
If I say factor (2147483647*5192296858534354858656013527078721) it works.
Jane
[QUOTE=BudgieJane;264838]Sorry about this, but factoring 5192296858534354858656013527078721 (34 digits) causes yafu to crash. I ran this through 1.27, both yafu-32k-win32 and yafu-64k-win32 with the same result.
If I say factor (2147483647*5192296858534354858656013527078721) it works. Jane[/QUOTE] No need to apologize... this is a fantastic test case. It uncovered a bug that has been sitting there for 3+ years without being found, despite continual usage of the routine that it was in - that's how rare the input conditions are that cause the crash. It seems to be fixed now, but I'm wrapping in several other minor things before releasing a bug fix version.
[QUOTE=bsquared;264886]No need to apologize... this is a fantastic test case. It uncovered a bug that has been sitting there for 3+ years without being found, despite continual usage of the routine that it was in - that's how rare the input conditions are that cause the crash.
It seems to be fixed now, but I'm wrapping in several other minor things before releasing a bug fix version.[/QUOTE] A similar bug has just been fixed in msieve. Strange how two rare but longstanding bugs were fixed so close together. :smile: