Agreed - it's no doubt faster to use your native rho. I don't mind the discussion, but didn't mean to bring up serious consideration at abandoning your rho... I just couldn't resist the jab at ya... :smile:
[QUOTE=bsquared;175394]Agreed - it's no doubt faster to use your native rho. I don't mind the discussion, but didn't mean to bring up serious consideration at abandoning your rho... I just couldn't resist the jab at ya... :smile:[/QUOTE]
Want some of this, huh? :flex: I gotta warn you though, I jabber all day long.
Here is another rho error in yafu v1.10:
[code]
*** 1414976706866028275555956693125070 (34 digits)
*** prp1 = 2
*** prp1 = 5
*** prp3 = 181
*** prp7 = 1061087
*** prp6 = 545429
Cofactor 1350770280280798189 (19 digits)
c19: running rho...
c19: running qs (yafu)...
06/20/09 01:41:15 v1.10 @ HOME, Starting SQUFOF on 1350770280280798189
06/20/09 01:41:15 v1.10 @ HOME, prp1 = 3
06/20/09 01:41:15 v1.10 @ HOME, C18 = 450256760093599396
Cofactor 1350770280280798189 (19 digits)
c19: running rho...
c19: running qs (yafu)...
06/20/09 01:41:15 v1.10 @ HOME, Starting SQUFOF on 1350770280280798189
06/20/09 01:41:15 v1.10 @ HOME, prp1 = 3
06/20/09 01:41:15 v1.10 @ HOME, C18 = 450256760093599396
Cofactor 1350770280280798189 (19 digits)
... and so on!
[/code]
So 1350770280280798189 is not divisible by 3; it's 264035689 * 5115862501!
Ben, yafu just crashed on this:
[code]
factor.log:
07/15/09 15:18:57 v1.10 @ FLUFFPUTER, starting SIQS on c55: 4687042693576164875874398554671921091471711359662036849
07/15/09 15:18:57 v1.10 @ FLUFFPUTER, random seeds: 1573986695, 412485800
07/15/09 15:18:58 v1.10 @ FLUFFPUTER, ==== sieve params ====
07/15/09 15:18:58 v1.10 @ FLUFFPUTER, n = 55 digits, 182 bits
07/15/09 15:18:58 v1.10 @ FLUFFPUTER, factor base: 2320 primes (max prime = 43261)
07/15/09 15:18:58 v1.10 @ FLUFFPUTER, single large prime cutoff: 2379355 (55 * pmax)
07/15/09 15:18:58 v1.10 @ FLUFFPUTER, sieve interval: 4 blocks of size 32768
07/15/09 15:18:58 v1.10 @ FLUFFPUTER, polynomial A has ~ 7 factors
07/15/09 15:18:58 v1.10 @ FLUFFPUTER, using multiplier of 1
07/15/09 15:18:58 v1.10 @ FLUFFPUTER, using small prime variation correction of 18 bits
07/15/09 15:18:58 v1.10 @ FLUFFPUTER, using SSE2 for trial division and x32 sieve scanning
07/15/09 15:18:58 v1.10 @ FLUFFPUTER, trial factoring cutoff at 61 bits
07/15/09 15:18:58 v1.10 @ FLUFFPUTER, ==== sieving started ====
07/15/09 15:18:59 v1.10 @ FLUFFPUTER, sieve time = 0.5150, relation time = 0.6960, poly_time = 0.0890
07/15/09 15:18:59 v1.10 @ FLUFFPUTER, 2603 relations found: 1273 full + 1330 from 11326 partial, using 3144 polys (49 A polys)
07/15/09 15:18:59 v1.10 @ FLUFFPUTER, on average, sieving found 4.01 rels/poly and 9209.80 rels/sec
07/15/09 15:18:59 v1.10 @ FLUFFPUTER, trial division touched 116561 sieve locations out of 824180736
07/15/09 15:18:59 v1.10 @ FLUFFPUTER, ==== post processing stage (msieve-1.38) ====
[/code]
I've tried recreating it without success, but I can't figure out how to specify both random seeds, so maybe that's the issue. The siqs.dat can be found at [url]http://mklasson.com/yafu_crash2.zip[/url] should you want it.
Thanks. That data does repeatably cause a crash, during filtering, of all places, so it should be enough for me to figure out what's going on... when I get time. I haven't had much of that lately for working on yafu, but I'll try to get these bugs fixed soon.
Just a minor thing, but "rels/sec" goes completely mad when resuming from an earlier run - it counts the relations from the beginning and not the restart, and comes up with incredibly high speeds.
[quote=10metreh;183261]Just a minor thing, but "rels/sec" goes completely mad when resuming from an earlier run - it counts the relations from the beginning and not the restart, and comes up with incredibly high speeds.[/quote]
Yep, this is somewhere on my list already... I figured it was pretty minor, but it'll get fixed sometime.
Yafu 1.11
After a long period of good weather and long work hours, I finally was able to spend some more time with this.
Yafu-1.11 is now available [URL="http://bbuhrow.googlepages.com/home"]here[/URL].

This version is actually a bit slower than the last by a few percent, but it allows for multi-threading the sieving in SIQS. I've only done a few experiments, so any input on the scalability people see would be interesting. On an 8-core Xeon box, I've seen about 1.9x with 2 threads, 3.2x with 4 threads, and 5.1x with 8 threads. Also let me know if it seems significantly slower with 1 thread on any architecture, or if anything breaks. There have been significant changes to the code to make this happen, and while I've done some testing, I'm not 100% sure I didn't break something somewhere I didn't look.

[edit] In case you don't want to RTFM: use the "-threads" switch on the command line for threading.

- ben.
[QUOTE=bsquared;190259]This version is actually a bit slower than the last by a few percent, but it allows for multi-threading the sieving in SIQS. I've only done a few experiments, so any input on scalability that people see would be interesting. On an 8 core xeon box, I've seen about 1.9x with 2 threads, 3.2x with 4 threads, and 5.1x with 8 threads. [/QUOTE]
How did you do the threading: OpenMP or pthreads? Pthreads isn't supported directly in Visual Studio, which might make a Win64 build difficult, unless you used OpenMP, which has been supported by both gcc and VS for some time now. Jeff.
[quote=Jeff Gilchrist;190265]How did you do threading, using OpenMP or pthreads? Pthreads isn't supported directly in Visual Studio, which might make a Win64 build difficult, unless you used OpenMP, which has been supported by both gcc and VS for some time now.
Jeff.[/quote] I used pthreads and the WinAPI calls (pre-processor directive controlled). The win32 binaries were created with Visual Studio EE 08, so I hope they also work with your Professional edition. Sometime this weekend I hope to get the source posted.
Ben, are you going to do the c149 2,1103+ by parallelized SIQS, then?
:whistle: