Dear Brian,
Two RSALS tasks were started, one using the 32-bit version and the other using the 64-bit version. Both reached the LA phase and are now running smoothly.

Carlos
[QUOTE=pinhodecarlos;277987]Two RSALS tasks were started, one using the 32 bits version and the other using the 64 bits version. Both reached LA phase and now running smoothly.[/QUOTE]
Just to confirm, is that using gzip-compressed relation files? Does it still work if the files are not compressed?
[QUOTE=Brian Gladman;277901]Thanks for the offer. Can you send me a private message to give me an email address to send the binary to? I also need to know whether you will use the 32 or 64 bit version and whether you want the normal or the CUDA enabled one.[/QUOTE]
First problem: I updated to SVN 665 and tried to load the non-CUDA vc10 solution, and got this error:

D:\Data\Jeff_Documents\Visual Studio 2010\Projects\msieve-svn665\build.vc10\zlib\zlib.vcxproj : [B]error : Project "D:\Data\Jeff_Documents\Visual Studio 2010\Projects\msieve-svn665\build.vc10\zlib\zlib.vcxproj" could not be found.[/B]

I have a zlib directory in the root of trunk but nothing in build.vc10, so I am not sure if there is a link missing or you forgot to add that directory and project file to SVN?

Jeff.
[QUOTE=Jeff Gilchrist;278058]First problem, I updated to SVN 665 and tried to load the non-CUDA vc10 solution and got this error:
D:\Data\Jeff_Documents\Visual Studio 2010\Projects\msieve-svn665\build.vc10\zlib\zlib.vcxproj : [B]error : Project "D:\Data\Jeff_Documents\Visual Studio 2010\Projects\msieve-svn665\build.vc10\zlib\zlib.vcxproj" could not be found.[/B]

I have a zlib directory in the root of trunk but nothing in build.vc10, so not sure if there is a link missing or you forgot to add that directory and project file to SVN? Jeff.[/QUOTE]

I'm sorry, Jeff - I forgot to put the ZLIB build project in the msieve SVN. I've just done this now.
I noticed in the SVN log a recent change for making the root sieve run longer, which made me curious :smile:
So I gave msieve's polynomial selection a large input, RSA-704 = 74037563479561712828046796097429573142593188889231289084936232638972765034028266276891996419625117843995894330502127585370118968098286733173273108930900552505116877063299072396380786710086096962537934650563796359. Several hours of msieve -np1 produced (among other hits) the following 11 stage 1 hits with leading coefficient 12:

[code]12 22040781136650568780279 1438978066838703563988705657821873735469607
12 31947161119025617178549 1438978066838912259833896535126718769144708
12 20801030658983347055789 1438978066838809567675467436006554512011856
12 27436411440662280626339 1438978066838839531231086232246183407326738
12 31351870896059593272151 1438978066838678563312095416301039040179460
12 25845594875685324100163 1438978066838937859565130672382645578666998
12 19284487078999051226999 1438978066838721002401793330988490501011700
12 25322960335501124527121 1438978066838693039663881637936072546869718
12 22319746157948841379547 1438978066838739013213932510370077390883256
12 29499451924459034603723 1438978066838684237965410141046285761071174
12 22298794956986461065439 1438978066838885432614417842716273187929700[/code]

Running msieve -np2 on those took ~2h30 of CPU time on a Core 2 (Duo) T7200, producing no fewer than 712 polynomials.
The 10 polynomials with the highest Murphy E values are:

[code]$ grep " e " msieve.dat.p | sort -k 7 | tail
# norm 8.051574e-22 alpha -6.736463 e 1.935e-16 rroots 5
# norm 7.942465e-22 alpha -6.772168 e 1.941e-16 rroots 5
# norm 7.830487e-22 alpha -6.809645 e 1.948e-16 rroots 5
# norm 7.927052e-22 alpha -6.824428 e 1.958e-16 rroots 5
# norm 7.950143e-22 alpha -6.830858 e 1.961e-16 rroots 5
# norm 7.968292e-22 alpha -6.853628 e 1.969e-16 rroots 5
# norm 8.098272e-22 alpha -6.873070 e 1.980e-16 rroots 5
# norm 8.302172e-22 alpha -6.940746 e 2.012e-16 rroots 5
# norm 8.312664e-22 alpha -6.970431 e 2.022e-16 rroots 5
# norm 8.661624e-22 alpha -7.019718 e 2.054e-16 rroots 5[/code]

I'll now make msieve boil 99 other stage 1 hits with leading coefficient < 100, so as to have a sample of a slightly more scientific size. If the rate of -np1 vs. -np2 holds on that larger sample, I'll have the results tomorrow evening (European time).
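As an aside, the lexical `sort -k 7` above happens to work because all the E values share the same exponent; a numeric comparison is safer in general. A minimal Python sketch of the same "pick the best polynomial by Murphy E" step (the sample lines are copied from the stage 2 output above):

```python
# Pick the best polynomial summary line from an msieve .dat.p file
# by its Murphy E value, comparing numerically rather than lexically.
# Sample lines copied from the stage 2 output quoted above.
lines = [
    "# norm 8.051574e-22 alpha -6.736463 e 1.935e-16 rroots 5",
    "# norm 8.302172e-22 alpha -6.940746 e 2.012e-16 rroots 5",
    "# norm 8.661624e-22 alpha -7.019718 e 2.054e-16 rroots 5",
]

def murphy_e(line):
    toks = line.split()
    # the value follows the standalone "e" token
    return float(toks[toks.index("e") + 1])

best = max(lines, key=murphy_e)
print(best)
```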
The 99 hits were boiled in under 1h30, producing 3792 polynomials. The 10 best:
[code]# norm 1.002092e-21 alpha -7.387111 e 2.136e-16 rroots 3
# norm 9.698142e-22 alpha -7.546286 e 2.146e-16 rroots 5
# norm 9.838061e-22 alpha -7.475656 e 2.154e-16 rroots 3
# norm 1.046654e-21 alpha -7.453788 e 2.177e-16 rroots 1
# norm 9.990670e-22 alpha -7.632101 e 2.188e-16 rroots 5
# norm 1.042611e-21 alpha -7.531854 e 2.199e-16 rroots 1
# norm 9.996212e-22 alpha -7.767686 e 2.207e-16 rroots 3
# norm 1.005074e-21 alpha -7.619491 e 2.212e-16 rroots 3
# norm 1.141891e-21 alpha -7.635275 e 2.282e-16 rroots 3
# norm 1.149194e-21 alpha -7.821295 e 2.348e-16 rroots 1[/code]
[QUOTE=Jeff Gilchrist;278057]Just to confirm, that is using gzip compressed relation files? Does it still work if the files are not compressed?[/QUOTE]
Sorry for the late reply. I always use the .dat file uncompressed. I think I had a CPU issue, but I'm glad this ZLIB problem came up. Sorry.
I am keen to test it both with and without compression on Windows, and to check that the Linux version is still OK.
[QUOTE=Brian Gladman;278109]I am keen to test it both with and without compression on Windows and to test that the Linux version is still OK.[/QUOTE]
OK, so I have grabbed SVN 666 (yikes) and had no problems compiling it for 64-bit Windows. I tried a small 100-digit number, once with msieve.dat and once with msieve.dat.gz, and both worked fine. Using the gzipped relations was actually faster overall:

elapsed time 00:06:17 [B]msieve.dat.gz[/B]
elapsed time 00:07:15 msieve.dat

So on Windows it seems to work fine. I haven't had a chance to try the Linux version yet.
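For the record, the reported timings work out to roughly a 13% saving; a trivial sketch of the arithmetic:

```python
# Quick check of the speedup reported above:
# 00:06:17 with msieve.dat.gz versus 00:07:15 with plain msieve.dat.
def seconds(hms):
    h, m, s = (int(x) for x in hms.split(":"))
    return 3600 * h + 60 * m + s

t_gz, t_plain = seconds("00:06:17"), seconds("00:07:15")
saving = 100 * (t_plain - t_gz) / t_plain
print(f"gzip run was {saving:.0f}% faster")
```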
Thanks Jeff (and others who have helped in testing) - it's nice to see that it is also faster.
Brian |
Nice indeed. On commodity hard drives (not RAID or NAS), this was expected: smaller files mean faster read times, with a reasonably small CPU overhead compared to bzip2 or even more aggressive compression libraries. On RAID it could be par for the course. But the .dat file being almost half the size is the invariable benefit. All the other files (.mat, .cyc) won't compress well; they are nearly random data.
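The point about near-random data is easy to demonstrate with zlib itself. A small sketch (the inputs are synthetic stand-ins, not real msieve files):

```python
import os
import zlib

# Text-like, repetitive ASCII (like relation .dat files) compresses well;
# near-random bytes (like .mat/.cyc matrix data) barely compress at all.
text_like = b"12 22040781136650568780279 1438978066838703563988705657\n" * 1000
random_like = os.urandom(len(text_like))

for name, data in (("text-like", text_like), ("random", random_like)):
    packed = zlib.compress(data, 6)
    print(f"{name}: {len(data)} -> {len(packed)} bytes")
```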
I'll check Linux now, but I wouldn't expect any surprises (except for conditionals being crossed over).