Henry, try changing the "g"(b) to "m"(b) in the asm block, and repeating the exercise in common/mp.c on line 368. It compiles for me once I do that.
Alex, the modadd and modsub routines have already needed changes in this regard; I guess I have to do the same with any asm block that uses doubling multiply or halving divide. I suppose I should also switch to doing daily compiles with gcc 4.x, since just about everybody except me uses that now (MinGW will switch over to gcc 4.x someday, but someday has already been years).
[quote=jasonp;164178]That code did change in v1.39 but it doesn't cause problems in MinGW for me. Could you run 'gcc -v' and 'uname -a' ?[/quote]
[code]
david@Ubuntu8Jimmy:~/Desktop/msieve-1.39$ gcc -v
Using built-in specs.
Target: i486-linux-gnu
Configured with: ../src/configure -v --enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --with-gxx-include-dir=/usr/include/c++/4.2 --program-suffix=-4.2 --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --enable-mpfr --enable-targets=all --enable-checking=release --build=i486-linux-gnu --host=i486-linux-gnu --target=i486-linux-gnu
Thread model: posix
gcc version 4.2.4 (Ubuntu 4.2.4-1ubuntu3)
david@Ubuntu8Jimmy:~/Desktop/msieve-1.39$ uname -a
Linux Ubuntu8Jimmy 2.6.24-19-generic #1 SMP Wed Jun 18 14:43:41 UTC 2008 i686 GNU/Linux
[/code]
It worked with the changes in your last post.
I have just remembered that I never posted the binary I compiled.
It will have to wait until tomorrow, unfortunately, as I am going out soon.
SNFS and GGNFS
I am running a 137 digit SNFS with GGNFS. I planned about 3.5 million relations. Actually, I reached 5637135 relations but the LA doesn't start.
Here is my poly:
[code]
n: 27947058444735539012965143631675640717984624676970856746909322865014587987121750699821648616857592608427104241739512228516510591633269
m: 100000000000000000000000000000000000
deg: 5
c5: 2
c0: 205
skew: 2.52
type: snfs
lss: 1
rlim: 5800000
alim: 5800000
lpbr: 28
lpba: 28
mfbr: 52
mfba: 52
rlambda: 2.5
alambda: 2.5
[/code]
and my .job file:
[code]
n: 27947058444735539012965143631675640717984624676970856746909322865014587987121750699821648616857592608427104241739512228516510591633269
m: 100000000000000000000000000000000000
c5: 2
c0: 205
skew: 2.52
rlim: 5800000
alim: 4299999
lpbr: 28
lpba: 28
mfbr: 52
mfba: 52
rlambda: 2.5
alambda: 2.5
q0: 4300000
qintsize: 100000
#q1:4400000
[/code]
I started with q0=2900000 and factLat.pl, adding 100,000 q each run. Sadly enough, when procrels starts, it soon aborts. Here is the trace:
[code]
Cygni_61@linux-cygni61:~/ggnfs/ggnfs/msieve> perl factLat.pl 44449_174
-> ___________________________________________________________
-> | This is the factLat.pl script for GGNFS.                 |
-> | This program is copyright 2004, Chris Monico, and subject|
-> | to the terms of the GNU General Public License version 2.|
-> |__________________________________________________________|
-> This is client 1 of 1
-> Working with NAME=44449_174...
-> SNFS_DIFFICULTY is about 175.3010299956639811952137388947244930268.
-> Selected default factorization parameters for 175 digit level.
-> Selected lattice siever: ../bin/gnfs-lasieve4I13e
-> No parameter change detected. Resuming.
-> minimum number of FF's: 896819
-> Q0=4300000, QSTEP=100000.
-> makeJobFile(): q0=4300000, q1=4400000.
-> makeJobFile(): Adjusted to q0=4300000, q1=4400000.
-> Lattice sieving q-values from q=4300000 to 4400000.
=> "../bin/gnfs-lasieve4I13e" -k -o spairs.out -v -n0 -a 44449_174.job
FBsize 303720+0 (deg 5), 399992+0 (deg 1)
total yield: 155862, q=4340773 (0.03346 sec/rel)
warning: too many relations in mpqs
total yield: 384053, q=4400021 (0.03320 sec/rel)
6588 Special q, 40185 reduction iterations
reports: 580388540->80168677->74483181->41296493->25317677->19607599
Number of relations with k rational and l algebraic primes for (k,l)=:
Total yield: 384053
0/0 mpqs failures, 22009/25176 vain mpqs
milliseconds total: Sieve 4901210 Sched 0 medsched 1440070
TD 1701820 (Init 34060, MPQS 141110) Sieve-Change 3176340
TD side 0: init/small/medium/large/search: 28150 143910 52280 114750 556330
sieve: init/small/medium/large/search: 64550 539660 57970 1603180 81490
TD side 1: init/small/medium/large/search: 34780 240850 50790 129780 306620
sieve: init/small/medium/large/search: 77920 545920 58260 1614710 257550
=> "cat" spairs.add >> spairs.out
=> "../bin/procrels" -fb 44449_174.fb -prel rels.bin -newrel spairs.out
 __________________________________________________________
| This is the procrels program for GGNFS.                  |
| Version: 0.77.1-20060513-nocona                          |
| This program is copyright 2004, Chris Monico, and subject|
| to the terms of the GNU General Public License version 2.|
|__________________________________________________________|
done. Monic polynomial: T=3280 + 1X^5
Obtained integral basis:
W = 8 0 0 0 0
    0 8 0 0 0
    0 0 4 0 0
    0 0 0 2 0
    0 0 0 0 1
denominator = 8
Checking file rels.bin.0 ...
Largest prel file size is 0 versus max allowed of 128000000.
Warning: Could not stat processed file rels.bin.0. Is this the first run?
New file is 576.84794MB.
New file appears to have 5637135 relations.
Building (a,b) hash table...0..
makeABList() Failed to open rels.bin.0 for read!
makeABLookup() : Sorting abList...Done.
Before processing new relations, there are 0 total.
Return value 11. Terminating...ns from spairs.out... (at 18122.33 rels/sec)
Cygni_61@linux-cygni61:~/ggnfs/ggnfs/msieve>
[/code]
Where do I go wrong? :redface::down:

Luigi
[quote=ET_;165129]
Where do I go wrong? :redface::down: Luigi[/quote] I can't comment on the procrels errors, but using lpbr = lpba = 28 you'll need quite a bit more than 3.5 million relations. A rule of thumb is that you'll need about [code]0.8 * (pi(2^lpbr) + pi(2^lpba)) ≈ 1.6 * (2^28 / ln(2^28)) ≈ 22 million[/code] relations, plus say 15% more to account for duplicates.
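For anyone wanting to plug other bounds into that rule of thumb, it is easy to script. This is just a sketch of my own; the function name and the 15% duplicate allowance are my choices, not GGNFS or msieve defaults:

```python
import math

def estimated_relations(lpbr, lpba, dup_factor=1.15):
    """Rough count of relations needed for NFS filtering, by the rule
    of thumb 0.8 * (pi(2^lpbr) + pi(2^lpba)), approximating the prime
    counting function pi(x) by x / ln(x), then inflated by ~15% to
    account for expected duplicate relations."""
    pi = lambda x: x / math.log(x)
    return dup_factor * 0.8 * (pi(2.0 ** lpbr) + pi(2.0 ** lpba))

# lpbr = lpba = 28, as in the .job file above: ~22M unique, ~25M raw
print(estimated_relations(28, 28) / 1e6, "million relations")
```

With 28-bit large prime bounds on both sides this lands around 22 million unique relations, consistent with the figure quoted in the post.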
[QUOTE=ET_;165129]
Where do I go wrong? :redface::down: Luigi[/QUOTE]Never having run that many relations through procrels at one time, I have an idea procrels is the problem. I would bet that >5.6 million [B]all at once[/B] is choking it.... Can you try splitting the file up into more bite-sized chunks? Or just feed the file to msieve. Procrels is only used by the factLat.pl script to eliminate duplicates and sort them into buckets (the "rels.bin.*" files). Msieve doesn't need the relations fed to it that way....
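One way to try the "bite-sized chunks" suggestion is to stream the big relation file into fixed-size pieces and feed them in one at a time. A minimal sketch; the helper name, the output naming scheme, and the chunk size are mine, not anything GGNFS expects:

```python
import itertools

def split_relations(path, chunk_size=1_000_000):
    """Split a relation file into path.0, path.1, ..., each holding at
    most chunk_size lines, so each piece can be handed to procrels (or
    msieve) separately.  Streams the input rather than loading a
    ~576 MB file into memory all at once."""
    nfiles = 0
    with open(path) as src:
        while True:
            chunk = list(itertools.islice(src, chunk_size))
            if not chunk:
                break
            with open(f"{path}.{nfiles}", "w") as out:
                out.writelines(chunk)
            nfiles += 1
    return nfiles
```

For example, `split_relations("spairs.out")` would turn the 5.6M-relation file into six pieces of at most a million relations each.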
[quote=ET_;165129]I am running a 137 digit SNFS with GGNFS. I planned about 3.5 million relations. Actually, I reached 5637135 relations but the LA doesn't start.
Where do I go wrong? :redface::down: Luigi[/quote] Be aware that the difficulty of an SNFS number is based on the value of the polynomial. With this number, 2*m^5+205 is a 176 digit number. This means the number has an SNFS difficulty of 176 digits. This will be about as hard as a 125 digit GNFS. There is an unreserved number (as of 5 minutes ago) which has SNFS difficulty 142, and I would have taken it had I not been working on an aliquot sequence. That would be [U]much[/U] easier. |
[QUOTE=10metreh;165137]Be aware that the difficulty of an SNFS number is based on the value of the polynomial. With this number, 2*m^5+205 is a 176 digit number. This means the number has an SNFS difficulty of 176 digits. This will be about as hard as a 125 digit GNFS. There is an unreserved number (as of 5 minutes ago) which has SNFS difficulty 142, and I would have taken it had I not been working on an aliquot sequence. That would be [U]much[/U] easier.[/QUOTE]
I took the values from the site held by Makoto Kamada. His site said the number could be finished in 4 days :-( and for my first SNFS work it seemed affordable. Oh, well, I will go on. :smile: Luigi
[QUOTE=schickel;165134]Never having run that many relations through procrels at one time, I have an idea procrels is the problem. I would bet that >5.6 million [B]all at once[/B] is choking it....
Can you try splitting the file up into more bite-sized chunks? Or just feed the file to msieve. Procrels is only used by the factLat.pl script to eliminate duplicates and sort them into buckets (the "rels.bin.*" files). Msieve doesn't need the relations fed to it that way....[/QUOTE] Well, I had that same message every 100000 q, so I thought there was something missing in my environment. As for Msieve, can it work with SNFS relations? If so, I will use the nfs2ms.pl script after the sieving phase. Luigi
[QUOTE=bsquared;165133]I can't comment on the procrels errors, but with using lpbr = lpba = 28 you'll need quite a bit more than 3.5 million relations.
A rule of thumb is that you'll need about [code]0.8 * (pi(2^lpbr) + pi(2^lpba)) ≈ 1.6 * (2^28 / ln(2^28)) ≈ 22 million[/code] relations, plus say 15% more to account for duplicates.[/QUOTE] I will reach 22 million relations, testing with Msieve after 15M, 18M and 21M, no problems. Thanks for the hint; I just took the .poly from Kamada's site. It seems that I'll have to improve my (low) knowledge of setting up better values. Links (apart from MersenneForum)? :rolleyes: Luigi
1 Attachment(s)
[quote=ET_;165140]I will reach 22 million relations, testing with Msieve after 15M, 18M and 21M, no problems.
Thanks for the hint; I just took the .poly from Kamada's site. It seems that I'll have to improve my (low) knowledge of setting up better values. Links (apart from MersenneForum)? :rolleyes: Luigi[/quote] I have attached def-par.txt, which has estimates for parameters for C70-C140 GNFS and S100-S175 SNFS. Bear in mind that it is not exact, and for a GNFS C140 you might want to do some parameter optimization.