#1
Bamboozled!
"πΊππ·π·π"
May 2003
Down not across
10,753 Posts
I've just spotted something very curious which I can't explain after investigating.
Background information: I'm using factMsieve.pl on two machines, one with six cores and the other with eight. The only difference between the two Perl scripts is that one has $NUM_CPUS=6 and the other $NUM_CPUS=8. The six-core machine is running ../factMsieve.pl c738.poly 2 2 & and the other ../factMsieve.pl c738.poly 1 2 &, and, of course, each machine has an identical copy of the configuration files (fb, poly and ini). The Perl script correctly allocates interleaved ranges of special-q, sized appropriately for each system, and the ggnfs diagnostic output indicates that each thread on each machine is sieving the correct, non-overlapping range. So all appears to be working perfectly and there should be nothing to worry about. Except consider this output from the 8-core system:

Code:
=> "cat" spairs.out >> c738.dat
Found 10576089 relations, need at least 92944917 to proceed.
-> Q0=31200001, QSTEP=500000.
-> makeJobFile(): q0=31700000, q1=32200000.
-> makeJobFile(): Adjusted to q0=31700000, q1=32200000.
-> Lattice sieving rational q-values from q=31700000 to 32200000.
=> "../bin//gnfs-lasieve4I14e" -k -o spairs.out.T1 -v -n1 -r c738.job.T1

and this from the six-core system (client 2):

Code:
=> "cat" spairs.out2 >> spairs.add.2
-> Q0=27700001, QSTEP=500000.
-> makeJobFile(): q0=28200000, q1=28700000.
-> makeJobFile(): Adjusted to q0=28200000, q1=28700000.
-> Lattice sieving rational q-values from q=28200000 to 28700000.
=> "..//gnfs-lasieve4I14e" -k -o spairs.out2.T1 -v -n2 -r c738.job.2.T1
# Deletia ...
wc spairs.add.2
31727093 31727093 3319325527 spairs.add.2

Any ideas?
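For reference, the interleaving I'd expect from the client-n-of-m mechanism can be sketched as follows. This is a hypothetical reconstruction, not factMsieve.pl's actual code; the q0 and QSTEP values are taken from the output above, and the function name is mine.

```python
# Hypothetical sketch of a client-n-of-m special-q allocator.
# factMsieve.pl's real logic may differ; this just shows the intended
# interleaving: client 1 of 2 takes blocks 0, 2, 4, ... and client 2
# takes blocks 1, 3, 5, ..., so the ranges never overlap.
def q_range(q0, qstep, client, num_clients, round_no):
    """(start, end) of the special-q block for one sieving round."""
    block = round_no * num_clients + (client - 1)
    start = q0 + block * qstep
    return start, start + qstep

# First two rounds for each of two clients:
for client in (1, 2):
    for round_no in (0, 1):
        print(client, q_range(31200001, 500000, client, 2, round_no))
```

If both scripts compute their blocks this way from the same q0, no special-q value is ever sieved twice, which is why duplicate relations on this scale would be surprising.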
#2

"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
947710 Posts
Different siever binaries?
#3
Bamboozled!
"πΊππ·π·π"
May 2003
Down not across
10,753 Posts
Not as far as I know. The paths on each machine are different for hysterical reasons. Both are 64-bit sievers, and both were built from source with $CFLAGS appropriate for their architectures (Xeon and Phenom II). This is the first time I've ever run multi-system multi-core using the client n-of-m mechanism, but in the past I have run the two machines on the same factorization by hand-choosing disjoint ranges of special-q, without ever seeing this asymmetry.

Still mysterious. Perhaps very close examination of the relations themselves will turn up something.
#4
Sep 2009
2·1,039 Posts
The screen output from the sieves should be something like:
Code:
gnfs-lasieve4I12e (with asm64): L1_BITS=15, SVN $Revision$
FBsize 52010+0 (deg 5), 63950+0 (deg 1)
total yield: 84447, q=660001 (0.00211 sec/rel)
1501 Special q, 8512 reduction iterations
reports: 242371341->26536062->22669357->5220418->4101959->3636891
Number of relations with k rational and l algebraic primes for (k,l)=:
Total yield: 84447
milliseconds total: Sieve 64420 Sched 0 medsched 41390 TD 35820 (Init 3600, MPQS 9720) Sieve-Change 36340
TD side 0: init/small/medium/large/search: 720 1340 2110 1960 4220
   sieve: init/small/medium/large/search: 2850 12670 2370 9580 4270
TD side 1: init/small/medium/large/search: 1150 2520 2570 2270 3400
   sieve: init/small/medium/large/search: 3100 13470 2490 9900 3720

The 8-core system has a larger Q0, which could produce a lower yield, but probably not three times lower. Are they both working on their own drives, or on a shared drive on the network? With a shared drive the master system should be gathering relations from both systems.

I've written later versions of factMsieve.pl designed to let several systems work in a shared directory. The biggest benefit is that they can share polynomial searching, but it also lets systems with different speeds easily co-ordinate sieving. I posted them in the Factoring Projects forum: http://mersenneforum.org/showthread.php?t=15662&page=2

Chris
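Incidentally, the two counts quoted in the first post differ by almost exactly a factor of three, which is what makes the asymmetry look systematic rather than a yield effect. A quick sanity check on those numbers:

```python
# Relation counts quoted earlier in the thread.
eight_core = 10576089   # "Found 10576089 relations" on the 8-core box
six_core = 31727093     # line count of spairs.add.2 on the 6-core box

ratio = six_core / eight_core
print(f"6-core file is {ratio:.3f}x the 8-core count")
```

A ratio that close to an integer suggests the same relations being written (or gathered) multiple times, not merely a slower or less productive sieving range.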
#5
Bamboozled!
"πΊππ·π·π"
May 2003
Down not across
10,753 Posts
The two factMsieve.pl scripts claimed to have found enough relations between them while I was away at the weekend. Filtering indicated 60M duplicates and 29M uniques after 0.7M free relations had been added. Although the distribution of dups hasn't yet been analysed, I strongly suspect the second client of the two, the one fired up with 6 cores.

Not sure whether to analyse more deeply or to write it off to experience. To be fair, the script does say that the multi-host mechanism isn't known to be highly reliable.
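One cheap way to look at the duplicate distribution without a full filtering run: GGNFS relation files hold one relation per line, so counting repeated lines across the clients' files gives a first-order duplicate count. A sketch with tiny stand-in files (the thread's real files would be c738.dat and spairs.add.2; the file names and `duplicate_count` helper below are mine):

```python
# Sketch: count relations that appear more than once across clients'
# output files, treating each line of text as one relation.
from collections import Counter

def duplicate_count(*files):
    """Number of distinct lines occurring more than once across the files."""
    counts = Counter()
    for name in files:
        with open(name) as f:
            counts.update(line.rstrip("\n") for line in f)
    return sum(1 for c in counts.values() if c > 1)

# Tiny stand-in files so the sketch runs anywhere.
with open("client1.rels", "w") as f:
    f.write("rel1\nrel2\nrel3\n")
with open("client2.rels", "w") as f:
    f.write("rel2\nrel3\nrel4\n")

print(duplicate_count("client1.rels", "client2.rels"))  # → 2
```

Grouping the duplicated lines by their special-q value would then show which client's ranges produced them.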