#496
(loop (#_fork))
Feb 2006
Cambridge, England
1100100011000₂ Posts
Code:
Sat Nov 28 13:19:23 2015  p91 factor: 1455229648108768594552694966205142453019168989838313852033836936726828258873327877743335007
Sat Nov 28 13:19:23 2015  p98 factor: 60531294671960669735077411626867118292354916934357577613958209636477806174025280318238132795431621
Log attached
#497
"Victor de Hollander"
Aug 2011
the Netherlands
2³×3×7² Posts
Taking C154_P182_plus_1, C155_P209_plus_1 and C153_P233_plus_1 for post-processing. Each will probably take only about a day.
#498
Sep 2009
977 Posts
You have decent horsepower, so I'd say much less than a day for each of them, though one day for all three might be a stretch.
#499
"Mike"
Aug 2002
5×17×97 Posts
Code:
prp88 factor: 7672236958518363816567697109832643079838636749631210749387737419588642636816013574367653
prp100 factor: 6562282240936037358815422484225511323795135516019451448000923739689561470840832334996806359588917617
#500
I moo ablest echo power!
May 2013
13×137 Posts
The C184 blocking HP2(4496) splits as:
Code:
prp83 factor: 18629234615651511444939975064252892061546608854100819750912782705794351301276248439
prp101 factor: 90074244593568724732372840988999455713334654010427512152214073432054138168692117445761801991568405847
elapsed time 110:46:38
Matrix was 15331633 x 15331859 with TD=128.
#501
(loop (#_fork))
Feb 2006
Cambridge, England
2³·11·73 Posts
Running 4261-67; ETA is 21 hours, but that's the weekend so I won't see the answer until Monday.
Also taking 2269-67 and 2789-67, which should fit in over the weekend.
Last fiddled with by fivemack on 2015-12-04 at 14:14
#502
May 2009
Russia, Moscow
A21₁₆ Posts
What are the best parameters for running the MPI version of msieve on a single computer? I have a dual Xeon E5-2620, so 6 cores/12 threads × 2. I compiled msieve v1.52 with OpenMPI 1.8.1 and the newest GMP 6.1.0.
I tried the MPI version on a C165 GNFS job with many different options (-bind-to-core/-bind-to-socket, -bycore/-bysocket, -cpu-set, etc., running with and without the taskset command), and the best I could get was 60 hours (taskset -c 0-11 mpirun -np 12 -bind-to-core msieve -t 12 -nc2 2,6). Running without MPI I got much better results:
taskset -c 0-11 msieve -t 12 -> 36 hours (running on only one CPU)
taskset -c 0-5,12-17 msieve -t 12 -> 40 hours (running on both CPUs)
msieve -t 12 -> 43 hours (without the taskset command)
taskset -c 0-5 msieve -t 6 -> 63 hours (only 6 threads)
I'm disappointed with these results; what am I doing wrong?
#503
Jul 2003
So Cal
2128₁₀ Posts
Try, perhaps adding -bysocket if it helps:
mpirun -np 2 msieve -nc2 1,2 -t 12
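A minimal sketch of that suggestion with explicit socket binding (Open MPI 1.8 also accepts the newer --bind-to syntax). The socket and thread counts are hard-coded from the poster's dual E5-2620 description, not detected at runtime:

```shell
# One MPI rank per socket, all of that socket's hardware threads per rank.
# Values assumed from the post: 2 sockets, 6 cores + HT = 12 threads each.
SOCKETS=2
THREADS=12
CMD="mpirun -np $SOCKETS --bind-to socket msieve -nc2 1,2 -t $THREADS"
echo "$CMD"
```

The idea is to keep MPI processes coarse (one per socket) and let msieve's own -t threading fill each socket, instead of multiplying ranks and threads together.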
#504
(loop (#_fork))
Feb 2006
Cambridge, England
2³·11·73 Posts
On a 48-core (4 sockets x 2 chips per socket x 6 cores per chip) Opteron machine I found that it was very helpful to have a 'numactl -l' in the command line, as well as the taskset, to ensure that the memory was allocated on the node on which the process was running. I got mpirun to run a script which contained a taskset command, rather than trying to taskset the mpirun itself - see next post.
I am slightly surprised that you're finding -t12 faster than -t6 on a hyperthreaded system; I should redo that measurement with the next linear algebra job I run.
Last fiddled with by fivemack on 2015-12-05 at 00:34
#505
(loop (#_fork))
Feb 2006
Cambridge, England
2³×11×73 Posts
For the two-layer approach I did something like
Code:
mpirun -n 8 run.2,4.6.sh
Code:
msieve_real='/home/nfsworld/msieve-svn-again-mpi/trunk/msieve -v'
CPUL=$[6*$OMPI_COMM_WORLD_RANK]
CPUR=$[6*$OMPI_COMM_WORLD_RANK+5]
taskset -c $CPUL-$CPUR numactl --cpunodebind=$OMPI_COMM_WORLD_RANK -l $msieve_real -t 6 -nc2 2,4
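The pinning arithmetic in that wrapper can be sketched standalone as below. OMPI_COMM_WORLD_RANK is the rank variable Open MPI exports to each process it launches; it is defaulted to 0 here so the snippet runs outside mpirun. The six-cores-per-rank figure is taken from the script above:

```shell
# Each MPI rank claims its own six-core slice of the machine, and
# --cpunodebind/-l keep that rank's memory on its own NUMA node.
RANK=${OMPI_COMM_WORLD_RANK:-0}
CORES_PER_RANK=6
CPUL=$(( CORES_PER_RANK * RANK ))                        # first CPU in the slice
CPUR=$(( CORES_PER_RANK * RANK + CORES_PER_RANK - 1 ))   # last CPU in the slice
echo "rank $RANK -> taskset -c $CPUL-$CPUR numactl --cpunodebind=$RANK -l"
```

With 8 ranks this yields slices 0-5, 6-11, ..., 42-47, matching the 48-core Opteron layout described in the previous post.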
#506
(loop (#_fork))
Feb 2006
Cambridge, England
6424₁₀ Posts
I'm surprised that worked at all in as little as 60 hours; it causes mpirun to start up 12 copies of msieve, each of which tries to use twelve threads, so I'd have thought you'd see the machine load average going into the hundreds.
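The arithmetic behind that oversubscription, with the hardware-thread count assumed from the dual E5-2620 described earlier in the thread:

```shell
# 12 MPI ranks, each spawning 12 msieve threads, on 24 hardware threads.
RANKS=12
THREADS_PER_RANK=12
HW_THREADS=24
TOTAL=$(( RANKS * THREADS_PER_RANK ))
OVERSUB=$(( TOTAL / HW_THREADS ))
echo "$TOTAL software threads on $HW_THREADS hardware threads (${OVERSUB}x oversubscribed)"
```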