mersenneforum.org  

2012-01-01, 11:27   #166
xilman

251_97_minus1 is 74% done and factors should appear in 3 days.

Paul
2012-01-01, 11:56   #167
pinhodecarlos
 

Quote:
Originally Posted by xilman
251_97_minus1 is 74% done and factors should appear in 3 days.

Paul
What processor are you using for this one?
457_83_minus1 LA ETA is 25 hours.

2012-01-01, 12:05   #168
xilman

Quote:
Originally Posted by pinhodecarlos
What processor are you using for this one?
457_83_minus1 LA ETA is 25 hours.
Eight of these things:
Code:
vendor_id       : AuthenticAMD
cpu family      : 16
model           : 4
model name      : Quad-Core AMD Opteron(tm) Processor 2380
stepping        : 2
cpu MHz         : 2500.000
cache size      : 512 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 5
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt npt lbrv svm_lock nrip_save
bogomips        : 5000.79
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate
These are shared with 8 ecmclient processes. The LA is running at normal priority, the ECM at nice 19.
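
(For anyone wanting to reproduce that arrangement, a minimal Python sketch of launching deprioritised ECM workers; the binary name and working directories are placeholders, not the actual setup.)

Code:
import os
import subprocess

# Launch eight ECM clients at nice 19 so that the LA job, running at
# normal priority, wins the CPU whenever it is busy.
workers = [
    subprocess.Popen(
        ["ecmclient"],                   # placeholder binary name
        cwd=f"/srv/ecm/worker{i}",       # hypothetical per-worker directory
        preexec_fn=lambda: os.nice(19),  # drop child to lowest priority
    )
    for i in range(8)
]

for w in workers:
    w.wait()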

The start of the LA output reads:
Code:
commencing linear algebra
read 8832048 cycles
cycles contain 24408000 unique relations
read 24408000 relations
using 20 quadratic characters above 1073740964
building initial matrix
memory use: 3150.3 MB
read 8832048 cycles
matrix is 8831870 x 8832048 (2613.0 MB) with weight 771141617 (87.31/col)
sparse part has weight 596661272 (67.56/col)
filtering completed in 2 passes
matrix is 8829026 x 8829204 (2612.8 MB) with weight 771055879 (87.33/col)
sparse part has weight 596631697 (67.57/col)
matrix starts at (0, 0)
matrix is 8829026 x 8829204 (2612.8 MB) with weight 771055879 (87.33/col)
sparse part has weight 596631697 (67.57/col)
saving the first 48 matrix rows for later
matrix includes 64 packed rows
matrix is 8828978 x 8829204 (2515.9 MB) with weight 613234342 (69.46/col)
sparse part has weight 571226082 (64.70/col)
using block size 65536 for processor cache size 6144 kB
commencing Lanczos iteration (8 threads)
memory use: 2538.1 MB
linear algebra at 0.0%, ETA 230h 8m
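(The per-column weights in the log are simply total weight divided by the column count; a quick Python check using the post-filtering figures quoted above:)

Code:
# Per-column weight = total weight / number of columns, using the
# post-filtering matrix figures from the msieve log above.
cols = 8829204
total_weight = 771055879     # full matrix weight
sparse_weight = 596631697    # sparse part only

print(f"{total_weight / cols:.2f}/col")   # 87.33/col
print(f"{sparse_weight / cols:.2f}/col")  # 67.57/col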
Paul
2012-01-01, 12:09   #169
pinhodecarlos
 

Thank you. That explains why the integer is taking so long...
2012-01-01, 12:33   #170
fivemack

xilman: You are obviously more than capable of doing the experiments yourself, but I found that it was distinctly more efficient to run MPI rather than msieve -t8 (see various posts of mine containing the string 'mpirun') on that bit of hardware when I owned it, and I think I convinced myself that it was worth stopping even nice-19 processes while running parallel LA jobs.

The machine was distinctly slower for 'msieve -t4' than the i7/920 I also had at the time - for 5.5M matrices I saw 36-hour runtime on the i7 and 64-hour on the Opteron - so I don't have many timing statistics for the Opteron because I tended to use the i7. I think MPI comes some way towards closing the gap, but I sold the machine to you shortly after getting MPI to work and never ran a full-scale MPI job on it.
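
(A back-of-envelope on what those timings suggest for the 8.8M matrix above, assuming block Lanczos runtime grows roughly quadratically with matrix dimension; the scaling assumption is an approximation for extrapolation, not a measured law.)

Code:
# Block Lanczos runs O(N) iterations over a matrix whose weight grows
# roughly linearly with N, so runtime scales roughly as N^2.  This is
# a rough assumption used only for extrapolation.
ref_dim, ref_hours = 5.5e6, 64   # fivemack's Opteron figure above
target_dim = 8.83e6              # xilman's current matrix

est_hours = ref_hours * (target_dim / ref_dim) ** 2
print(f"~{est_hours:.0f} h")     # ~165 h; the logged 230 h ETA also
                                 # reflects the competing ECM load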
2012-01-01, 12:58   #171
xilman

Quote:
Originally Posted by fivemack
xilman: You are obviously more than capable of doing the experiments yourself, but I found that it was distinctly more efficient to run MPI rather than msieve -t8 (see various posts of mine containing the string 'mpirun') on that bit of hardware when I owned it, and I think I convinced myself that it was worth stopping even nice-19 processes while running parallel LA jobs.

The machine was distinctly slower for 'msieve -t4' than the i7/920 I also had at the time - for 5.5M matrices I saw 36-hour runtime on the i7 and 64-hour on the Opteron - so I don't have many timing statistics for the Opteron because I tended to use the i7. I think MPI comes some way towards closing the gap, but I sold the machine to you shortly after getting MPI to work and never ran a full-scale MPI job on it.
Maybe. I've not installed MPI on that machine and have relatively little interest in doing so at the moment. It's primarily an ECM box, and a very productive one. LA is running on it only because Lionel called for volunteers to help reduce the OPN backlog.

My own NFS LA runs on an entirely different dual-Xeon box, which also has 8 cores and 16G RAM. In recent months it has been running SNFS jobs in the 190-200 digit range, and it probably will for most of 2012.

Paul
2012-01-02, 07:20   #172
frmky
 

I'll take 20003_245.
2012-01-02, 07:33   #173
Mathew
 

I would like to reserve 431_83_minus_1
2012-01-02, 09:38   #174
debrouxl
 

Reserved for both of you, thanks.

Greg: at less than 14e9 bytes of raw relations (and probably few missing results by now), 20003_245 is somewhat less oversieved than RSALS 31-bit LP tasks usually are. A number of RSALS tasks can be filtered at target density 90 or above, but probably not this one.
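
(Target density here is the figure handed to msieve's filtering step; a hedged Python sketch of invoking it, with all file names illustrative rather than this job's actual ones:)

Code:
import subprocess

# Run only msieve's filtering phase (-nc1) with an explicit target
# density.  Paths are placeholders for the job's real files.
subprocess.run([
    "msieve", "-v",
    "-s", "20003_245.dat",          # relation savefile (placeholder)
    "-nf", "20003_245.fb",          # polynomial file (placeholder)
    "-nc1", "target_density=90",
], check=True)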
2012-01-02, 10:17   #175
frmky
 

Quote:
Originally Posted by debrouxl
Greg: at less than 14e9 bytes of raw relations (and probably few missing results by now), 20003_245 is somewhat less oversieved than RSALS 31-bit LP tasks usually are. A number of RSALS tasks can be filtered at target density 90 or above, but probably not this one.
Worse than that, the gz file is corrupt. I only got 98.5 million valid unique relations from it before the error. gzrecover unfortunately didn't help.
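
(A generic salvage approach for a corrupt gzip dump, in case it is useful: stream it with Python's gzip module and keep everything that decodes before the bad block. This is a sketch, not what frmky actually ran; the file name is a placeholder.)

Code:
import gzip
import zlib

# Copy complete lines out of a possibly-corrupt .gz until
# decompression first fails.
good = 0
try:
    with gzip.open("20003_245.dat.gz", "rt", errors="replace") as f, \
         open("salvaged.dat", "w") as out:
        for line in f:
            out.write(line)
            good += 1
except (EOFError, zlib.error, gzip.BadGzipFile):
    pass  # reached the corrupt region; everything before it is kept

print(f"salvaged {good} lines")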

2012-01-02, 11:25   #176
em99010pepe
 

Quote:
Originally Posted by frmky
Worse than that, the gz file is corrupt. I only got 98.5 million valid unique relations from it before the error. gzrecover unfortunately didn't help.
Did you manage to download the whole gz file? The server often goes down while a download is in progress; that has happened to me more than twice. In your case you should have a 13 GB gz file; do you have it? Checking here, the file was last updated at 02-Jan-2012 09:18, so one of two things could have happened: the server went down while you were downloading, or the file was updated while you were downloading it, leaving you with a smaller, corrupted one.
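
(One quick way to test the truncated-download theory: compare the local size against the server's reported Content-Length. A Python sketch; the URL and file name are placeholders:)

Code:
import os
import urllib.request

# A HEAD request returns the size the server thinks the file has;
# a smaller local file means the download was cut short.
url = "http://example.org/20003_245.dat.gz"   # placeholder URL
local = "20003_245.dat.gz"

req = urllib.request.Request(url, method="HEAD")
with urllib.request.urlopen(req) as resp:
    remote_size = int(resp.headers["Content-Length"])

print("complete" if os.path.getsize(local) == remote_size else "truncated")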
