251_97_minus1 is 74% done and factors should appear in 3 days.

Paul
[QUOTE=xilman;284339]251_97_minus1 is 74% done and factors should appear in 3 days.

Paul[/QUOTE]What processor are you using for this one? 457_83_minus1 LA ETA is 25 hours.
[QUOTE=pinhodecarlos;284341]What processor are you using for this one?
457_83_minus1 LA ETA is 25 hours.[/QUOTE]Eight of these things[code]
vendor_id       : AuthenticAMD
cpu family      : 16
model           : 4
model name      : Quad-Core AMD Opteron(tm) Processor 2380
stepping        : 2
cpu MHz         : 2500.000
cache size      : 512 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 5
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt npt lbrv svm_lock nrip_save
bogomips        : 5000.79
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate
[/code]shared with 8 ecmclient processes. The LA is running at normal priority, the ECM at nice 19.
The start of the LA output reads:[code]
commencing linear algebra
read 8832048 cycles
cycles contain 24408000 unique relations
read 24408000 relations
using 20 quadratic characters above 1073740964
building initial matrix
memory use: 3150.3 MB
read 8832048 cycles
matrix is 8831870 x 8832048 (2613.0 MB) with weight 771141617 (87.31/col)
sparse part has weight 596661272 (67.56/col)
filtering completed in 2 passes
matrix is 8829026 x 8829204 (2612.8 MB) with weight 771055879 (87.33/col)
sparse part has weight 596631697 (67.57/col)
matrix starts at (0, 0)
matrix is 8829026 x 8829204 (2612.8 MB) with weight 771055879 (87.33/col)
sparse part has weight 596631697 (67.57/col)
saving the first 48 matrix rows for later
matrix includes 64 packed rows
matrix is 8828978 x 8829204 (2515.9 MB) with weight 613234342 (69.46/col)
sparse part has weight 571226082 (64.70/col)
using block size 65536 for processor cache size 6144 kB
commencing Lanczos iteration (8 threads)
memory use: 2538.1 MB
linear algebra at 0.0%, ETA 230h 8m
[/code]

Paul
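The weight-per-column figures in a log like the one above are just the total matrix weight divided by the number of columns; a quick sanity check on the quoted numbers (nothing is assumed beyond the figures in the log itself):

```python
# Sanity-check msieve's reported average weight per column.
# All figures are taken from the LA log above.
cols = 8832048            # columns in the initial matrix
total_weight = 771141617  # total nonzero entries
sparse_weight = 596661272 # nonzeros in the sparse part

print(round(total_weight / cols, 2))   # 87.31, matching "87.31/col"
print(round(sparse_weight / cols, 2))  # 67.56, matching "67.56/col"
```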
Thank you. That explains why the integer is taking so long...
xilman: You are obviously more than capable of doing the experiments yourself, but I found that it was distinctly more efficient to run MPI rather than msieve -t8 (see various posts of mine containing the string 'mpirun') on that bit of hardware when I owned it, and I think I convinced myself that it was worth stopping even nice-19 processes while running parallel LA jobs.
The machine was distinctly slower for 'msieve -t4' than the i7/920 I also had at the time - for 5.5M matrices I saw 36-hour runtime on the i7 and 64-hour on the Opteron - so I don't have many timing statistics for the Opteron because I tended to use the i7. I think MPI comes some way towards closing the gap, but I sold the machine to you shortly after getting MPI to work and never ran a full-scale MPI job on it.
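A quick back-of-envelope on the runtimes quoted above, purely to make the gap concrete (the figures are the ones in the post; nothing else is assumed):

```python
# Relative LA speed for a 5.5M-row matrix, from the quoted runtimes.
i7_hours = 36.0       # 'msieve -t4' on the i7/920
opteron_hours = 64.0  # 'msieve -t4' on the Opteron 2380 box

print(round(opteron_hours / i7_hours, 2))  # about 1.78x slower on the Opteron
```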
[QUOTE=fivemack;284346]xilman: You are obviously more than capable of doing the experiments yourself, but I found that it was distinctly more efficient to run MPI rather than msieve -t8 (see various posts of mine containing the string 'mpirun') on that bit of hardware when I owned it, and I think I convinced myself that it was worth stopping even nice-19 processes while running parallel LA jobs.
The machine was distinctly slower for 'msieve -t4' than the i7/920 I also had at the time - for 5.5M matrices I saw 36-hour runtime on the i7 and 64-hour on the Opteron - so I don't have many timing statistics for the Opteron because I tended to use the i7. I think MPI comes some way towards closing the gap, but I sold the machine to you shortly after getting MPI to work and never ran a full-scale MPI job on it.[/QUOTE]Maybe. I've not installed MPI either, and have relatively little interest in doing so at the moment. It's primarily an ECM box, and a very productive one. LA is running on it only because Lionel called for volunteers to help reduce the OPN backlog. My own NFS LA runs on an entirely different dual-Xeon box which also has 8 cores and 16 GB RAM. In recent months it has been running SNFS jobs in the 190-200 digit range, and probably will for most of 2012.

Paul
I'll take 20003_245.
I would like to reserve 431_83_minus_1
Reserved to both of you, thanks :smile:
Greg: at less than 14e9 bytes of raw relations (and probably few missing results by now), 20003_245 is somewhat less oversieved than RSALS 31-bit LP tasks usually are. A number of RSALS tasks can be filtered at target density 90 or above, but probably not this one.
[QUOTE=debrouxl;284456]Greg: at less than 14e9 bytes of raw relations (and probably few missing results by now), 20003_245 is somewhat less oversieved than RSALS 31-bit LP tasks usually are. A number of RSALS tasks can be filtered at target density 90 or above, but probably not this one.[/QUOTE]
Worse than that, the gz file is corrupt. I only got 98.5 million valid unique relations from it before the error. gzrecover unfortunately didn't help.
[QUOTE=frmky;284461]Worse than that, the gz file is corrupt. I only got 98.5 million valid unique relations from it before the error. gzrecover unfortunately didn't help.[/QUOTE]
Did you manage to download the whole gz file? The server often goes down while a download is in progress... that has already happened to me more than twice. In your case you should have a 13 GB gz file; do you have it? Checking here, the file was last updated at 02-Jan-2012 09:18, so two things could have happened: the server went down while you were downloading, or the file was updated mid-download and you ended up with a smaller, corrupted copy.
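One cheap way to distinguish a truncated download from mid-stream corruption is to decompress the whole stream and see whether (and how) it fails. A minimal sketch using Python's standard gzip module - the filename is hypothetical, and this only diagnoses the file, it doesn't repair it:

```python
import gzip

def gzip_intact(path, chunk_size=1 << 20):
    """Return True if the gzip stream at `path` decompresses cleanly.

    A download cut short by a server outage typically raises EOFError
    ("Compressed file ended before the end-of-stream marker was
    reached"); a damaged stream raises an OSError (BadGzipFile or a
    CRC/format error) instead.
    """
    try:
        with gzip.open(path, "rb") as f:
            while f.read(chunk_size):
                pass  # discard the data; we only care that it decodes
        return True
    except (EOFError, OSError):
        return False
```

For a 13 GB relations file this streams in fixed-size chunks, so memory use stays flat regardless of file size.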