#331
Serpentine Vermin Jar
Jul 2014
CF1₁₆ Posts
I'm glad that's one of the things you'll be using it for. I (temporarily?) gave up my own ECM hunt on M1277 since I was focusing resources on the strategic double-checking, but I still think that was a fun side-project.
#332

Nov 2008
3·167 Posts
Quote:
B2=3e14 needs 17 gig
B2=4e14 needs 34 gig

Think you might be disappointed...
#333

"Curtis"
Feb 2005
Riverside, CA
2²·1,217 Posts
Those B2's are the defaults matched with B1 = 4e9 and up, which is quite large. I have no idea why you are quoting such B2 values with B1 = 800M!

I have chosen to run B1 = 4.5e9 on P95 with B2 = 150e12 (k = 3, so I could go higher, but expected t70 factor time rises if I do), which indeed requires 17GB for stage 2. One thread of stage 2 per node yields roughly 25 curves per day. These settings are within 2% of the best expected time for a t70 (I tested 100M increments of B1 from 9e8 to 6e9): 15000 curves for a t65, 69000 for a t70.

I learned the power supplies are not strong enough for these servers: I had to turn memory down from 667 to 533 (using a "low power" setting in the BIOS) and remove 8GB from two of the nodes to prevent the power supplies from shutting down when all 32 cores are in use. I also removed all but one hard drive from each node; my Kill A Watt suggests the 500W-rated power supplies shut down when wall-plug draw exceeds 460W. Too bad they're rather non-standard supplies.
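For scale, a back-of-envelope sketch using only the figures quoted in this post (69000 curves for a t70, roughly 25 stage-2 curves per thread per day):

```shell
# Thread-days of stage 2 needed for a full t70 at these settings.
CURVES_T70=69000
CURVES_PER_DAY=25
echo "$(( CURVES_T70 / CURVES_PER_DAY )) thread-days"
```

At one stage-2 thread per node, that is why the curves have to be spread across many nodes for a long time.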
#334

Bamboozled!
xilman
May 2003
Down not across
2²×5×7²×11 Posts
Quote:

Learn how to use GMP-ECM with -maxmem. It's how I run seriously high B2 on memory-constrained processors.

Which would you rather have: a 10% performance drop by constraining memory usage, or a 100% performance drop by not doing so?

Example:

Code:
pcl23@brnikat:~/ls/nums$ ps ax | grep ecm
13707 pts/3    R      0:15 ecm -maxmem 1024 850000000
13711 pts/3    R      0:08 ecm -maxmem 1024 850000000
13713 pts/3    R      0:03 ecm -maxmem 1024 850000000
13717 pts/3    R      0:01 ecm -maxmem 1024 850000000
13719 pts/3    R+     0:00 grep ecm
pcl23@brnikat:~/ls/nums$ head -1 /proc/meminfo
MemTotal:       16311512 kB
pcl23@brnikat:~/ls/nums$

Last fiddled with by xilman on 2016-01-18 at 19:04 Reason: Add example
#335

Oct 2007
Manchester, UK
2²×3×113 Posts
I have found that GMP-ECM seems to somewhat overestimate the amount of memory it will use, so I have shied away from using -maxmem. Instead I have preferred to use -k and -treefile to manually tune it to the system I am running on.
To give a particular example: on the number 10^999 + 13, I ran GMP-ECM with B1=2.9e9 and B2=1e14, and with the options -k 5 and -treefile. GMP-ECM quoted an estimated 20GB memory usage, yet never used more than about 15GB.

In reply to VBCurtis:

Quote:

1) Dedicate half of all cores to stage 1
2) Dedicate the other half to stage 2
3) ???
4) Profit!
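The 10^999 + 13 run described above can be spelled out as a command line. This is a sketch of an assumed invocation (the treefile path is hypothetical; B1 and B2 are the values quoted above), echoed as a dry run rather than executed:

```shell
# -k 5 splits stage 2 into 5 blocks and -treefile spills the stage-2 product
# tree to disk; both trade a little time for a smaller memory footprint.
N="10^999+13"
CMD="ecm -v -k 5 -treefile /tmp/ecm_tree 2.9e9 1e14"
echo "echo '$N' | $CMD"
```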
#336

Serpentine Vermin Jar
Jul 2014
3,313 Posts
Quote:

The reason you might want to do that is you could get more bang for the watt by having faster memory and slightly reduced CPU speed. I don't know the BIOS settings on that setup... on HPs it would be done by setting a power cap, among other possibilities. But look for something that lets you control C-states or specifically disable turbo.
#337

"Curtis"
Feb 2005
Riverside, CA
2²×1,217 Posts
Quote:
#338

"Curtis"
Feb 2005
Riverside, CA
4868₁₀ Posts
Quote:

1. Minimum expected time to complete a t70, as determined by the -v flag of GMP-ECM.
2. Within 2-3% of the minimum time for t70, maximize bounds to improve the chances of finding factors larger than 70 digits.

The RDS quote mentioned has nothing to support it; even the runs in his own paper spend 40% of stage 1 time on stage 2. I've gone a few rounds with RDS on this, and I believe GMP-ECM does not obey his theory.

As for using -maxmem: I am specifically trying to make time-efficient use of the large memory in these machines, with 6-7 of the 8 cores doing NFS work. I am not interested in dedicating more than 2 cores per machine to ECM, so there's no reason to sacrifice even 10% of performance(?). I'm trying to do something with the 32GB that I can't do with 16GB at home.

I have no evidence that running enormous B2's is better than the bounds I've chosen for my first set of curves, but perhaps I should be trying to minimize t75 time instead, since anyone with 16GB can run t70 curves relatively efficiently.
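The memory split described above can be sketched numerically. The reservation for the NFS jobs and the OS is an assumption, not a figure from the post:

```shell
# 32 GiB total, a hypothetical 8 GiB held back for NFS work and the OS, and
# 2 ECM workers per machine: the -maxmem (in MB) each worker could be given.
TOTAL_MB=$(( 32 * 1024 ))
RESERVED_MB=$(( 8 * 1024 ))
ECM_WORKERS=2
echo "$(( (TOTAL_MB - RESERVED_MB) / ECM_WORKERS )) MB per worker"
```

With a cap that large, stage 2 at the big B2 values under discussion fits without -maxmem ever throttling it.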
#339

Dec 2014
3×5×17 Posts
Quote:

1. It requires 220 VAC power.
2. The web interface for configuring the iPass ports has a username/password. There is no way to reset the password, so if you buy a used box and the original owner changed the default (root/root) credentials, you get to play hacker before you can configure the box.