mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Hardware (https://www.mersenneforum.org/forumdisplay.php?f=9)
-   -   Bargain Hardware Thread (https://www.mersenneforum.org/showthread.php?t=3890)

Mark Rose 2015-11-26 21:36

[QUOTE=chalsall;417341]Note that in the picture there were zip-ties. Dumber than bricks.[/QUOTE]

Zip-ties are cheap. I've actually started using them for my own cabling needs.

Madpoo 2015-11-27 17:17

[QUOTE=Mark Rose;417346]Zip-ties are cheap. I've actually started using them for my own cabling needs.[/QUOTE]

I use zip-ties just to be lazy. But then, bear in mind my particular installs are currently a handful of servers in a single cabinet in a remote datacenter I visit maybe twice a year. :) There isn't a lot of churn.

In the past when I've done local setups where I might be in there on a regular basis doing things, I'll use those velcro strips you get by the roll and cut to size.

Otherwise I just leave a bag of extra zip-ties and a pair of nippers on site so when I inevitably need to do any work, I snip off all the old ties, do my cabling, then re-zip tie.

I'm a tidy person... I pick up all the snipped ties and throw them away. I can tell from looking around the datacenter that not everyone is as clean... in front of other leased cabinets or cages I see piles of snipped ties that get swept into the nooks and crannies and are left there.

I don't know how many folks on here have had a chance to visit an honest-to-goodness colocation facility, but at first glance it looks really clean and tidy, but with customers coming and going and working in their space, the corners of the room and random spaces on the floor get cluttered with crud like that. :smile:

It's usually the self-run facilities where they don't have customers coming and going that are either a total pigsty or spotlessly clean. The rest of them are "clean enough" so that even after 5-10 years, you might only find a very thin film of dust on the internals of a server.

Madpoo 2015-11-27 17:22

[QUOTE=chalsall;417341]Indeed. But then try to replace a bad cable or a connector in that bundle...

Note that in the picture there were zip-ties. Dumber than bricks.[/QUOTE]

The cable labeling was pretty slick though. I think I mentioned before that I used adhesive cable labels that I picked up at Home Depot. Nice little booklet of labels that were just numbered zero through 100 or something like that. Wrap around the cables.

Awesome for me because if I'm rewiring, I can just unplug stuff and know which switch port it goes back into. Otherwise I have to do one at a time, remember where it goes, and risk screwing it up. (Same with power cables... smart PDUs and all that with per-outlet control).

Sadly, the humidity at one of the colos was high enough that the dumb things just became a gooey mess, especially considering these are all on the hot-aisle side of the cabinet, so they warm up quite nicely. On my next trip back to that location, they had all unraveled and some had even "dripped" right off the cable. Grrr...

In our other location they do fine...lower humidity there even though ambient temps are about the same.

So those cable ID's in the picture...that's what I need. No more of those adhesive goopy things for me. I've seen those before but I have no idea where to get them...probably easy to find online if I bothered to look.

chalsall 2015-11-27 17:45

[QUOTE=Madpoo;417415]In the past when I've done local setups where I might be in there on a regular basis doing things, I'll use those velcro strips you get by the roll and cut to size.[/QUOTE]

Yes, that's what I meant... The problem with zip-ties is they can pinch and/or inadvertently cause a cut or break of the fibre when being installed, moved or removed.

In every submarine fibre landing station I've ever been in (three), all cables were laced, a.k.a. sewn. Heck, NASA still uses this technique rather than zip-ties or velcro.

LaurV 2015-11-28 02:08

1 Attachment(s)
[QUOTE=xilman;417335][B]Extremely[/B] rare that you see network cables that tidily arranged. Usually looks like a plate of spaghetti.[/QUOTE]
Yeah, "Houston, we have a problem with the black cable, it's not connected properly"...

kladner 2015-11-28 03:48

[QUOTE=LaurV;417438]Yeah, "Houston, we have a problem with the black cable, it's not connected properly"...[/QUOTE]

May I suppose that cable piracy is rampant, wherever this is?

0PolarBearsHere 2015-11-28 10:17

[QUOTE=LaurV;417438]Yeah, "Houston, we have a problem with the black cable, it's not connected properly"...[/QUOTE]

Yeah, cabling in Thailand is pretty impressive to look at. At least you can partially narrow it down by looking for the fibre repeater/booster tubes/boxes.

Madpoo 2015-11-28 17:18

[QUOTE=chalsall;417420]Yes, that's what I meant... The problem with zip-ties is they can pinch and/or inadvertently cause a cut or break of the fibre when being installed, moved or removed.

In every submarine fibre landing station I've ever been in (three), all cables were laced, a.k.a. sewn. Heck, NASA still uses this technique rather than zip-ties or velcro.[/QUOTE]

I don't zip tie fiber, if that makes you feel any better. LOL

Well, I *do* use them to hold the fiber out of the way but I don't cinch them tight. Just a loose loop to guide the fiber where it needs to be.

I actually migrated away from fiber in my setups. Not too long ago, if you wanted gigabit to your cabinet/cage at a colocation, fiber was pretty much your only option, but then Gig over copper came out and became common with Cat6/Cat5E and it's just so much easier. No GBICs/SFPs to worry about, etc.

We'll probably see that with 10GbE before too long, becoming more common at the provider level (Cat6a/Cat7 copper). I'm nowhere near the point where I need 10 Gb into our cabinet though... our bandwidth isn't quite that high... yet. :smile:

Inside our network, our servers each have four 1 Gb ports that I channel into the stack which, again, is enough for now. A lot of HP's new servers do have a common option for 10 Gb ports, though.

chalsall 2015-11-28 20:45

[QUOTE=Madpoo;417518]I don't zip tie fiber, if that makes you feel any better. LOL[/QUOTE]

:smile:

[QUOTE=Madpoo;417518]I actually migrated away from fiber in my setups. Not too long ago, if you wanted gigabit to your cabinet/cage at a colocation, fiber was pretty much your only option, but then Gig over copper came out and became common with Cat6/Cat5E and it's just so much easier. No GBICs/SFPs to worry about, etc.[/QUOTE]

Copper now works fine for very high speed at short distances. For a while I worked over longer distances. Sometimes kilometres; sometimes hundreds of kilometres.

I have to say, it's pretty cool when a large ship and many divers have to be used to bring ashore a multi-fibre submarine cable capable of TB/s....

bgbeuning 2015-11-29 03:12

[QUOTE=VBCurtis;417336] "Those are great servers, I've used them at like 4 different employers. Hey, I have an older one in my garage. Want it?"
[/QUOTE]

Score! I would like to hear your experiences with it. I do not see Keyboard, Video, Mouse (KVM) connections, just Ethernet connections.

My current favorite machine is an "HP 8300 Elite" with an i5-3570. It only draws 60 watts under full load and can do an M39,000,000 double check in 2 weeks. A Xeon 5355 takes 7 weeks.

If anyone goes looking for the 8300, be warned I have seen five different CPUs in it: i5-290, i5-790, i5-2400, i5-3470, and i5-3570. Some sellers (like Walmart) just say "i5" without saying which one.

VBCurtis 2015-11-29 17:57

[QUOTE=bgbeuning;417562]Score! I would like to hear your experiences with it. I do not see Keyboard, Video, Mouse (KVM) connections, just Ethernet connections.

My current favorite machine is an "HP 8300 Elite" with an i5-3570. It only draws 60 watts under full load and can do an M39,000,000 double check in 2 weeks. A Xeon 5355 takes 7 weeks.[/QUOTE]
An i5-3570 can draw 77W under full load on its own, according to Intel specs. Your 60-watt spec is not possible for a computer using that chip unless it's at idle.

bgbeuning 2015-11-29 20:01

[QUOTE=VBCurtis;417643] Your 60-watt spec is not possible for a computer using that chip unless it's at idle.[/QUOTE]

The uptime(1) command shows a load average of 4.0, and my current meter shows 0.54 amps.
Where did I go wrong?

Mark Rose 2015-11-29 23:03

If you cat /proc/cpuinfo, what does it say for CPU frequency?

bgbeuning 2015-11-30 01:25

[QUOTE=Mark Rose;417692]If you cat /proc/cpuinfo, what does it say for CPU frequency?[/QUOTE]

[code]
model name : Intel Core i5-3570 CPU @ 3.40 GHz
cpu MHz    : 3599.882
bogomips   : 6784
[/code]
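Those fields follow the usual `key : value` layout of /proc/cpuinfo, so they're easy to pull out programmatically. A minimal sketch (the sample text and the `cpu_mhz` helper are mine, not from the thread; on a real Linux box you'd read the file itself):

```python
# Parse the "cpu MHz" field out of /proc/cpuinfo-style text.
# SAMPLE mirrors the output quoted above (formatting approximate);
# on a live system you would use open("/proc/cpuinfo").read() instead.

SAMPLE = """\
model name\t: Intel Core i5-3570 CPU @ 3.40GHz
cpu MHz\t\t: 3599.882
bogomips\t: 6784.00
"""

def cpu_mhz(cpuinfo_text: str) -> float:
    """Return the first 'cpu MHz' value found, in MHz."""
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "cpu MHz":
            return float(value.strip())
    raise ValueError("no 'cpu MHz' line found")

print(cpu_mhz(SAMPLE))  # 3599.882
```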

The current meter does not work when put around a whole power cord (the currents in the conductors cancel each other out). So I made a 1 ft long cord with the 3 wires broken out so I could clamp a single wire. The current meter is one of those loops you put around a wire.

For a Dell 1950 dual Xeon, I got a current reading of 3.0 Amps (360 watts).
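Back-of-envelope, a clamp reading converts to apparent power as volts times amps; a quick sketch, assuming 120 V mains (which is what the 3.0 A -> 360 W figure above implies):

```python
MAINS_VOLTS = 120.0  # assumption: North American mains, implied by 3.0 A -> 360 W above

def apparent_power_va(amps: float, volts: float = MAINS_VOLTS) -> float:
    """Apparent power S = V * I, in volt-amperes (equals watts only when PF = 1)."""
    return volts * amps

print(round(apparent_power_va(0.54), 1))  # 64.8 VA for the i5-3570 box
print(apparent_power_va(3.0))             # 360.0 VA for the Dell 1950
```

So the 0.54 A reading itself works out to roughly 65 W at the wall, not 60, before any power-factor correction.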

VBCurtis 2015-11-30 02:24

I suspect your current meter's accuracy much more than I suspect Intel's documentation.
120w is possible for a desktop, but 60 just doesn't make sense. Perhaps you're measuring half the current.

bgbeuning 2015-11-30 11:24

[QUOTE=VBCurtis;417716]I suspect your current meter's accuracy much more than I suspect Intel's documentation.
120w is possible for a desktop, but 60 just doesn't make sense. Perhaps you're measuring half the current.[/QUOTE]

I measured the current draw of a 40W light bulb and got the expected result.

Xyzzy 2015-11-30 14:27

[QUOTE=bgbeuning;417751]I measured the current draw of a 40W light bulb and got the expected result.[/QUOTE]Can you compare a light bulb load to a computer load? (We don't know!)

Is this link useful?

[URL]http://www.allaboutcircuits.com/textbook/alternating-current/chpt-11/power-resistive-reactive-ac-circuits/[/URL]

Madpoo 2015-11-30 16:32

[QUOTE=Xyzzy;417761]Can you compare a light bulb load to a computer load? (We don't know!)

Is this link useful?

[URL]http://www.allaboutcircuits.com/textbook/alternating-current/chpt-11/power-resistive-reactive-ac-circuits/[/URL][/QUOTE]

A light bulb's wattage and the wattage of a PC should be "close enough". A tungsten bulb is nearly all resistance (not much inductance; nothing to write home about, anyway).

A modern switching power supply will "approximate" a resistive load, but then I think there's a rule of thumb/fudge factor of 97% or so. I can't remember where I saw or read that, but that's in my brain for some reason.

Clamp type ammeters (which is basically what you're using) are generally close enough... it would be weird to be off by 50% though. You might try one of those Kill-A-Watt plugin things. I'm not sure, but I'd guess those are shunt-type measurements of the current, and they do fairly well for high inductive loads like refrigerators/freezers, for example.

kladner 2015-11-30 17:09

[QUOTE]A modern switching power supply will "approximate" a resistive load, but then I think there's a rule of thumb/fudge factor of 97% or so. I can't remember where I saw or read that, but that's in my brain for some reason.[/QUOTE]

I think you are referring to Power Factor. A PF of 1 is non-reactive, i.e. resistive. Power Factor indicates how closely the current and voltage peaks align. A PF less than 1, with current lagging voltage, is an inductive load. Current leading voltage is a capacitive load, or a PF over 1. While PF and efficiency in a motor are not quite the same, PF does affect losses in transmission: for the same amount of real power (watts), larger currents flow when the PF is not equal to 1, because part of the current is out of phase with the voltage, and Power = e*i, voltage times amps.

Large installations use various means to correct their overall PF. These can be capacitor banks, or certain kinds of electric motor which can be set up to have a capacitive (current leading voltage) effect.
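That point reduces to two lines of arithmetic; a sketch with illustrative numbers (not measurements from this thread):

```python
def real_power_w(volts: float, amps: float, pf: float) -> float:
    """Real power P = V * I * PF."""
    return volts * amps * pf

def current_for_power(watts: float, volts: float, pf: float) -> float:
    """Current needed to deliver a given real power: I = P / (V * PF)."""
    return watts / (volts * pf)

# Same 360 W real load: at PF 0.8 the line carries 25% more current than at
# PF 1.0, which is where the extra transmission losses come from.
print(current_for_power(360, 120, 1.0))            # 3.0 A
print(round(current_for_power(360, 120, 0.8), 2))  # 3.75 A
```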

retina 2015-12-01 02:52

[QUOTE=kladner;417781]... a PF over 1 ...[/QUOTE]... is not possible.

kladner 2015-12-01 06:18

[QUOTE=retina;417832]... is not possible.[/QUOTE]

Current can lead or lag voltage.

EDIT: But I guess the effect is the same. Out of sync is out of sync.

retina 2015-12-01 06:31

[QUOTE=kladner;417855]Current can lead or lag voltage.[/QUOTE]You are correct, it can. But the PF value is always <=1.

kladner 2015-12-01 06:44

[QUOTE=retina;417858]You are correct, it can. But the PF value is always <=1.[/QUOTE]

Admitted.

LaurV 2015-12-01 07:00

[QUOTE=retina;417858]You are correct, it can. But the PF value is always <=1.[/QUOTE]
Not if you are perendev... :razz:

bgbeuning 2015-12-04 14:26

[QUOTE=Madpoo;417772] You might try one of those Kill-A-Watt plugin things.[/QUOTE]

A Kill-A-Watt says 67 watts with Power Factor (PF) 0.96.

The machine only has 1 DIMM populated, so I am wondering if the CPU is idle waiting for access to memory. I am going to try 2 and 4 DIMMs to see what difference they make. If the LL test time improves linearly with the increased power usage, it would be worth it.
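For what it's worth, that reading roughly squares with the earlier 0.54 A clamp measurement once the power factor is applied; a sketch, again assuming 120 V mains:

```python
clamp_amps   = 0.54   # clamp-meter reading reported earlier in the thread
mains_volts  = 120.0  # assumption: North American mains
power_factor = 0.96   # PF reported by the Kill-A-Watt

real_watts = clamp_amps * mains_volts * power_factor
print(round(real_watts, 1))  # 62.2 W, same ballpark as the 67 W reading
```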

Mark Rose 2015-12-04 15:21

[QUOTE=bgbeuning;418205]A Kill-A-Watt says 67 Watts with Power Factor (PF) 0.96 .

The machine only has 1 DIMM populated so I am wondering if the CPU is idle waiting
for access to memory. I am going to try 2 & 4 DIMM to see what difference they make.
If the LL test time improves linearly with the increased power usage, it would be worth it.[/QUOTE]

You should get double the throughput. If you have four slots, make sure you use both channels.

Madpoo 2015-12-04 17:40

[QUOTE=Mark Rose;418210]You should get double the throughput. If you have four slots, make sure you use both channels.[/QUOTE]

Indeed... considering how memory intensive Prime95 is, it seems like we should encourage people to optimize their mem channels for max performance. I know it varies from system to system slightly, but if you have dual channel memory, use both channels. If you're rocking a Xeon with 3 or 4 mem channels, do that.

Optimally you'd want just one module per memory channel. I think it's the same for desktop CPU's as Xeons, where more than one module per channel will (potentially) run them at a slower speed than if you only had 1 DPC (dimm per channel).

Certain systems like the newer HP Proliants can handle running 2 or even 3 modules per channel at full speed (as long as you're using "official" HP memory, which they charge a premium price for, of course).

I have a few "unfortunate" systems where we've had to upgrade the memory in an inefficient way... they're 3-channel Xeons with support for up to 3 dimms per channel (18 slots on a dual socket board). And we've populated all 18 slots. That knocks the max speed for the DDR3 modules from 1333 MHz to 800 MHz.

Well, it works fine for the server's intended purpose, but I'm sure Prime95 runs slower as a result.

The systems in question have 144 GB of RAM... 18 x 8GB modules. If we weren't on a budget we'd have simply gone with 12 * 16GB modules for a total of 192 GB and would let it run them at 1333 or 1066 MHz (depending on the type/rank/voltage installed).

Ah well, we have to live with budgetary constraints. :smile:

fivemack 2015-12-06 21:15

[QUOTE=bgbeuning;417562]Score! I would like to hear your experiences with it. I do not see Keyboard, Video, Mouse (KVM) connections, just Ethernet connections.

My current favorite machine is an "HP 8300 Elite" with an i5-3570. It only draws 60 watts under full load and can do an M39,000,000 double check in 2 weeks. A Xeon 5355 takes 7 weeks.[/QUOTE]

On the other hand, an i7/4790K machine draws about 120 watts under full load and can do an M39,000,000 double-check in 36 hours.

(ah, most people are not running multi-threaded ... that explains the difference in figures)
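Taking the quoted figures at face value, the energy cost per double-check is easy to compare (the 60 W number is the earlier claim, disputed above, so the comparison is only as good as its inputs):

```python
def kwh_per_dc(watts: float, hours: float) -> float:
    """Energy per double-check: watts * hours / 1000 -> kWh."""
    return watts * hours / 1000.0

i5_3570  = kwh_per_dc(60, 14 * 24)  # "2 weeks" at the claimed 60 W
i7_4790k = kwh_per_dc(120, 36)      # 36 hours at about 120 W

print(i5_3570, i7_4790k)  # 20.16 kWh vs 4.32 kWh per M39M double-check
```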

VBCurtis 2015-12-25 02:31

[QUOTE=bgbeuning;417562]Score! I would like to hear your experiences with it. I do not see Keyboard, Video, Mouse (KVM) connections, just Ethernet connections.[/QUOTE]

I have the server. It's a full generation older than the C6100, using Xeon 5420 and DDR2. On the bright side, the 8 slots of DDR2 are *not* tied to one processor, so I have 8 full cores sharing each node's memory. I think that's better for NFS matrices than NUMA-style.

Each node has a VGA port, 2 USB, 2 GigE network, and 3 hotswap SATA drive bays. Ubuntu 14.04 installed off a USB stick without issue- no drivers nor special treatment needed.

4 nodes, each node 2x4-core@2.5ghz + hyperthreading. A variety of scrounged memory in it, currently populated 12/5/4/4GB for the 4 nodes (lots of 512MB ECC sticks!). 6 Dell 250GB server-grade disks included, as well as all 12 3.5" drive trays.

With 16 threads of sr2sieve, 16 of NFS, 21 ECM, my kill-a-watt says 660 watts. I will not need a home heater this week. I need to get a network switch and more cabling before I tinker with LinuxPMI and try to run a single NFS job over the entire machine.

Madpoo 2015-12-25 08:48

[QUOTE=VBCurtis;420133]...the 8 slots of DDR2 ... A variety of scrounged memory in it, currently populated 12/5/4/4GB for the 4 nodes (lots of 512MB ECC sticks!)...[/QUOTE]

You should PM me your details. If memory serves me correctly, I have a TON of older DDR2 memory modules (ECC). They were pulled out of HP Proliants whenever I had to update the memory and was unable to use the originals. In some cases these are brand new, never been used. It was sometimes cheaper to order with the base memory and then add-on later, rather than do a custom order.

In fact, let me go check right now...

Okay, I have:
74 x 1GB PC2-5300 ECC modules (official HP spares, #416471-001)

I have a feeling I have more somewhere, but these are just the ones I have in my "junk memory drawer". Don't we all have one of those?

So, let's face it, 1 GB sticks don't do much good for me when I need servers with 192-256 GB (and everything I have now uses DDR3/DDR4), and it's easy to look back and see how I wound up with so many. Plus, when we send servers off to the great recycling plant in the sky, I feel bad sending their memory to be shredded, so I hoard them.

I'd *love* to send them off to someone who could use them. Let me know.

lavalamp 2015-12-26 22:43

Speaking of memory, I have some bog standard non-ECC memory. 2*1GB of DDR (possibly 1 stick faulty, could be the motherboard though, not sure), and 2*1 GB and 2*2GB DDR2. If anyone wants these I can post them, but otherwise I'll just throw them away.

bgbeuning 2015-12-31 13:08

Dell C410x
 
If you like GPU, you may like this item on ebay

[url]http://www.ebay.com/itm/Dell-CloudEdge-C410x-16x-M2090-GPU-PCI-Exansion-Systems-2x-PW-4x-HBA-with-cable-/151931459802?hash=item235fd218da:g:B9AAAOSwSdZWb19I[/url]

0PolarBearsHere 2016-01-08 07:00

Anyone want to give me $14000 so I can buy this?
Dual Intel Xeon Processor E5-2667 v3 (8C HT)
256GB DDR4
Dual NVIDIA Quadro K6000 12GB

Madpoo 2016-01-08 21:33

[QUOTE=0PolarBearsHere;421532]Anyone want to give me $14000 so I can buy this?
Dual Intel Xeon Processor E5-2667 v3 (8C HT)
256GB DDR4
Dual NVIDIA Quadro K6000 12GB[/QUOTE]

Seems like most of the $14K is in the K6000's... probably get a cheaper setup by forgoing the Xeons and 256GB (unless that was needed for something else) :smile: Of course a good dual Xeon with that much memory is VERY nice, I have to admit.

0PolarBearsHere 2016-01-08 22:27

[QUOTE=Madpoo;421584]Seems like most of the $14K is in the K6000's... probably get a cheaper setup by forgoing the Xeons and 256GB (unless that was needed for something else) :smile: Of course a good dual Xeon with that much memory is VERY nice, I have to admit.[/QUOTE]

I suppose I don't really need to RAM disk my entire computer :P

chalsall 2016-01-08 22:46

[QUOTE=0PolarBearsHere;421590]I suppose I don't really need to RAM disk my entire computer :P[/QUOTE]

Most likely not...

But thanks for demonstrating that distraction is worth money to some, and that you should be ignored....

Edit: Sorry, that might come across as negative...

kladner 2016-01-09 03:07

Umm.....I thought it was a humorous remark. :huh:

0PolarBearsHere 2016-01-09 10:43

[QUOTE=kladner;421604]Umm.....I thought it was a humorous remark. :huh:[/QUOTE]

I'm not worried, the Australian sense of humour is often misinterpreted :)

VBCurtis 2016-01-13 23:25

[QUOTE=Madpoo;420160]You should PM me your details. If memory serves me correctly, I have a TON of older DDR2 memory modules (ECC). [...]
Okay, I have:
74 x 1GB PC2-5300 ECC modules (official HP spares, #416471-001) [...]
Plus, when we send servers off to the great recycling plant in the sky, I feel bad sending their memory to be shredded, so I hoard them.

I'd *love* to send them off to someone who could use them. Let me know.[/QUOTE]

Madpoo turned out to have *much* more than dozens of 1GB sticks. He mailed me a complete set of 32 sticks of 4GB DDR2 ECC, filling the server with 4 nodes at 32GB each. The memory is now installed, and will be used to run large-bound ECM curves on M1277 on a thread or two while the rest of the nodes run NFS tasks.

Thanks, Aaron! If M1277 falls to my ECM, you deserve credit for making it possible.

Madpoo 2016-01-14 16:28

[QUOTE=VBCurtis;422319]Thanks, Aaron! If M1277 falls to my ECM, you deserve credit for making it possible.[/QUOTE]

I'm glad that's one of the things you'll be using it for. I (temporarily?) gave up my own ECM hunt on M1277 since I was focusing resources on the strategic double-checking, but I still think that was a fun side-project.

Gordon 2016-01-18 16:52

[QUOTE=VBCurtis;422319]Madpoo turned out to have *much* more than dozens of 1GB sticks. He mailed me a complete set of 32 sticks of 4GB DDR2 ECC, filling the server with 4 nodes at 32GB each. The memory is now installed, and will be used to run large-bound ECM curves on M1277 on a thread or two while the rest of the nodes run NFS tasks.

Thanks, Aaron! If M1277 falls to my ECM, you deserve credit for making it possible.[/QUOTE]

32gig doesn't go far, with B1=800M

B2=3e14 needs 17 gig

B2=4e14 needs 34 gig

Think you might be disappointed...

VBCurtis 2016-01-18 18:04

Those B2's are the defaults matched with B1 = 4e9 and up, which is quite large. I have no idea why you are quoting such B2 values with B1 = 800M!

I have chosen to run B1 = 4.5e9 on P95 and B2 = 150e12 (k = 3, so I could go higher but expected t70 factor time rises if I do), which indeed requires 17GB for Stage 2. One thread of stage 2 per node means roughly 25 curves per day. These settings are within 2% of the best expected time for t70 (I tested 100M increments of B1 from 9e8 to 6e9). 15000 curves for a t65, 69000 for a t70.

I learned the power supplies are not strong enough for these servers: I had to turn memory down to 533 from 667 (using a "low power" setting in BIOS) and remove 8GB from two of the nodes to prevent the power supplies from shutting down when all 32 cores are in use. I also removed all but one hard drive from each node; my Kill-A-Watt suggests the 500 W-rated power supplies shut down when wall-plug draw exceeds 460 W. Too bad they're rather non-standard supplies...
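Taking those throughput figures at face value (25 curves per day per node and one stage-2 thread on each of the 4 nodes, which is my reading of the numbers above), the wall-clock commitment works out as:

```python
curves_per_day_per_node = 25  # assumption: "roughly 25 curves per day" is per node
nodes = 4

def days_needed(total_curves: int) -> float:
    """Calendar days to finish a curve count at the assumed aggregate rate."""
    return total_curves / (curves_per_day_per_node * nodes)

print(days_needed(15000))  # 150.0 days for a t65
print(days_needed(69000))  # 690.0 days for a t70
```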

xilman 2016-01-18 18:57

[QUOTE=Gordon;422900]32gig doesn't go far, with B1=800M

B2=3e14 needs 17 gig

B2=4e14 needs 34 gig

Think you might be disappointed...[/QUOTE]People, please!

Learn how to use GMP-ECM with -maxmem

It's how I run seriously high B2 on memory constrained processors. Which would you rather have: a 10% performance drop by constraining memory usage or a 100% performance drop by not doing so?

Example:
[code]
pcl23@brnikat:~/ls/nums$ ps ax | grep ecm
13707 pts/3 R 0:15 ecm -maxmem 1024 850000000
13711 pts/3 R 0:08 ecm -maxmem 1024 850000000
13713 pts/3 R 0:03 ecm -maxmem 1024 850000000
13717 pts/3 R 0:01 ecm -maxmem 1024 850000000
13719 pts/3 R+ 0:00 grep ecm
pcl23@brnikat:~/ls/nums$ head -1 /proc/meminfo
MemTotal: 16311512 kB
pcl23@brnikat:~/ls/nums$
[/code]
Four ecm processes with B1=850M running on a 16G machine with plenty left over for Real Work™

lavalamp 2016-01-18 19:57

I have found that GMP-ECM seems to somewhat overestimate the amount of memory it will use, so I have shied away from using -maxmem. Instead I have preferred to use -k and -treefile to manually tune it to the system I am running on.

To give a particular example, on the number 10^999 + 13, I ran GMP-ECM with B1=2.9E9 and B2=1E14, and with the options -k 5 and -treefile. GMP-ECM quoted an estimated 20GB memory usage, yet never used more than about 15GB.

In reply to VBCurtis:[QUOTE=VBCurtis;422906]Those B2's are default matched up with B1 = 4e9 and up, which is quite large. I have no idea why you are quoting such B2 values with 800M![/QUOTE]

Perhaps this quotation may be helpful:
[QUOTE=R.D. Silverman;262945]Optimality is achieved when the algorithm spends as much time in step 2
as it does in step 1. The B2/B1 ratio depends on the relative speed
of the two steps.[/QUOTE]

So perhaps with some careful tuning of B2, -k and maybe -treefile:

1) Dedicate half of all cores to stage 1
2) Dedicate the other half to stage 2
3) ???
4) Profit!

Madpoo 2016-01-18 22:50

[QUOTE=VBCurtis;422906]Those B2's are default matched up with B1 = 4e9 and up, which is quite large. I have no idea why you are quoting such B2 values with 800M!

I have chosen to run B1 = 4.5e9 on P95 and B2 = 150e12 (k = 3, so I could go higher but expected t70 factor time rises if I do), which indeed requires 17GB for Stage 2. One thread of stage 2 per node means roughly 25 curves per day. These settings are within 2% of the best expected time for t70 (I tested 100M increments of B1 from 9e8 to 6e9). 15000 curves for a t65, 69000 for a t70.

I learned the power supplies are not strong enough for these servers- I had to turn memory down to 533 from 667 (using a "low power" setting in BIOS) and remove 8GB from two of the nodes to prevent the power supplies from shutting down when all 32 cores are in use. I also removed all-but-one hard drive from each node; my kill-a-watt suggests the 500w-rated power supplies shut down when wall plug draw exceeds 460w. Too bad they're rather non-standard supplies..[/QUOTE]

You *might* get better power savings by simply disabling turbo mode on the CPUs. In that 4-way system you have 8 CPUs and if you were able to limit each CPU by around 10-15 W, you're talking about some savings there.

Reason you might want to do that is you could get more bang for the watt by having faster memory and slightly reduced CPU speed.

I don't know the BIOS settings on that setup... on HPs it would be done with setting a power cap, among other possibilities. But something that lets you control c-states or specifically disable turbo.

VBCurtis 2016-01-19 01:41

[QUOTE=Madpoo;422927]You *might* get better power savings by simply disabling turbo mode on the CPUs. In that 4-way system you have 8 CPUs and if you were able to limit each CPU by around 10-15 W, you're talking about some savings there.

Reason you might want to do that is you could get more bang for the watt by having faster memory and slightly reduced CPU speed.
[/QUOTE]

Good suggestion- unfortunately, I already did that. My only options in this basic BIOS are CPU multipliers; stock 2.5 GHz is a 7.5 multiplier. I'm running them all at 7.0 = 2.33 GHz, and slowing to 6 = 2.0 GHz didn't help / isn't worth the lost productivity. If it didn't take 15 minutes to access the lower pair of nodes via 20-ish screws, I'd pull 8GB out of those, put the memory back to 667, and hope. I'll get around to that eventually...

VBCurtis 2016-01-19 02:00

[QUOTE=lavalamp;422911]I have found that GMP-ECM seems to somewhat overestimate the amount of memory it will use, so I have shied away from using -maxmem. Instead I have preferred to use -k and -treefile to manually tune it to the system I am running on.
[/QUOTE]

My M1277 work has two priorities:
1. Minimum expected-time to complete a t70, as determined by the -v flag of GMP-ECM.
2. Within 2-3% of the minimum time for t70, maximize bounds to improve chances to find factors larger than 70 digits.

The RDS quote mentioned has nothing to support it, even the runs in his own paper spend 40% of stage 1 time on stage 2. I've gone a few rounds with RDS on this, and believe that GMP-ECM does not obey his theory.

As for using -maxmem, I am specifically trying to make time-efficient use of the large memory in these machines, with 6-7 of the 8 cores doing NFS work. I am not interested in dedicating more than 2 cores to ECM per machine, so there's no reason to sacrifice even 10% of performance(?). I'm trying to do something with the 32GB that I can't do with 16GB at home. I have no evidence that running enormous B2's is better than the bounds I've chosen for my first set of curves, but perhaps I should be trying to minimize t75 time since anyone with 16GB can run t70 curves relatively efficiently.

bgbeuning 2016-01-20 01:06

[QUOTE=bgbeuning;420678]If you like GPU, you may like this item on ebay

[url]http://www.ebay.com/itm/Dell-CloudEdge-C410x-16x-M2090-GPU-PCI-Exansion-Systems-2x-PW-4x-HBA-with-cable-/151931459802?hash=item235fd218da:g:B9AAAOSwSdZWb19I[/url][/QUOTE]

Some more info about the C410x:
1. It requires 220 VAC power.
2. The web interface to configure the iPass ports has a username / password. There is no way to reset the password, so if you buy a used box and the original owner changed the default (root / root) credentials, then you get to play hacker before you can configure the box.

