mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Hardware (https://www.mersenneforum.org/forumdisplay.php?f=9)
-   -   Bargain Hardware Thread (https://www.mersenneforum.org/showthread.php?t=3890)

Mark Rose 2014-11-06 14:09

There's a good thread about Xeon Phi: [url]http://www.mersenneforum.org/showthread.php?t=18223[/url]

These bargain cards aren't very promising.

xilman 2014-11-06 15:07

[QUOTE=Mark Rose;387020]There's a good thread about Xeon Phi: [url]http://www.mersenneforum.org/showthread.php?t=18223[/url]

These bargain cards aren't very promising.[/QUOTE]TBH, I'm more interested in ECM and NFS on rather smaller numbers. It's not clear at the moment whether Phi will be cost effective in that regime. As noted, I need to investigate further.

VBCurtis 2014-11-06 17:10

I have no background with code, but I am interested in participating in the 10-pack if there is a simple way to run ECM or NFS sieving on it.

Xyzzy 2014-11-06 19:04

If George is willing to "play with it" we are certain that a quick forum fundraiser could provide him with one, especially at the discounted price from the 10-pack deal.

LaurV 2014-11-07 02:58

[QUOTE=Xyzzy;387031]If George is willing to "play with it" we are certain that a quick forum fundraiser could provide him with one, especially at the discounted price from the 10-pack deal.[/QUOTE]
As said, I am in, for this particular goal. Let's say, 50 bucks (additional to the cost of my card plus shipping, from the 10-pack).
Mike, I think you may be willing to buy the pack, hehe, considering your geographical position and the fact that we did business before, I'll vote for you if it is not much trouble.

Prime95 2014-11-07 03:31

[QUOTE=Xyzzy;387031]If George is willing to "play with it" we are certain that a quick forum fundraiser could provide him with one, especially at the discounted price from the 10-pack deal.[/QUOTE]

Don't go there. AFAIK, there is no MASM support for Xeon Phi. The thought of rewriting all that assembly code in some other form makes me shudder.

Batalov 2014-11-07 03:42

...writing a compiler-compiler could be fun, too! ;-)

ewmayer 2014-11-07 21:59

I see NewEgg is having a [url=http://promotions.newegg.com/NEemail/Nov-0-2014/72HRSEARLYBLACKFRIDAYSALE_07/index-landing.html]72-hour early Black Friday sale[/url].

petrw1 2014-11-07 23:11

How much do the specifics matter for GIMPS...
 
I see a GTX970 anywhere from $370 - $477.

There are differences in the brand names and specs but I don't know which, if any, matter to GIMPS for GPU factoring or LL.
I suspect these matter, or do they?
[CODE]Core Clock: 1076 MHz
Boost Clock: 1216 MHz
CUDA Cores: 1664[/CODE]
They are all: 4GB 256-Bit (G)DDR5

I don't need to know what all of the following mean; only whether they matter to throughput for GIMPS,
and whether they matter enough to justify the extra $100.
I am NOT a gamer and if the Grandkids become extreme gamers they can buy their own PC's. :P

[CODE]SLI? G-Sync? PCI? HDCP? ACX?
Extreme? SuperClocked?[/CODE]

Mark Rose 2014-11-08 00:56

As long as the percentage increase in Core/Boost clock is bigger than the percentage increase in the dollar amount, it's a better deal for GIMPS. All the 970s will have the same number of CUDA cores. The higher clocked cores will use a little more electricity.

The 970 is very overclockable, especially the core clock, which is what GIMPS needs. If you're comfortable with overclocking yourself, I'd read the reviews and find the card with the best cooling.
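The rule of thumb above (the clock gain must outpace the price premium) can be sketched as a quick comparison. This is just an illustration: the clocks and prices below pair the low and high figures from petrw1's post, not real listings.

```python
# Sketch of the rule of thumb above: between two GTX 970s with the same
# CUDA core count, the pricier card is the better GIMPS deal only if its
# core-clock advantage exceeds its price premium.
# The specific pairings of clock and price are made-up illustrations.

def better_deal(clock_a_mhz, price_a, clock_b_mhz, price_b):
    """Return 'A' or 'B' for the card offering more core clock per dollar."""
    value_a = clock_a_mhz / price_a
    value_b = clock_b_mhz / price_b
    return "A" if value_a >= value_b else "B"

# Base card: 1076 MHz for $370; factory-overclocked card: 1216 MHz for $477.
# The clock is up ~13% but the price is up ~29%, so the cheap card wins.
print(better_deal(1076, 370, 1216, 477))  # → A
```

By this measure the ~13% clock gain does not justify a ~29% price premium, which matches the advice above.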

Xyzzy 2014-11-08 03:34

Too bad the 970 doesn't use the same reference cooler as the 980.

We much prefer exhausting heat outside the case rather than inside the case.

kladner 2014-11-08 15:19

EVGA and PNY offer reference-type coolers on the 970.
[url]http://www.newegg.com/Product/ProductList.aspx?N=100007709%20600536049&IsNodeId=1&Submit=ENE[/url]

Xyzzy 2014-11-08 20:14

We like that they reused the Titan cooler for the 980. (They did change from a vapor chamber to a heat pipe. And they added a backplate.)

[url]http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/8[/url]

kladner 2014-11-09 04:57

[QUOTE=Xyzzy;387208]We like that they reused the Titan cooler for the 980. (They did change from a vapor chamber to a heat pipe. And they added a backplate.)

[URL]http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/8[/URL][/QUOTE]

That is sweet looking, for sure. The brands I mentioned are nowhere near as elegant.

tha 2014-11-11 20:46

On the Dutch version of E-Bay a Tesla C2050 3Gb GPU GDDR5 NVIDIA is being offered for € 695,- It is said to be in good condition, almost new and refurbished by a dealer. How do we value such an offer?

Xyzzy 2014-11-11 22:23

2 Attachment(s)
Some thoughts:

At one time we had four GTX570 cards running. They were so power-hungry and threw off so much heat that we had to put each one into a separate computer. Each card was three slots wide, which posed a physical issue as well. Note that this was when the CPU did the sieving so that was another reason we were forced to run one card per box.

They churned out some serious work, and at one point we think we were at #1 for TF and #4 overall, both using the "lifetime" metric. We think we were outputting around 1,600GHd/d with that setup. The current draw was so high that we had to dedicate a branch circuit for them and the room the boxes were in used to hit 85°F in the summer, no matter what we set the thermostat to. However, the ridiculous cost to power and cool the boxes really hit us hard and eventually we had to retire them. This has cost us dearly in our overall standing on the leaderboard since nearly everyone else has passed us by.

So today we received two GTX980s. They both fit into one computer case and together they easily output 1,200GHd/d with little fuss. We have not measured their current draw but thermally they feel like they output the same heat as one 570. We are using a non-dedicated branch circuit with no problem as well. And, the CPU gets to run P-1 testing since GPU sieving is now thankfully an option.

The good:

They are relatively quiet.
They are very sturdy and do not sag at all, unlike our previous video cards.
They are energy efficient and relatively inexpensive.
One $600 980 does more TF work than a $1,000 Titan. (We paid that much for a Titan. They are probably cheaper now.)
One $600 980 does about the same work as a $1,000 690. (We paid that much for a 690. They are probably cheaper now.)
They have a three year warranty from a company known for being honest about warranties.

The bad:

The fan BIOS curve is weighted towards lower acoustics, so OOTB they throttle very quickly.
One of the cards was sent to us with a missing rubber SLI connector cover. We will have to contact EVGA about that.
We ran out of video card slots so our 750 is now homeless. (The 750 is good for around 175GHd/d.)
Our 570 cards were shipped in very exquisite packaging whereas the 980s came in ordinary packaging.

The ugly:

We have to run Windows to modify the fan curves and other GPU options. Plus, GPU-Z is a Windows program.
We have not figured out how to use the onboard (Intel) graphics for the display rather than one of the cards.

Random notes:

We spent several hours playing with all sorts of parameters to see where the "sweet spot" was for performance. After much testing, in mfaktc.ini we set "GPUSievePrimes=82486", "GPUSieveSize=128" and "GPUSieveProcessSize=32". We set the "power target" for the card to 125%, the temperature "limiter" to 79°C and the fan curve to "1:1". We did not boost the memory or GPU clock beyond the "stock" values, which on these cards are "superclocked" from the factory.

The cards are very willing to pump out about 25-30 GHd/d more each if we allow the temps to go higher. Running the fans faster helps a little but gets a bit loud over 85%. The voltage and power required to go a bit faster appear to increase rapidly. The cards will output around 430GHd/d each if we set the "power target" to 75%, at which point the fans go near silent and the power and voltage drop to minimal levels. Various reviews show that there is a lot of headroom to overclock the GPU clock, but we are happy with what we have.

We let both cards run in the case for over an hour to make sure everything had stabilized WRT heat. With our settings, no matter how high the ambient temperature gets, the cards will throttle to stay under 80°C. The fan curve adjusts the fans every second, so the card temperature is very stable.

In the pictures attached you may notice that each card uses the reference cooling design, which appears to be an almost totally enclosed design. Fortunately our "overclocker-friendly" motherboard has the cards spaced so there is room between them. The heat from the cards is almost entirely dumped outside the case, with a little leakage around the PCI connector and the missing SLI cover.

GPU-Z tells us which metric the card is throttling on. In our case it is based on the thermal sensors. When the cards first start up the limiter is the voltage sensor, which then switches to the power sensor as the card heats up a bit. Finally they switch to throttling via the temperature sensor.
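For reference, the mfaktc.ini settings quoted above would appear in the file as plain key=value lines (values exactly as given in the post; the power target, temperature limit and fan curve are set in the driver/overclocking tools, not in this file):

```ini
GPUSievePrimes=82486
GPUSieveSize=128
GPUSieveProcessSize=32
```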
We have Prime95 running one P-1 test across four cores. During stage one the CPU temperature is around 63°C and during stage two it is around 55°C. We have 12GiB dedicated to P-1 testing. The onboard video on our motherboard has only DisplayPort and HDMI outputs. We have an HDMI-to-VGA adapter, which works, but we are unable to run the display off of the onboard video and have the graphics cards work with CUDA. If anyone knows a way around this we think we can get a little extra work from the 980 that is currently running the system video.

:mike:

VictordeHolland 2014-11-12 00:11

[QUOTE=tha;387429]On the Dutch version of E-Bay a Tesla C2050 3Gb GPU GDDR5 NVIDIA is being offered for € 695,- It is said to be in good condition, almost new and refurbished by a dealer. How do we value such an offer?[/QUOTE]
A C2050 is basically a GTX470 with ECC and more memory. For TF you will be much better off with a used GTX580 or a new GTX970. For CUDALucas the ECC memory is nice, but GPUs should really be doing TF instead of LL.

xilman 2014-11-12 10:55

[QUOTE=fivemack;387008][url]https://software.intel.com/en-us/articles/special-promotion-intel-xeon-phi-coprocessor-31s1p[/url]

...


I don't know whether there are seven people on the forum who might want one, but Colfax offers a ten-pack for $1290 so if there are more than six people it would be worth putting in a bulk order.
[/QUOTE]Another nice link

[url]http://www.phoronix.com/scan.php?page=news_item&px=MTgzNjY[/url]

I'm now getting keener on buying one, or just possibly two, of these.

Paul

ldesnogu 2014-11-12 11:52

[QUOTE=xilman;387470]Another nice link

[URL]http://www.phoronix.com/scan.php?page=news_item&px=MTgzNjY[/URL]

I'm now getting keener on buying one, or just possibly two, of these.[/QUOTE]
I would be in if these cards ran on more than a few motherboards and didn't require a very significant airflow in the case (they are passively cooled but the TDP is 270W). Too bad, the offer is definitely very attractive :no:

Xyzzy 2014-11-12 15:42

The link and the forum thread there suggest that you can only use Intel's compiler. Wouldn't that be a major disadvantage?

xilman 2014-11-13 09:51

This Twitter conversation may also prove interesting and/or discouraging.

[url]https://twitter.com/9600/status/532798000918454272[/url]

pinhodecarlos 2014-11-13 09:54

[QUOTE=xilman;387540]This Twitter conversation may also prove interesting and/.or discouraging.

[URL]https://twitter.com/9600/status/532798000918454272[/URL][/QUOTE]

I think they need to use heat pipes from Spirax Sarco...lol

fivemack 2014-11-13 11:57

[QUOTE=Xyzzy;387484]The link and forum thread there suggests that you can only use Intel's compiler. Wouldn't that be a major disadvantage?[/QUOTE]

Intel's compiler is reasonably compatible and generates rather good code.

However it appears that Intel has removed the offer which used to provide their compiler free for non-commercial use :no:

ldesnogu 2014-11-13 12:16

[QUOTE=fivemack;387545]Intel's compiler is reasonably compatible and generates rather good code.

However it appears that Intel has removed the offer which used to provide their compiler free for non-commercial use :no:[/QUOTE]
So one has to buy a [URL="https://software.intel.com/en-us/intel-parallel-studio-xe/try-buy"]$2949 compiler[/URL] (since you need MPI, don't you?) to go along with a <$200 card. The offer is suddenly much less attractive.

VictordeHolland 2014-11-13 12:38

[QUOTE=ldesnogu;387546]So one has to buy a [URL="https://software.intel.com/en-us/intel-parallel-studio-xe/try-buy"]$2949 compiler[/URL] (since you need MPI, don't you?) to go along an <$200 card. The offer is suddenly much less attractive.[/QUOTE]
Fail, Intel!
CUDA and OpenCL compilers are free and so is the GCC (obviously).

ewmayer 2014-11-13 22:10

[QUOTE=ldesnogu;387546]So one has to buy a [URL="https://software.intel.com/en-us/intel-parallel-studio-xe/try-buy"]$2949 compiler[/URL] (since you need MPI, don't you?) to go along an <$200 card. The offer is suddenly much less attractive.[/QUOTE]

Oliver's || builds of my Mlucas code in generic-C mode (cf. [URL="http://www.mersenneforum.org/showthread.php?t=18223&page=2"]here[/URL]) indicate that pthread support is all one needs. (Unless Intel is using MPI to support pthreads - no clue about that, or even if it makes any sense, I've not used MPI, at least not knowingly.)

So I take it GCC cannot generate machine code for Phi?

petrw1 2014-11-14 05:56

Gee, thanks Xyzzy for all the research and info...
 
...and the rest of you too.

ldesnogu 2014-11-14 12:31

[QUOTE=ewmayer;387575]Oliver's || builds of my Mlucas code in generic-C mode (cf. [URL="http://www.mersenneforum.org/showthread.php?t=18223&page=2"]here[/URL]) indicate that pthread support is all one needs. (Unless Intel is using MPI to support pthreads - no clue about that, or even if it makes any sense, I've not used MPI, at least not knowingly.)[/QUOTE]
I'm afraid I don't know any more than you do. But Intel clearly mentions the $2949 cluster edition on [URL="https://software.intel.com/en-us/articles/intel-and-third-party-tools-and-libraries-available-with-support-for-intelr-xeon-phitm"]this page[/URL].

[QUOTE]So I take it GCC cannot generate machine code for Phi?[/QUOTE]GCC's vectorizer isn't up to the task, according to Intel, so GCC is only used to compile the Linux kernel for Phi. Look at the note at the bottom of the above page:
[quote]Our changes to the GCC tool chain, available as of June 2012, allow it to build the coprocessor’s Linux environment, including our drivers, for the Intel(R) Xeon Phi(tm) Coprocessor. The changes do not include support for vector instructions and related optimization improvements. GCC for Intel(R) Xeon Phi(tm) is really only for building the kernel and related tools; it is not for building applications. Using GCC to build an application for Intel Xeon Phi Coprocessor will most often result in low performance code due its current inability to vectorize for the new Knights Corner vector instructions. Future changes to give full usage of Knights Corner vector instructions would require work on the GCC vectorizer to utilize those instructions’ masking capabilities.[/quote]

So I'm afraid the reduced price Phi is basically of no use unless you're ready to spend another $3k :(

ewmayer 2014-11-14 22:42

I pinged Oliver about the compiler issue - his reply:
[quote]To my knowledge it is still the case that only the Intel compiler is capable of generating code for the Phi.

Are you sure that the non-supported non-commercial version of the compiler isn't available anymore? I don't know because I have a "gold-license" for the software, which has no (technical) restrictions at all, so I don't care about the non-supported non-commercial version.

About the 31S1P: Keep in mind that

- it requires proper cooling
- to my knowledge it won't run on most consumer boards (Xeon Phi needs explicit support from the BIOS!)

On the other hand the price for the 31S1P is hot... really HOT. This is what Intel needs... a CHEAP variant of Xeon Phi for experiments. I mean every entry level nvidia card for 50 US$ can do CUDA.. not really fast but you can try.[/quote]
I wonder if a "designated builder" model would be viable for our potential "local enthusiasts club" here - approach 1 or 2 people with the needed compile tools about their willingness to do-build/shoot-back-executable, then do whatever code development you need locally (using GCC) and when it seems ready for a Phi build, shoot code tarball to builder.

But first we need to settle the issue as to whether the non-com version of ICC is/is-not still available.

ewmayer 2014-11-15 01:55

[QUOTE=ewmayer;387657]I wonder if a "designated builder" model would be viable for our potential "local enthusiasts club" here - approach 1 or 2 people with the needed compile tools about their willingness to do-build/shoot-back-executable, then do whatever code development you need locally (using GCC) and when it seems ready for a Phi build, shoot code tarball to builder.[/QUOTE]
Or better, someone with the needed tools willing to provide guest accounts for remote logins.

[quote]But first we need to settle the issue as to whether the non-com version of ICC is/is-not still available.[/QUOTE]

[url=https://software.intel.com/en-us/non-commercial-software-development]Non-Commercial Software Development[/url] | Intel

[url=https://en.wikipedia.org/wiki/Intel_C%2B%2B_Compiler]Intel C++ Compiler[/url] | Wikipedia

My main specific interest here is in the coming AVX512 step in the evolution of Xeon Phi, but I also have a general interest in the emerging manycore paradigm, which IMO (and this was Intel and AMD's own doing, or better non-doing) has been overly centered on nVidia. Competition here can only be good: nVidia's efforts have forced mighty Intel to race and try to catch up, and Intel's efforts will (hopefully) lead nVidia to make their software tools less reliant on proprietary API-hooks - as I've said elsewhere, one thing I think Intel got right is the "take your standards-compliant multithreaded code, let our compiler map it to our custom hardware" use model. Imagine how "portable manycore" code of (say) 5-10 years out would look if each of a half-dozen major manycore vendors required their own custom parallel-coding API to be used.

LaurV 2014-11-15 04:32

I am still in for one, in case someone collects the money to buy a carton box of them. I may be throwing away ~130 bucks (plus taxes) and just store it in the cabinet until its time will come, but well, there is a small risk I can afford. It may be that I could use it in the future, even learn something from it trying to write small pieces of code. It may be not...

kladner 2014-11-20 16:51

[URL="http://promotions.newegg.com/NEemail/Nov-0-2014/BlackNovembeRefurbishedSale_20/index-landing.html"]Refurb sale[/URL] at New Egg.

petrw1 2014-11-27 16:02

Is it just my imagination.....NASA-like display.
 
===== I HAVE =======
I currently have a 4-port KVM ... Keyboard/Video/Mouse.
4 PC's ... 1 Keyboard / 1 Mouse / 1 Video Monitor

I plug it into 4 PCs and at the push of a button I can control which of the 4 is controlled by Keyboard/Mouse and which is displayed on my monitor.

======== I WISH I HAD ======

HOWEVER...now I am dreaming and envisioning a device (hardware/software/firmware) that can cut my monitor into 4 quadrants and display the screens for all 4 PCs at the same time.

AND....where on the screen I move my mouse determines which PC interacts with the Keyboard/Mouse.

I realize I would need a big monitor with good resolution...and very good Glasses :)

Has anyone ever heard of such a thing?
Or is this just a lot of wishful thinking?

axn 2014-11-27 16:20

[QUOTE=petrw1;388550]======== I WISH I HAD ======

HOWEVER...now I am dreaming and envisioning a device (hardware/software/firmware) that can cut my monitor into 4 quadrants and display the screens for all 4 PCs at the same time.

AND....where on the screen I move my mouse determines which PC interacts with the Keyboard/Mouse.

I realize I would need a big monitor with good resolution...and very good Glasses :)

Has anyone ever heard of such a thing?
Or is this just a lot of wishful thinking?[/QUOTE]

Four remote desktop / VNC windows? Wouldn't that take care of your requirement?

kracker 2014-11-27 17:08

[QUOTE=axn;388552]Four remote desktop / VNC windows? Wouldn't that take care of our requirement?[/QUOTE]

+1
TeamViewer FTW.

TheMawn 2015-02-12 01:47

Two GTX 580's apparently never overclocked, waterblocks AND original coolers included, bidding at $100 US and buyout at $250 US. I've never strayed into used parts before (I do look occasionally though) but this deal is tempting me... I don't know if I can manage the extra heat into the liquid cooling system or the noise of the air coolers though.

Mark Rose 2015-02-12 05:05

[QUOTE=TheMawn;395271]Two GTX 580's apparently never overclocked, waterblocks AND original coolers included, bidding at $100 US and buyout at $250 US. I've never strayed into used parts before (I do look occasionally though) but this deal is tempting me... I don't know if I can manage the extra heat into the liquid cooling system or the noise of the air coolers though.[/QUOTE]

Each GTX 580 will dump 200-250 watts into your coolant loop at stock clocks. That's a lot of radiator to add. I've never done water cooling because it's way cheaper to run on air.

I picked up a GTX 580 for $75 CAN on Saturday, off Kijiji. That's about $60 US at the moment. I had an available power supply that could handle it, so it was too tempting to not buy. I see another for sale at $80 CAN OBO, and [URL="http://www.ebay.ca/itm/Asus-GTX580-DirectCUII-/191505691555"]this one on eBay[/URL] for $72 CAN shipping included.

Xyzzy 2015-02-12 05:30

It would be neat to use a bunch of water-cooled video cards to heat a home's water supply.

Mark Rose 2015-02-12 06:06

[QUOTE=Xyzzy;395289]It would be neat to use a bunch of water-cooled video cards to heat a home's water supply.[/QUOTE]

Shouldn't be any harder than leaving the computers in the hypocausts.

kladner 2015-02-12 17:22

[QUOTE=Xyzzy;395289]It would be neat to use a bunch of water-cooled video cards to heat a home's water supply.[/QUOTE]

The water plan would probably work best with a heat exchanger coil in a preheat tank. This could then be fed to a conventional water heater, which would not have to work as hard. If you wanted to get fancy, the heat exchanger could be part of a circulating system, which would probably extract more heat than a passive setup.

paulunderwood 2015-08-04 23:01

I know next to nothing about GPUs, but would these [URL="http://www.ebay.com/itm/NVIDIA-Tesla-M2090-6GB-GDDR5-PCIe-x16-GPU-Computing-Processor-VIDEO-CARD-/281723734547?pt=LH_DefaultDomain_0&hash=item41980b0a13"]Tesla M2090[/URL] be of interest at $145 :whistle:

Or [URL="http://www.amazon.com/Nvidia-Tesla-M2090-Gpu-Card/dp/B005TJKPWU"]this one[/URL] for $160?

chalsall 2015-08-04 23:25

[QUOTE=paulunderwood;407248]I know next to nothing about GPUs, but would these [URL="http://www.ebay.com/itm/NVIDIA-Tesla-M2090-6GB-GDDR5-PCIe-x16-GPU-Computing-Processor-VIDEO-CARD-/281723734547?pt=LH_DefaultDomain_0&hash=item41980b0a13"]Tesla M2090[/URL] be of interest at $145 :whistle:[/QUOTE]

I often rent two (2) M2050s in an AWS cg1.4xlarge instance for about $0.14 an hour (~ $100 a month) and get about 280 GHz Days / Day out of each. So, definitely, at $145 each these would be attractive to someone who didn't spend as much as I do on local electricity!

Mark Rose 2015-08-05 01:36

I can get used GTX 580's for CA$100 locally that trial factor at about 430 GHz-d/d at roughly the same wattage and include built in cooling.

They might be a better deal for applications requiring double precision floating point though.

bgbeuning 2015-11-20 01:28

Evaluating bargains
 
I am looking at buying some machines to crunch on prime stuff.
There are lots of options, from ebay used machines, to off the
shelf machines, to build your own machines. I have been looking
for a scoring method to plug in some numbers to pick the best
machines, where best is most crunching for least money.

The scoring should use the prime95 benchmark (p=70M) pages,
the number of cores in the machine, and the price.
The formula

score = benchmark / cores * cost

where a lower score is better sounds reasonable to me.

Lets try a couple of examples.

"HP Desktop Computer Z210 XEON E3-1240 (3.30 GHz) 4 GB DDR3 250 GB HDD"
[URL]http://www.newegg.com/Product/Product.aspx?Item=N82E16883282047[/URL]

benchmark = 23.21
cores = 4
price = $295
score = 1711

"Dell C6100 XS23-TY3 Server 8x 2.26GHz 4C E5507 96GB"
[URL]http://www.ebay.com/itm/Dell-C6100-XS23-TY3-Server-8x-2-26GHz-4C-E5507-96GB-4x-73GB-2-5-HDD-SAS-1068E-/171974294365?hash=item280a777b5d:g:G-8AAOSwl9BWJk8R[/URL]

benchmark = 66.8 (closest I could find was Intel Xeon E5462)
cores = 32
price = $790
score = 1649

Build your own i5-6400

benchmark = 17.91
cores = 4
price = $538 (CPU $190, MB $100, RAM $108, case = $40, PS = $60, cooler = $40)
score = 2847

So do you think this method of scoring is valid?
Show me an example where the score does a bad job of assessing a system.
How do you score systems?
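The scoring above can be sketched directly, using the three example systems from this post. (As axn points out later in the thread, the i5-6400 score actually computes to about 2409 with these inputs, not 2847.)

```python
# Sketch of the scoring formula from the post:
#   score = benchmark / cores * cost
# where benchmark is the prime95 iteration time in ms and lower is better.

def score(benchmark_ms, cores, price_usd):
    return benchmark_ms / cores * price_usd

# Figures taken from the post's three examples.
systems = {
    "HP Z210 (E3-1240)":     (23.21, 4, 295),
    "Dell C6100 (8x E5507)": (66.8, 32, 790),
    "DIY i5-6400":           (17.91, 4, 538),
}
for name, (bench, cores, price) in systems.items():
    print(f"{name}: {score(bench, cores, price):.0f}")
```

With these inputs the C6100 comes out best (lowest score), matching the post's conclusion.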

axn 2015-11-20 03:01

[QUOTE=bgbeuning;416655]The formula

score = benchmark / cores * cost

where a lower score is better sounds reasonable to me.

<snip>

So do you think this method of scoring is valid?
Show me an example where the score does a bad job of assessing a system.
How do you score systems?[/QUOTE]

As per your formula, higher benchmark number is bad. Is that intended?

EDIT:- By benchmark, you mean iteration times in ms? If so, then it is fine. But probably the inverse calculation would have been more intuitive, i.e. Cores / Iteration time / $ = Thruput/$

For cost, you might look at purchase cost + cost of electricity for running for, say, 2 years. That will factor in power efficiency as well.
Also, for multicore systems, the scalability might not be great. You might not get the same iteration times when more cores are running.
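The inverted metric suggested above, with two years of electricity folded into the cost, might be sketched like this. The 500 W draw and $0.06/kWh are the figures bgbeuning quotes later in the thread, used here only as placeholder assumptions.

```python
# Sketch of axn's suggestion: throughput per dollar, where the dollar
# figure is purchase price plus (assumed) electricity over two years.
# Wattage and electricity price are placeholder assumptions, not specs.

def throughput_per_dollar(iter_ms, cores, price_usd, watts, usd_per_kwh,
                          years=2):
    # kWh over the period, times price per kWh
    electricity = watts / 1000 * 24 * 365 * years * usd_per_kwh
    throughput = cores / iter_ms  # iterations per ms across all cores
    return throughput / (price_usd + electricity)

# e.g. the Dell C6100 example at an assumed 500 W and $0.06/kWh:
print(throughput_per_dollar(66.8, 32, 790, 500, 0.06))
```

Here higher is better, and a power-hungry machine gets penalized even if its purchase price is low.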

retina 2015-11-20 03:19

Somewhere in there you need to include the RAM size and speed. The RAM that is installed affects the benchmark greatly. It is unfortunate that the benchmarks page does not include RAM figures, so perhaps you can look at other places on this board where people have detailed the RAM specs and adjust your benchmark values accordingly.

LaurV 2015-11-20 03:42

[QUOTE=axn;416659]As per your formula, higher benchmark number is bad. Is that intended?[/QUOTE]
he said "where a lower score is better", so yes, it was intended.

[QUOTE=axn]
EDIT:- By benchmark, you mean iteration times in ms?[/QUOTE]
I think he means the benchmark on the PrimeNet server benchmark list. The line "closest I could find was Intel Xeon E5462" supports this assumption.

[QUOTE=axn]
probably the inverse calculation would have been more intuitive. i.e Cores/Iteration time/$ = Thruput/$

For cost, you might look at purchase cost + cost of electricity for running for, say, 2 years. That will factor in power efficiency as well.
Also, for multicore systems, the scalability might not be great. You might not get the same iteration times when more cores are running.[/QUOTE]
This is all correct. There are lots of things to consider, not the least of which are memory bottlenecks, the mobo chipset, cooling, electricity expenses, etc. (i.e. your performance may not scale when you use more cores).

VBCurtis 2015-11-20 05:10

[QUOTE=bgbeuning;416655]
"Dell C6100 XS23-TY3 Server 8x 2.26GHz 4C E5507 96GB"
[URL]http://www.ebay.com/itm/Dell-C6100-XS23-TY3-Server-8x-2-26GHz-4C-E5507-96GB-4x-73GB-2-5-HDD-SAS-1068E-/171974294365?hash=item280a777b5d:g:G-8AAOSwl9BWJk8R[/URL]

benchmark = 66.8 (closest I could find was Intel Xeon E5462)
cores = 32
price = $790
score = 1649
[/QUOTE]

This system looks very, very interesting for NFS factoring work. Nice find! If I had any confidence in setting up the 4 individual nodes and remotely managing it all, I'd be inclined to buy one.

bgbeuning 2015-11-20 05:23

Thanks for the input.
You all raised valid points, but I can't measure the impact of most of them.
How much does RAM speed or MB chipset affect the iteration time?
I am sure they do affect it, but I don't have a way to measure the impact.

The C6100 basically has 4 motherboards in a 2U case.
Each MB has 24GB RAM which is more than the others
but I am not sure how to reflect that in a score.

I have 2 Xeon 1U servers (by Dell and HP).
They seem to scale up fine when using all cores.

My electric Utility charges $0.06 per kWh.
The C6100 draws 500W under full load, which is 12 kWh / day,
or $0.72 / day or $22 / month. Hmm.

I am a little worried about the heat...
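The electricity arithmetic above can be checked directly (figures from the post: 500 W continuous draw at $0.06/kWh):

```python
# Worked check of the electricity figures quoted above.
watts = 500
usd_per_kwh = 0.06

kwh_per_day = watts / 1000 * 24        # 500 W continuous = 12 kWh per day
usd_per_day = kwh_per_day * usd_per_kwh  # ~$0.72 per day
usd_per_month = usd_per_day * 30         # ~$21.60 per 30-day month (~$22)

print(kwh_per_day, usd_per_day, usd_per_month)
```

So the ~$22/month figure checks out at that utility rate.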

kladner 2015-11-20 05:51

Crosspost:
[STRIKE]score = benchmark / cores * cost

Consider the difference grouping makes:

[LIST=1][*]score=(benchmark divided by cores) times cost[*]score=benchmark divided by (cores times cost)[/LIST]
One puts cost in the numerator. The other puts cost in the denominator.

Let's say 'benchmark' = 4 ms
'cores' = 4
'cost' = 400
[LIST=1][*]x=(4/4)*400=400[*][STRIKE]x=4/(4*400)=0.0025[/STRIKE][/LIST] In #1, 'cost' increases the score, which makes sense if lower is better.

In #2 'cost' [B]vastly decreases [/B]'score', which only gets worse as cost increases.
So #1 seems more sensible, especially if the 'benchmark' is iteration time, which is the only way the whole thing really makes sense.

However,
[QUOTE]benchmark = 66.8 (closest I could find was Intel Xeon E5462)
cores = 32
price = $790
score = 1649[/QUOTE]which works out to 1649.125 in the #1 equation, does not look like 'benchmark' is denoted in ms.

So #1 is the intended form, but what is 'benchmark'? :huh:[/STRIKE]

Duh. 'Cost' is electricity? What is benchmark? I should stop :digging:

axn 2015-11-20 06:38

E5507 is nothing like E5462. I guess an i7 920 (benchmark = 77.84ms) is a closer comparison, though the E5507 probably performs worse still; we should look at something like 85-90ms.

Also, i5-6400 score = 17.91/4*538= 2409 (not 2847).

And I don't see any benchmark numbers for E3-1240. Where did you get the 23ms number?

axn 2015-11-20 07:43

Also [url]http://www.newegg.com/Product/Product.aspx?Item=N82E16883158462[/url]

kladner 2015-11-20 07:46

[QUOTE=axn;416677]Also [URL]http://www.newegg.com/Product/Product.aspx?Item=N82E16883158462[/URL][/QUOTE]

[STRIKE]WFS![/STRIKE] (I can't remember what 'WFS' is supposed to mean, though I think it was enthusiastic. I'm trying to forget a rough day at work.)
Even if one needs to replace the memory, it is awfully cheap, even though it is four generations back or so.

Second Edit: Consider how much you will spend for power for a 2xxx CPU. Long-term costs can easily outweigh purchase price.

FINALLY (or finale): I completely overlooked detailed explanations from the OP. :davieddy:

blip 2015-11-20 07:51

[QUOTE=kladner;416679]WFS![/QUOTE]
???

Gordon 2015-11-20 09:30

[QUOTE=bgbeuning;416655]

"Dell C6100 XS23-TY3 Server 8x 2.26GHz 4C E5507 96GB"
[url]http://www.ebay.com/itm/Dell-C6100-XS23-TY3-Server-8x-2-26GHz-4C-E5507-96GB-4x-73GB-2-5-HDD-SAS-1068E-/171974294365?hash=item280a777b5d:g:G-8AAOSwl9BWJk8R[/url]
[/QUOTE]

You seem to have neglected to mention the $753.73 shipping costs...

axn 2015-11-20 10:50

[QUOTE=Gordon;416687]You seem to have neglected to mention the $753.73 shipping costs...[/QUOTE]

Shipping costs are automatically shown for the country you're in (UK?). For US, given some valid zip codes, it shows much lower rates.

fivemack 2015-11-20 11:03

Is this just companies being unused to international shipping and picking a single ludicrously expensive option? Or does it really cost eight hundred dollars to ship a 250lb three-foot-by-three-foot-by-one-foot cardboard box from Michigan to the UK?

retina 2015-11-20 11:06

[QUOTE=fivemack;416695]Is this just companies being unused to international shipping and picking a single ludicrously expensive option? Or does it really cost eight hundred dollars to ship a 250lb three-foot-by-three-foot-by-one-foot cardboard box from Michigan to the UK?[/QUOTE]It depends upon how long you want to wait for the package. Plus that is heavy so going by air will be pricey. And general handling will require more than just one guy in a van.

xilman 2015-11-20 11:30

[QUOTE=fivemack;416695]Is this just companies being unused to international shipping and picking a single ludicrously expensive option? Or does it really cost eight hundred dollars to ship a 250lb three-foot-by-three-foot-by-one-foot cardboard box from Michigan to the UK?[/QUOTE]Wouldn't be surprised. The best price I could get for shipping a Victorian family bible to Florida was > £80, or ~$125. That weighed much less than 250lb. In the end I hand-delivered it to the recipient when she visited the UK. Even if she had to pay excess baggage charges it would still have been much cheaper.

fivemack 2015-11-20 11:59

[url]http://www.ebay.co.uk/itm/HP-C7000-Enclosure-16x-BL460C-Blade-servers-128-x-2-5GHz-CPU-Cores-512GB-RAM-/391174493753?hash=item5b13d11639:g:lL0AAOSwhOxVSKmr[/url]

looks vaguely intriguing.

If I do an ebay search for "blade server" the commonly-associated-search-terms include "scrap metal" :chappy:

What I can't figure out is what kinds of blade server fit in a C7000 chassis. [url]http://www.ebay.co.uk/itm/HP-BL460C-Blade-Server-2-x-Quad-Core-E5440-2-83Ghz-16GB-Ram-/221942862458?hash=item33acd3527a:g:H-kAAOSwcdBWSkip[/url] starts to look quite appealing; the chassis is [url]http://www.ebay.co.uk/itm/HP-C7000-Enclosure-10-x-Fans-6x-Power-Supplies-2-x-Admin-Modules-no-blades-/391174490597?hash=item5b13d109e5:g:lL0AAOSwhOxVSKmr[/url] for £400.

bgbeuning 2015-11-20 14:01

[QUOTE=kladner;416668] Duh. 'Cost' is electricity? What is benchmark? I should stop :digging:[/QUOTE]

Cost is the money spent to buy the system.
Benchmark is the 70M column from this page:

[url]http://www.mersenne.org/report_benchmarks/[/url]

bgbeuning 2015-11-20 14:05

[QUOTE=Gordon;416687]You seem to have neglected to mention the $753.73 shipping costs...[/QUOTE]

It tells me $43 in shipping. I guess they know where we live.

Xyzzy 2015-11-20 14:40

We would factor system noise and physical size into the equation.

:primenet:

Gordon 2015-11-20 15:28

[QUOTE=fivemack;416700][url]http://www.ebay.co.uk/itm/HP-C7000-Enclosure-16x-BL460C-Blade-servers-128-x-2-5GHz-CPU-Cores-512GB-RAM-/391174493753?hash=item5b13d11639:g:lL0AAOSwhOxVSKmr[/url]

looks vaguely intriguing.

If I do an ebay search for "blade server" the commonly-associated-search-terms include "scrap metal" :chappy:

What I can't figure out is what kinds of blade server fit in a C7000 chassis. [url]http://www.ebay.co.uk/itm/HP-BL460C-Blade-Server-2-x-Quad-Core-E5440-2-83Ghz-16GB-Ram-/221942862458?hash=item33acd3527a:g:H-kAAOSwcdBWSkip[/url] starts to look quite appealing; the chassis is [url]http://www.ebay.co.uk/itm/HP-C7000-Enclosure-10-x-Fans-6x-Power-Supplies-2-x-Admin-Modules-no-blades-/391174490597?hash=item5b13d109e5:g:lL0AAOSwhOxVSKmr[/url] for £400.[/QUOTE]

Not forgetting that is going to be [SIZE="5"]NOISY[/SIZE]

fivemack 2015-11-20 20:32

I already run a 48-core 550W 1U server in that outbuilding; it would only deafen the spiders a bit more.

But I think these old blade servers have mostly been discarded because they're wildly inferior to new hardware on an energy-cost-per-computron basis, which remains the case even if they're cheap on eBay. Over three years, probably inferior to the same price in i7/4790K.

Madpoo 2015-11-20 21:56

[QUOTE=fivemack;416736]I already run a 48-core 550W 1U server in that outbuilding; it would only deafen the spiders a bit more.

But I think these old blade servers have mostly been discarded because they're wildly inferior to new hardware on an energy-cost-per-computron basis, which remains the case even if they're cheap on eBay. Over three years, probably inferior to the same price in i7/4790K.[/QUOTE]

Those blade enclosures do have some really cool technology. I kept looking at them as a possible solution for our needs, but the cost equation never seemed to work out in my favor. For what I was after, it always seemed to be a better bargain to get a set of separate 1U or 2U servers.

Of course, I wasn't space constrained, which is where these things really shine. You can fit quite a bit into 10U. Great for high density installations.

HP's new thing for that same target audience is the Moonshot servers. If you're curious:
[URL="https://www.hpe.com/us/en/servers/moonshot.html"]https://www.hpe.com/us/en/servers/moonshot.html[/URL]

The "blades" or "server cartridges" as they're now called are the "m" series instead of the BL series.
[URL="http://www8.hp.com/us/en/products/proliant-servers/index.html#!view=grid&page=1&facet=ProLiant-Moonshot"]Moonshot Cartridges[/URL]

But they're more for high-density setups... none of the cartridges really come with a high-performance CPU or dual CPUs... they're aimed at low-power single-CPU systems, just a LOT of them in a 4.3U space.

bgbeuning 2015-11-21 03:49

[QUOTE=Xyzzy;416712]We would factor system noise and physical size into the equation.[/QUOTE]

How primitive, his chair doesn't have wheels.

The 42U rack in my basement only has 2 1U servers and they want company.

blip 2015-11-21 08:30

[QUOTE=bgbeuning;416759]

The 42U rack in my basement only has 2 1U servers and they want company.[/QUOTE]

Same here; alas, I'd like to get a good bargain [URL="https://www.eex.com/en/"]here[/URL] first.

masser 2015-11-24 01:55

I'm selling my 32GB ram system:

[url]http://pcpartpicker.com/p/Qr9kt6[/url]

on ebay:

[url]http://www.ebay.com/itm/191745115466?ssPageName=STRK:MESELX:IT&_trksid=p3984.m1555.l2649[/url]

I would love for the parts/system to go to a mersenneforum person. It was a fun system to crunch with, but I've got the itch to build smaller, low wattage systems.

Batalov 2015-11-24 04:21

You can play the gift of the magi game [URL="http://mersenneforum.org/showthread.php?p=417028#post417028"]with Brain[/URL]! You could send him the system and he will send you the GPU. ;-)

LaurV 2015-11-24 05:15

[QUOTE=Batalov;417080]You can play the gift of the magi game [URL="http://mersenneforum.org/showthread.php?p=417028#post417028"]with Brain[/URL]! You could send him the system and he will send you the GPU. ;-)[/QUOTE]
Or you both send me the system and the GPU and I make them run together :razz:

0PolarBearsHere 2015-11-24 09:27

[QUOTE=LaurV;417083]Or you both send me the system and the GPU and I make them run together :razz:[/QUOTE]

I might be happy to take a fully populated one of these off someone's hands (for free).
[url]http://www.rave.com/product/rr-2411-6-gpu-tesla-supercomputer/[/url]

Not really bargain hardware though :P

kladner 2015-11-24 16:47

[QUOTE=0PolarBearsHere;417104]I might be happy to take a fully populated one of these off someone's hands (for free).
[URL]http://www.rave.com/product/rr-2411-6-gpu-tesla-supercomputer/[/URL]

Not really bargain hardware though :P[/QUOTE]

[B]*The ambient operating temperature must not exceed 25°C/77°F when [U]six K80 GPUs[/U] are populated.[/B]



Ya reckon it would be a bit loud? :whistle:

lavalamp 2015-11-24 16:56

300 W TDP per card, add in another 200 W for the rest of the system too, probably looking at a maximum of 2 kW power use in a 2U rack. Still 17.5 DP TFLOPs, can't argue with that.

Just the power bill alone for a 48U rack full of these would be eye watering.
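For what it's worth, the arithmetic behind those figures can be laid out as a quick sketch (all inputs are the round numbers quoted in the post, not measurements):

```python
# Rough power budget for the 6x K80 2U chassis discussed above.
# All figures are the round numbers from the post, not measurements.
cards = 6
card_tdp_w = 300            # K80 board TDP, per the post
rest_of_system_w = 200      # CPUs, fans, drives... (the post's guess)
chassis_w = cards * card_tdp_w + rest_of_system_w
print(chassis_w)            # 2000 W per 2U chassis

chassis_per_rack = 48 // 2  # 48U rack, 2U per chassis
print(chassis_w * chassis_per_rack / 1000)  # 48.0 kW per rack
```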

kladner 2015-11-24 17:00

[QUOTE=lavalamp;417138]300 W TDP per card, add in another 200 W for the rest of the system too, probably looking at a maximum of 2 kW power use in a 2U rack. Still 17.5 DP TFLOPs, can't argue with that.

Just the power bill alone for a 48U rack full of these would be eye watering.[/QUOTE]
Not to mention the A/C cost. Best be in a cold place with cheap power. Without monstrous cooling those cards would throttle.

Madpoo 2015-11-24 18:57

[QUOTE=lavalamp;417138]300 W TDP per card, add in another 200 W for the rest of the system too, probably looking at a maximum of 2 kW power use in a 2U rack. Still 17.5 DP TFLOPs, can't argue with that.

Just the power bill alone for a 48U rack full of these would be eye watering.[/QUOTE]

Assuming you could populate a 48U rack full of these, and going with 2 kW per server, that's 48 kW in a single cabinet. That's a lot. You'd be talking a full load of 4 3-phase power strips (60A x 208V) per cabinet, and even then you'd really be going max capacity... probably need to shoehorn in a 5th strip. Datacenters will generally call a circuit saturated once you're at 80% of its rated wattage.

Many datacenters I've dealt with won't even allow a power density like that in the first place just for cooling reasons alone. They typically list the BTUs per square foot they'll handle and 48KW in the footprint of a single cabinet is pushing it for all but the most efficient locations. Those would be the datacenters with hot and cold aisles separated by plastic walls and doors to make sure the cooling is directed exactly where it needs to be, and they'd actually make use of the top-mounted fans on the cabinet.

The location where Primenet itself is hosted is kind of like that... I don't know what their power density is exactly, but since they rent cabinet space per rack-unit, they want to optimize how many can fit in a single cabinet. The cages are lined to keep the hot/cold aisles separate, blank panels on all empty U, etc.

EDIT: After looking at some actual estimates from Raritan on their 3-phase PDUs, it looks like a 60A 3-phase strip could handle 17.3 kVA, not the 12.5 kVA I foolishly assumed, because we're talking 3-phase here. So 4 of those actually would get you going, provided you had the cooling capacity to handle a beast like that.
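The corrected figure in the edit follows from the standard three-phase apparent-power formula, √3 × V × I, derated to 80% for continuous load; a minimal check with the post's 208 V / 60 A numbers:

```python
import math

# Apparent power of a 60 A, 208 V (line-to-line) 3-phase strip,
# then the usual 80% continuous-load derating datacenters apply.
volts = 208
amps = 60
kva_nameplate = math.sqrt(3) * volts * amps / 1000
kva_continuous = kva_nameplate * 0.8
print(round(kva_nameplate, 1))   # 21.6 kVA nameplate
print(round(kva_continuous, 1))  # 17.3 kVA usable, matching the edit
```

At 17.3 kVA per strip, 48 kW needs three strips on paper; four leaves headroom and some redundancy, as the post concludes.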

retina 2015-11-25 00:13

[QUOTE=Madpoo;417150]You'd be talking a full load of 4 3-phase power strips (60A x 208V) ...[/QUOTE]So many assumptions. More than a few[sup][1][/sup] places in the world use 415V 3Φ.

[size=1][color=grey][sup][1][/sup] By "more than a few", I mean most.[/color][/size] :razz:

chalsall 2015-11-25 00:17

[QUOTE=retina;417178]So many assumptions. More than a few[sup][1][/sup] places in the world use 415V 3Φ.

[size=1][color=grey][sup][1][/sup] By "more than a few", I mean most.[/color][/size] :razz:[/QUOTE]

Care to list them for distribution rather than transmission?

LaurV 2015-11-25 05:22

[QUOTE=Madpoo;417150]Assuming you could populate a 48U rack full of these...[/QUOTE]
:goodposting:
Very good and informative post.

Well, I would be happy with only one, that one with 6 cards. :razz:
Anyhow, to stay on the "power" side of the discussion: they ARE passively cooled, but the case has 12 fans in the front, and I think you can install thick fans in two rows, which is like 24 fans, high speed and high noise! And not low power either; they consume about 10-15W each... So, besides the cards/CPUs, you add the fans, divide by 0.8 (the efficiency of the PSU), then multiply by two (redundant power supply, eventually hot-swap)...

I am still wondering how they solve all those problems. I would really like to work at a company that designs or produces those toys...

LaurV 2015-11-25 05:25

[QUOTE=chalsall;417179]Care to list them for distribution rather than transmission?[/QUOTE]
What's the difference? You take an insulated knife, cut the transmission line and connect the hot side to your computer directly, for distribution. And don't tell anybody..

Disclaimer: don't do that at home!

Mark Rose 2015-11-25 06:44

[QUOTE=LaurV;417204]So, beside of cards/cpus, you add the fans, divide by 0.8 (the efficiency of PSU) then multiply by two (redundant power supply, eventually hot-swap)... [/QUOTE]

If I were planning to run a 2 kW system, I'd spec a more efficient power supply than that, like [url=http://www.super-flower.com.tw/products_detail.php?class=2&sn=16&ID=119&lang=]this one[/url] that's 90%+ efficient, and save 277 watts over an 80% efficient unit.
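The saving quoted above is just the difference in wall draw for the same 2 kW DC load at the two nominal efficiency figures (a sketch; the efficiencies are the ratings from the post, not measurements):

```python
# Wall-side input power for a 2 kW DC load through supplies of
# different nominal efficiency (figures from the post).
load_w = 2000
input_80 = load_w / 0.80   # 2500 W at the wall
input_90 = load_w / 0.90   # ~2222 W at the wall
print(round(input_80 - input_90))  # ~278 W saved (the post rounds to 277)
```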

kladner 2015-11-25 07:23

[QUOTE=Mark Rose;417216]If I were planning to run a 2 kW system, I'd spec a more efficient power supply than that, like [URL="http://www.super-flower.com.tw/products_detail.php?class=2&sn=16&ID=119&lang="]this one[/URL] that's 90%+ efficient, and save 277 watts over an 80% efficient unit.[/QUOTE]

Even with an 800 to 900 watt system, a Platinum supply made a noticeable improvement over its Gold predecessor. It is also a 1200 W supply instead of 1 kW, so it runs a bit more in its sweet spot. On the downside, it is too long for me to put anything larger than a 92mm fan in the bottom port of the case, where previously I had a 140mm. :max:

Xyzzy 2015-11-25 16:07

Wouldn't it be more efficient to run stuff off of DC and have just one big (very efficient?) AC/DC converter?

:mike:

retina 2015-11-25 16:24

[QUOTE=Xyzzy;417242]Wouldn't it be more efficient to run stuff off of DC and have just one big (very efficient?) AC/DC converter?[/QUOTE]Only if you have superconducting cables to each cabinet and your converter is very reliable. Otherwise it is better to have lower currents for less cable loss and also to distribute the conversion among many PSU for redundancy.

LaurV 2015-11-25 16:38

[QUOTE=Mark Rose;417216]If I were planning to run a 2 kW system, I'd spec a more efficient power supply than that, like [URL="http://www.super-flower.com.tw/products_detail.php?class=2&sn=16&ID=119&lang="]this one[/URL] that's 90%+ efficient, and save 277 watts over an 80% efficient unit.[/QUOTE]
There is no such thing as 90%+ for 2 kW power supplies. The one you link to is guaranteed 80+, and the specs say it is 90% [B][U]at 50% load[/U][/B], which is in fact a marketing trick. Put it to work and measure it, i.e. measure input and output energy, and you may get 75% under heavy load. Close to nominal load you lose to heating, switching frequency out of its optimum, etc.

Also, consider that most stuff in your computer runs at 5V, 3.3V, down to 1.8V. Say an average of 4V (pulled out of my butt, but let's say); then P=U*I, I=P/U, 2000W/4V=500 Amps going through all those wires. No matter what you put in between, mosfets, diodes, spaceships, in total there are 500 amps going through all that "sheaf" of wires, which may look like in your avatar, which I love, or may look like the newest mobos, which I may not love, but in the end those 500 Amps give off a lot of heat. There is no way you could transform those 240V AC into 12V, 5V, 3.3V etc. without losing a lot on the way, short of using superconductor wires. For example, at 50 amps you lose 0.2V in the 40 cm of (AWG) wire between the power supply and the mobo: the PSU gives 5.15V, and the mobo gets 4.95V. Measure it! That is already 0.2 out of 5, or 4% of the "efficiency". Gone! Pufff!

And we are talking about 300-500 Amps stepped down in one or more stages (like 1.8V core voltage made from 3.3V, made from...), not about a single 50-amp step.
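LaurV's two back-of-the-envelope numbers check out as stated (the 4 V "average rail" and the 0.2 V drop are his own rough figures, not measurements of any particular system):

```python
# Total current on the low-voltage side of a 2 kW load, and the
# fraction of a 5 V rail lost to the quoted 0.2 V wiring drop.
power_w = 2000
avg_rail_v = 4               # admittedly rough average rail voltage
print(power_w / avg_rail_v)  # 500.0 A through the low-voltage wiring

drop_v = 0.2                 # drop at 50 A over ~40 cm, per the post
rail_v = 5
print(100 * drop_v / rail_v) # 4.0 (% of the rail lost in the wires)
```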

Advertising and marketing, yeah, full!
But reality is different.

I go to bed now... Midnight here, [URL="https://en.wikipedia.org/wiki/Loi_Krathong"]Loy Krathong[/URL] is gone, but these idiots around me just discovered firecrackers, fireworks and petards; they discover them every year, and they are very enthusiastic about making lots of noise and smoke, scaring the ghosts away, whatever, like children with new toys... In spite of [URL="http://bangkok.coconuts.co/2015/11/10/bangkok-bans-fireworks-sky-lanterns-during-loy-krathong"]official bans[/URL], every year the buildings look like after a war, and the hospitals are full of guys having accidents, damaged eyes, ears, fingers... A few cretins even try to put firecrackers in their mouths or in their asses every year, :w00t: this is not a joke!... worse than Darwin awards. And they don't learn...

chris2be8 2015-11-25 17:12

There must be a point where you want the fans driven by a mains-powered electric motor. A three-phase induction motor is at least as efficient as a brushless DC motor, but doesn't go through the PSU, so it saves about 20% there.

The limit is the thicker insulation needed for mains voltage, so it doesn't scale down to only a few watts.

Chris

danaj 2015-11-25 17:52

My last job upgraded their network gear and gave away the old equipment to employees. I picked up a nice Dell 3224 24-port switch. It works, but it took less than 5 minutes for my family to scream at me to turn the bloody thing off. It's a rack-mount unit and has two small howling fans in the back.

Madpoo 2015-11-26 05:40

[QUOTE=LaurV;417204]...
I am still wondering how they solve all those problems. I would really like to work in a company that design or produce those toys...[/QUOTE]

Not only all that, but I've seen ambient temp differences of 5-10 degrees F between systems on the bottom of a cabinet and those on the top (of a loaded cabinet, that is).

Heat rises, and unless you have a LOT of airflow from front to back, some of the heat from the lower systems migrates up the cabinet. Now that I think about it, a top-mounted fan to vent up and out seems like it would draw air up that way on purpose, rather than rely on the systems to move the air front-back.

Besides that, some cabinets are just too stinkin' shallow to mount your network gear in the front. It's [I]very[/I] common to mount your net gear (switches, firewalls, etc) facing backwards where there's more room for copper or fiber to stick out. Plus it helps wire routing since all the servers have their ports in the back.

But what that means is you now have the exhaust from your switches or whatever blowing from the hot aisle to the cold. Doh! But it's truly unavoidable unless you have a nice (expensive) cabinet with more room up front where the door won't crimp your fiber when you close it. Honestly, it drives me nuts that they don't take that into consideration.

I just suck it up, mount 'em backwards and put them as high up in the cabinet as I can so the heat coming out won't get "sucked" back into any server intakes.

fivemack 2015-11-26 08:08

[QUOTE=Madpoo;417296]
But what that means is you now have the exhaust from your switches or whatever blowing from the hot aisle to the cold. Doh! But it's truly unavoidable unless you have a nice (expensive) cabinet with more room up front where the door won't crimp your fiber when you close it. Honestly, it drives me nuts that they don't take that into consideration.[/quote]

I notice that [url]http://www.colfaxdirect.com/store/pc/viewCategories.asp?idCategory=7[/url] explicitly lists 'front to back airflow' and 'back to front airflow' options on some of the switches. Those little box fans that 1U devices use look as if they could be rotated through 180 degrees without all that much kludging.

Xyzzy 2015-11-26 14:55

1 Attachment(s)
.

Mark Rose 2015-11-26 15:51

[QUOTE=Xyzzy;417318].[/QUOTE]

Pretty sure I saw that on /r/cableporn the other day.

xilman 2015-11-26 18:47

[QUOTE=Xyzzy;417318].[/QUOTE][b]Extremely[/b] rare that you see network cables that tidily arranged. Usually looks like a plate of spaghetti.

VBCurtis 2015-11-26 18:51

So I spent a few days researching the Dell C6100 server mentioned above. I decided I would like to learn to set up a headless server, tinker with RAID on old smallish disks, and have 32 hyperthreaded Xeon cores of NFS firepower. I'm just not sure it's worth $800 for a toy.

I mention the C6100 to a techie pal. His reply: "Those are great servers, I've used them at like 4 different employers. Hey, I have an older one in my garage. Want it?"

So, 32 x 2.5Ghz cores of Xeon will be here for Christmas. Thanks for the idea, bgbeuning!

Going to try LinuxPMI for thread migration, see if I can run a single instance of factmsieve.py with 32 (or 64!) threads and have the processes migrate to the other nodes automagically.

Chuck 2015-11-26 19:07

[QUOTE=xilman;417335][b]Extremely[/b] rare that you see network cables that tidily arranged. Usually looks like a plate of spaghetti.[/QUOTE]

When the Dept of Navy data center I worked for starting in 1989 moved to a new building, the network racks were extremely tidy (though not color-coded). However, over the ensuing years it turned into the spaghetti version with additions, changes, etc.

Mark Rose 2015-11-26 19:22

[QUOTE=xilman;417335][b]Extremely[/b] rare that you see network cables that tidily arranged. Usually looks like a plate of spaghetti.[/QUOTE]

[url]https://www.reddit.com/r/cableporn/[/url]

I spent hours ogling the first time I came across that subreddit. It's SFW.

Madpoo 2015-11-26 20:30

[QUOTE=xilman;417335][b]Extremely[/b] rare that you see network cables that tidily arranged. Usually looks like a plate of spaghetti.[/QUOTE]

My installs typically start out fairly tidy (nothing like that pic above... that was SEXY). But then after a few years of adding/removing equipment without being able to take things offline at all, you shortcut here and there to avoid accidentally unplugging the wrong things, and next thing you know you've got bundles of wires draped where they don't belong.

I've likened it (swapping out gear) to changing a tire while the car is still going down the highway... that it can be done at all is impressive, so I'm not terribly concerned if the cabling isn't as pretty as it used to be. Besides, when it's in a remote datacenter, out of sight = out of mind. :smile: Just so long as I don't block the fan outlets and trap excess heat back there, I'm happy.

chalsall 2015-11-26 20:46

[QUOTE=Madpoo;417339]My installs typically start out fairly tidy (nothing like that pic above... that was SEXY).[/QUOTE]

Indeed. But then try to replace a bad cable or a connector in that bundle...

Note that in the picture there were zip-ties. Dumber than bricks.

