#265
Romulan Interpreter
Jun 2011
Thailand
7×1,373 Posts
#266
Oct 2015
100001010₂ Posts
http://www.rave.com/product/rr-2411-...supercomputer/ Not really bargain hardware though :P
#267
"Kieren"
Jul 2011
In My Own Galaxy!
2×3×1,693 Posts
Ya reckon it would be a bit loud?
#268
Oct 2007
Manchester, UK
10101001011₂ Posts
300 W TDP per card; add in another 200 W for the rest of the system, and you're probably looking at a maximum of 2 kW of power use in a 2U rack. Still, that's 17.5 DP TFLOPS, and you can't argue with that.
Just the power bill alone for a 48U rack full of these would be eye-watering.
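These figures can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, assuming a 6-card node (the card count and the dense 48U rack layout are my assumptions, not from any spec sheet):

```python
# Back-of-envelope check of the node and rack figures above.
# Assumed: 6 cards per 2U node, 300 W TDP per card, 200 W for the rest.
cards_per_node = 6
card_tdp_w = 300
rest_of_system_w = 200

node_power_w = cards_per_node * card_tdp_w + rest_of_system_w
node_dp_tflops = 17.5

nodes_per_rack = 48 // 2                        # 2U nodes in a 48U rack
rack_power_kw = nodes_per_rack * node_power_w / 1000
rack_dp_tflops = nodes_per_rack * node_dp_tflops

gflops_per_w = node_dp_tflops * 1000 / node_power_w

print(f"Per node: {node_power_w} W, {node_dp_tflops} DP TFLOPS")
print(f"Full rack: {rack_power_kw:.0f} kW, {rack_dp_tflops:.0f} DP TFLOPS")
print(f"Efficiency: {gflops_per_w:.2f} DP GFLOPS/W")
```

Under these assumptions a full rack lands at about 48 kW, which is exactly the cabinet power density the datacenter discussion later in the thread worries about.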
#269
"Kieren"
Jul 2011
In My Own Galaxy!
2×3×1,693 Posts
Not to mention the A/C cost. Best to be somewhere cold with cheap power; without monstrous cooling those cards would throttle.
#270
Serpentine Vermin Jar
Jul 2014
3311₁₀ Posts
Many datacenters I've dealt with won't even allow a power density like that in the first place, for cooling reasons alone. They typically list the BTUs per square foot they'll handle, and 48 kW in the footprint of a single cabinet is pushing it for all but the most efficient locations. Those would be the datacenters with hot and cold aisles separated by plastic walls and doors, to make sure the cooling is directed exactly where it needs to be, and they'd actually make use of the top-mounted fans on the cabinet.

The location where Primenet itself is hosted is kind of like that... I don't know what their power density is exactly, but since they rent cabinet space per rack unit, they want to optimize how many can fit in a single cabinet. The cages are lined to keep the hot/cold aisles separate, blank panels cover all empty U, etc.

EDIT: After looking at some actual estimates from Raritan on their 3-phase PDUs, it looks like a 60 A 3-phase strip could handle 17.3 kVA, not the 12.5 kVA I foolishly assumed, because we're talking 3-phase here. So four of those actually would get you going, provided you had the cooling capacity to handle a beast like that.

Last fiddled with by Madpoo on 2015-11-24 at 19:13
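Both kVA figures in that EDIT can be reproduced with basic circuit math. A sketch, assuming 208 V line-to-line and the usual 80% continuous-load derating (both are my assumptions about how Raritan arrives at its numbers, not taken from their documentation):

```python
import math

volts_line_to_line = 208               # assumed North American 3-phase voltage
breaker_amps = 60
continuous_amps = breaker_amps * 0.8   # 80% derating for continuous load

# One way to land on the "foolish" figure: treat the 60 A breaker
# as a single-phase circuit with no derating.
single_phase_kva = breaker_amps * volts_line_to_line / 1000

# 3-phase apparent power: sqrt(3) * V_LL * I, using the derated current.
three_phase_kva = math.sqrt(3) * volts_line_to_line * continuous_amps / 1000

print(f"Single-phase assumption: {single_phase_kva:.1f} kVA")
print(f"3-phase, derated:        {three_phase_kva:.1f} kVA")
```

Four such strips would offer roughly 69 kVA of distribution, comfortably above a ~48 kW cabinet, which matches the "four of those would get you going" estimate.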
#271
Undefined
"The unspeakable one"
Jun 2006
My evil lair
14064₈ Posts
#272
If I May
"Chris Halsall"
Sep 2002
Barbados
2×5×7×139 Posts
#273
Romulan Interpreter
Jun 2011
Thailand
7·1,373 Posts
Very good and informative post. Well, I would be happy with only one, that one with 6 cards.

Anyhow, to stay on the "power" side of the discussion: they ARE passively cooled, but the case has 12 fans in the front, and I think you can install thick fans in two rows, which makes 24 fans, high speed and high noise! And not low power either; they consume about 10-15 W each... So, besides the cards/CPUs, you add the fans, divide by 0.8 (the efficiency of the PSU), then multiply by two (redundant, eventually hot-swap, power supplies)... I am still wondering how they solve all those problems. I would really like to work at a company that designs or produces those toys...
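The recipe above works out roughly as follows. A sketch only: the 6-card count, the 12.5 W per-fan midpoint, and the 80% PSU efficiency are all guesses from the thread, not measurements.

```python
# Rough wall-power estimate following the recipe above.
cards_w = 6 * 300          # six 300 W cards (assumed count)
rest_w = 200               # CPUs, board, drives, etc. (assumed)
fans_w = 24 * 12.5         # 24 fans at the midpoint of the 10-15 W guess

dc_load_w = cards_w + rest_w + fans_w
wall_draw_w = dc_load_w / 0.8       # PSU assumed ~80% efficient
psu_capacity_w = wall_draw_w * 2    # capacity sized for 1+1 redundant supplies

print(f"DC load:                {dc_load_w:.0f} W")
print(f"Draw at the wall:       {wall_draw_w:.0f} W")
print(f"Installed PSU capacity: {psu_capacity_w:.0f} W")
```

Note the "multiply by two" sizes the installed supply capacity, not the actual draw; with 1+1 redundancy the second supply shares or idles rather than doubling consumption.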
#274
Romulan Interpreter
Jun 2011
Thailand
7·1,373 Posts
What's the difference? You take an insulated knife, cut the transmission line, and connect the hot side directly to your computer for distribution. And don't tell anybody...
Disclaimer: don't do that at home!
#275
"/X\(‘-‘)/X\"
Jan 2013
2²·733 Posts
Similar Threads
| Thread | Thread Starter | Forum | Replies | Last Post |
| Hardware Benchmark Jest Thread for 100M exponents | joblack | Hardware | 284 | 2020-12-29 03:54 |
| Garbage hardware thread | PageFault | Hardware | 21 | 2004-07-31 20:55 |
| Old Hardware Thread | E_tron | Hardware | 0 | 2004-06-18 03:32 |
| Deutscher Thread (german thread) | TauCeti | NFSNET Discussion | 0 | 2003-12-11 22:12 |
| Gratuitous hardware-related banana thread | GP2 | Hardware | 7 | 2003-11-24 06:13 |