mersenneforum.org > Great Internet Mersenne Prime Search > Hardware > GPU Computing
2017-05-22, 09:48   #23
fivemack

Quote (Originally Posted by GP2):
Probably the Sandy Bridge box with six cores is not the most cost effective, however. A high-end Xeon box with a lot more cores might make more sense despite the much higher upfront cost. It would be interesting to do the analysis for that too.
My £300, 200 W second-hand Sandy Bridge boxes are dual eight-core 2 GHz Xeons; when I used them for mprime, I ran four workers at four threads each, which (as far as I can parse from the mersenne.org results for machine 'birch3' of user 'fivemack') completes four 73M exponents per box about every twelve days.
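
For the curious, here is the same throughput arithmetic as a minimal Python sketch. The box price, power draw, and four-tests-per-twelve-days figures come from the post above; the electricity tariff is a placeholder assumption, not a measured figure.

Code:
box_cost_gbp  = 300.0   # second-hand purchase price (from the post above)
box_power_w   = 200.0   # whole-box draw (from the post above)
tests_per_run = 4       # 73M exponents completed per batch
days_per_run  = 12.0    # approximate wall-clock time per batch

kwh_price_gbp = 0.15    # ASSUMPTION: illustrative tariff, substitute your own

tests_per_year = tests_per_run * 365.0 / days_per_run
kwh_per_test   = box_power_w / 1000.0 * 24.0 * days_per_run / tests_per_run
elec_per_test  = kwh_per_test * kwh_price_gbp

print(f"~{tests_per_year:.0f} tests per box per year")
print(f"~{kwh_per_test:.1f} kWh per test, ~GBP {elec_per_test:.2f} electricity per test")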

2017-05-22, 21:07   #24
chalsall

Quote (Originally Posted by Madpoo):
I asked how they're measuring power to our circuits... is it a clamp-on ammeter, or something else? No idea. I can only rely on what the APC strips tell me.
Not to put too fine a point on it, but do you have SNMP access to meaningful data with respect to power consumption?
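
For concreteness, here is a minimal sketch of the kind of polling being asked about, using net-snmp's snmpget from Python. The host name and community string are hypothetical, and the OID shown is rPDUIdentDevicePowerWatts as it appears in APC's PowerNet-MIB on many rack PDUs; verify it against the MIB for your particular strip.

Code:
import subprocess

PDU_HOST  = "pdu1.example.net"                # hypothetical PDU address
COMMUNITY = "public"                          # read-only community string
POWER_OID = ".1.3.6.1.4.1.318.1.1.12.1.16.0"  # rPDUIdentDevicePowerWatts (check your MIB)

# -Oqv prints only the value, without the OID or type decoration.
out = subprocess.run(
    ["snmpget", "-v", "1", "-c", COMMUNITY, "-Oqv", PDU_HOST, POWER_OID],
    capture_output=True, text=True, check=True,
)
print(f"PDU load: {out.stdout.strip()} W")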
2017-05-23, 07:47   #25
westicles

Probably worth waiting a couple weeks to see if Skylake-X supports AVX-512.
2017-05-23, 07:59   #26
ewmayer

Quote (Originally Posted by westicles):
Probably worth waiting a couple weeks to see if Skylake-X supports AVX-512.
If by 'X' you mean Xeon, we already know that those do indeed support AVX-512. It's only a question of when they become available.
2017-05-23, 09:07   #27
westicles

Quote (Originally Posted by ewmayer):
If by 'X' you mean Xeon, we already know that those do indeed support AVX-512. It's only a question of when they become available.
Not the server chip, but the high-end desktop part. This article says it will support AVX-512, but others say it won't.

https://hothardware.com/news/monster...9-7920x-leaked
2017-05-23, 21:22   #28
ewmayer

Quote (Originally Posted by westicles):
Not the server chip, but the high-end desktop part. This article says it will support AVX-512, but others say it won't.

https://hothardware.com/news/monster...9-7920x-leaked
Ah, gotcha - I believe the A to your Q is at the very end of the linked piece:

Other reportedly confirmed specs for these new parts include support for dual-channel DDR4-2666 on Kaby Lake-X (quad-channel for Skylake-X), a 112 W TDP for Kaby Lake-X, a 160 W TDP for Skylake-X, and AVX-512 support for all Core i9 SKUs.

So my surmise is AVX-512 support in all the i9s [Skylake-X], but not in the new i7s [Kaby Lake-X].
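
(If anyone wants to settle this empirically once the parts ship: on Linux the kernel exports the CPUID feature flags in /proc/cpuinfo, so a few lines of Python suffice. A minimal sketch; the foundation subset shows up as avx512f, with the other subsets as further avx512* flags.)

Code:
# List whatever AVX-512 subsets the kernel reports for this CPU.
with open("/proc/cpuinfo") as f:
    flags = next(line for line in f if line.startswith("flags")).split()

avx512 = sorted(fl for fl in flags if fl.startswith("avx512"))
print("AVX-512 subsets:", ", ".join(avx512) if avx512 else "none")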
2017-05-29, 21:00   #29
GP2

Quote (Originally Posted by airsquirrels):
In reality you could put 3-4 GPUs in one 100W host and drop your incremental $/exponent down to $1.14 or so.
I'm not well-informed about hardware since I use the cloud, but I was wondering whether this configuration is in fact realistic. Are you yourself, or anyone else, actually doing this?

Can you really fit four Titan Blacks — all of them generating an amount of heat proportionate to the 250 W that each one consumes — into a single inexpensive host box?

From the images, these cards are quite thick and solid, and it's not even clear that you could fit one next to another in adjacent slots - not to mention that doing so would completely block the airflow of the cooling fan.

Presumably you'd be using some beefed-up liquid cooling system in a very large case, but wouldn't that entail hardware costs above what you initially assumed, and probably additional electricity costs to circulate the liquid?

Can the average motherboard even accept four GPU cards, or would you need to buy a more expensive special-purpose one? Or would this require some kind of elaborate data-center rack mount system, again with additional hardware expense to be taken into account?
2017-05-29, 21:18   #30
airsquirrels

Quote (Originally Posted by GP2):
[full text of post #29 snipped]
I do indeed have multiple systems with densities between 3 and 10 GPUs per host. The 10-GPU system has serious 11,000 RPM fans blowing through the cards' horizontal cooling fins, which lets the GPUs sit side by side without needing as much airflow; the 8-GPU systems are liquid-cooled via an external loop, so neither is a good comparison.

A better example is the system I built over the weekend: a $250 Kaby Lake host with a 51 W CPU, 1x PCIe risers, and 7 GPUs spread out with roughly two-slot spacing, separate from the motherboard. The cards (RX 580s) sit around 65°C at load with no additional fans in a 72°F room. This of course does not work as well for GPU loads that need a lot of PCIe bandwidth, but neither LL nor TF requires that.
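
For anyone running a similarly dense box, per-card temperature and power draw are easy to watch from a script. A minimal sketch using nvidia-smi, which applies to NVIDIA cards like the Titan Blacks; the RX 580s are AMD parts and would need rocm-smi or similar instead.

Code:
import subprocess

# Query each GPU's index, name, temperature and board power draw as CSV.
out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,temperature.gpu,power.draw",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in out.stdout.strip().splitlines():
    idx, name, temp_c, power = (s.strip() for s in line.split(","))
    print(f"GPU {idx} ({name}): {temp_c} C, {power}")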
2017-05-29, 21:22   #31
airsquirrels


Here is what the 8 Titan Blacks looked like in the SuperMicro (expensive) GPU host:
[Attached image IMG_2061.PNG: the eight Titan Blacks in the SuperMicro GPU host]
2017-05-29, 23:46   #32
kladner


Quote (Originally Posted by airsquirrels):
Here is what the 8 Titan Blacks looked like in the SuperMicro (expensive) GPU host:
Gulp.
What is the combined approximate LL throughput? What is the price range of the host board?
2017-05-30, 00:49   #33
ewmayer


Quote (Originally Posted by kladner):
Gulp.
What is the combined approximate LL throughput? What is the price range of the host board?
You probably don't wanna know. David's budget-priced new Kaby Lake + multi-GPU system, OTOH, might be within the reach of those of modest means, especially if populated with used GPUs of the kind David is also selling in a neighboring thread - though the stuff he sells tends to draw significantly more watts per FLOP than the newer gear he replaces it with, so you definitely need to factor in electricity cost when evaluating such a 'budget gear' option.
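
To make that trade-off concrete, here is a minimal Python sketch of the all-in cost per test: amortized purchase price plus electricity. Every number below is a hypothetical placeholder (card prices, power draws, days per test, lifetime, and tariff), there to show the shape of the calculation rather than any actual card's economics.

Code:
def usd_per_test(card_usd, card_watts, days_per_test,
                 lifetime_years=3.0, kwh_usd=0.12):
    """Approximate all-in cost of one LL test on a given card."""
    tests_over_life = lifetime_years * 365.0 / days_per_test
    hardware    = card_usd / tests_over_life                       # amortized purchase
    electricity = card_watts / 1000.0 * 24.0 * days_per_test * kwh_usd
    return hardware + electricity

# Hypothetical cheap-but-hungry used card vs. pricier-but-efficient newer card:
print(f"used card : ${usd_per_test(150.0, 250.0, 10.0):.2f} per test")
print(f"newer card: ${usd_per_test(600.0, 150.0, 6.0):.2f} per test")

With these made-up numbers the cheap card comes out more expensive per test once electricity is included, which is exactly the point being made above.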