#1
May 2021
2×3 Posts
Best setup for around $3000 recommendations
Good evening gentlemen,
I got my year-end bonus, and I'm thinking of blowing part of it on a full-time GIMPS setup. I've been running PRP-WR on my gaming PC, but I want to build a machine (or machines) that will be min-maxed for GIMPS. I've been focusing on large-core-count CPUs with fast RAM, but I can't decide whether it would be better to go all out on one machine or get two more modest machines. I'm looking to spend around $3k. Does anyone have any setups that worked well for them, or any sage advice? Thank you

#2
If I May
"Chris Halsall"
Sep 2002
Barbados
2·11²·47 Posts

#3
Undefined
"The unspeakable one"
Jun 2006
My evil lair
1101010001001₂ Posts

#4
If I May
"Chris Halsall"
Sep 2002
Barbados
2·11²·47 Posts

#5
"/X\(‘-‘)/X\"
Jan 2013
https://pedan.tech/
6160₈ Posts
I wouldn't run out and buy a 5800X3D though: the new 7800X3D will be out on February 14th. In addition to 96 MB of L3, it has AVX-512, which will give a significant performance boost. The memory sweet spot for the 7000 series is DDR5-6000, using only one DIMM per channel (two DIMMs per channel often requires lowering speeds to 4800 or below for stability).

You may also hear about the 7900X3D and 7950X3D, but only one of their chiplets has the additional L3. In theory you could run one worker on the chiplet with the extra cache and another worker on the other chiplet, which would end up relying on main memory bandwidth. This is entirely untested, of course, because anyone who has the chips is under NDA.

Personally I'd wait and build a pair of 7800X3D systems. All the 7000 series chips have a basic integrated GPU, so you don't need to buy one. The most cost-effective GPUs for your task are used Radeon VIIs off eBay, running gpuOwl (which I have no experience running personally).

Last fiddled with by Mark Rose on 2023-01-12 at 06:39
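To put a rough number on the cache argument, here is a back-of-the-envelope sketch. The FFT length is an assumption for a wavefront-sized exponent, not a benchmark, and real working sets also include weights and twiddle tables, but the core transform data fits in the X3D's 96 MB of L3 while it clearly doesn't fit in a plain Zen 4 CCD's 32 MB, which is why the extra cache takes so much pressure off the DIMMs:

Code:
# Does the PRP working set fit in 96 MB of L3? Illustrative arithmetic only.
# Assumption: a wavefront exponent around 110-120M uses roughly a 6.4M-point
# double-precision FFT.
fft_points = 6_400_000
bytes_per_point = 8                      # one double per FFT point
working_set_mb = fft_points * bytes_per_point / 1e6

for name, l3_mb in [("7800X3D (96 MB L3)", 96), ("plain Zen 4 CCD (32 MB L3)", 32)]:
    fits = working_set_mb < l3_mb
    print(f"{name}: ~{working_set_mb:.0f} MB working set fits: {fits}")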

#6
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
13775₈ Posts
I would probably recommend going the low-power route. Most of today's CPUs are clocked at really inefficient speeds. For example, the 7900X (170 W) has base/boost clocks of 4.7/5.6 GHz, while the 7900 (65 W) clocks at 3.7/5.4 GHz. The 7900 is far more efficient.

Another benefit of underclocking cores is that you can use more of them before saturating memory bandwidth. Given that most CPUs have more cores than their memory bandwidth can feed, underclocking is a good way forward. Assuming they are memory-bandwidth bound, the 7900 and 7900X may produce pretty similar performance for some workloads (such as WR-PRP tests) at very different power budgets.

Does anyone have recent benchmark data confirming that about 2 cores per DDR5 channel is what we can get away with on Zen 4? It would also be interesting to know how much having two CCDs (and double the L3 cache) helps performance per watt.
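The arithmetic behind the "about 2 cores per DDR5 channel" guess goes something like the sketch below. Every figure in it is an assumption chosen to illustrate the method, not a measurement, so substitute benchmark numbers if you have them:

Code:
# How many PRP workers can dual-channel DDR5-6000 feed before memory, not
# compute, sets the pace? All inputs are illustrative assumptions.
peak_bw_gbs = 2 * 48.0          # dual-channel DDR5-6000, theoretical peak
usable_fraction = 0.75          # sustained fraction of peak (assumed)
per_core_demand_gbs = 16.0      # traffic one full-clock core generates (assumed)

usable_bw = peak_bw_gbs * usable_fraction
print(f"Cores fed at full clocks: ~{usable_bw / per_core_demand_gbs:.1f}")

# Underclocking cuts per-core traffic roughly with clock speed, so a ~25%
# lower clock lets more cores share the same two channels.
print(f"Cores fed at -25% clocks: ~{usable_bw / (per_core_demand_gbs * 0.75):.1f}")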

#7
Sep 2002
Database er0rr
5·937 Posts
I would opt for a second-hand EPYC 7H12 chip. Water-cool it. Give it 8 DIMMs. Okay, maybe not ideal for GIMPS.

#8
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2⁴·3·163 Posts
PRP for world-record Mersennes? Even the authors of prime95 and Mlucas run gpuowl for performance and efficiency, on Radeon VII GPUs, several to a system. Consider an open frame like the coin miners use, but with a more capable CPU and motherboard than they use, plus a kW+ high-efficiency power supply (overspec it and operate it near its peak conversion efficiency; that is, if you expect to actually use 1 kW, get a 1600 W unit).
Rough budget from memory, with adjustments for current guessed pricing:
- $200: PSU, 1600 W output rated, Gold or higher efficiency rating
- $100: good open frame that supports the GPUs at both ends; the 8-GPU size is nice and roomy for 6 or fewer
- 4 × $500: Radeon VIIs, ideally all from the same brand, with Hynix VRAM rather than Samsung
- 6 × $10: PCIe extenders (keep a spare or two at all times; they do fail)

Total so far: ~$2360. Spend the other $640 on the following, and if there's any money and power budget left, one more GPU and extender:
- $10 plug-in surge suppressor
- at least 64 GB of RAM, to enable v30.8's efficient P-1 stage 2 or simultaneous P-1 on all GPUs
- a motherboard with 6 PCIe slots
- CPU: AVX-512 is nice in that it lets you explore the full range of prime95/mprime, but AVX2 still allows exponents up to 920M
- OS: Ubuntu, so you can use the ROCm driver for high performance and save money; or go with Windows, but that will cost a license fee
- optionally, a roll of hardware cloth or other inexpensive EMI shielding material, from which to craft an EMI shield box in halves with tin snips (otherwise cell-phone battery life and over-the-air TV reception suffer)

Assemble as if your plan is to install 6 GPUs. The spacing helps air flow, and being able to mount a replacement or spare occasionally is handy. Don't put much else on the same circuit breaker; a microwave oven or vacuum cleaner switched on will trip the breaker and take the system down abruptly.

CPU performance is not critical for gpuowl. It matters only briefly: during validation of past work when resuming runs in progress, during the GCD in P-1, and while building proof files at the end of PRP runs. The other 99% of the time it's the GPU's performance that matters. I've used such a rig with only 16 GB of DDR3 RAM and an i7-4790 or G1840. CPU speed and installed RAM will matter more if you plan to use gpuowl to do P-1 stage 1 and hand the P-1 off to prime95 on the CPU for stage 2, as a recent gpuowl update allows.

Run the GPUs at reduced power from nominal, for higher economic efficiency and to keep from overheating the 15-amp wall socket and surge suppressor. I've seen the plastic discolor or melt on a surge suppressor or inline power monitor, and suppressor and monitor electronics fail, while the outlet still looked okay. (That was presumably from lightning hits conducted through the buried power cabling, although I can't rule out chronic running hot as a contributing factor. Lightning bolts have also taken out unprotected appliances, including a washing machine, the occasional GPU PCB, and a motherboard.)

Do not run mfakto on the IGP. (I lost two motherboards and two GPUs trying that; the combined load of CPU, IGP, and PCIe draw produced sparks and flame at one spot on the motherboard.)

Last fiddled with by kriesel on 2023-01-12 at 12:21
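On the PSU-overspec point above, the sizing arithmetic goes roughly like this. The per-card and platform wattages are assumptions; plug in your own power-limited figures:

Code:
# PSU sizing sketch: aim the expected DC load at the 40-60% of rating where
# Gold/Platinum units convert most efficiently. Wattages are assumed examples.
gpu_watts = 180            # per Radeon VII, power-limited (assumed)
num_gpus = 4
platform_watts = 120       # CPU, motherboard, fans, risers (assumed)
psu_rating_watts = 1600

dc_load = num_gpus * gpu_watts + platform_watts
load_fraction = dc_load / psu_rating_watts
print(f"Expected DC load: {dc_load} W -> {load_fraction:.0%} of a "
      f"{psu_rating_watts} W supply (peak-efficiency band, with headroom "
      f"for a fifth or sixth card)")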

#9
"/X\(‘-‘)/X\"
Jan 2013
https://pedan.tech/
6160₈ Posts
Also keep in mind that household circuits in North America are limited to 80% of rated capacity for continuous duty, so a 120 volt 15 amp circuit is limited to 1440 watts, and some circuit breakers will trip even below this point after many hours or even days.
My repurposed mining rig is running 3 3070s on one power supply and 5 3070s on another. I cap them at 200 watts each; unrestricted they will pull 250 running mfaktc, close to maxing out the one circuit. So you may need a dedicated circuit (or two!). It's something to think about before building a monster machine. I would only buy a Platinum-rated power supply if drawing that much, as the additional cost will pay for itself over a year or two in electricity savings.

As an FYI for those reading: not all GPUs are good to use with all risers. My 3070s are fine with USB risers because they don't draw much from the PCIe slot, and I believe the same is true of the Radeon VIIs. My 1070s draw too much from the PCIe slot to use with USB risers. Risers should be about $2 each after the mining crash.

If you're running a high-amperage machine, use a cord rated above it. Let's say I don't throttle the five 3070s on the one power supply: they would pull 1250 watts, plus maybe 50 for the motherboard/CPU. Call it 1300 watts. It's a Platinum power supply running at 90% efficiency, so that's 1444 watts at the wall, or 12 amps. That exceeds what a 16 AWG cord can handle continuously (10 amps), if the cord is even built to handle that much (some will get too hot at the plug at either end). I recently purchased a 14 AWG cord rated for 15 amps and it's no longer a concern. Even so, the plug at the end still heats up 10°C while drawing just 1150 watts, measured by infrared thermometer!

Last fiddled with by Mark Rose on 2023-01-12 at 17:03
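Written out as a script, the same circuit and cord check looks like this. The wattages and efficiency are the figures from the rig above; treat them as examples rather than ratings for your own hardware:

Code:
# Wall-power and cord check for a multi-GPU rig, using the figures above.
gpu_watts, num_gpus = 250, 5      # unthrottled draw per card
platform_watts = 50               # motherboard + CPU
psu_efficiency = 0.90             # Platinum-ish at this load

wall_watts = (num_gpus * gpu_watts + platform_watts) / psu_efficiency
amps = wall_watts / 120
print(f"Wall draw: ~{wall_watts:.0f} W -> ~{amps:.1f} A at 120 V")

# North American continuous-duty limit: 80% of the breaker rating.
continuous_limit_watts = 0.8 * 15 * 120
print(f"Over the 1440 W continuous limit: {wall_watts > continuous_limit_watts}")

# A 16 AWG cord is only good for ~10 A continuous; 14 AWG handles 15 A.
print(f"Needs a heavier-than-16-AWG cord: {amps > 10}")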

#10
Random Account
Aug 2009
Not U. + S.A.
2,861 Posts

#11
P90 years forever!
Aug 2002
Yeehaw, FL
17×487 Posts
If PRP is your goal, then by far the best solution IMO is used Radeon VIIs.
My highest-throughput machine is an old Intel CPU that I bought to develop AVX-512 code. The mobo supports 4 GPUs. With a 1000 W Platinum power supply, I have four Radeon VIIs with overclocked GPU memory and underclocked GPU compute. Each card draws ~150 W at the wall. For 112M exponents, the four GPUs will finish 3.7 PRP tests each day. I recently picked up two Radeon VIIs for about $325 each including taxes and shipping.

Energy costs are very significant. Platinum power supplies are a must. Running underclocked is a must. If you ever get the urge to run at normal clocks, compute the extra energy cost and put that money toward more underclocked GPUs instead. The payback period for me was just under 2 years.
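For anyone pricing this out, the per-test energy arithmetic is straightforward. The wall wattage and throughput are the figures above; the electricity rate is an assumption, so plug in your own:

Code:
# Energy cost per 112M PRP test for the four-card rig described above.
watts_at_wall_per_card = 150
num_cards = 4
tests_per_day = 3.7
usd_per_kwh = 0.15               # assumed electricity rate

kwh_per_day = watts_at_wall_per_card * num_cards * 24 / 1000
print(f"{kwh_per_day:.1f} kWh/day -> ~${kwh_per_day * usd_per_kwh / tests_per_day:.2f} per test")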