[QUOTE=storm5510;622365]I always found a warm or hot plug in a wall outlet concerning. In this situation, I would visit Lowes, get a length of 12 AWG, and add a higher-capacity plug and receptacle. I am sure you have the proper tools to put it all together. Just a thought.[/QUOTE]
Indeed. I once had bad wiring on an electrical socket: some of the insulation was under the screw on the side of the plug, which didn't like having ~900 watts pulled through it for GIMPS. It partially melted the plug on my laptop charger and was damaging the insulation on the house wiring. I measured it at 72°C (162°F). Before getting that 15 amp cord with 14 AWG wiring, I had a 16 AWG cord. The cord wasn't getting too warm, but the plug had reached 56°C (133°F). The new cord's plug reaches about 30°C (86°F) in a 20°C (68°F) basement. I'm not worried about it since it holds steady at that temperature.
[QUOTE=Mark Rose;622379]Indeed. I once had bad wiring on an electrical socket: some of the insulation was under the screw on the side of the plug, which didn't like having ~900 watts pulled through it for GIMPS. It partially melted the plug on my laptop charger and was damaging the insulation on the house wiring. I measured it at 72°C (162°F).
Before getting that 15 amp cord with 14 AWG wiring, I had a 16 AWG cord. The cord wasn't getting too warm, but the plug had reached 56°C (133°F). The new cord's plug reaches about 30°C (86°F) in a 20°C (68°F) basement. I'm not worried about it since it holds steady at that temperature.[/QUOTE] I imagine you could smell it cooking. I have several things here that get warm right where the wire goes into the back of the plug. The worst is a Bissell vacuum cleaner. It almost gets too warm to pull out of a wall receptacle by hand if I use it for more than a few minutes. I would say the wire inside the jacket is 16 AWG by looking at it. Too small for that application.
Some of the cords that ship with equipment, or are permanently attached to it, are not adequate for its continuous operation. That's malpractice IMO.
Re the budget for the hardware suggestions: the OP set the budget.[QUOTE=MarkVanCoutren;622308]I'm looking to spend around $3k.[/QUOTE]
[QUOTE=storm5510;622365]Something really good can be built for half of that.[/QUOTE]That is true. And building two such half-price systems might give better throughput than a single full-price system. Plus there is some redundancy in case one is out of action.
Another possibility for the OP's stated budget is a used system with dual lotsa-core Xeons and half a TB of ECC RAM, to do some serious P-1 factoring alongside substantial PRP. (Then scrape up some more funds later and put a couple of Radeon VIIs in it, for more-efficient PRP.)
As someone who started out on the lotsa-cheap-systems route, fewer, more capable boxes have some appeal for controlling sysadmin overhead.
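One note on the lotsa-RAM route: P-1 stage 2 only benefits if prime95 is actually allowed to use that memory, which is set per machine in local.txt. A minimal sketch using prime95's documented during/else day-night syntax; the values (in MB) are placeholders for a ~512 GB box, not a recommendation:
[CODE]
Memory=450000 during 23:30-7:30 else 400000
[/CODE]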
It's been a while since I built my i5-6600 systems, but I found them by far the cheapest way to get performance at the time (8.5 years ago, damn!):
I bought 4 x i5-6600 at $148 each, 4 x 32 GB at $120 ($480 for 8 DIMMs), 4 x motherboard at $40 each, and a single 650 watt power supply for $95. It cost me $2020 after tax. I also bought three ATX splitter cables for US$63. When operating off the one power supply, turbo disabled, each system consumed about 67.5 watts at the wall with a 0.1 V undervolt. I did eventually buy GTX 1070s, more power supplies, and cases. Those machines are all doing P-1 currently. I bought 8 channels of memory bandwidth for 16 cores; the challenge has always been memory bandwidth. I should probably sell those systems while they still have some value. They're not very power efficient these days. It's nice to have a cluster to tinker with, though.
[QUOTE=kriesel]Another possibility for the OP's stated budget is a used system with dual lotsa-core Xeons... [/QUOTE]
I would [U]not[/U] recommend a dual-CPU setup until the software openly supports it, i.e. [I]Prime95[/I] and others. [QUOTE=Mark Rose]...I should probably sell those systems while they still have some value. They're not very power efficient these days. It's nice to have a cluster to tinker with, though.[/QUOTE] Everything I have is getting long in the tooth: two HP workstations from the early 2010s and the custom i7 I built in 2018. It is hard to believe nearly five years have passed on the newest one. Tinker is right, and I do it frequently.
[QUOTE=storm5510;622436]I would [U]not[/U] recommend a dual-CPU setup until the software openly supports it, i.e. [I]Prime95[/I] and others.
[/QUOTE]I've been running prime95 and Mlucas on dual-Xeon systems for years. It's easy: a single instance works by default. For a little more performance, one can run separate prime95 instances launched from Windows batch files that specify which NUMA node and CPU affinities each instance runs on (see the sketch below). But that is optional, not required. Probably the same applies on Linux. Used dual-Xeon former servers with 128 GiB of ECC RAM can be found for ~$600. I have one of those churning through P-1 redo for exponents that got only stage 1, or only a low stage 2 bound, the first time.
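For anyone who wants to try the batch-file route, a minimal sketch, assuming a two-node box with 16 logical CPUs per node and a separate prime95 directory per instance (the paths and masks are hypothetical; adjust them to your topology):
[CODE]
:: Launch one prime95 instance per NUMA node.
:: /AFFINITY takes a hex CPU mask; when combined with /NODE it is
:: interpreted relative to that node's processors.
start /NODE 0 /AFFINITY 0xFFFF /D C:\prime95\node0 prime95.exe
start /NODE 1 /AFFINITY 0xFFFF /D C:\prime95\node1 prime95.exe
[/CODE]
Each instance keeps its own prime.txt/local.txt in its working directory, so the two copies don't step on each other's settings.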
What makes the Radeon VII so good? Is it the HBM2 memory?
Wouldn't modern professional cards be better at the task, ignoring the cost of the card?
It's the memory and the good FP64 ratio. HBM and a good DP ratio are now mostly found only in datacentre cards, but if a modern pro card has HBM it's probably better than a Radeon VII. If a pro card comes with GDDR6/X, it's likely a pro version of a consumer card, good at rendering but not necessarily at PRP; the main differences are validation and often twice the memory capacity. Nvidia's consumer FP64 ratio is awful, and it looks like AMD's RDNA3 FP64 ratio is half as good as RDNA2's, so that might be one to avoid. The RDNA2 6950 XT is comparable to the Radeon VII for PRP, thanks to the extra cache making bandwidth less relevant.
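As a rough back-of-the-envelope comparison (shader counts are published specs, boost clocks are approximate, so treat the TFLOPS as ballpark figures):
[CODE]
peak FP64 ~ shaders x 2 FLOP (FMA) x clock / FP64 divisor

Radeon VII: 3840  x 2 x ~1.75 GHz / 4  ~ 3.4 TFLOPS (1:4 ratio, ~1 TB/s HBM2)
RTX 4090:   16384 x 2 x ~2.5 GHz / 64 ~ 1.3 TFLOPS (1:64 ratio)
[/CODE]
Hence a 2019 card can still hold its own against much newer consumer flagships at PRP.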
[QUOTE=M344587487;631070]It's the memory and the good FP64 ratio. HBM and a good DP ratio are now mostly found only in datacentre cards, but if a modern pro card has HBM it's probably better than a Radeon VII. If a pro card comes with GDDR6/X, it's likely a pro version of a consumer card, good at rendering but not necessarily at PRP; the main differences are validation and often twice the memory capacity. Nvidia's consumer FP64 ratio is awful, and it looks like AMD's RDNA3 FP64 ratio is half as good as RDNA2's, so that might be one to avoid. The RDNA2 6950 XT is comparable to the Radeon VII for PRP, thanks to the extra cache making bandwidth less relevant.[/QUOTE]
Thanks for the explanation! How about professional accelerators like the AMD Instinct? Link: [url]https://www.amd.com/en/products/server-accelerators/instinct-mi250x[/url]