mersenneforum.org > Great Internet Mersenne Prime Search > Hardware
2021-04-16, 23:27   #23
Xyzzy ("Mike", Aug 2002)

https://seasonic.com/prime-tx

https://www.tomshardware.com/reviews...-psu,5510.html
https://www.kitguru.net/components/p...supply-review/

$338.33 including shipping and taxes.

Attached thumbnail: K.JPG (582.1 KB)
2021-04-16, 23:31   #24
masser ("wear a mask", Jul 2003)

:drool:
2021-04-16, 23:39   #25
Xyzzy ("Mike", Aug 2002)

Attached thumbnails: A.PNG (516.8 KB), B.JPG (669.5 KB)
2021-04-17, 07:52   #26
LaurV, Romulan Interpreter (Jun 2011, Thailand)

Quote:
Originally Posted by Xyzzy
Code:
Model name:                      Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz 
CPU MHz:                         3000.000 
CPU max MHz:                     4800.0000 
CPU min MHz:                     1200.0000 
...
Code:
$ cat results.bench.txt | grep 6144 
...
Timings for 6144K FFT length (18 cores, 1 worker):  1.88 ms.  Throughput: 531.20 iter/sec.
Quote:
Originally Posted by Xyzzy
TL;DR - This CPU is a tiny bit faster than a Vega 64 GPU for PRP work.
Code:
[Main thread Apr 10 20:26] Mersenne number primality test program version 29.8 
[Main thread Apr 10 20:26] Optimizing for CPU architecture: Core i3/i5/i7, L2 cache size: 18x1 MB, L3 cache size: 25344 KB 
[Work thread Apr 10 20:26] Starting Gerbicz error-checking PRP test of M77936863 using AVX-512 FFT length 4200K, Pass1=1920, Pass2=2240, clm=1, 18 threads 
[Work thread Apr 10 20:28] Iteration: 100000 / 77936863 [0.12%], ms/iter:  1.124, ETA: 24:17:50 
...
[Work thread Apr 10 20:45] Iteration: 1000000 / 77936863 [1.28%], ms/iter:  1.122, ETA: 23:58:56 
[Work thread Apr 10 20:45] Gerbicz error check passed at iteration 1000000.
Quote:
During this test the CPU runs at 2.8GHz all-core (AVX-512) and uses ~200W. The temps stabilize at 57°C with a Noctua UH-12S HSF with the computer out in the open on a test bench.
Same here, we are in the same ballpark, more or less, except that we stay below 48°C with one of these. Not an endorsement; we bought the cheapest one we could find that fits socket 2066. We think an x120 is too small for this 18-core, ~200 W monster (we try to stay at 165 W by limiting the power from the mobo), and we do not have space in the case for an x360 or larger. The x240 fits there perfectly. We will also have to fit another two x120s and two x240s in the box, for the 4 GPU cards (only one 1080 Ti, with an x120 cooler mounted, is in the picture; it drives the monitor. The final build will have 2×1080 Ti and 2×2080 Ti with mixed cooling, i.e. both water and air, as seen in the photo attached below and in other pictures we have posted here in the past).

Attached: blue.jpg (149.8 KB)

(And DON'T make fun of my barbecue stick! That is an important piece of hardware; without it the card would fall on its side, as there is no fixing frame or screws, and it is quite heavy. We don't want any magic smoke escaping from any component. Also, there is a 1 TB 2280 M.2 under the wooden stick, covered by the chipset's cooling block; you can see it if you look carefully at the photo. There is no other SSD or HDD connected. Nothing to do with the subject, just boasting.)

Initially (before tuning the clocks and FFTs), the lower FFTs were a bit faster, but the larger ones were slower (capped at 501 iter/sec) compared with your benchmarks. Note that we decided to go with 64 GB of RAM at 3200; we wanted 128 GB initially, possibly at higher clocks, but the price and especially the lead time were lousy, and we concluded that on the current (old) rig (i7-6950X, 128 GB at 2400) we "never" used all the RAM, so we settled for 64 on the new one, which we could buy directly at the counter. The advantage of buying locally is that, in case of failure, we can go back and throw it at their heads (joking: the people at the local "JIB computers" have always been very nice and replaced all our bad parts when they went into the weeds, which didn't happen often, but it did happen sometimes).
Quote:
Originally Posted by Xyzzy
We have a W10 "Home" license but we were unable to upgrade it to a "Pro" license.
When you get Win10 (create a bootable USB directly from the M$ site) there is no difference between Home and Pro; only Enterprise is different. If you installed and registered as Home, there is a way to "upgrade" to Pro, but it is complicated, requires deleting files, etc. OTOH, unless you really do "strange things" on your computer, you won't need Pro; you can do everything you want with Home. Look at the comparison list; the difference is much smaller than it was between former Windozes (like Win7 Home vs. Pro), and the price tells it too (something like 139 vs. 199 bucks). Possibly install AnyDesk on top to compensate for the missing network features, and you are good to go. Performance-wise, there is no difference (we tested that, be sure!).

We are in the same situation right now. We tried without success to install Win 7 (from the original DVD, paid to M$ many years ago), but the mobo is too new and it crashes somewhere in the middle of the installation process. In fact, it doesn't crash, but the mouse and keyboard become inaccessible and we can't click the "next" button, despite reconnecting them to different USB ports, while the animation on screen keeps running (so the computer is waiting for us to click next, or "tab" into it, but neither the keyboard nor the mouse works). We assume Gigabyte, Microsoft and Intel conspired against us: they have a section of microcode hidden in the BIOS, or wherever, which says "if the user is LaurV and the time zone is Thailand and the installation time is between 1:00 AM and 4:00 AM, then do not install Win 7, and lock the keyboard and the mouse".

Therefore, we had to download the Win10 installer from M$, create the bootable USB and install Win 10. The download, as well as the installation, went unbelievably smoothly! We haven't registered it (as either Home or Pro) yet. We are still "deciding" whether to use this opportunity to install some Linux distro and, once and for all, learn a real operating system (long an item on our "to do" list, but we never found the time, motivation, cleverness, etc., so we still suck at that subject). But as we were a Windoze user (and programmer; we ported WinCE and some Embedded versions to many devices that the factory we work for produced over the years) for 40 years, we continue to procrastinate on that task...

Ok, and now, 521 restarts later (or maybe 607 restarts, we are not sure), we got something like this (edited to look similar to yours; the details are in the attached zip).

We would be interested to see you "tune" yours in the same fashion (try all FFT combinations, with 1, 2, 3, 6, 9 workers; these 18 cores have strange habits when they marry each other...)
Code:
Core/Workers equal split:

FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=1 (18 cores, 1 worker) :  1.82 ms.  Throughput: 549.85 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=1024, Pass2=6144, clm=1 (18 cores, 2 workers):  5.32,  5.56 ms.  Throughput: 368.02 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=2 (18 cores, 3 workers):  8.85,  8.81,  8.82 ms.  Throughput: 339.84 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=2 (18 cores, 6 workers): 18.93, 18.58, 18.22, 18.67, 18.76, 18.33 ms.  Throughput: 322.98 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=2 (18 cores, 9 workers): 28.39, 28.30, 28.28, 28.34, 27.70, 28.13, 27.91, 28.30, 27.88 ms.  Throughput: 319.88 iter/sec.

Xyzzy's test (best, after tuning):

FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=2 (1 core, 1 worker):   19.38 ms.  Throughput: 51.60 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=2 (2 cores, 1 worker):   9.94 ms.  Throughput: 100.56 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=2 (3 cores, 1 worker):   6.68 ms.  Throughput: 149.63 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=2 (4 cores, 1 worker):   5.07 ms.  Throughput: 197.42 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=1024, Pass2=6144, clm=2 (5 cores, 1 worker):   4.13 ms.  Throughput: 241.90 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=1024, Pass2=6144, clm=2 (6 cores, 1 worker):   3.49 ms.  Throughput: 286.43 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=1024, Pass2=6144, clm=2 (7 cores, 1 worker):   3.03 ms.  Throughput: 329.80 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=2 (8 cores, 1 worker):   2.77 ms.  Throughput: 361.65 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=2 (9 cores, 1 worker):   2.54 ms.  Throughput: 393.35 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=1 (10 cores, 1 worker):  2.39 ms.  Throughput: 418.04 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=2 (11 cores, 1 worker):  2.28 ms.  Throughput: 438.67 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=1024, Pass2=6144, clm=1 (12 cores, 1 worker):  2.17 ms.  Throughput: 460.50 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=1 (13 cores, 1 worker):  2.11 ms.  Throughput: 473.72 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=1024, Pass2=6144, clm=1 (14 cores, 1 worker):  2.04 ms.  Throughput: 490.41 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=1024, Pass2=6144, clm=1 (15 cores, 1 worker):  1.98 ms.  Throughput: 506.07 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=1024, Pass2=6144, clm=1 (16 cores, 1 worker):  1.93 ms.  Throughput: 517.92 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=1 (17 cores, 1 worker):  1.90 ms.  Throughput: 527.26 iter/sec.
FFTlen=6144K, Type=3, Arch=8, Pass1=768,  Pass2=8192, clm=1 (18 cores, 1 worker):  1.86 ms.  Throughput: 536.87 iter/sec.

Front line tests (tuning):

FFTlen=5600K, Type=3, Arch=8, Pass1=896,  Pass2=6400,  clm=4 (18 cores, 1 worker):  2.06 ms.  Throughput: 485.11 iter/sec.
FFTlen=5600K, Type=3, Arch=8, Pass1=896,  Pass2=6400,  clm=2 (18 cores, 1 worker):  1.56 ms.  Throughput: 641.92 iter/sec.
FFTlen=5600K, Type=3, Arch=8, Pass1=896,  Pass2=6400,  clm=1 (18 cores, 1 worker):  1.53 ms.  Throughput: 654.65 iter/sec.
FFTlen=5600K, Type=3, Arch=8, Pass1=1280, Pass2=4480,  clm=2 (18 cores, 1 worker):  1.75 ms.  Throughput: 572.56 iter/sec.
FFTlen=5600K, Type=3, Arch=8, Pass1=1280, Pass2=4480,  clm=1 (18 cores, 1 worker):  1.56 ms.  Throughput: 639.73 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=192,  Pass2=30720, clm=4 (18 cores, 1 worker):  2.05 ms.  Throughput: 487.83 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=192,  Pass2=30720, clm=2 (18 cores, 1 worker):  2.51 ms.  Throughput: 397.75 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=192,  Pass2=30720, clm=1 (18 cores, 1 worker):  5.91 ms.  Throughput: 169.20 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=640,  Pass2=9216,  clm=4 (18 cores, 1 worker):  1.84 ms.  Throughput: 543.78 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=640,  Pass2=9216,  clm=2 (18 cores, 1 worker):  1.64 ms.  Throughput: 608.70 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=640,  Pass2=9216,  clm=1 (18 cores, 1 worker):  1.59 ms.  Throughput: 627.25 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=768,  Pass2=7680,  clm=4 (18 cores, 1 worker):  2.00 ms.  Throughput: 499.62 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=768,  Pass2=7680,  clm=2 (18 cores, 1 worker):  1.69 ms.  Throughput: 591.33 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=768,  Pass2=7680,  clm=1 (18 cores, 1 worker):  1.62 ms.  Throughput: 618.17 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=960,  Pass2=6144,  clm=4 (18 cores, 1 worker):  2.16 ms.  Throughput: 462.71 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=960,  Pass2=6144,  clm=2 (18 cores, 1 worker):  1.67 ms.  Throughput: 600.49 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=960,  Pass2=6144,  clm=1 (18 cores, 1 worker):  1.59 ms.  Throughput: 629.88 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=1152, Pass2=5120,  clm=2 (18 cores, 1 worker):  1.74 ms.  Throughput: 573.79 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=1152, Pass2=5120,  clm=1 (18 cores, 1 worker):  1.60 ms.  Throughput: 625.71 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=1280, Pass2=4608,  clm=2 (18 cores, 1 worker):  1.84 ms.  Throughput: 542.17 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=1280, Pass2=4608,  clm=1 (18 cores, 1 worker):  1.67 ms.  Throughput: 599.78 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=1536, Pass2=3840,  clm=2 (18 cores, 1 worker):  2.02 ms.  Throughput: 494.24 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=1536, Pass2=3840,  clm=1 (18 cores, 1 worker):  1.72 ms.  Throughput: 581.54 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=1920, Pass2=3072,  clm=2 (18 cores, 1 worker):  2.21 ms.  Throughput: 452.96 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=1920, Pass2=3072,  clm=1 (18 cores, 1 worker):  1.73 ms.  Throughput: 576.57 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=2304, Pass2=2560,  clm=1 (18 cores, 1 worker):  1.84 ms.  Throughput: 544.70 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=3072, Pass2=1920,  clm=1 (18 cores, 1 worker):  2.50 ms.  Throughput: 400.35 iter/sec.
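For anyone tuning the same way, the winning config per FFT length can be pulled out of these benchmark lines mechanically. A minimal Python sketch (mine, not part of prime95; the sample lines are copied from the table above, and in practice you would read the whole results.bench.txt instead):

```python
import re

# Sample benchmark lines in prime95's results.bench.txt format.
bench = """\
FFTlen=5600K, Type=3, Arch=8, Pass1=896,  Pass2=6400,  clm=4 (18 cores, 1 worker):  2.06 ms.  Throughput: 485.11 iter/sec.
FFTlen=5600K, Type=3, Arch=8, Pass1=896,  Pass2=6400,  clm=1 (18 cores, 1 worker):  1.53 ms.  Throughput: 654.65 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=640,  Pass2=9216,  clm=1 (18 cores, 1 worker):  1.59 ms.  Throughput: 627.25 iter/sec.
FFTlen=5760K, Type=3, Arch=8, Pass1=192,  Pass2=30720, clm=1 (18 cores, 1 worker):  5.91 ms.  Throughput: 169.20 iter/sec.
"""

best = {}  # FFT length -> (throughput, full benchmark line)
for line in bench.splitlines():
    m = re.search(r"FFTlen=(\S+?),.*Throughput:\s*([0-9.]+)", line)
    if m:
        fft, thr = m.group(1), float(m.group(2))
        if fft not in best or thr > best[fft][0]:
            best[fft] = (thr, line)

for fft in sorted(best):
    print(f"{fft}: best {best[fft][0]:.2f} iter/sec")
# prints the 654.65 and 627.25 winners from the sample above
```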
Attached: bench.zip (21.5 KB)
2021-04-17, 12:46   #27
Xyzzy ("Mike", Aug 2002)

We have a BBQ stick like yours that we use to push on the slot tab to remove the video card. Our fingers are too fat to do it manually.

We only went with W10 "Pro" because W10 "Home" doesn't support 256GB of memory.

We will tune the CPU once we get a more robust cooling system installed.

2021-04-18, 05:44   #28
LaurV, Romulan Interpreter (Jun 2011, Thailand)

About the 256 GB: if that is your only concern, did you try "Edu"? There is no memory restriction (well, the 2 TB license limit, but that should be enough), and you may get it for free. Some people say "Edu Pro" is better than "Pro", and "Edu" (based on "Enterprise") is even better. Unless you already bought "Pro"...

A friend of mine who read my post above, where I said that "Pro" and "Home" are equally fast, said that may not be true, and "Home" may actually be about 1% to 3% faster. The reason may be WIP (Windows Information Protection). I don't know what WIP does, but the feature comparison list says:

Quote:
WIP helps to protect against potential data leakage without otherwise interfering with the employee experience. WIP also helps to protect enterprise apps and data against accidental data leaks on enterprise-owned devices and personal devices that employees bring to work, without requiring changes to your environment or other apps.
People interpret that as a "patch" for "Meltdown" and "Spectre" (special code that carefully "arranges" the data in memory to avoid the side-channel leaks specific to those two), which is active on "Pro" (check-boxed in the list) but not on "Home", and such patches are known from the past (Intel microcode for CPUs) to make everything in the system about 1 to 3 percent slower. OTOH, my friend says he never tested it to check whether it is true. It may be only gossip.

Related to thermals, one (BIG) issue with the water-cooled CPU out in the open (i.e. no case fans) is the fact that, well, there are no case fans...

While the CPU temperature stays at 47°C, or 48°C max (anyhow, under 50°C, or for Americans 122°F), the memory gets close to 60°C (140°F), and the VRM mosfets, together with the big iron backplate behind the mobo, get close to 70°C (158°F). That is because, when you use an air cooler, there is a lot of air moving around; cold, warm, or hot, it is moving, taking the heat away from the memory, chipset, mosfets, and other things that don't have their own fans. When everything is "on water", and there are no case walls to hang the radiators on, no air is blown anywhere near the mobo, chipset, or power mosfets, nothing dissipates that heat, and they really DO get hot.
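Since both scales are quoted above, a two-line sanity check of the conversions (plain Python, nothing assumed beyond the formula F = C × 9/5 + 32):

```python
def c_to_f(c):
    """Celsius to Fahrenheit: F = C * 9/5 + 32."""
    return c * 9 / 5 + 32

for c in (50, 60, 70):
    print(f"{c} °C = {c_to_f(c):.0f} °F")  # 122, 140, 158 °F
```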

For my build, with 2+2+2+1+1 x120 radiator fans, a lot of study will be needed on the direction the fans blow. Into the case or out of it? This will involve many hours of tests. No joke. Probably a mix of them will be best, but which mix? (Yes, fans blowing "hot" air into the case! That was not a typing mistake. Can you guess why?)

2021-04-18, 13:06   #29
Xyzzy ("Mike", Aug 2002)

We don't consider 70°C for VRMs to be too hot.

Our board has two miniature VRM fans. The "HS Fan" reading in the second attachment is the VRM fan speed.

Attached thumbnails: fans.jpg (113.9 KB), temps.png (31.6 KB)
2021-04-18, 13:07   #30
Xyzzy ("Mike", Aug 2002)

Quote:
Originally Posted by LaurV
(yes, fans blowing "hot" air into the case! This was not a typing mistake. Do you guess why?)
The "hot" air from the radiators will only be a few degrees above ambient and will also be much cooler than the rest of the heated computer parts?

2021-04-18, 15:47   #31
LaurV, Romulan Interpreter (Jun 2011, Thailand)

Yep, that's half of the solution. More exactly, the second half.

The idea is that "heating" and "cooling" is (at the molecular level) just thermal agitation (or, respectively, stopping it). Fast molecules (hot) hit slow molecules (cold) and lose (transfer) part of their energy. More cold molecules, more heat transfer. Why are water (and liquids in general) better coolants than air (and gases in general)? Because liquids are more "dense"; they contain a lot more molecules per unit of volume (say, a cubic centimeter), so more bumps, better energy (heat) transfer.

That works for air too, and for all compressible fluids in general: the compressed fluid always cools (or heats) better than the rarefied one. The more cold-air molecules hit your radiator fins, the faster they take the heat away. If you put your mobo inside a 10-bar air chamber, it will cool faster. In fact, if you put it in absolute vacuum, it will never cool by convection, because there is no air to take the heat away. The mobo will heat up to hundreds or thousands of degrees, at which point, if it does not melt, it will start losing more energy by IR radiation, which can propagate through vacuum. Haha.
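The density argument can be put in numbers with Newton's law of cooling, Q = h·A·ΔT, where the convective coefficient h grows with the density (and speed) of the fluid. A sketch with textbook ballpark h values, not measurements from any rig; the area and temperature difference are arbitrary assumed numbers:

```python
# Newton's law of cooling: Q = h * A * dT (heat removed, in watts).
# h values are rough textbook ranges for convective transfer,
# chosen for illustration only.
H = {"still air": 10.0, "forced air": 100.0, "water": 3000.0}  # W/(m^2*K)
A = 0.01    # m^2, assumed hot-surface area
dT = 30.0   # K, assumed surface-to-fluid temperature difference

for fluid, h in H.items():
    print(f"{fluid:>10}: {h * A * dT:6.1f} W removed")
```

The ordering, not the exact numbers, is the point: the denser (or faster-moving) the fluid, the more heat it carries off the same surface.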

I have won many bets in my life by proving to friends and colleagues (who dared to call themselves "IT professionals") that if you set all your case fans to blow air into the case, and let the air go out through "special" holes so as to create the right airflow inside, then the box stays cooler than with the "classic" method, where some fans blow cold air into the case (usually in front, or below the housing) and other fans blow air out of the case "to remove the heat" (usually at the back, to avoid hot air blowing "into your face", or at the top, because there the air "is hotter", since "hot air rises". That last part is itself false: the air moves so fast that it has no time to "accumulate" heat in some corner or at the top of the housing, even with just a single fan). When the fans push more air into the case than out, the pressure inside increases, even if only a bit (because the case has many holes). The best solution would be to have the CPU and GPU fans blow the hot air OUT of the case, or to mount them outside so they do not influence the air movement inside, but then to have enough additional case fans blowing cold air from outside INTO the case to keep the pressure inside higher. This not only cools faster, but also avoids condensation inside.

Unfortunately, for my build that won't be possible. I cannot mount the fans outside, because all the hoses of the "all in one" coolers are short and immovable (i.e. I would have to scrap them all and build a classic water loop, a lot of effort and money), and if I mount them on the case walls oriented so as to "blow the hot air out of the house", then eight such strong fans will "vacuum" the inside of the case (i.e. decrease the pressure), slowing down the cooling of the passive stuff (chipset, mosfets). Moreover, I will not have much space left to add fans that bring more air in (besides one 28 cm fan on the side and one 14 cm in front, all walls are "taken"). The mosfets, RAM, and chipset must have some fan blowing cold air ONTO them, like on your mobo (which is brilliant, by the way!). So at least some of the 8 fans will have to blow inwards. This is what I assume, but I may be wrong. Therefore, it is a trade-off between "taking the air out and decreasing the pressure" and "putting warm air in", both with negative effects on cooling, and the effects cannot be quantified without trying many combinations. It will all depend on how warm the air I put inside is, and how big the pressure difference is (i.e. how much "compressed" airflow I can push inside). Probably the CPU fans will blow inwards, because the CPU circuit will be the "coldest" one. If the CPU can stay in (say) the 60s, then the water temperature will stay somewhere in the 50s (thermal resistance of the CPU cover and the copper block of the cooler/pump), which means the air will be in the 40s (thermal resistance of the aluminum reservoir; yeah, I know, don't ever talk to me about two different metals in the same loop, didn't I say this is a cheap cooler?), as long as the room temperature stays around 30 (on hot days).
The water runs very fast; in fact the water that comes out of the water blocks is just 1-2-3 degrees hotter than the water that enters them. Exactly as you said, the water has no time to "get hot" there. On average the water is about 10-15 degrees colder than the CPU (otherwise, no cooling; the thermal resistance of the block works like resistance in electronics, except that it is thermal). Similarly, the water that comes out of the radiator is just 1-2-3 degrees colder than the water that enters it. Same for the air, which is on average 10-12 degrees colder than the water (for an x240 radiator), otherwise no cooling; the air sucked in by the fans is at room temperature, and the air blown by the same fans through the radiator's mesh is just a few degrees hotter. No time to get "hot". So blowing it into the case may help with cooling, due to pressurizing, or may not, because it is warmer. We will see...
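The temperature chain described above behaves like a chain of resistors, with ΔT = P·R at each stage. A sketch with thermal resistances picked purely to reproduce the ballpark figures quoted (165 W power limit, ambient around 30°C); the R values are illustrative assumptions, not measurements:

```python
# Steady state: each stage sits dT = P * R above the next, like resistors in series.
P = 165.0           # W, the mobo power limit mentioned above
T_ambient = 30.0    # degC, room temperature on a hot day
R_radiator = 0.120  # K/W, assumed water-to-room resistance (x240 radiator + fans)
R_block    = 0.073  # K/W, assumed CPU-lid plus cold-plate resistance

T_water = T_ambient + P * R_radiator
T_cpu   = T_water + P * R_block
print(f"water in the 50s, CPU in the low 60s: {T_water:.1f} / {T_cpu:.1f} degC")
```

Halving either resistance (a bigger radiator, a better cold plate) halves that stage's temperature rise, which is why the radiator size matters more than how "hot" the exhaust air feels.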

2021-04-18, 18:37   #32
Nick (Dec 2012, The Netherlands)

At home with LaurV...
Attached: e101_0419.jpg (70.1 KB)
2021-04-18, 23:04   #33
ewmayer, 2ω=0 (Sep 2002, República de California)

Re. Hot-air-blowing: [hot by human standards] != [hot by chipset standards].

The 2nd Radeon VII GPU I added to my old ATX-case Haswell system last summer (all case fans long ago kaput, ventilation all via ambient air thanks to removed case side panels) sits with its bottom a mere ~3 cm above the hot-air exhaust fan of the PSU, so roughly half the GPU's intake air is preheated by the PSU. Said GPU2 still runs, at the same settings, roughly 10°C cooler than GPU1, the one sitting above it. Part of that is of course because GPU1 is also getting air partly prewarmed by GPU2, but the main thing is that air at roughly 50°C blowing from the PSU directly into the intake fans of GPU2 still has a significant cooling effect relative to the hotter-running GPU.
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation. A copy of the license is included in the FAQ.