[QUOTE=LaurV;575713]Wow! it works! :shock: You (two) are my heroes for this weekend![/QUOTE]
Great! We are glad it works for you. [QUOTE=LaurV;575713]Albeit a little bit too complicate, first it didn't work, as I had the "CPU and GPU" output (sure! I want to see what BOTH of them are doing!), then I looked in the code and seen that you use the "-k" switch only when the output is "GPU Only"[/QUOTE] Yes, sorry, I should have mentioned that. I did not realize anyone was using the "GPU and CPU" output type, as it is very verbose. I added it shortly before we officially announced the notebooks, as I saw it was requested a few times on the main Colab thread and it was easy to implement. When using that option, both CUDALucas and MPrime are run in the background, while the [C]tail -f[/C] command is run in the foreground, so there is no easy way to pass input to CUDALucas.

I updated [URL="https://www.mersenneforum.org/showthread.php?t=26574"]our PrimeNet script[/URL] on Saturday so it still supports getting first-time LL tests, using [URL="https://www.mersenneforum.org/showpost.php?p=575260&postcount=73"]the method[/URL] described by @Prime95 above, so that users can keep using CUDALucas while we work on upgrading our GPU notebook to use GpuOwl. (@LaurV - You will no longer have [URL="https://www.mersenneforum.org/showpost.php?p=575673&postcount=11"]to do this manually[/URL]. :wink:) Anyone who wants to continue doing first-time LL tests on the GPU will need to set up their GPU notebooks again after they finish any current assignments.

I also included many of the [URL="https://www.mersenneforum.org/showpost.php?p=573177&postcount=44"]changes needed[/URL] for our PrimeNet script to support GpuOwl, including support for reporting LL/PRP and P-1 results. Going forward, we decided to recommend that users do PRP tests, which will be the default, although we will still provide the option of doing LL tests on the GPU for users with very limited Drive space, [URL="https://www.mersenneforum.org/showpost.php?p=573177&postcount=44"]as explained above[/URL].
Prime95/MPrime of course has its PrimeNet functionality built in, so unfortunately there is not much we can do about the CPU for users with limited Drive space. Those users will need to do LL DC tests on the CPU, although as George said, there is "a chance that a new Mersenne prime is hidden in all those double-checks".
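The background/foreground layout described above can be sketched in shell form. This is only an illustrative sketch, not the notebook's actual code: the [C]echo[/C] commands stand in for MPrime and CUDALucas, and the log file names are assumptions.

```shell
# Illustrative sketch only -- the echo commands stand in for MPrime and
# CUDALucas, and the log file names are made up. Both workers run in
# the background, each writing to its own log.
echo "CPU worker: iteration 1000 complete" > cpu.log &
echo "GPU worker: iteration 1000 complete" > gpu.log &
wait

# A single foreground command then merges both logs (the notebook uses
# "tail -f"; "-n +1" is used here so the example terminates). Because
# this one command owns the foreground, there is no easy way to route
# interactive input to either background worker.
tail -n +1 cpu.log gpu.log
```

This is why the "-k" switch can only take effect in "GPU Only" mode, where CUDALucas itself holds the foreground.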
[QUOTE=danc2;572211]I realize we did not post any output or pictures, just links.
Since we have this dedicated thread, here is example output from a GPU notebook running the Tesla V100-SMX2-16GB (a $6,195.00 GPU according to Amazon). [/QUOTE]The LL test runs much slower than with gpuowl -LL for the same exponent on the same Tesla V100 GPU.
Colab now using AMD CPUs
This is the first time I've ever had an AMD!!
[QUOTE]Previous CPU counts
15 Intel(R) Xeon(R) CPU @ 2.30GHz 63
9 Intel(R) Xeon(R) CPU @ 2.00GHz 85
8 Intel(R) Xeon(R) CPU @ 2.20GHz 79
1 [COLOR="Red"]AMD EPYC[/COLOR] 7B12 49[/QUOTE]
@mognuts
Yeah, I was pretty surprised when I first saw that on my machines also! [QUOTE]Previous CPU counts
111 Intel(R) Xeon(R) CPU @ 2.30GHz 63
97 Intel(R) Xeon(R) CPU @ 2.20GHz 79
29 Intel(R) Xeon(R) CPU @ 2.00GHz 85
15 AMD EPYC 7B12 49[/QUOTE]
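For anyone who wants to check which CPU model Colab handed out before committing a session to it, one way (plain Linux, assumed here rather than anything Colab-specific) is to read [C]/proc/cpuinfo[/C]:

```shell
# Print the CPU's marketing name and its numeric model number (e.g.
# 63, 79, or 85 for the Intel Xeons above, 49 for the AMD EPYC 7B12).
grep -m1 "model name" /proc/cpuinfo
grep -m1 "^model[[:space:]]" /proc/cpuinfo
```

If the instance is one you'd rather not keep, you can terminate the runtime and reconnect for another draw.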
[QUOTE=mognuts;582760]This is the first time I've ever had an AMD!![/QUOTE]
I was told if you snag one of those to throw it back, because its performance is lower than the others'. But that was a while back; that advice might have been referring to a different AMD model.
[QUOTE=PhilF;582788]I was told if you snag one of those to throw it back because the performance is lower than the others. But that was a while back, that advice might have been referring to a different AMD model.[/QUOTE]
Busy, but quickly... The AMD CPUs have been given out for quite a while now. And, at least for P-1'ing, they're faster than all the Intel instances (~20% or so).
[QUOTE=chalsall;582791]And, at least for P-1'ing, they're faster than all the Intel instances (~20% or so).[/QUOTE]
I cannot confirm that. Using Prime95 v30.4 and exponents in the 104M range with bounds determined by Prime95, I get the following ranking for the total time needed for P-1 stages 1 and 2: [CODE]Model [B]63[/B], Intel(R) Xeon(R) CPU @ 2.30GHz: [B]36.09[/B] h
Model [B]79[/B], Intel(R) Xeon(R) CPU @ 2.20GHz: [B]31.58[/B] h
Model [B]49[/B], AMD EPYC 7B12:                  [B]31.36[/B] h
Model [B]85[/B], Intel(R) Xeon(R) CPU @ 2.00GHz: [B]25.27[/B] h[/CODE]So, the Intel Model 85 is clearly the fastest.
[QUOTE=Flaukrotist;582793]I cannot confirm that. ...snip... So, the Intel Model 85 is clearly fastest.[/QUOTE]
I could very well be wrong. My observations were subjective. It would be worth collecting hard data on this.
There are 3 versions of the Intel chipset on Colab (that I've received on free accounts). The 2.30 GHz model 63 is the worst, followed by the 2.20 GHz model 79, and the 2.00 GHz model 85 with AVX-512 is by far the best. The AMD chipset's times overlap with those of the 2.00 GHz Intel: the worst times for the 2.00 GHz model 85 Intel are slightly worse than the worst times with the AMD, but the best times with the 2.00 GHz model 85 Intel are much better than the best times with the AMD. This is for running tests with mprime (LL, PRP, PM1, CERT).
For a PRP test around 110M, iteration times on the 2.30 and 2.20 GHz Intels are around 40 ms, ranging from the mid 30s to the mid 40s; timings on the two overlap, but the 2.30 GHz model 63 averages the worst. For the 2.00 GHz model 85 Intel I've seen from 21 ms to 32 ms. For the AMD I see 26 to 31 ms. The iteration times can vary through a 6-12 hour session, sometimes by a lot, but most instances seem to stay pretty close to the same ms/iteration throughout the session. The average times on the model 85 are better than the average times on the AMD model 49. There are far more 2.30 and 2.20 GHz Intels available to me at any given time than either the 2.00 GHz Intel or the AMD.
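Those per-iteration timings translate directly into wall-clock estimates, since an LL or PRP test of exponent p takes roughly p iterations. A quick back-of-envelope in awk, using the figures quoted above (the formula is the usual rough approximation, not an official estimate):

```shell
# Rough runtime estimate: iterations (~= exponent) * ms per iteration,
# converted to days, for the slow and fast ends of the range above.
awk 'BEGIN { p = 110000000
             for (ms = 30; ms <= 40; ms += 10)
                 printf "%d ms/iter -> %.1f days\n", ms, p*ms/1000/86400 }'
```

At 30 ms/iteration this comes to about 38 days; at 40 ms, about 51. So the spread between the slower and faster instances is on the order of ten days per assignment.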