#34
P90 years forever!
Aug 2002
Yeehaw, FL
2·3²·7·59 Posts
Yes, get an LL-DC assignment and then PRP it. Upload your PRP result and proof file as you normally would (either by prime95 or gpuowl's python script).
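For example (hypothetical assignment ID and exponent; the fields are my reading of the standard worktodo formats, so verify against your client's documentation before relying on this): a DoubleCheck= line of the form AID, exponent, trial-factored-to bits, P-1-done flag, such as

Code:
DoubleCheck=ABCDEF0123456789ABCDEF0123456789,59068201,76,1

can be rewritten by hand as a PRP= line of the form AID, k, b, n, c, factored-to bits, tests_saved (here k*b^n+c = 1*2^59068201-1, and tests_saved = 1 for a double-check):

Code:
PRP=ABCDEF0123456789ABCDEF0123456789,1,2,59068201,-1,76,1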
#35
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
5041₁₀ Posts
Code:
2020-05-12 bad ll 122743793 manual, condor quadro 2000; diverges May 1 2020 after 107M 87.1% by 108M 88%. run was almost 3 months Feb 21 to May 11. Roundoff error was a very comfortable 0.12; no error messages in the logged console output.

CUDALucas v2.06 log excerpts:
| May 01 04:57:06 | M122743793 107000000 0x0f01f93746501744 | 6912K 0.10742 55.9884 559.88s | 10:04:52:44 87.17% | ok to here
bad from here:
| May 01 20:30:15 | M122743793 108000000 0x9b21e398524e0ebe | 6912K 0.11475 55.9881 559.88s | 9:13:19:29 87.98% |

see https://mersenneforum.org/showpost.p...69&postcount=9 for interim residues from a matched run
As far as I know, no version of gpuowl has a 6272K fft transform. But a relatively recent version has higher reach with the 6M transform; here, for v7.2-53 and similar, is an excerpt from the help output:

Code:
FFT 6M    [ 37.75M - 116.51M] 1K:12:256 1K:6:512 1K:3:1K 256:12:1K 512:12:512 512:6:1K 4K:3:256
FFT 6.50M [ 40.89M - 125.95M] 1K:13:256 256:13:1K 512:13:512

And for the 7M fft, an older gpuowl version (probably one of Fan Ming's compiles; Google Drive file date Jan 21 2020) produced iteration times of 7.75-8.46 ms/iter on a T4. That is an old, less optimized version of gpuowl, running a longer fft length, and still a little faster than the Colab CUDALucas T4 timings posted recently:
Code:
2021-02-25 18:27:24 config.txt: -user kriesel -cpu colab/TeslaT4 -yield -maxAlloc 15000 -use NO_ASM
2021-02-25 18:27:25 config.txt:
2021-02-25 18:27:25 colab/TeslaT4 115545511 FFT 7168K: Width 256x4, Height 64x8, Middle 7; 15.74 bits/word
2021-02-25 18:27:26 colab/TeslaT4 OpenCL args "-DEXP=115545511u -DWIDTH=1024u -DSMALL_HEIGHT=512u -DMIDDLE=7u -DWEIGHT_STEP=0x1.322aaa7d291efp+0 -DIWEIGHT_STEP=0x1.ac1b50a86d588p-1 -DWEIGHT_BIGSTEP=0x1.306fe0a31b715p+0 -DIWEIGHT_BIGSTEP=0x1.ae89f995ad3adp-1 -DNO_ASM=1 -I. -cl-fast-relaxed-math -cl-std=CL2.0"
2021-02-25 18:27:28 colab/TeslaT4
2021-02-25 18:27:28 colab/TeslaT4 OpenCL compilation in 2109 ms
2021-02-25 18:27:46 colab/TeslaT4 115545511 OK 1000 0.00%; 7753 us/sq; ETA 10d 08:50; 947a2638dcd5659d (check 4.25s)
2021-02-25 18:34:34 colab/TeslaT4 115545511 50000 0.04%; 8324 us/sq; ETA 11d 03:03; 2abe8c5a456c9248
2021-02-25 18:40:42 colab/TeslaT4 Stopping, please wait..
2021-02-25 18:40:47 colab/TeslaT4 115545511 OK 93500 0.08%; 8455 us/sq; ETA 11d 07:09; 94321be129778fdc (check 4.62s)
2021-02-25 18:40:47 colab/TeslaT4 Exiting because "stop requested"
2021-02-25 18:40:47 colab/TeslaT4 Bye
2021-02-25 18:48:30 config.txt: -user kriesel -cpu colab/TeslaT4 -yield -maxAlloc 15000 -use NO_ASM
2021-02-25 18:48:30 config.txt:
2021-02-25 18:48:30 colab/TeslaT4 115545511 FFT 7168K: Width 256x4, Height 64x8, Middle 7; 15.74 bits/word
2021-02-25 18:48:30 colab/TeslaT4 OpenCL args "-DEXP=115545511u -DWIDTH=1024u -DSMALL_HEIGHT=512u -DMIDDLE=7u -DWEIGHT_STEP=0x1.322aaa7d291efp+0 -DIWEIGHT_STEP=0x1.ac1b50a86d588p-1 -DWEIGHT_BIGSTEP=0x1.306fe0a31b715p+0 -DIWEIGHT_BIGSTEP=0x1.ae89f995ad3adp-1 -DNO_ASM=1 -I. -cl-fast-relaxed-math -cl-std=CL2.0"
2021-02-25 18:48:30 colab/TeslaT4
2021-02-25 18:48:30 colab/TeslaT4 OpenCL compilation in 5 ms
2021-02-25 18:48:49 colab/TeslaT4 115545511 OK 94500 0.08%; 7770 us/sq; ETA 10d 09:11; 802418424467173d (check 4.22s)
2021-02-25 18:49:32 colab/TeslaT4 115545511 100000 0.09%; 7829 us/sq; ETA 10d 11:03; eec0fc882a58923c
2021-02-25 18:56:30 colab/TeslaT4 115545511 150000 0.13%; 8346 us/sq; ETA 11d 03:31; 857fa1746622daba
2021-02-25 19:03:32 colab/TeslaT4 115545511 200000 0.17%; 8442 us/sq; ETA 11d 06:29; 07065de43d5d6667
2021-02-25 19:10:39 colab/TeslaT4 115545511 OK 250000 0.22%; 8445 us/sq; ETA 11d 06:29; a491206a633e11cd (check 4.58s)
2021-02-25 19:17:41 colab/TeslaT4 115545511 300000 0.26%; 8450 us/sq; ETA 11d 06:31; 7dd17f25c99a3c46
2021-02-25 19:18:11 colab/TeslaT4 Stopping, please wait..
2021-02-25 19:18:15 colab/TeslaT4 115545511 OK 303500 0.26%; 8452 us/sq; ETA 11d 06:33; 6154addd71f541a2 (check 4.56s)
2021-02-25 19:18:15 colab/TeslaT4 Exiting because "stop requested"
2021-02-25 19:18:15 colab/TeslaT4 Bye

And v6.11-366 would run 115M at a 6M fft length, per https://www.mersenneforum.org/showpo...36&postcount=9, picking up additional speed by reducing fft length.
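A quick arithmetic check on those fft choices (a minimal sketch; the bits/word figure gpuowl prints is just the exponent divided by the fft length in words):

Code:
# values taken from the log and help output above
exponent = 115545511
print(round(exponent / (7168 * 1024), 2))  # 15.74 bits/word at the 7168K fft
print(round(exponent / (6144 * 1024), 2))  # 18.37 bits/word at 6M (6144K):
# denser packing, consistent with the 6M range topping out near 116.51M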
Bot: "a computer program that performs automatic repetitive tasks" (https://www.merriam-webster.com/dictionary/bot). That seems to me to match the behavior you described for your software: reactivating tabs at regular intervals, dismissing prompts when they appear, etc. I don't mean to be dismissive, pejorative, or otherwise negative, but neither am I ignoring the details of how the provider wants the service to be used.

Gpuowl does indeed support lower proof powers. (Confirmed by both source code inspection and a short test run on a small exponent.) I'm not sure how low a power the Primenet server and verification process support. Please use a reasonably high proof power for efficiency; each reduction of proof power by one doubles the verification effort, as the sketch below illustrates.
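A minimal sketch of that doubling, assuming (my understanding, not stated in this thread) that certifying a proof of power p costs about exponent/2^p squarings:

Code:
# assumed cost model: certification squarings ~ exponent >> power
E = 115545511
for power in (10, 9, 8, 7, 6):
    print(power, E >> power)  # each step down in power doubles the work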
Last fiddled with by kriesel on 2021-02-25 at 21:19
#36
Dec 2019
418 Posts
Bot and Ethics Discussion
However, I think it's important to note that "bot" use is not outlined anywhere in the terms of service as being discouraged (only crypto mining is). Please, anyone, quote the terms of service here to dispel any misinformation I am spouting if I am missing this.

Consider the following when deciding whether an extension breaches an ethical boundary:
✅ Colab is for research use (translation: it should be used for research purposes, as often as is allowed).
✅ Colab is unique in that its hardware is not always available (translation: Google wants people to use their machines for research, but also wants a higher ROI whenever possible; reconnecting and auto-starting does not go against this goal).
✅ Colab has not banned, or made public mention of, any of the extensions that exist thus far to automatically reconnect or run Colab notebooks (to my knowledge) (translation: Google is unaware, lazy, collecting data for some grand purpose, or does not care about automatic use of its machines so long as they are used for research purposes).
✅ Colab, if it is running an experiment or collecting data of some kind, should be thankful to someone who made an extension, as it is getting free data (translation: Google is happy whether an extension is violating an unwritten rule or not).
✅ Colab has never replied to Chris's requests to confirm whether running the GPU72 project is okay (translation: they likely do not care).

GpuOwl

All this info on GpuOwl is really intriguing. It sounds like GpuOwl is the preferred way to go for some people. I wonder how long it would take to add GpuOwl to this project; maybe not that long. Though the GpuOwl vs. CUDALucas performance comparison is interesting to talk about, since we already have CUDALucas implemented as the cruncher for GPU work, we could add GpuOwl (as opposed to swapping it in) and give users the ability to decide which cruncher to use (I have not talked to Teal about that yet, though we already said we wanted to add GpuOwl). This would also be nice for testing, as we would be using Colab machines with identical GPUs and a presumably identical environment to test on.

If we didn't make it clear before in the README or elsewhere in the forum: we would love to use GpuOwl. All that constrains us is time and resources, as we both have jobs and other projects. We want our project to be used by as many people as possible and, one day, to find a prime number (or more) and maybe even be on the front page of mersenne.org. If anyone wants to be involved in an upgrade from CUDALucas to GpuOwl, please contact us here or elsewhere.

Last fiddled with by danc2 on 2021-02-26 at 03:17
#37
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
71² Posts
Code:
processing: PRP (not-prime) for M58834309
Result type (150=PRP_COMPOSITE) inappropriate for the assignment type (101=LL_PRIME). Processing result but not deleting assignment.
CPU credit is 124.6960 GHz-days.

I believe the posted points are actually and expressly contradicted, or made irrelevant, by Colab's occasional output of the attachment shown in https://mersenneforum.org/showpost.p...2&postcount=28. And "we're not the only ones doing it or providing a tool for it" is not a credible defense for something Google does not allow. Google offers Colab for interactive use of notebooks specifically by humans, not for interactive use by programs/robots. Who or what runs the notebook is the distinction I think they are making.

Last fiddled with by kriesel on 2021-02-26 at 12:54
#38
If I May
"Chris Halsall"
Sep 2002
Barbados
3·3,181 Posts
My personal opinion on the whole automation thing: because of the multiple "prove you're a human" challenges we've all faced using their instances over the last several months we've been playing with this, the intent is clearly to have the human (slave) in the loop. If people want to try to get around this, it's at their own risk. Personally, I just manually restart the instances when I happen to flip to that virtual desktop during my day.
#39
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
71² Posts
If someone had the time and inclination, adding a choice on the manual assignment page to generate PRP DC worktodo lines for LL DC candidates would help us humans avoid rewriting the lines by hand and adding our own errors.
#40
6809 > 6502
"""""""""""""""""""
Aug 2003
2×4,751 Posts
Drop in your LL-DC lines from your worktodo, get PRP lines out. I could set you up with an Excel or g-sheets sheet for this.

[edit] I just did it in Excel; it imports into g-sheets with no problem and works there. See the attachment, or the script sketch below. [/edit]
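A minimal Python equivalent of that sheet (a sketch under the same assumed field layouts as earlier in the thread: DoubleCheck=AID,exponent,bits,p1done in, PRP=AID,k,b,n,c,bits,tests_saved out, with tests_saved set to 1 for a double-check; check the field meanings against your client's documentation before trusting the output):

Code:
# rewrite LL double-check worktodo lines as PRP lines
def dc_to_prp(line: str) -> str:
    """Turn 'DoubleCheck=AID,exp,bits,p1done' into a PRP line for 1*2^exp-1."""
    if not line.startswith("DoubleCheck="):
        return line                       # pass other work types through
    aid, exponent, bits, _p1done = line.split("=", 1)[1].split(",")
    return f"PRP={aid},1,2,{exponent},-1,{bits},1"  # tests_saved=1 assumed

with open("worktodo.txt") as f:
    for line in f:
        print(dc_to_prp(line.strip()))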
Last fiddled with by Uncwilly on 2021-02-26 at 19:01
#41
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
11661₈ Posts
The Quadro 2000 and 4000, while designed for professional use, do not have ECC; the Quadro 5000 has ECC vram.
#42
P90 years forever!
Aug 2002
Yeehaw, FL
2×3²×7×59 Posts
[attachment]
#43
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
71² Posts
Thanks, George, for making that more efficient and reliable by minimizing the middleman's work. The first try worked fine with gpuowl v7.2-63-ge47361b: drag & drop the resulting proof onto prime95 v30.4b9's working folder, followed by quick Cert completion, except for the claim that an LLDC assignment expired the day after, when it wasn't really an LL DC that was assigned; a PRP was.

Is it practical to do a similar PRP substitution for PrimeNet API LL DC candidates, for prime95/mprime v30.3 or above, preferably without requiring a client software modification and end-user software version updates times n systems? (Given some pending assignments, and that the occasional P-1 stage 2 will restart from the beginning, one waits until it's done to upgrade; fair warning, rollouts take weeks.)
Last fiddled with by kriesel on 2021-02-27 at 16:07 |
#44
"Teal Dulcet"
Jun 2018
2·3·5 Posts
Here are gpuowl LL and PRP timings for the same exponent on a Tesla V100. First the LL run:
Code:
2021-03-04 06:34:54 Tesla V100-SXM2-16GB-0 106928347 LL 0 loaded: 0000000000000004
2021-03-04 06:35:57 Tesla V100-SXM2-16GB-0 106928347 LL 100000 0.09%; 638 us/it; ETA 0d 18:56; 95920d6941eafe3f
2021-03-04 06:35:57 Tesla V100-SXM2-16GB-0 waiting for the Jacobi check to finish..
2021-03-04 06:36:45 Tesla V100-SXM2-16GB-0 106928347 OK 100000 (jacobi == -1)

And the PRP run:

Code:
2021-03-04 06:37:33 Tesla V100-SXM2-16GB-0 106928347 OK 0 loaded: blockSize 400, 0000000000000003
2021-03-04 06:37:34 Tesla V100-SXM2-16GB-0 106928347 OK 800 0.00%; 638 us/it; ETA 0d 18:57; 7d85dc41e3222beb (check 0.41s)
2021-03-04 06:38:37 Tesla V100-SXM2-16GB-0 Stopping, please wait..
2021-03-04 06:38:38 Tesla V100-SXM2-16GB-0 106928347 OK 100000 0.09%; 639 us/it; ETA 0d 18:59; 4d66b4eed5ea9ab3 (check 0.42s)
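As a quick cross-check of those timings (a primality test of M(p) takes about p squarings, so total time is roughly p times the per-iteration time):

Code:
# reproduce the printed ETA from the per-iteration timing above
p = 106928347           # exponent under test
us_per_it = 638         # microseconds per squaring, from the log
print(p * us_per_it / 1e6 / 3600)  # ~18.95 hours, matching "ETA 0d 18:56"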
The commands used to build and run the quick tests on Colab:

Code:
sudo apt-get update
# GMP development headers are needed to build gpuowl
sudo apt-get install libgmp3-dev -y
# fetch a specific gpuowl commit as a tarball and unpack it
wget -nv https://github.com/preda/gpuowl/archive/5c5dc6669d748460c57ff1962fdbbbc599bac0d0.tar.gz
tar -xzvf 5c5dc6669d748460c57ff1962fdbbbc599bac0d0.tar.gz
cd gpuowl-5c5dc6669d748460c57ff1962fdbbbc599bac0d0
# this compiler provides <filesystem> only as <experimental/filesystem>
sed -i 's/<filesystem>/<experimental\/filesystem>/' *.h *.cpp
sed -i 's/std::filesystem/std::experimental::filesystem/' *.h *.cpp
# build with debug symbols and -O3 instead of -O2
sed -i 's/-Wall -O2/-Wall -g -O3/' Makefile
make -j "$(nproc)"
./gpuowl -h
# short LL and PRP timing runs on the same exponent
./gpuowl -ll 106928347 -iters 100000
./gpuowl -prp 106928347 -iters 100000
We would also obviously need the latest version of GpuOwl to always build successfully on Colab. To achieve this, I submitted a pull request to the GpuOwl repository which adds Continuous Integration (CI) to automatically build GpuOwl on Linux (with both GCC and Clang) and Windows on every commit and pull request. It was merged a few days ago. Users can now see directly at the top of the GpuOwl README whether the latest version of GpuOwl builds, by checking the badges. It should also eventually eliminate the need for @kriesel to manually build and upload binaries for Windows users. See my pull request for more info.