#4379
"James Heinrich"
May 2004
ex-Northern Ontario
11·311 Posts
Quote:
Some FAQ:
#4380
If I May
"Chris Halsall"
Sep 2002
Barbados
23047₈ Posts
Quote:
1.1. Kaggle lets you "commit" a Notebook, wherein every Section runs, in order, until the last executable cell exits.
1.2. TL;DR: Leave your browser open if possible.
2. The GPU72_TF Notebook fetches three (3) TF assignments initially and then gets to work.
2.1. Assignments are first "reissued" from previous Notebook runs which have been "killed" (RIP), and then new assignments as specified by the AKey's work preference.
2.2. Once an assignment is completed, it is reported back to GPU72, and another assignment is fetched.
3. Yeah... Sorry. I subscribe strongly to "Never send a human to do a machine's job". But often achieving that ideal involves a human. In this case, it involves my time...
3.1. I have mapped out in my head a solution space for this (read: automatically submitting the Instance(s)' results back to Primenet), but things have been a little hectic in the last few weeks.
3.1.1. Still on one of my whiteboards, as well as in my pen-and-paper workbook.
4. Nominally ill-advised, although there could be some workflows where this would make sense (constrained human resources, for example).
4.1. Empirical experimentation suggests that each Colab Account gets ~12 to 16 hours of GPU compute per day.
4.2. Kaggle is constrained to ~30 hours of P100 GPU per week per account. If you're creative, you can actually get ~38.99 hours...
4.3. Interestingly, different Google Accounts seem to be individually temporally constrained in this way, even when running within the same browser context (and thus OS fingerprint, IP address, and even MAC address).

Last fiddled with by chalsall on 2019-11-05 at 17:44 Reason: Second 4.2 -> 4.3.
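The assignment lifecycle described above (fetch three assignments, work, report each completion, fetch one more) can be sketched roughly as follows. This is a minimal in-memory model only: the function names and the queue structure are illustrative assumptions, not the actual GPU72 API.

```python
import collections

# Hypothetical model of the GPU72_TF Notebook's assignment loop.
# "Reissued" assignments (from killed runs) are queued ahead of fresh ones.

def fetch_assignments(queue, n):
    """Pop up to n assignments; reissued work is at the front of the queue."""
    out = []
    while queue and len(out) < n:
        out.append(queue.popleft())
    return out

def run_notebook(queue, initial=3):
    """Fetch `initial` assignments, then report and refill one at a time."""
    completed = []
    work = collections.deque(fetch_assignments(queue, initial))
    while work:
        assignment = work.popleft()
        completed.append(assignment)              # report result back to GPU72
        work.extend(fetch_assignments(queue, 1))  # then fetch another assignment
    return completed

pending = collections.deque(["reissued-1", "fresh-1", "fresh-2", "fresh-3"])
print(run_notebook(pending))  # reissued work is processed first
```

The point of the sketch is the ordering: previously killed ("reissued") work drains before any new assignments matching the AKey's work preference are handed out.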
#4381 |
"James Heinrich"
May 2004
ex-Northern Ontario
6535₈ Posts
Thanks, that helps.
I also discovered I don't need to copy-paste code per Uncwilly's post; I just need to click the magic Colaboratory link on gpu72.com after creating a NAK and copy-paste in the Access Key. I always have two browsers open with my home and work Google accounts signed in, so I fired up a second instance in my other browser, and it seems to run fine (except that my first attempt got me a "Tesla P100-PCIE-16GB" (1140 GHd/d) and the second a notably slower "Tesla K80" (390 GHd/d); luck of the draw, I guess).

Last fiddled with by James Heinrich on 2019-11-05 at 18:02
#4382 |
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
11124₈ Posts
Looking at your charts on www.mersenne.ca for GPU-TF vs. GPU-LL performance, it seems these Tesla P100 and K80 cards are relatively much better at LL than TF. I assume LL includes P-1.
For this reason, I would prefer to use these GPUs (especially the K80) for P-1 rather than TF. Have people had much luck running CUDA-P1 in Colab or Kaggle?
#4383
If I May
"Chris Halsall"
Sep 2002
Barbados
9,767 Posts
Quote:
Please know, though, that Primenet is not currently lacking in either LL'ing or P-1'ing resources.

My understanding is that both the CUDA P-1 and LL code have been successfully built and run on both Colab and Kaggle. I also understand (possibly correctly; possibly not) that the OpenCL LL implementation is actually more efficient than the native CUDA one. It's outside of my experience space to understand why.

To say again what I've said before... The GPU72_TF experiment was a "proof-of-concept": just seeing if what we thought might be possible actually was. Once that knowledge was established, other things can then be done...
#4384 |
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
2²·3·17·23 Posts
#4385 |
Random Account
Aug 2009
11110100100₂ Posts
I got up early this morning and found my Colaboratory instance had stopped. Looking at the details, I saw "spider", so I figured someone had been working on it during the wee hours of the morning. The spider appeared to be functioning properly the last time I checked.

I am still running 2^74 locally; it is getting close to 98-million. I am wondering what happens when the 99's are complete. I changed the "High" value in the GPU72config file to 110,000,000; however, I do not know if the allocation from PrimeNet goes that far. If it does not, then I imagine there will be a wrap-around back to smaller exponents, running them to 2^75. That will be fine. At 2^76, I will stop, because my Colab instance can run those quite a bit faster than my 1080. In the interim, something else may come down the road.
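The wrap-around behaviour speculated about above can be sketched as follows. The "High" bound of 110,000,000 comes from the GPU72config description in the post, but the selection logic itself is an assumption for illustration only, not how GPU72 or PrimeNet actually allocate work.

```python
# Hypothetical sketch: pick the next TF exponent below a "High" bound,
# wrapping back to smaller exponents at the next bit level (e.g. 2^74 ->
# 2^75) once nothing below the bound remains at the current level.

HIGH = 110_000_000  # upper exponent bound from GPU72config

def next_assignment(candidates, bit_level, high=HIGH):
    """candidates maps bit_level -> sorted list of exponents needing TF."""
    pool = [p for p in candidates.get(bit_level, []) if p < high]
    if pool:
        return pool[0], bit_level
    # Nothing left below 'high' at this level: wrap around to the
    # smallest exponent at the next bit level.
    nxt = candidates.get(bit_level + 1, [])
    return (nxt[0], bit_level + 1) if nxt else (None, None)

candidates = {74: [108_000_000], 75: [90_000_000, 95_000_000]}
print(next_assignment(candidates, 74))                 # -> (108000000, 74)
print(next_assignment({74: [], 75: [90_000_000]}, 74)) # -> (90000000, 75)
```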
#4386
"James Heinrich"
May 2004
ex-Northern Ontario
11·311 Posts
Quote:
What I noticed is that the K80 is a dual-GPU model, and mfaktc is of course using only one GPU, so the throughput is half what is shown in my mfaktc table, which makes sense.
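The halving described above is just this arithmetic; the 780 GHd/d board figure below is illustrative, inferred from the ~390 GHd/d single-GPU K80 observation earlier in the thread rather than taken from the mfaktc table itself.

```python
# The mfaktc table lists whole-board throughput; mfaktc drives one GPU,
# so the observed rate on a multi-GPU board is the table figure divided
# by the number of GPUs on the board.

def effective_throughput(board_ghd_per_day, gpus_on_board, gpus_used=1):
    """Throughput (GHd/d) when mfaktc uses gpus_used of the board's GPUs."""
    return board_ghd_per_day / gpus_on_board * gpus_used

print(effective_throughput(780, 2))  # K80: one of two GPUs -> 390.0 GHd/d
```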
#4387
"Mr. Meeseeks"
Jan 2012
California, USA
100001111000₂ Posts
Quote:
#4388 |
"James Heinrich"
May 2004
ex-Northern Ontario
3421₁₀ Posts
#4389
Random Account
Aug 2009
2²×3×163 Posts
Quote:
I switched browsers on my HP earlier today so both my desktops would be using Firefox; they keep each other synced. I also got a K80 on the HP. I ended up deleting my instance on Colab and recreating it with the same code: P100 on the first try. I only run one computer with it, so you would probably have to drop one as well to get a P100 again.
Similar Threads
| Thread | Thread Starter | Forum | Replies | Last Post |
|---|---|---|---|---|
| Status | Primeinator | Operation Billion Digits | 5 | 2011-12-06 02:35 |
| 62 bit status | 1997rj7 | Lone Mersenne Hunters | 27 | 2008-09-29 13:52 |
| OBD Status | Uncwilly | Operation Billion Digits | 22 | 2005-10-25 14:05 |
| 1-2M LLR status | paulunderwood | 3*2^n-1 Search | 2 | 2005-03-13 17:03 |
| Status of 26.0M - 26.5M | 1997rj7 | Lone Mersenne Hunters | 25 | 2004-06-18 16:46 |