#4511

Random Account
Aug 2009
2²·3·163 Posts
Quote:
I agree with your thinking on LL and P-1 here. Let the people who like to run those, run them, when they are factored enough. There is no need to mix them here!
#4512

Random Account
Aug 2009
2²·3·163 Posts
I decided to give this a run today and I got something new, to me: Tesla P100-PCIE-16GB. It's about the same speed as the previous P100, 1100 GHz-d/day, but this one would appear to have more RAM.
#4513

"James Heinrich"
May 2004
ex-Northern Ontario
11×311 Posts
Every P100 I've seen says 16GB.
#4514

If I May
"Chris Halsall"
Sep 2002
Barbados
9767₁₀ Posts
Ditto. On Colab. And they're a recent addition.
Interestingly, out of the last four backend requests, three have been P100s. I haven't run Kaggle for a while now, but I seem to remember they weren't explicit about the RAM. Could be wrong about that.
#4515

Random Account
Aug 2009
2²×3×163 Posts
#4516

Random Account
Aug 2009
2²×3×163 Posts
I think we have reached the top of the collection. I am receiving this error from Gpu72WorkFetch.
Quote:
#4517

"James Heinrich"
May 2004
ex-Northern Ontario
110101011101₂ Posts
I also got something (different) odd:
Code:
20191222_133834: GPU72 TF V0.33 Bootstrap starting...
20191222_133834: Working as "xxxxxxxxxxxxxxxxxxxxxxxx"...
20191222_133834: Installing needed packages (1/4)
20191222_133839: Installing needed packages (2/4)
20191222_133845: Installing needed packages (3/4)
20191222_133854: Installing needed packages (4/4)
20191222_133855: Fetching initial work...
20191222_133856: Running GPU type Tesla P100-PCIE-16GB
20191222_133856: running a simple selftest...
20191222_133900: Selftest statistics
20191222_133900:   number of tests     107
20191222_133900:   successfull tests   107
20191222_133900: selftest PASSED!
20191222_133900: Bootstrap finished. Exiting.
My other instance happily resumed its work (I'll have to check in 30 mins if it continues to a next exponent or also quits).
#4518

"Tony Gott"
Aug 2002
Yell, Shetland, UK
332₁₀ Posts
Yes, I am receiving that as well. Plus, there are no assignments allocated to crunch using Colab.
#4519

Random Account
Aug 2009
2²·3·163 Posts
Quote:
As for the rest, I was kind of expecting it to wrap back around on its own and start running everything again to 2^75. It never occurred to me that it would have to be reset manually, if that is what needs to happen.
#4520

"James Heinrich"
May 2004
ex-Northern Ontario
11·311 Posts
Quote:
Code:
20191222_154800: no factor for M95828179 from 2^74 to 2^75 [mfaktc 0.21 barrett76_mul32_gs]
20191222_154800: tf(): time spent since restart:  17m 38.257s
20191222_154800:      estimated total time spent: 50m 50.830s
20191222_154800: Bootstrap finished. Exiting.
#4521

6809 > 6502
"""""""""""""""""""
Aug 2003
2·4,909 Posts
Send Chris a PM. He can fix the problem.
Also, he posted code a while back about how to add assignments. Run the initial code. Let it exit. Then run the code to add manual assignments. Or edit the worktodo file while stopped.
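For anyone editing the worktodo file by hand: mfaktc reads `Factor=` lines, one per assignment. A sketch of one entry, using the exponent and bit range from the log quoted above; the assignment ID here is a placeholder, not a real PrimeNet/GPU to 72 AID:

```
Factor=<assignment_id>,95828179,74,75
```

The fields are the assignment ID (optional), the exponent, the starting bit level, and the ending bit level. If you don't have an assignment ID, you can leave that field out, but unreserved work risks duplicating someone else's effort.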