mersenneforum.org > Great Internet Mersenne Prime Search > PrimeNet > GPU to 72
2019-12-12, 14:43   #4511
storm5510 (Aug 2009)

Quote:
Originally Posted by chalsall
Yup. We're about three weeks or so from bringing most of 9x up to 74.

Thanks to Ben, the idea of giving LL Cat4 assignments which have been "optimally" TF'ed and P-1'ed has been thrown out the window.

No problem. Instead, super cool!

My current thinking is that once everything in 99M is taken to 74, GPU72 will next offer work to take exponents to 75 (starting in 95M): the next logical, least expensive assignment which "Makes Sense" for GIMPS in the near future.

Alternative thinking welcomed.
Running jobs to 2^75 on Colab is not practical for me. My 1080 can do roughly 3x what a K80 can; at this level, taking an exponent to 2^74 typically finishes in 29 minutes, and going to 2^75 doubles that. I have no problem with that. The only faster cards they offer are the P100 and above, and I never seem to get those now. So I think I will let that part of this go.

I agree with your thinking on LL and P-1 here. Let the people who like running those run them once the exponents are factored deep enough. There is no need to mix them here!
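The doubling estimate above can be sanity-checked with a quick sketch: trial factoring from 2^(b-1) to 2^b covers roughly twice as many candidates as the entire level below it, so each extra bit level roughly doubles the runtime. The function below is a rough illustrative model, not part of any GIMPS tool:

```python
# Rough model (assumption): TF runtime roughly doubles per bit level,
# since the candidate count from 2^(b-1) to 2^b is ~2x the level below.
def tf_minutes(base_minutes, base_bits, target_bits):
    """Estimate minutes to take one exponent through the level ending
    at target_bits, given base_minutes for the level ending at base_bits."""
    return base_minutes * 2 ** (target_bits - base_bits)

# 29 minutes to finish 2^74 on a GTX 1080 implies roughly an hour for 2^75:
print(tf_minutes(29, 74, 75))  # 58
```

The same model explains why each successive GPU72 bit-level target is the "least expensive" step: the next bit always costs about as much as all previous bits combined.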
2019-12-14, 15:06   #4512
storm5510 (Aug 2009)

I decided to give this a run today and got something new to me:

Tesla P100-PCIE-16GB

It's about the same speed as the previous P100s, around 1100 GHz-d/day, but this one would appear to have more RAM.
2019-12-14, 15:24   #4513
James Heinrich (May 2004, ex-Northern Ontario)

Every P100 I've seen says 16GB
2019-12-14, 23:07   #4514
chalsall (Chris Halsall, Sep 2002, Barbados)

Quote:
Originally Posted by James Heinrich
Every P100 I've seen says 16GB
Ditto. On Colab. And they're a recent addition.

Interestingly, out of the last four backend requests, three have been P100s.

I haven't run Kaggle for a while now, but I seem to remember they weren't explicit about the RAM. Could be wrong about that.
2019-12-15, 01:03   #4515
storm5510 (Aug 2009)

Quote:
Originally Posted by James Heinrich
Every P100 I've seen says 16GB
Until now, they have displayed with a shorter name than what I wrote above. Perhaps it is just a name change and nothing more.
2019-12-22, 13:00   #4516
storm5510 (Aug 2009)

I think we have reached the top of the collection. I am receiving this error from Gpu72WorkFetch:

Quote:
Fatal Error: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: startIndex
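That message looks like a .NET ArgumentOutOfRangeException, which typically means code indexed into (or substringed) a server reply that came back empty, plausibly because no assignments remain at the current bit level. A hedged sketch of the defensive check a fetcher needs in that situation; the function name and reply format are hypothetical, not Gpu72WorkFetch's actual code:

```python
from typing import Optional

# Hypothetical reconstruction: guard against an empty assignment reply
# instead of unconditionally taking reply[0] / reply.Substring(startIndex).
def first_assignment(reply: str) -> Optional[str]:
    lines = [ln for ln in reply.splitlines() if ln.strip()]
    if not lines:        # empty reply: no work left to hand out
        return None      # caller can back off or exit cleanly
    return lines[0]

print(first_assignment(""))                       # None (no crash)
print(first_assignment("Factor=95828179,74,75"))  # Factor=95828179,74,75
```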
2019-12-22, 13:42   #4517
James Heinrich (May 2004, ex-Northern Ontario)

I also got something (different) odd:
Code:
20191222_133834: GPU72 TF V0.33 Bootstrap starting...
20191222_133834: Working as "xxxxxxxxxxxxxxxxxxxxxxxx"...

20191222_133834: Installing needed packages (1/4)
20191222_133839: Installing needed packages (2/4)
20191222_133845: Installing needed packages (3/4)
20191222_133854: Installing needed packages (4/4)
20191222_133855: Fetching initial work...
20191222_133856: Running GPU type Tesla P100-PCIE-16GB

20191222_133856: running a simple selftest...
20191222_133900: Selftest statistics
20191222_133900:   number of tests           107
20191222_133900:   successfull tests         107
20191222_133900: selftest PASSED!
20191222_133900: Bootstrap finished.  Exiting.
Restarting the notebook results in the same thing.
My other instance happily resumed its work (I'll have to check in 30 minutes whether it continues to the next exponent or also quits).
2019-12-22, 13:59   #4518
bayanne (Tony Gott, Aug 2002, Yell, Shetland, UK)

Yes, I am receiving that as well, and there are no assignments being allocated to crunch on Colab.
2019-12-22, 15:42   #4519
storm5510 (Aug 2009)

Quote:
Originally Posted by James Heinrich
I also got something (different) odd:
Code:
20191222_133834: GPU72 TF V0.33 Bootstrap starting...
20191222_133834: Working as "xxxxxxxxxxxxxxxxxxxxxxxx"...

20191222_133834: Installing needed packages (1/4)
20191222_133839: Installing needed packages (2/4)
20191222_133845: Installing needed packages (3/4)
20191222_133854: Installing needed packages (4/4)
20191222_133855: Fetching initial work...
20191222_133856: Running GPU type Tesla P100-PCIE-16GB

20191222_133856: running a simple selftest...
20191222_133900: Selftest statistics
20191222_133900:   number of tests           107
20191222_133900:   successfull tests         107
20191222_133900: selftest PASSED!
20191222_133900: Bootstrap finished.  Exiting.
Restarting the notebook results in the same thing.
My other instance happily resumed its work (I'll have to check in 30 minutes whether it continues to the next exponent or also quits).
My last Colab instance ran in the 95M range, from 2^74 to 2^75. It did not seem to have a fetch problem at the time.

As for the rest, I was expecting it to wrap back around on its own and start taking everything to 2^75. It never occurred to me that it might have to be reset manually, if that is what needs to happen.
2019-12-22, 15:55   #4520
James Heinrich (May 2004, ex-Northern Ontario)

Quote:
Originally Posted by James Heinrich
My other instance happily resumed its work (I'll have to check in 30 minutes whether it continues to the next exponent or also quits).
After it finished its assignment, it did indeed quit:
Code:
20191222_154800: no factor for M95828179 from 2^74 to 2^75 [mfaktc 0.21 barrett76_mul32_gs]
20191222_154800: tf(): time spent since restart:   17m 38.257s
20191222_154800:       estimated total time spent: 50m 50.830s
20191222_154800: Bootstrap finished.  Exiting.
2019-12-22, 16:08   #4521
Uncwilly (Aug 2003)

Send Chris a PM; he can fix the problem.

Also, he posted code a while back showing how to add assignments: run the initial code, let it exit, then run the code to add manual assignments. Alternatively, edit the worktodo file while the client is stopped.
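For the manual-assignment route, here is a minimal sketch of appending a line to the worktodo file while the client is stopped, assuming the common mfaktc-style format Factor=&lt;exponent&gt;,&lt;from_bits&gt;,&lt;to_bits&gt; (the helper name and temp path are illustrative only):

```python
import os
import tempfile

# Illustrative helper (assumption: mfaktc-style worktodo lines):
# append one TF assignment to a stopped client's worktodo file.
def queue_tf(path, exponent, from_bits, to_bits):
    with open(path, "a") as f:
        f.write(f"Factor={exponent},{from_bits},{to_bits}\n")

# Demo against a throwaway file rather than a live worktodo.txt:
path = os.path.join(tempfile.mkdtemp(), "worktodo.txt")
queue_tf(path, 95828179, 74, 75)
print(open(path).read())  # Factor=95828179,74,75
```

With the real client, the worktodo file typically sits alongside the mfaktc binary, and new lines are picked up when it is restarted.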