mersenneforum.org  

mersenneforum.org > Great Internet Mersenne Prime Search > PrimeNet > GPU to 72

Old 2019-11-03, 22:17   #4368
chalsall
If I May
"Chris Halsall"
Sep 2002
Barbados

9767₁₀ Posts

Thanks Oliver...

Another person who doesn't get enough credit for this hobby is Oliver (TheJudger).

Not only did he write mfaktc, but he also quite regularly completes something like 1,000 to 2,000 THzD (!) of work in a week.

Can't imagine his power bill...
Old 2019-11-04, 15:06   #4369
storm5510
Random Account
Aug 2009

7A4₁₆ Posts

Quote:
Originally Posted by chalsall View Post
And that is exactly why people are allowed to set their own "Pledge" level (and even range, if they so choose)....
The pledge setting does not seem to function this way now, so I did some searching on PrimeNet. Exponents as high as 125 million have been factored to 2^74, and many to 2^75. 2^77 has been suggested as an end point, for now. The exponents I mention could be run to 2^78 or 2^79. Time, and technology, will tell.
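As a back-of-envelope illustration of why those last bit levels are so expensive (an editorial sketch, not from the thread): the number of raw factor candidates between 2^(b-1) and 2^b doubles with each level, so each new level costs roughly as much as all previous levels combined.

```python
# Editorial sketch: raw candidate counts per TF bit level, ignoring sieving
# and mfaktc's class filtering, which scale all levels by a roughly
# constant factor.
def candidates(b: int) -> int:
    """Raw candidate count for the bit level 2^(b-1)..2^b."""
    return 2 ** (b - 1)

# Each level doubles the previous one, so the 2^78..2^79 level alone
# outweighs everything from 2^0 up to 2^78 combined.
```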
Old 2019-11-04, 16:42   #4370
chalsall
If I May
"Chris Halsall"
Sep 2002
Barbados

9,767 Posts

Quote:
Originally Posted by storm5510 View Post
The pledge setting does not seem to function this way now, so I did some searching on PrimeNet.
Not sure what you mean by this.

The "Pledge" level should be honored for all of the work types except for "Let GPU72 Decide" (LG72D). "What Makes Sense", and all the other options, will give you something to TF up to the pledge level, but never further.

In addition, if you specify a range, the results should be within that. For example, if you choose "Lowest Exponent", 98M Low, 74 Pledge that's what you should get.

If you're /not/ seeing this behavior, please let me know.
Old 2019-11-04, 17:31   #4371
storm5510
Random Account
Aug 2009

11110100100₂ Posts

Quote:
Originally Posted by chalsall View Post
Not sure what you mean by this.

The "Pledge" level should be honored for all of the work types except for "Let GPU72 Decide" (LG72D). "What Makes Sense", and all the other options, will give you something to TF up to the pledge level, but never further.

In addition, if you specify a range, the results should be within that. For example, if you choose "Lowest Exponent", 98M Low, 74 Pledge that's what you should get.

If you're /not/ seeing this behavior, please let me know.
It's been ignored for several months. When I first started, everything I received was 72 bits. Later, it went to 73. Now at 74. I believe there is nothing available in your allocated area below what I am running now, and I have no problem with 74. However, if there is a problem, I am sure you would like to find the cause.

I just changed it back to option 1, LowestTFLevel. I had it at option 0. All other options, I've never changed. I'll see what happens and report back.
Old 2019-11-04, 18:14   #4372
chalsall
If I May
"Chris Halsall"
Sep 2002
Barbados

10011000100111₂ Posts

Quote:
Originally Posted by storm5510 View Post
It's been ignored for several months. When I first started, everything I received was 72 bits. Later, it went to 73. Now at 74. I believe there is nothing available in your allocated area below what I am running now, and I have no problem with 74. However, if there is a problem, I am sure you would like to find the cause.
Ah... I now understand what you're saying. This behavior is nominal.

If the pledge is below what is available (within the range being requested), it is "bumped" up to the lowest bit level that is actually available. Currently, this is 74. There are some workers who still have their MISFIT config set to get 71 work! (Truly "fire-and-forget".)
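A minimal sketch of that bump rule as described (the function name and shape are mine, not GPU72's code):

```python
def effective_level(pledge: int, lowest_available: int) -> int:
    """Bit level actually assigned: the pledge, bumped up to the lowest
    level still available in the requested range.  This is a reading of
    the rule described above, not GPU72's actual code."""
    return max(pledge, lowest_available)

# A 72 pledge with nothing below 74 available yields 74-bit work,
# matching the behavior storm5510 observed; a 75 pledge is honored as-is.
```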

The GPU72 sub-project was created to (mostly) help the GIMPS project.

If people /really/ want to work to lower bit levels (which won't be needed for years), they're available directly from PrimeNet.
Old 2019-11-04, 21:20   #4373
linament
Nov 2013

2²×5 Posts

Breadth Colab vs. Depth Kaggle

With the recent introduction of "LL TF (Breadth First)" and "LL TF (Depth First)" assignments for Colab/Kaggle instances, I have started using one access key for "LL TF (Breadth First)" assignments on Colab and another access key for "LL TF (Depth First)" assignments on Kaggle. I shut down my instances at night. I have been getting 73- to 74-bit assignments for Breadth First and 76- to 77-bit assignments for Kaggle. This morning I started the Colab instance a few hours before I started the Kaggle instance. When I checked on the Colab instance a few hours later, I noticed it had completed the 76-to-77-bit assignment that Kaggle had been working on the night before, and had then gone on to work a 73-to-74-bit assignment. Is this the intended reaction?


BTW, I notice that Depth First is being assigned 76 to 77 bits in the 99M range.
Old 2019-11-04, 21:37   #4374
chalsall
If I May
"Chris Halsall"
Sep 2002
Barbados

9,767 Posts

Quote:
Originally Posted by linament View Post
Is this the intended reaction?
Yes. This is "sane" behavior. Once an assignment has had work started, it is the assignee's until completion.

Currently, the re-assignment code path doesn't look at the work preference, but instead just asks "anything outstanding I should do?".

I don't have the cycles at the moment, but I could look into putting a "weight" on the temporal dimension, and only re-assigning work which is older than (say) a week.
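That temporal-weight idea could look something like the following sketch (the data shape, field names, and one-week cutoff are illustrative assumptions, not GPU72 code):

```python
from datetime import datetime, timedelta

def reassignable(assignments, now, min_age=timedelta(weeks=1)):
    """Outstanding assignments old enough to hand to someone else.
    `assignments` is a list of dicts with a 'started' datetime; the
    field name and the cutoff are assumptions for illustration."""
    return [a for a in assignments if now - a["started"] >= min_age]

now = datetime(2019, 11, 4)
work = [
    {"exp": 95509121, "started": datetime(2019, 10, 20)},  # stale, 15 days old
    {"exp": 95497499, "started": datetime(2019, 11, 3)},   # fresh, 1 day old
]
# Only the stale assignment would be offered for re-assignment.
```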

Quote:
Originally Posted by linament View Post
BTW, I notice that Depth First is being assigned 76 to 77 bits in the 99M range.
Yup...

The work Ben et al have been doing has caused the Cat 4 cut-off to climb a lot faster than I had expected.

So, for at least a little while, I want to build up a buffer in 99M, and try to fill in below as best we can over the next couple of weeks.

Can you say "fun" boys and girls? I can!
Old 2019-11-05, 14:34   #4375
storm5510
Random Account
Aug 2009

2²·3·163 Posts

I was doing a little browsing on GPU72.com. There is a left-side menu entry marked "Notebook Access Keys" with three options inside. I have been doing some reading about Google's Colab notebooks. That is all way over my head and seems like a lot of effort. If this entry on GPU72.com is related to Google's notebooks, then it seems extremely simple.

Anybody?
Old 2019-11-05, 14:53   #4376
Uncwilly
6809 > 6502
"""""""""""""""""""
Aug 2003
101×103 Posts

2·4,909 Posts

Quote:
Originally Posted by storm5510 View Post
If this entry on GPU72.com is related to Google's notebooks, then it seems extremely simple.

Anybody?
Chris has made it simple. If you have a Google login (like Gmail), go to colab.google . com and, in a separate tab, go to GPU72 and set up a notebook key. Copy the code (not just the key) that is shown. Then, back at Colab, choose New Python 3 notebook from the menu; you will see [ ], so click inside it and paste the copied code there. In the menu, set the runtime to a GPU type. Then press the play button. Profit.
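On the "copy the code (not just the key)" point: the key by itself is just an identifier. The access keys visible in the logs in this thread look like 32-character lowercase hex strings; the following shape check is purely illustrative and assumes nothing about GPU72's real validation, which happens server-side:

```python
import re

# Illustrative only: GPU72's actual key-validation rules are unknown; this
# merely matches the 32-lowercase-hex-character shape seen in the thread's
# log output ("Working as ...").
KEY_RE = re.compile(r"^[0-9a-f]{32}$")

def looks_like_key(s: str) -> bool:
    """True if s has the 32-hex-char shape of the keys shown in this thread."""
    return bool(KEY_RE.match(s))
```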
Old 2019-11-05, 14:58   #4377
James Heinrich
"James Heinrich"
May 2004
ex-Northern Ontario

11·311 Posts

I believe that should be https://colab.research.google.com/

I'd seen talk about such things in this thread without knowing what it was about, but given Uncwilly's brief instructions above, it seems to work:
Code:
Beginning GPU Trial Factoring Environment Bootstrapping...
Please see https://www.gpu72.com/ for additional details.

20191105_150342: GPU72 TF V0.32 Bootstrap starting...
20191105_150342: Working as "f74a87e94f97d3588a5ca5cccfadbe96"...

20191105_150342: Installing needed packages (1/3)
20191105_150344: Installing needed packages (2/3)
20191105_150349: Installing needed packages (3/3)
20191105_150356: Fetching initial work...
20191105_150357: Running GPU type Tesla P100-PCIE-16GB

20191105_150357: running a simple selftest...
20191105_150408: Selftest statistics
20191105_150408:   number of tests           107
20191105_150408:   successfull tests         107
20191105_150408: selftest PASSED!
20191105_150408: Starting trial factoring M95509121 from 2^75 to 2^76 (80.12 GHz-days)

20191105_150408: Exponent  TF Level  % Done     ETA   GHzD/D  Itr Time |   Class #,   Seq # |    #FCs | SieveRate |  SieveP | Uptime
20191105_150417: 95509121  75 to 76    0.1%   1h42m  1127.38    6.396s |    0/4620,   1/960 |  42.81G | 6693.1M/s |   82485 |   0:01
I'm not sure if it just does a single TF and then exits, or keeps looping... I guess I'll find out in 2 hours.
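For the curious, the GHz-days figure in that log is consistent with trial-factoring credit scaling as 2^b / p for a run from 2^(b-1) to 2^b. The constant below is fitted from the single log line above, so this is a consistency sketch under that assumption, not PrimeNet's official credit formula:

```python
# Fitted from "M95509121 from 2^75 to 2^76 (80.12 GHz-days)" in the log
# above; the proportionality credit ~ 2^b / p is an assumption, not
# PrimeNet's documented formula.
P, B, CREDIT = 95509121, 76, 80.12
k = CREDIT * P / 2.0 ** B

def ghz_days(p: int, b: int) -> float:
    """Estimated GHz-days for TF of M(p) from 2^(b-1) to 2^b under this model."""
    return k * 2.0 ** b / p

# Under this model, each additional bit level doubles the credit (and the
# work), and nearby exponents earn almost identical credit at the same level.
```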

Last fiddled with by James Heinrich on 2019-11-05 at 15:08
Old 2019-11-05, 16:30   #4378
storm5510
Random Account
Aug 2009

11110100100₂ Posts

Quote:
Originally Posted by Uncwilly View Post
Chris has made it simple. If you have a Google login (like Gmail), go to colab.google . com and, in a separate tab, go to GPU72 and set up a notebook key. Copy the code (not just the key) that is shown. Then, back at Colab, choose New Python 3 notebook from the menu; you will see [ ], so click inside it and paste the copied code there. In the menu, set the runtime to a GPU type. Then press the play button. Profit.
I managed to muddle my way through it. It's running. Do I need to leave the browser windows open? If so, it is not a problem.

There is a menu entry under "Runtime" called "Interrupt Execution." Is this the proper way to stop the process, in case I need to?


Code:
Beginning GPU Trial Factoring Environment Bootstrapping...
Please see https://www.gpu72.com/ for additional details.

20191105_155730: GPU72 TF V0.32 Bootstrap starting...
20191105_155730: Working as "d395e1d04a122be8365b3727a298c8c0"...

20191105_155730: Installing needed packages (1/3)
20191105_155740: Installing needed packages (2/3)
20191105_155749: Installing needed packages (3/3)
20191105_155820: Fetching initial work...
20191105_155823: Running GPU type Tesla P100-PCIE-16GB

20191105_155823: running a simple selftest...
20191105_155833: Selftest statistics
20191105_155833:   number of tests           107
20191105_155833:   successfull tests         107
20191105_155833: selftest PASSED!
20191105_155833: Starting trial factoring M95497499 from 2^75 to 2^76 (80.13 GHz-days)

20191105_155833: Exponent  TF Level  % Done     ETA   GHzD/D  Itr Time |   Class #,   Seq # |    #FCs | SieveRate |  SieveP | Uptime
20191105_155846: 95497499  75 to 76    0.1%   1h43m  1116.52    6.459s |    0/4620,   1/960 |  42.81G | 6628.6M/s |   82485 |   0:02
20191105_155948: 95497499  75 to 76    1.0%   1h41m  1123.65    6.418s |   45/4620,  10/960 |  42.81G | 6670.9M/s |   82485 |   0:03
20191105_160048: 95497499  75 to 76    2.3%   1h40m  1116.34    6.460s |  100/4620,  22/960 |  42.81G | 6627.6M/s |   82485 |   0:04
20191105_160157: 95497499  75 to 76    3.3%   1h39m  1117.73    6.452s |  145/4620,  32/960 |  42.81G | 6635.8M/s |   82485 |   0:05
20191105_160257: 95497499  75 to 76    4.4%   1h38m  1118.59    6.447s |  196/4620,  42/960 |  42.81G | 6640.9M/s |   82485 |   0:06
20191105_160403: 95497499  75 to 76    5.4%   1h37m  1122.60    6.424s |  240/4620,  52/960 |  42.81G | 6664.7M/s |   82485 |   0:07
20191105_160506: 95497499  75 to 76    6.5%   1h36m  1118.59    6.447s |  292/4620,  62/960 |  42.81G | 6640.9M/s |   82485 |   0:08
20191105_160610: 95497499  75 to 76    7.5%   1h35m  1115.83    6.463s |  337/4620,  72/960 |  42.81G | 6624.5M/s |   82485 |   0:09
20191105_160715: 95497499  75 to 76    8.5%   1h34m  1118.25    6.449s |  381/4620,  82/960 |  42.81G | 6638.9M/s |   82485 |   0:10
20191105_160819: 95497499  75 to 76    9.6%   1h32m  1122.08    6.427s |  436/4620,  92/960 |  42.81G | 6661.6M/s |   82485 |   0:11
20191105_160923: 95497499  75 to 76   10.6%   1h31m  1121.55    6.430s |  484/4620, 102/960 |  42.81G | 6658.5M/s |   82485 |   0:13
20191105_161028: 95497499  75 to 76   11.7%   1h30m  1121.20    6.432s |  537/4620, 112/960 |  42.81G | 6656.4M/s |   82485 |   0:14
20191105_161132: 95497499  75 to 76   12.7%   1h30m  1110.33    6.495s |  577/4620, 122/960 |  42.81G | 6591.8M/s |   82485 |   0:15
20191105_161236: 95497499  75 to 76   13.8%   1h28m  1122.60    6.424s |  624/4620, 132/960 |  42.81G | 6664.7M/s |   82485 |   0:16
20191105_161341: 95497499  75 to 76   14.8%   1h27m  1122.42    6.425s |  664/4620, 142/960 |  42.81G | 6663.7M/s |   82485 |   0:17
20191105_161445: 95497499  75 to 76   15.8%   1h26m  1118.94    6.445s |  721/4620, 152/960 |  42.81G | 6643.0M/s |   82485 |   0:18
20191105_161549: 95497499  75 to 76   16.9%   1h25m  1121.73    6.429s |  772/4620, 162/960 |  42.81G | 6659.5M/s |   82485 |   0:19
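As an aside on reading that output (an interpretation of the columns, not mfaktc documentation): the ETA column is consistent with multiplying the remaining "Seq" steps by the "Itr Time".

```python
# Sanity check of the ETA column in the log above, assuming (a reading of
# the output, not mfaktc documentation) one "Seq" step per "Itr Time" and
# 960 steps covering the whole bit level.
def eta_minutes(seq_done: int, seq_total: int, itr_time_s: float) -> float:
    """Minutes remaining if each outstanding step costs itr_time_s seconds."""
    return (seq_total - seq_done) * itr_time_s / 60.0

# From the 20191105_155846 line: Seq 1/960, Itr Time 6.459s, ETA 1h43m.
m = eta_minutes(1, 960, 6.459)  # about 103 minutes, i.e. 1h43m
```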
If I want to run this on another machine, do I have to create a different key instance? I am thinking about my older HP. It has a really slow GPU and I generally avoid this type of work on it.

I apologize for all these questions. I've wandered into an area I have no experience with.
