[QUOTE=EugenioBruno;539566]deeply TFing what's about to be FCd is better because it saves FCs and moves the wavefront faster;
TFing exponents far ahead is worse because they wouldn't be tested anyway for a while[/QUOTE]You got it. :tu: |
[QUOTE=EugenioBruno;539566]I have to admit I didn't really understand the argument fully, but tomorrow I will search for explanations, probabilities, arguments and so on, so I can understand without annoying you folks too much.[/QUOTE]
No problem. For those who ask (like you), there are many going "Why are they doing that?"...

Basically, it comes down to a (scarce) resource management problem. As a thought experiment, imagine that you were the sole person looking for the next Mersenne Prime. Based on James' deep analysis, we've empirically determined that it is more efficient to first TF to 77 bits before doing the First Check ***on the same GPU***.

Then, add to the problem space the fact that there are thousands of participants in GIMPS, all volunteers, and all running a huge mix of CPUs and GPUs. Some like finding factors. Some hope to find the next MP. Some are content using mprime/Prime95 to ensure the sanity of their kit by running DCs. On top of that, consider that there are actually multiple "wavefronts": the Cat 0 through 4 assignment classes, plus the P-1'ers, plus the DC'ers.

At the end of the day, none of this really matters all that much. But it /does/ make for some really interesting driving problems... :smile: |
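The TF-before-FC trade-off described above can be sketched numerically. A common rule of thumb (an approximation I'm assuming here, not something stated in this thread) is that the chance of a Mersenne number having a factor between 2[SUP]b[/SUP] and 2[SUP]b+1[/SUP] is roughly 1/b, so each extra bit of TF near the wavefront eliminates a first-time check with about that probability:

```python
# Hedged sketch, not GIMPS/GPU72 code: uses the rough 1/b heuristic for
# the chance of a factor lying in the bit range [2^b, 2^(b+1)).

def expected_fc_savings(bit_from: int, bit_to: int) -> float:
    """Approximate probability that trial factoring from 2^bit_from up to
    2^bit_to finds a factor, sparing a first-time primality check."""
    return sum(1.0 / b for b in range(bit_from, bit_to))

# TF'ing one more bit, from 76 to 77, pays off roughly 1.3% of the time:
print(round(expected_fc_savings(76, 77), 4))  # 0.0132
```

Whether that ~1.3% saving is worth the extra TF time is exactly the kind of empirical question the deep analysis mentioned above settles, since TF on a GPU costs only a small fraction of a First Check.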
1 Attachment(s)
Occasionally, when I open my browser to the colab session running the notebook provided by GPU72, I get the message in the attached image. Is this expected?
[ATTACH]21880[/ATTACH] |
[QUOTE=linament;539573]Occasionally, when I open my browser to the colab session running the notebook provided by GPU72, I get the message in the attached image. Is this expected?[/QUOTE]
Hmmm... No... The Notebook should automatically detect that a GPU is available, and use it if it is. The only reason it should revert to CPU-only is if the nvidia-smi command doesn't work correctly. Perhaps this is another delta Colab has made to their environment, although I have never seen this myself.

One thing to try when you see that: stop the Notebook Section, then rerun it and see if it detects the GPU on the second run. Another thing would be to "Connect to Hosted Runtime" (drop-down menu in the upper-right-hand side) and see if it complains about not being able to attach to a GPU backend.

Edit: Sorry... I glanced at your screenshot too quickly. You /are/ running both the GPU and CPU code in that instance. I have no idea why Google thinks you're not. |
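For context, GPU detection of the kind described above typically boils down to probing nvidia-smi and falling back to CPU-only work if it fails. A minimal sketch (my assumption about the general approach, not the actual GPU72 Notebook code):

```python
import shutil
import subprocess

def gpu_available() -> bool:
    """Return True if nvidia-smi exists and reports at least one GPU.
    Hypothetical sketch of how a Colab notebook might choose between
    the GPU payload and the CPU-only payload."""
    if shutil.which("nvidia-smi") is None:
        return False  # driver tooling not present at all
    try:
        out = subprocess.run(["nvidia-smi", "-L"],
                             capture_output=True, text=True, timeout=10)
    except (OSError, subprocess.TimeoutExpired):
        return False
    # "nvidia-smi -L" lists one "GPU 0: ..." line per device on success.
    return out.returncode == 0 and "GPU" in out.stdout

print(gpu_available())
```

On a Colab runtime without a GPU backend attached, nvidia-smi fails (or is absent), so this returns False and the notebook would run the CPU-only code path, which matches the behavior described above.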
[QUOTE=chalsall;539570]
Basically, it comes down to a (scarce) resource management problem. [/QUOTE] As Aragorn would say, "You have my GTX 1650!" :D |
[QUOTE=EugenioBruno;539577]As Aragorn would say, "You have my GTX 1650!"[/QUOTE]
And it's much appreciated! :smile: |
Colab paid tier first time restricted
This morning, after my 24-hour run time expired on three GPU sessions and one P-1 session, I was unable to get a GPU to start a new session, and only one P-1 session was allowed to connect.
|
Got GPUs again
Later this morning I was again able to get multiple GPU sessions.
|
I don't know if this is significant. However, whenever I interrupt execution of my Colab session (such as shutting down my machine for the night) that is using the GPU72 script, it terminates with the following error message. At this point, I am only running a P-1 instance because I have used up my GPU quota for the day.
[QUOTE]20200315_185854 ( 3:45): [Work thread Mar 15 18:58] M100990271 stage 1 is 73.65% complete. Time: 344.483 sec.
Exiting...
Can't locate LWP/UserAgent.pm in @INC (you may need to install the LWP::UserAgent module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.26.1 /usr/local/share/perl/5.26.1 /usr/lib/x86_64-linux-gnu/perl5/5.26 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.26 /usr/share/perl/5.26 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at ./comms.pl line 32.
BEGIN failed--compilation aborted at ./comms.pl line 32.
Done.[/QUOTE] |
[QUOTE=linament;539792]However, whenever I interrupt execution of my Colab session (such as shutting down my machine for the night) that is using the GPU72 script, it terminates with the following error message. At this point, I am only running a P-1 instance because I have used up my GPU quota for the day.[/QUOTE]
No, it's not a problem. Ungraceful, but not a problem. The issue is that the CPU payload doesn't itself use the Perl LWP module, so it never gets installed. But at the end of the Notebook Section, the Comms module (which does need LWP) is called to let GPU72 know that the Section was stopped, and that is what throws the error.

BTW... You don't actually need to stop your Section(s) when you're going to shut your machine down for the night. Just close your browser (and answer "Yes, I really want to leave this page") and your Session will continue working for an hour or so. |
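That kind of missing-module failure can be caught before the shutdown hook runs. A hypothetical pre-flight check (my own sketch, not part of the GPU72 scripts), assuming a Debian-style environment where the libwww-perl package provides LWP::UserAgent:

```python
import shutil
import subprocess

def lwp_installed() -> bool:
    """Return True if Perl and LWP::UserAgent are both available.
    Hypothetical pre-flight check; on Debian/Ubuntu-based images the
    fix would be e.g. `apt-get install libwww-perl`."""
    if shutil.which("perl") is None:
        return False
    # `perl -MModule -e 1` exits non-zero if Module can't be loaded.
    result = subprocess.run(["perl", "-MLWP::UserAgent", "-e", "1"],
                            capture_output=True)
    return result.returncode == 0

print(lwp_installed())
```

If this returns False, the notebook could either install the package up front or simply skip the Comms call, turning the ungraceful compile-abort into a clean exit.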
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.