[QUOTE=Uncwilly;552105]Using your handy on-line tool I changed the settings on my colab sessions. Works nice. I bumped them both up.[/QUOTE]
I too just updated my Colab to use LMH TF Depth First (79). However, as I've commented in a previous thread, the time I actually get a GPU in Colab seems to be dwindling. I've taken the advice and make sure to "Connect to hosted runtime" each time I get a GPU... but they never seem to run for more than a couple of hours. Then it's a couple of 12-hour shifts of CPU P-1 work until I'm assigned another GPU. But when I do get them, I'll have them follow your recommendation.
I had thought that we were trying to get the 90M range to 77 and 100M-110M to 76 as a buffer against the wave front. But I would be happy to switch my Colab time over to LMH depth first 79 if that's where you want it.
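A quick aside on the cost of those bit-level targets (my own back-of-envelope sketch, not a GPU72 figure): every factor of M[SUB]p[/SUB] = 2[SUP]p[/SUP] - 1 has the form q = 2kp + 1, so the number of candidates in each bit level roughly doubles, and taking the 90M range one level deeper costs about as much as all the previous levels combined.

```python
# Rough illustration: count candidate factors q = 2kp + 1 with
# 2^b < q < 2^(b+1), ignoring the mod-8 and small-prime sieving
# that mfaktc actually applies.  The exponent below is arbitrary.
def candidates_in_level(p, b):
    lo = (1 << b) // (2 * p)        # k values with q below 2^b
    hi = (1 << (b + 1)) // (2 * p)  # k values with q below 2^(b+1)
    return hi - lo

p = 90_000_001
ratio = candidates_in_level(p, 76) / candidates_in_level(p, 75)
# ratio is ~2: each extra bit level doubles the trial-factoring work
```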
[QUOTE=chalsall;552138]Cool. Thanks. It will be nice getting them back to Primenet, to assign for (please) PRP runs.
This actually brings up an idea (perhaps it should be posted somewhere else)... Perhaps Primenet should only hand out 332M work to PRP (with CERTs) clients? The jobs are ***heavy***; it would be nice to only have to do the run once.[/QUOTE]

Colab: I have not looked at it in many months. For me, it did not present much of an advantage. TF I can run on my own hardware, without anything timing out, until the assignments are complete, and at a decent speed. With my recently expanded GPU collection, I can run nearly twice as many in the same period of time.

I still check the GPU72 site daily, looking for assignments which are a day old or more. Yesterday I found two. Either they did not make it into my [I]mfaktc[/I] queue, or Primenet did not relay the information back. I ran them and Primenet accepted the results, so they evidently never reached me when they were originally issued.
[QUOTE=Aramis Wyler;552173]I had thought that we were trying to get the 90M range to 77 and 100M-110M to 76 as a buffer against the wave front.[/QUOTE]
We are. That's where the crunch is.

[QUOTE=Aramis Wyler;552173]But I would be happy to switch my Colab time over to LMH depth first 79 if that's where you want it.[/QUOTE]
As always, do what you enjoy doing. Some want to work 332M, so I've made it available. Personally, I think doing PRP work up there is a bit silly, but if people want to play the lottery, it's not my job to explain the maths... :wink:
[QUOTE=chalsall;552196]As always, do what you enjoy doing. Some want to work 332M, so I've made it available.[/QUOTE]

So, setting aside anyone else's opinion or enjoyment for a moment: what, in your technical opinion, is the best use of a good GPU right now? I understand not everyone agrees, and that's perfectly legitimate. I'm just curious as to YOUR thoughts, since you are so knowledgeable on the matter.

I like seeing tasks finish... so I did a lot of high-exponent, low-bit-level factor work, since it flies across my screen so fast. But I understand those exponents won't be tested for a looong time. At night I queue up lower-exponent, higher-bit-level work, since I'm asleep and not watching.

But in terms of what is most helpful to the actual overall TF work, where do you see the most helpful work being done? No wrong answer...
[QUOTE=LOBES;552202]where do you see the most helpful work being done?[/QUOTE]Set your work preference to "What Makes Sense" or "Let GPU72 Decide" (there's a subtle historical difference between the two that nobody but Chris understands, but don't worry about it). That way you can be assured that the work you're doing is optimal-at-the-moment based on expert monitoring and tweaking.
[QUOTE=James Heinrich;552203]Set your work preference to "What Makes Sense" or "Let GPU72 Decide" (there's a subtle historical difference between the two that nobody but Chris understands, but don't worry about it). That way you can be assured that the work you're doing is optimal-at-the-moment based on expert monitoring and tweaking.[/QUOTE]
That is certainly easy enough. Doing so just got me the following, which is coming along nicely: Factor=N/A,100346761,74,76
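For anyone curious what that worktodo line asks for: Factor=N/A,100346761,74,76 means trial-factor M[SUB]100346761[/SUB] from 2[SUP]74[/SUP] up to 2[SUP]76[/SUP]. A toy sketch of the underlying test follows — usable only for tiny exponents, since mfaktc does this with heavy sieving on the GPU; the function name is my own illustration.

```python
# Toy trial factoring of M_p = 2^p - 1 between two bit levels.
# Every factor has the form q = 2kp + 1 with q ≡ 1 or 7 (mod 8),
# and q divides M_p exactly when 2^p ≡ 1 (mod q).
def trial_factor(p, bit_from, bit_to):
    factors = []
    k = 1
    while (q := 2 * k * p + 1) < (1 << bit_to):
        if q > (1 << bit_from) and q % 8 in (1, 7) and pow(2, p, q) == 1:
            factors.append(q)
        k += 1
    return factors

# Small sanity check: M_11 = 2047 = 23 * 89.
print(trial_factor(11, 0, 8))  # → [23, 89]
```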
One-off oddball error I found in one of my Colab sessions from yesterday, failed to start up correctly:[code]Beginning GPU Trial Factoring Environment Bootstrapping...
Please see https://www.gpu72.com/ for additional details.

gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-1-227df1425772> in <module>()
     24                      stdout=subprocess.PIPE,
     25                      universal_newlines=True,
---> 26                      bufsize=0)
     27
     28 try:

1 frames
/usr/lib/python3.6/subprocess.py in _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, restore_signals, start_new_session)
   1362             if errno_num == errno.ENOENT:
   1363                 err_msg += ': ' + repr(err_filename)
-> 1364                 raise child_exception_type(errno_num, err_msg, err_filename)
   1365             raise child_exception_type(err_msg)
   1366
FileNotFoundError: [Errno 2] No such file or directory: './bootstrap.pl': './bootstrap.pl'[/code]The other 3 sessions I started around the same time worked fine, and they all worked fine today, so I assume it was some transient quirk that doesn't need fixing, but I'm reporting it anyway.
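For what it's worth, the failure chain reads as: a truncated download made gzip/tar bail out, so ./bootstrap.pl was never extracted, and subprocess.Popen then raised FileNotFoundError (ENOENT). A minimal sketch of a pre-flight check for that failure mode — illustrative only, not the actual GPU72 notebook code:

```python
import os
import subprocess

def launch_bootstrap(script="./bootstrap.pl"):
    # If the tarball download was truncated, tar exits non-zero and the
    # script never appears; checking first turns a raw FileNotFoundError
    # into an actionable "re-download" hint.
    if not os.path.isfile(script):
        raise RuntimeError(
            f"{script} not found -- the archive probably failed to "
            "extract; re-download and try again")
    return subprocess.Popen([script], stdout=subprocess.PIPE,
                            universal_newlines=True, bufsize=0)
```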
I just noticed from the [URL="https://www.gpu72.com/reports/overall/graph/quarter/"]GPU72 Quarterly Progress Page[/URL] that work dropped off sharply after May 21st to put it mildly. This doesn't seem to be a blip, either, as I went and looked back at the 6 month and yearly ones as well. What happened, did we lose someone?
SRBase decided to no longer pull their BOINC work from GPU72. They want fast work, found ahead of the wave front area.
Oops, that might not be it. But see here for that story: [url]https://www.mersenneforum.org/showthread.php?t=25383[/url]
Yeesh, that was ugly. It looks like SRBase is working directly through GIMPS to get that work (top 5 this year), which seems to make more sense than running it through GPU72. It's a big difference in mindset between trying to accomplish the work as efficiently as possible and taking any willing computation cycles. Still, as you said, that is probably not the work that dropped off in May, as they seem to have left a month earlier.