[QUOTE=chalsall;556950]No more than half an hour. In some ranges, within ten minutes.[/QUOTE]
[U]Thank you[/U]! That's fast. I was figuring an hour, the interval at which Primenet updates its reports on [I]mersenne.org[/I]. I like to periodically stop for a while to see if I have any "hanging" assignments. I had one at age 0.74, just a tiny fraction under 18 hours. I never go longer than 12 hours without submitting results. |
1 Attachment(s)
Here I think I am closing in at a good pace on 1 active user and 2 flat-lined users. Then [COLOR=Red][B]<BLAM!>[/B][/COLOR] The Romanian Rocket leaves me in the dust. :shock:
|
[QUOTE=chalsall;555034]With regards to TF'ing depth, going to 76 is pretty much optional right now.
We're still going to 77 in 9xM, to keep that range "neat" (less than 3,000 to go). And also in front of Cats 2 and 3 because most clients there haven't upgraded to 30.3b3. Cat 4 is really not much more than slow P-1'ers for the lower Cats, and are getting 76 already. Edit: Oh, also, the P-1 wavefront, in 100M, is also getting 76.[/QUOTE] Even though I knew the above was here, it took me a little bit to go back and find it among all the Colab postings. I started using a new piece of hardware yesterday that makes 76 bits a short trip. I am currently running 110M's to 76. For now, it appears that I am right on target. Sometime down the road, 77 will be needed. This is entirely doable as well. |
[QUOTE=storm5510;557558]I started using a new piece of hardware yesterday that makes 76 bits a short trip. I am currently running 110M's to 76. For now, it appears that I am right on target. Sometime down the road, 77 will be needed. This is entirely doable as well.[/QUOTE]
Excellent. Nice to see your increased compute. :tu: Please note that if you simply set the Work Option to "What Makes Sense" or "Let GPU72 Decide", the Depth will be automatically set to what is needed most at that moment in time. |
[QUOTE=chalsall;557561]Excellent. Nice to see your increased compute. :tu:
[/QUOTE] I had to throttle it back. The heat was working my AC really hard. I have a set of south-facing patio doors, so the sun gets into this room this time of year. I have blinds on them, but closing them does not seem to help. This is the only time of the entire year when I have this problem. The GPU is running at 48°C; the default setting is 64°C. I think my heat source will transfer nicely when it gets cold. :grin: |
[QUOTE=Uncwilly;556843]It looks like some Colab results are not being noticed and posted to PrimeNet.[/QUOTE]
This is happening again for my TF; P-1 looks OK. |
One of my WorkFetch processes has stopped working for no apparent reason. The [C]Low[/C] setting is 98,000,000. The [C]High[/C] is 125,000,000. As a test, I adjusted the [C]Pledge[/C] to 77. No change. I know I have a connection because Firefox is working normally. :confused:
|
[QUOTE=storm5510;564513]One of my WorkFetch processes has stopped working for no apparent reason. The [C]Low[/C] setting is 98,000,000. The [C]High[/C] is 125,000,000. As a test, I adjusted the [C]Pledge[/C] to 77. No change. I know I have a connection because Firefox is working normally. :confused:[/QUOTE]
No change on the server-side. Could you please give me more data to debug from? MISFIT or Manual, etc. Developers hate "it doesn't work". :smile: |
[QUOTE=chalsall;564519]No change on the server-side.
Could you please give me more data to debug from? MISFIT or Manual, etc. Developers hate "it doesn't work". :smile:[/QUOTE] I knew better... :blush: Below is GPU72config:
[QUOTE]StagingFile:C:\mfaktc\worktodo.txt
ReloadWhenBelow:1
FetchCount:10
Low:98000000
High:125000000
Pledge:75
#OPTION
#WhatMakesSense = 0
#LowestTFLevel = 1
#HighestTFLevel = 2
#LowestExponent = 3
#OldestExponent = 4
Option:2
ReplaceNAwithDate:False
URL:[URL]https://www.gpu72.com/account/getassignments/lltf/[/URL]
UserID:storm5510
UserPWD:xxxxxx[/QUOTE]
Below is a batch file I use to automate the process:
[QUOTE]@echo off
:top
cls
call mfaktc-2047.exe
if %errorlevel% gtr 0 goto errHandler
timeout 4
if exist c:\mfaktc\GPU72FETCH_logs\*.html (
    del c:\mfaktc\GPU72FETCH_logs\*.html
)
call c:\gpu72\gpu72workfetch c:\gpu72\gpu72config.txt
if %errorlevel% gtr 0 goto errHandler
echo.
goto top
:errHandler
echo.
echo %date% %time% - Either mfaktc or GPU72 fetch has produced a non-zero exit code.
echo.[/QUOTE]
I know that [I]mfaktc[/I] will [U]not[/U] generate an error code for simply running out of work, despite what it places on the screen. I do not know about gpu72workfetch.exe. |
This is an addition to my above posting. I was unable to edit it.
I did more digging on my own and I found something. I looked at the compiled code of [I]gpu72workfetch.exe[/I] with a hex editor. Way down, I found a reference to MS Visual Studio 2010 and MS .Net Framework 4.0. That particular machine did not have .Net Framework 4 installed because I had to do an HD wipe and reload two weeks ago. [I]gpu72workfetch.exe[/I] does not display anything about the missing dependency when started; it should. Yes, it is mentioned in the "readme" files, but I never looked at them. My automatic fetch is working now. :smile: |
I have been gone for a week, but today I got Colab to do something for just long enough to complete a TF about 12 hours ago (2020-11-29 15:41:20). Unfortunately, the completed assignment is still sitting in the notebook results. Colab has not been working very nicely lately, so is the server holding a grudge against it?
|
[QUOTE=DrobinsonPE;564795]I have been gone for a week but today I got Colab to do something for just long enough to complete a TF about 12 hours ago, 2020-11-29 15:41:20, Unfortunately the completed assignment is still sitting in the notebook results. Colab has not been working very nice lately so is the server holding a grudge against it?[/QUOTE]
On multiple runs today on different accounts, 35 minutes seemed to be the magic duration. These were CPU only. The few GPU runs did worse, though I did get a few completions. |
Stupid SQL Error...
Hey everyone.
Just a heads up... I made a mistake with a manual SQL statement which corrupted 2.6 million historical assignment records. If anyone sees anything strange with their overall summary reports, this is the reason. I'll restore from backups today. No impact on current assignments nor workflow... |
[QUOTE=chalsall;565446]
If anyone sees anything strange with their overall summary reports, this is the reason. I'll restore from backups today. No impact on current assignments nor workflow...[/QUOTE] Thanks. I was going to ask why mine went wrong suddenly. |
It seems something has changed for my manual colab. Currently I'm pulling about 520 GHzd/day on a T4. This means that I can do about 4 tests at n~103M from 76-77 bit and about 7.5 tests from 75-76 bit. What is the desired bit level we want to achieve currently?
I'm asking because in 3 days I need to reserve new work, and I would like to do a manual GPU72 reservation of what makes the most sense. If 77 bits is the desired target, I will reserve 50; if 76 bits is the desired target as of now, I'll reserve 100 tests. In both scenarios I will return work once a day, or at most every second day :smile: Take care, and let's hope my Colab adventure lasts :smile: ... Now time to watch some SpaceX and hopefully a 12.5 km hop for Starship :smile: |
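As a rough sanity check on those throughput figures, here is a tiny Python sketch using only the numbers quoted above. TF cost roughly doubles per bit level (each level has twice as many candidates), so the per-test cost ratio between 76-77 and 75-76 work should come out close to 2:

```python
# Back-of-the-envelope check of the quoted T4 throughput. These are the
# figures from the post above, not measurements of my own.

daily_ghzd = 520.0
tests_76_77 = 4.0      # ~4 tests/day at n ~ 103M, 76 -> 77 bits
tests_75_76 = 7.5      # ~7.5 tests/day, 75 -> 76 bits

cost_76_77 = daily_ghzd / tests_76_77   # GHz-days per 76-77 test
cost_75_76 = daily_ghzd / tests_75_76   # GHz-days per 75-76 test

print(round(cost_76_77, 1))                    # 130.0
print(round(cost_75_76, 1))                    # 69.3
print(round(cost_76_77 / cost_75_76, 1))       # 1.9, close to the doubling expectation
```

So the two reported rates are self-consistent with the usual one-bit-doubles-the-cost rule of thumb.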
[FONT="Arial Black"][COLOR="SeaGreen"]Talk about SpaceX test moved to:[/COLOR][/FONT]
[url]https://www.mersenneforum.org/showthread.php?p=565794[/url] |
1 Attachment(s)
Why is the P-1 progress ranking out of order?
[url]https://www.gpu72.com/reports/workers/p-1/[/url] Similar thing happens to the overall ranking, but only at later stages. |
You may want to give Colab a try again :)
I'm not sure what has happened, but currently I can clock 600 GHzd/day on my main Colab account. Just 40 minutes ago I launched a notebook through GPU72 with "Let GPU72 decide" marked. If that account can also do 600 GHzd/day, it may be that some of you (at least those of you in Europe, or those who gave up previously due to way too short runtimes) should go ahead and give Colab another go :smile:
Happy hunting, and remember: any contribution, however small, is better than no contribution at all :smile: Take care, stay safe and Merry Christmas to all :smile: P.S. I should add that I mostly get T4s in Colab, so only twice have I had to reset to factory settings to get rid of a P100 or K80 and get a T4 in return :smile: |
[QUOTE=ZFR;566651]Why is the P-1 progress ranking out of order?[/QUOTE]At a quick glance it appears to be in order (on "Done" column). You can click any column header to resort by that column.
|
[QUOTE=ZFR;566651]Why is the P-1 progress ranking out of order?
[url]https://www.gpu72.com/reports/workers/p-1/[/url] Similar thing happens to the overall ranking, but only at later stages.[/QUOTE] It happens now and then when someone jumps up in rank. It should fix itself in a few hours. |
[QUOTE=James Heinrich;566678]At a quick glance it appears to be in order (on "Done" column). You can click any column header to resort by that column.[/QUOTE]
I meant the first column (#). Clicking on it still leaves it out of order, with duplicates. EDIT: Oops, petrw1 added a reply while I was posting. [QUOTE=petrw1;566686]Happens now and then when someone jumps up in rank. It should fix itself in a few hours[/QUOTE] OK. Thanks. |
I just reserved a P-1 here, first time. Is there a significance between what I would get from Primenet as opposed to here? :unsure:
|
[QUOTE=storm5510;566690]I just reserved a P-1 here, first time. Is there a significance between what I would get from Primenet as opposed to here? :unsure:[/QUOTE]
From my experience, currently Primenet is giving out P-1 in the 101 million range, while GPU72 is handing out ones in the 105 million range. |
[QUOTE=LOBES;566692]From my experience, currently Primenet is giving out P-1 in the 101 million range, while GPU72 is handing out ones in the 105 million range.[/QUOTE]
If the only difference is size, then I consider my question as answered. Thank you for the reply. :smile: |
Auto Fetch Timing out
Something down?
|
[QUOTE=petrw1;566902]Something down?[/QUOTE]
Same for me. Seems like the same effect [URL="https://mersenneforum.org/showpost.php?p=566808&postcount=1095"]like yesterday[/URL] from my perspective. Also around the same time. Chris? |
[QUOTE=Flaukrotist;566904][URL="https://mersenneforum.org/showpost.php?p=566808&postcount=1095"]like yesterday[/URL] from my perspective. Also around the same time. Chris?[/QUOTE]
Empirically, the Universe has a wicked sense of humor... Did an unscheduled update and reboot. Do things seem saner now? |
[QUOTE=chalsall;566937]Empirically, the Universe has a wicked sense of humor...
Did an unscheduled update and reboot. Do things seem saner now?[/QUOTE] Must be that Jupiter/Saturn thing... :alien: |
[QUOTE=chalsall;566937]Do things seem saner now?[/QUOTE]
Yes, alright now. Thanks. :thumbs-up: |
Never mind. it works now.
[COLOR="Gray"]I can get to the GPU72 home page but all of the my account pages just spin circles until timing out.[/COLOR] |
Get P-1 Assignments Failure
I just tried to get P-1 assignments with the default max of 110000000 and page only gives the message "[COLOR=red]Sorry... No assignments available which match your criteria."[/COLOR]
|
[QUOTE=James Heinrich;566678]At a quick glance it appears to be in order (on "Done" column). You can click any column header to resort by that column.[/QUOTE]
Nope. Sorting sucks (and always did) on GPU72. Say, Chris is a very good sysgeek and a wonderful entrepreneur, but he was never the best algorithmist :razz:. [COLOR=White](hopefully, this will tickle his pride)[/COLOR] |
[QUOTE=linament;566953]I just tried to get P-1 assignments with the default max of 110000000 and page only gives the message "[COLOR=red]Sorry... No assignments available which match your criteria."[/COLOR][/QUOTE]
Ignore the message, state how many you want, and then just press get assignments, and see what happens ... |
[QUOTE=petrw1;566686]Happens now and then when someone jumps up in rank. It should fix itself in a few hours[/QUOTE]
Still there... same ranking and duplicates. |
[QUOTE=linament;566953]I just tried to get P-1 assignments with the default max of 110000000 and page only gives the message "[COLOR=red]Sorry... No assignments available which match your criteria."[/COLOR][/QUOTE]
Sorry... The queries were constrained to only give P-1 assignments where TF'ing had already been done to 77. This has been updated to allow assignment at 76. |
[QUOTE=chalsall;277442]First of all, I must thank all of the GPU Workers who have signed up and been using the [URL="http://gpu.mersenne.info/"]GPU to 72 Tool[/URL].[/QUOTE]GPU72 status: [url=https://www.mersenne.ca/status/tf/20201228/0/1/0]done as of 2020-12-28[/url].
|
[QUOTE=James Heinrich;567614]GPU72 status: [URL="https://www.mersenne.ca/status/tf/20201228/0/1/0"]done as of 2020-12-28[/URL].[/QUOTE]
When did the working range jump to 1e9 - 1? :ermm: |
[QUOTE=storm5510;567639]When did the working range jump to 1e9 - 1? :ermm:[/QUOTE]
I believe it was when GIMPS went to V5 (many years ago). Though I'm not sure Prime95 yet supports LL that high. |
The first factors I see from upper 900M range are from May 2008, so probably sometime around then.
|
I really do not know where to put this, so I will put it here. If it needs to be moved, then please do so. |
[QUOTE]09:32:50.271 MISFIT GPU72 work fetcher. v1.0.4 by Scott Lemieux (SWL551)
09:32:50.271 Reading program configuration from c:\gpu72\gpu72config.txt
09:32:50.271 Checking remaining work in C:\mfaktc\worktodo.txt
09:32:50.271 Found 0 assignment lines
09:32:50.271 0 is < 1: IS time to reload
09:32:50.271 Contacting GPU72 with parameters:
09:32:50.271 number=5
09:32:50.271 low=98000000
09:32:50.271 high=115000000
09:32:50.271 pledge=75
09:32:50.271 option=2
09:32:50.271 user=storm5510
09:32:50.271 pwd=xxxxxxxxx
09:32:50.271 url=https://www.gpu72.com/account/getassignments/lltf/
09:32:51.456 Logging HTML stream in GPU72FETCH_logs
09:32:51.456 Parsing HTML stream to get the assignments GPU72FETCH_logs
09:32:51.456 [B]Fatal Error: Index was out of range. Must be non-negative and less than the size of the collection.[/B]
09:32:51.456 Parameter name: [U]startIndex[/U][/QUOTE]I am having an issue with my work fetcher. "Less than the size of the collection." This seems to indicate too much distance between the [U]low[/U] and [U]high[/U] settings. If so, then where do they need to be? If not, then it is something I am not aware of. This had been really sluggish for several days. I was only able to run about two-thirds of what I normally could because of the lost time. Manual reserve from GPU72.com is working today. Yesterday, it did not. Anybody! :ermm: |
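For what it's worth, that particular .NET error usually means a string search returned -1 (marker not found, e.g. because the server sent back an error page instead of assignment lines) and the code then sliced from that index. A defensive parser avoids it. Here is an illustrative Python sketch; the marker strings and function name are my own invention, not gpu72workfetch's actual internals:

```python
# Hypothetical model of a marker-based HTML scraper like the one in
# gpu72workfetch. Guarding the -1 "not found" case turns the crash into
# a clean "no assignments" result.

def extract_assignments(html, start_marker="Factor=", end_marker="\n"):
    """Pull 'Factor=...' style assignment lines out of an HTML response."""
    assignments = []
    pos = 0
    while True:
        start = html.find(start_marker, pos)
        if start == -1:          # marker absent: error page, or no work left
            break
        end = html.find(end_marker, start)
        if end == -1:            # marker at end of stream: take the rest
            end = len(html)
        assignments.append(html[start:end].strip())
        pos = end
    return assignments

# An outage page now yields an empty list instead of an
# index-out-of-range crash:
print(extract_assignments("<html>Server temporarily unavailable</html>"))  # []
print(extract_assignments("ok\nFactor=ABC123,103452421,74,75\n"))
```

The `Low`/`High` distance almost certainly has nothing to do with it; the fetcher most likely just choked on an unexpected server response.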
[QUOTE=storm5510;568190]Anybody! :ermm:[/QUOTE]
Something weird is going on with the MariaDB backend. Working it. |
[QUOTE=chalsall;568198]Something weird is going on with the MariaDB backend. Working it.[/QUOTE]
OK... I /think/ things are back to nominal. Had to "Optimize" the Assignment table. Please let me know if anyone sees any further weirdness. |
Colab Instance
Using the GPU72 TF on Colab with a CPU only session, I am looping on the following statements at varying intervals:
[CODE]20210103_201124 ( 0:01): Sending Telemetry for 9c44f2a8453212b0ec80831737fb5742 (7306e513f0dae5e96200f19590c344c9)
20210103_201124 ( 0:01): Fetching payload...
20210103_201138 ( 0:02): Payload exited. Fetching next payload...[/CODE] |
[QUOTE=linament;568220]Using the GPU72 TF on Colab with a CPU only session, I am looping on the following statements at varying intervals:
20210103_201124 ( 0:01): Sending Telemetry for 9c44f2a8453212b0ec80831737fb5742 (7306e513f0dae5e96200f19590c344c9)
20210103_201124 ( 0:01): Fetching payload...
20210103_201138 ( 0:02): Payload exited. Fetching next payload...[/QUOTE] This is exactly true for me, too. Also a CPU-only session, which I just started a few minutes ago. Another CPU-only session from another account, which has been working on an exponent for some hours already, has been giving me this since one hour ago: [CODE]20210103_215236 (11:09): [Comm thread Jan 3 21:52] Updating computer information on the server
20210103_215237 (11:09): [Comm thread Jan 3 21:52] Sending expected completion date for M105088xxx: Jan 3 2021
20210103_215237 (11:09): [Comm thread Jan 3 21:52] PnErrorResult value missing. Full response was:
20210103_215237 (11:09): [Comm thread Jan 3 21:52] [Jan 3 21:52] Visit [URL]http://mersenneforum.org[/URL] for help.
20210103_215237 (11:09): [Comm thread Jan 3 21:52] Will try contacting server again in 70 minutes.[/CODE] |
Unscheduled downtime...
I've been fighting with this all day. I have no idea what's going on.
I'm going to take the GPU72 offline for about an hour. Sorry for the short notice, but at the moment GPU72 is effectively unusable. |
Whatever change you made fixed the ranking now.
|
[QUOTE=ZFR;568310]Whatever change you made fixed the ranking now.[/QUOTE]
Yeah... I /thought/ perhaps the problem was the long query times caused by my Stupid DBA error earlier, so I restored the dataset to be correct. I /think/ I've figured out the issue. An upgrade to MariaDB caused an uncommonly called Perl script to throw an error, which left a hanging lock. |
[QUOTE=chalsall;568334]Yeah... I /thought/ perhaps the problem was the long query times caused by my Stupid DBA error earlier, so I restored the dataset to be correct.
I /think/ I've figured out the issue. An upgrade to MariaDB caused an uncommonly called Perl script to throw an error, which left a hanging lock.[/QUOTE] Is everything fixed now? My assignments still show exponents that were already finished and submitted to GIMPS several hours ago. |
[QUOTE=ZFR;568339]Is everything fixed now? My assignments still show exponents that were already finished and submitted to GIMPS several hours ago.[/QUOTE]
Everything /should/ be back to nominal. Could you please PM me a couple of examples of the assignments in question? |
[QUOTE=chalsall;568347]Everything /should/ be back to nominal.
Could you please PM me a couple of examples of the assignments in question?[/QUOTE] Done. Check your PM. |
Problem is the DB name..lol
|
[QUOTE=chalsall;568347]Everything /should/ be back to nominal...[/QUOTE]
Still a no-go here. I checked to see if it was something particular to the one machine. I tried the same exact setup on my 2080 machine. Same result. Manual reserve works alright. |
P-1 Manual Assignments
Also, I cannot retrieve P-1 manual assignments via "[SIZE=2]Get P-1 Assignments". I just receive a "no assignments available" message.
[/SIZE] |
[QUOTE=ZFR;568348]Done. Check your PM.[/QUOTE]
Thanks. You helped me find a script I overlooked. Too much going on in "real life" for this to have happened... :sad: Please point anything else out which seems strange. |
[QUOTE=chalsall;568353]Thanks. You helped me find a script I overlooked.
Too much going on in "real life" for this to have happened... :sad: Please point anything else out which seems strange.[/QUOTE] Thanks. This has worked. I got new assignments and they went through fine. The overall progress report has some oddities: [url]https://www.gpu72.com/reports/workers/[/url] A lot of people have factors missing. E.g. NickOfTime, #24, has just 1 factor despite 1.4M GHzDays; CraigMeyers, #32, has 0, with 970k GHzDays. Is this normal? I have been through that list many times before, and I think I would have noticed it if it was there. |
[QUOTE=chalsall;568353]
Please point anything else out which seems strange.[/QUOTE] Something is still strange here: One of my Colab P-1 exponents [M]105088493[/M] was finished and reported to PrimeNet yesterday during the hickups somehow. But it was not sent as finished to the GPU72 server, so it is still in my list of assignments with Stage 2 99.47%. Now it finished on Colab a second time and PrimeNet communicated "Result not needed" but it is still not cleared from my list. Furthermore, the session loops through the following for half an hour before finally getting an assignment: [CODE] 20210104_200435 ( 0:29): Payload exited. Fetching next payload... 20210104_200445 ( 0:29): Sending Telemetry for 11bae926bb9ee69d603093941a93db31 (7d7795e43682f68cc3b2c37e83e1ada1) 20210104_200446 ( 0:29): Fetching payload... 20210104_200450 ( 0:29): CPU Payload starting... 20210104_200452 ( 0:29): [Main thread Jan 4 20:04] Mersenne number primality test program version 29.8 20210104_200452 ( 0:29): [Main thread Jan 4 20:04] Optimizing for CPU architecture: Core i3/i5/i7, L2 cache size: 1 MB, L3 cache size: 39424 KB 20210104_200452 ( 0:29): [Main thread Jan 4 20:04] Starting worker. 20210104_200452 ( 0:29): [Comm thread Jan 4 20:04] Updating computer information on the server 20210104_200452 ( 0:29): [Work thread Jan 4 20:04] Worker starting 20210104_200453 ( 0:29): [Comm thread Jan 4 20:04] Exchanging program options with server 20210104_200453 ( 0:29): [Comm thread Jan 4 20:04] Exchanging program options with server 20210104_200453 ( 0:29): [Comm thread Jan 4 20:04] Exchanging program options with server 20210104_200453 ( 0:29): [Main thread Jan 4 20:04] Restarting all worker threads. 20210104_200453 ( 0:29): [Comm thread Jan 4 20:04] Successfully quit GIMPS. 20210104_200453 ( 0:29): [Comm thread Jan 4 20:04] Done communicating with server. 20210104_200457 ( 0:29): [Work thread Jan 4 20:04] Worker stopped. 20210104_200457 ( 0:29): [Main thread Jan 4 20:04] Execution halted. 
20210104_200457 ( 0:29): [Main thread Jan 4 20:04] Choose Test/Continue to restart. 20210104_200457 ( 0:29): CPU process finished. Launching checkpointer to send last results (if interrupted) 20210104_200457 ( 0:29): 20210104_200457 ( 0:29): Exiting.[/CODE] |
I am getting the same with exponent [url=https://www.mersenne.org/report_exponent/default.php?exp_lo=105083593&full=1]105083593[/url] at 99.97% completed.
|
I just checked my account and found a couple of things: (1) I have around double the amount of assignments I should have, 69. (2) My Primenet user ID field was blank in the account settings.
The assignment situation is probably my doing. Some of the assignment dates go back to 12/30/2020. I suspect that I am occasionally forgetting to submit them: "Alzheimer's Lite." Other times, I will send a bunch in and some come back with a sentence in red text indicating I had sent them in previously. "Error 40," I believe it is. I manually retrieved a group of 30 earlier in the day. There are not many left at this moment. When complete, I will submit them, and then wait to see how many drop out of my list. What remains, I will run again. Hopefully no Error 40 with any; I have had this happen in the past on repeats. I do not recall ever having looked at my account settings before. The ID field may have been blank for a long time. |
[QUOTE=pinhodecarlos;568349]Problem is the DB name..lol[/QUOTE]
[URL="https://mariadb.org/about/"]Nope.[/URL] . P.S. I will tell on you, to your wife! :razz: . |
While the problem with the stated exponent is resolved, my CPU-only Colab notebook is still not able to fetch new assignments and loops endlessly through lines like those given in my last post.
|
No time...
Hey everyone.
Sorry for any outstanding issues. I can't (yet) say why, but I have *zero* cycles for the next 48 hours. |
[QUOTE=chalsall;568520]Hey everyone.
Sorry for any outstanding issues. I can't (yet) say why, but I have *zero* cycles for the next 48 hours.[/QUOTE] Primenet accepted all of my reruns, and the automatic work fetch is working again. :smile: |
[QUOTE=LaurV;568437][URL="https://mariadb.org/about/"]Nope.[/URL]
. P.S. I will tell on you, to your wife! :razz: .[/QUOTE] Never seen a Maria being relational. Maria has been the most popular female name in Portugal for the last 16 years. All my aunties are Maria; my mother too, Maria something. Lol, apologies for the thread hijack. |
Quite the same in Ro, albeit not in newer generations, but I have plenty of them in the family too, in the ancestry and old relatives. Mother and grandma were both Maria.
|
I have a couple assignments at GPU72 that GPU72 hasn't detected that I've finished, from last year.
Anyway, I'm not particularly concerned about that, but there may be some other exponents that failed to update around mid December. An example: [url]https://www.mersenne.org/report_exponent/default.php?exp_lo=103452421&full=1[/url] |
[QUOTE=Mark Rose;568698]I have a couple assignments at GPU72 that GPU72 hasn't detected that I've finished, from last year.
Anyway, I'm not particularly concerned about that, but there may be some other exponents that failed to update around mid December. An example: [url]https://www.mersenne.org/report_exponent/default.php?exp_lo=103452421&full=1[/url][/QUOTE] It looks like it skipped the 74th bit level. I've been noticing the same problem with some GPU-72 assignments recently. At first, I thought I messed up somehow, but now I'm not sure. |
I have four results from yesterday and today stuck as well.
[CODE]no factor for M103618351 from 2^74 to 2^75 [mfaktc 0.21 barrett76_mul32_gs]
no factor for M103623307 from 2^74 to 2^75 [mfaktc 0.21 barrett76_mul32_gs]
no factor for M103625663 from 2^74 to 2^75 [mfaktc 0.21 barrett76_mul32_gs]
no factor for M103625887 from 2^74 to 2^75 [mfaktc 0.21 barrett76_mul32_gs][/CODE] |
Most recently, GPU-72 gave me [M]116345897[/M] and skipped the 73 to 74. I went back and ran it, and now primenet shows "no factors below 74" even though it is factored to 76.
[Sorry for the double post, I had to dig for that exponent, thanks.] |
[QUOTE=Runtime Error;568704]Most recently, GPU-72 gave me [M]116345897[/M] and skipped the 73 to 74. I went back and ran it, and now primenet shows "no factors below 74" even though it is factored to 76.
[Sorry for the double post, I had to dig for that exponent, thanks.][/QUOTE] That's a PrimeNet bug that's existed for many years. It only checks whether the received line increases the previously recorded level, and doesn't take into account any higher TF results that already exist. A few years ago I went through everything under 100M and fixed all those by taking the exponents one bit higher. |
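To make that failure mode concrete, here is a tiny Python sketch of the bookkeeping as described (an illustrative model of my own, not PrimeNet's actual code; the function names are invented):

```python
# Model of the described bug: the server keeps one "no factor below"
# level and bumps it only when an incoming result extends it directly,
# ignoring higher results already in the history.

def buggy_update(stored_level, result_from, result_to):
    """Bump the stored level only if this line extends it contiguously."""
    if result_from <= stored_level < result_to:
        return result_to
    return stored_level          # higher existing work is never re-merged

def contiguous_level(start_level, results):
    """Correct merge: highest level reachable through contiguous ranges."""
    level = start_level
    for lo, hi in sorted(results):
        if lo <= level < hi:
            level = hi
    return level

history = [(74, 76)]             # 74-76 done first; 73-74 was skipped
level = 73
for lo, hi in history:
    level = buggy_update(level, lo, hi)   # stays 73: gap at 73-74
level = buggy_update(level, 73, 74)       # gap filled later
print(level)                              # 74, though work exists to 76
print(contiguous_level(73, history + [(73, 74)]))  # 76
```

Under this model, filling the gap after the fact leaves the stored level below the work actually done, which matches the "no factors below 74 even though it is factored to 76" symptom reported above.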
Similar here; some time ago we did a 67-to-68 run for many exponents to fix the non-contiguity. PrimeNet is missing the intermediary bit. If the report is found in the DB, then James/George may "fix" it. If not, then the 75 needs to be re-done, and it will show the 76 too.
Edit: can you go back and report the 74-75 again? You may get a "range not needed" error, but it may fix the DB. |
So I checked all my stuck assignments, and all had a skipped bit level. I'm filling the gap over the next 12 hours or so.
|
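Checking for skipped bit levels like that can be scripted. Here is a small Python sketch (a helper of my own, not part of MISFIT or the GPU72 tools) that parses mfaktc "no factor" result lines and flags gaps in the per-exponent coverage:

```python
import re

# Parse mfaktc result lines and report non-contiguous bit-level
# coverage per exponent -- the skipped levels that leave results stuck.

LINE_RE = re.compile(r"no factor for M(\d+) from 2\^(\d+) to 2\^(\d+)")

def bit_ranges(lines):
    """Map exponent -> list of (from_bits, to_bits) ranges seen."""
    ranges = {}
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            p, lo, hi = map(int, m.groups())
            ranges.setdefault(p, []).append((lo, hi))
    return ranges

def find_gaps(spans):
    """Return (hi, lo) pairs where coverage is non-contiguous."""
    spans = sorted(spans)
    return [(spans[i][1], spans[i + 1][0])
            for i in range(len(spans) - 1)
            if spans[i + 1][0] > spans[i][1]]

results = [
    "no factor for M103618351 from 2^73 to 2^74 [mfaktc 0.21 barrett76_mul32_gs]",
    "no factor for M103618351 from 2^75 to 2^76 [mfaktc 0.21 barrett76_mul32_gs]",
]
for p, spans in bit_ranges(results).items():
    print(p, find_gaps(spans))   # 103618351 [(74, 75)]
```

Point it at a results.txt and any exponent with a non-empty gap list is one whose skipped range needs to be run before PrimeNet will show the full depth.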
Still no time.
Sorry guys. Still no cycles. I've been getting less than four (4) hours of sleep per day (average; sometimes zero).
Please collect data, and work around any issues until I'm able to come back to this. |
I decided to stop. I see no point of doing any further TF work that Primenet will refuse. No, I am not attacking anyone here. I simply do not want to burn kWh if it is of no benefit to the project.
|
[QUOTE=storm5510;568782]I decided to stop. I see no point of doing any further TF work that Primenet will refuse. No, I am not attacking anyone here. I simply do not want to burn kWh if it is of no benefit to the project.[/QUOTE]
You're more than welcome to change to DCTF :big grin: |
[QUOTE=petrw1;568793]You're more than welcome to change to DCTF :big grin:[/QUOTE]
This seems like a lot of repetition. Why double-check TF's? :ermm: |
[QUOTE=storm5510;568889]This seems like a lot of repetition. Why double-check TF's? :ermm:[/QUOTE]
For fun; for finding factors. My personal little sub-sub-project's goal is simply to get all 0.1M ranges of exponents to below 2,000 unfactored. Frivolous for now, but one day our Great-Great-Grandchildren with quantum computers will be fully factoring all these exponents; I'm just giving them a head start. Explained in post 1 here: [url]https://www.mersenneforum.org/showthread.php?t=22476[/url] Surprisingly, I've had very good interest and help. |
[QUOTE=storm5510;568782]I decided to stop. I see no point of doing any further TF work that Primenet will refuse. No, I am not attacking anyone here. I simply do not want to burn kWh if it is of no benefit to the project.[/QUOTE]
You may get TF work directly from Primenet. I've been doing that for years and never had results refused. |
I have never been one to give up on something which I believed to possibly be doable.
Yesterday, I did a manual reserve directly from the server. This was a group of 10. When finished, Primenet accepted them with no problems. I then reserved 20 more. Same result, no problems. I calculated how many it would take to run 12 hours, and pulled those in. I have [U]no problems[/U] with doing it this way. :smile: |
[QUOTE=storm5510;568903]I have never been one to give up on something which I believed to possibly be doable.
Yesterday, I did a manual reserve directly from the server. This was a group of 10. When finished, Primenet accepted them with no problems. I then reserved 20 more. Same result, no problems. I calculated how many it would take to run 12 hours, and pulled those in. I have [U]no problems[/U] with doing it this way. :smile:[/QUOTE] Yes, it always worked flawlessly for me. |
[QUOTE=chalsall;568770]Please collect data, and work around any issues until I'm able to come back to this.[/QUOTE]
Hey guys. A super quick update... I think the problem with the unneeded results was probably caused by incomplete inserts from when I was having to restart the server so many times the other day. I suspect all such assignments have now been worked, so this shouldn't be encountered moving forward. Sorry for any wasted cycles. Seriously bad timing... :sad: And, I can now talk a /little/ bit about what's been keeping me so busy lately... Please see [url]https://bimsafe.gov.bb/[/url] I didn't write the App, but I was responsible for a bunch of the back-end stuff that nobody ever sees, but needs to work. :chalsall: Barbados had an unfortunate super-spreader event when a group of individuals forgot their better judgment and went on a "bus crawl" on Boxing Day. We're bringing it under control quite well, but let's just say we're all working flat-out. Please feel free to download and install the app if you're interested. It collects NO Personally Identifiable Information. |
Did you choose the NextApp theme or is that something standard with the .bb methodology? Looks like the Layout 1 template.
I'm always looking for a good WordPress theme for my development. |
[QUOTE=chalsall;568952]And, I can now talk a /little/ bit about what's been keeping me so busy lately... Please see [url]https://bimsafe.gov.bb/[/url]
I didn't write the App, but I was responsible for a bunch of the back-end stuff that nobody ever sees, but needs to work. :chalsall:[/QUOTE]I am glad that you are able to share the big project that has tied you up. Being involved in big important things is great, but also, I suspect, a bit stress-inducing. I hope your "super-spreader" event will be less of an issue than the one that happened in Washington DC on Wednesday. |
When you are rested up from your real work: I am still getting this, repeating over and over, in Colab.
[CODE]Beginning GPU Trial Factoring Environment Bootstrapping...

Please see https://www.gpu72.com/ for additional details.

20210111_151440 ( 0:03): GPU72 TF V0.423 Bootstrap starting (now working single assignments at a time)...
20210111_151440 ( 0:03): Working as "user notebook"...
20210111_151440 ( 0:03): Installing needed packages...
20210111_151500 ( 0:03): GPU not found. Make sure GPU is enabled (Runtime menu -> Change runtime type).
20210111_151500 ( 0:03): Running as CPU only...
20210111_151500 ( 0:03): GPU72 CPU Wrapper V0.05 starting...
20210111_151500 ( 0:03): Working as "user notebook"...
20210111_151500 ( 0:03):
20210111_151500 ( 0:03): Sending Telemetry for user notebook (user notebook)
20210111_151500 ( 0:03): Fetching payload...
20210111_151505 ( 0:03): CPU Payload starting...
20210111_151506 ( 0:03): [Main thread Jan 11 15:15] Mersenne number primality test program version 29.8
20210111_151506 ( 0:03): [Main thread Jan 11 15:15] Optimizing for CPU architecture: Core i3/i5/i7, L2 cache size: 256 KB, L3 cache size: 45 MB
20210111_151506 ( 0:03): [Comm thread Jan 11 15:15] Updating computer information on the server
20210111_151506 ( 0:03): [Main thread Jan 11 15:15] Starting worker.
20210111_151507 ( 0:03): [Work thread Jan 11 15:15] Worker starting
20210111_151507 ( 0:03): [Comm thread Jan 11 15:15] Exchanging program options with server
20210111_151507 ( 0:03): [Comm thread Jan 11 15:15] Exchanging program options with server
20210111_151507 ( 0:03): [Comm thread Jan 11 15:15] Exchanging program options with server
20210111_151507 ( 0:03): [Main thread Jan 11 15:15] Restarting all worker threads.
20210111_151507 ( 0:03): [Comm thread Jan 11 15:15] Successfully quit GIMPS.
20210111_151507 ( 0:03): [Comm thread Jan 11 15:15] Done communicating with server.
20210111_151512 ( 0:03): [Work thread Jan 11 15:15] Worker stopped.
20210111_151512 ( 0:03): [Main thread Jan 11 15:15] Execution halted.
20210111_151512 ( 0:03): [Main thread Jan 11 15:15] Choose Test/Continue to restart.
20210111_151512 ( 0:03): CPU process finished. Launching checkpointer to send last results (if interrupted)
20210111_151512 ( 0:03):
20210111_151512 ( 0:03): Exiting.[/CODE] |
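The repeating "GPU not found ... Running as CPU only" fallback in the log above usually means the Colab runtime type is still set to CPU. A minimal sketch of a pre-flight check (an illustration, not part of the GPU72 bootstrap; it assumes `nvidia-smi` is only on the PATH when a GPU runtime is attached, which is the case in standard Colab images):

```python
import shutil
import subprocess

def gpu_available() -> bool:
    """Return True if an NVIDIA GPU is visible to this runtime."""
    smi = shutil.which("nvidia-smi")
    if smi is None:
        # nvidia-smi not installed: almost certainly a CPU-only runtime.
        return False
    # nvidia-smi exits non-zero when no device is present.
    return subprocess.run([smi], capture_output=True).returncode == 0

if not gpu_available():
    print("GPU not found. Enable it via Runtime -> Change runtime type.")
```

Running a check like this first makes the failure loud and immediate, instead of silently burning CPU-only sessions.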
[QUOTE=chalsall;568952]...Sorry for any wasted cycles. Seriously bad timing... :sad:
[/QUOTE] My auto-fetch batch process is working fine now. The results are being accepted by Primenet without warnings. :big grin: |
[QUOTE=chalsall;568952]
And, I can now talk a /little/ bit about what's been keeping me so busy lately... Please see [URL]https://bimsafe.gov.bb/[/URL] I didn't write the App, but I was responsible for a bunch of the back-end stuff that...[/QUOTE] Hey, that's nice. Thanks for sharing. |
I noticed that the GPU to 72 spider has reserved a lot of assignments ahead of the current DC wavefront: [url]https://mersenne.org/assignments/?exp_lo=60000000&exp_hi=60100000[/url]
However, no assignments in that range are available on the GPU to 72 website. Is there a particular reason for this? |
[QUOTE=ixfd64;569949]Is there a particular reason for this?[/QUOTE]
Something changed on the Primenet server around the beginning of the year, such that whenever GPU72 was told by Primenet that a candidate was available for reservation, the slaved mprime would also be given three DC candidates. I only noticed (and fixed) this yesterday. Finally had some cycles for some house-keeping, log reviews, etc. George &/| Aaron... Sorry for this, but if you could please unreserve any DC assignments held by GPU72, that would help clean up that area. Either that, or else wait the six months for them to expire. I'd do it myself if I could, but the DB has no knowledge of the AIDs. |
You can still log into the GPU Factoring account on the GIMPS website and unreserve active assignments. However, this will probably take a long time to do manually.
|
[QUOTE=chalsall;569953]
George &/| Aaron... Sorry for this, but if you could please unreserve any DC assignments held by GPU72, that would help clean up that area. Either that, or else wait the six months for them to expire.[/QUOTE] Easy, peasy. Should be done within a few minutes. |
[QUOTE=Prime95;569959]Easy, peasy. Should be done within a few minutes.[/QUOTE]
Coolness. Thanks. And, wow! I see about 100,000 DC candidates returned to the pool. When the Humans are distracted, the 'Bots will do the most annoying things... :wink: |
There are many old TF assignments mostly from March to July 2020 in the Cat 1 range.
Granted most were Cat 3 when assigned, but did something go wrong? [url]https://www.mersenne.org/assignments/?exp_lo=104280000&exp_hi=104897000&execm=1&exdchk=1&exfirst=1[/url]

EDIT: Noticed the range of old assignments continues up to 104,897,000 but it can only show 1000 assignments, and there are more from 105,124,000 up to 105,790,000, and even older ones from Feb 2020 above 106M. So maybe this is intended?

EDIT2: Some more specific numbers:
[CODE]
Range                                                                    Total    Before Sep 1st 2020 (>~6 months)
104280000-104897000 + 105000000-105538009 (Cat1)                         15,140   14,394 (95.1%)
105538010-105790000 + 106000000-106838000 + 107000000-107628848 (Cat2)   30,604   28,994 (94.7%)
[/CODE] |
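The percentages quoted in that table follow directly from the counts; a quick arithmetic check using only the figures given in the post:

```python
# (label, total assignments, assignments older than ~6 months), per the post.
ranges = [
    ("Cat1", 15_140, 14_394),
    ("Cat2", 30_604, 28_994),
]

for label, total, old in ranges:
    pct = 100 * old / total
    print(f"{label}: {old:,} of {total:,} = {pct:.1f}%")
# Cat1: 14,394 of 15,140 = 95.1%
# Cat2: 28,994 of 30,604 = 94.7%
```

Both percentages match the ones in the post, so the counts are internally consistent.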
Notebook results not being auto-submitted
I hadn't done any work for several months, and now my Colab results are not being auto-submitted. My results page says "automatic submission of results is activated for your account".
|
[QUOTE=ATH;572621]Granted most were Cat 3 when assigned, but did something go wrong?[/QUOTE]
Not /wrong/ exactly... :wink: The TF'ing is, currently, only a bit ahead of the FC Cat 1 wavefront (and that buffer is getting smaller every day). Thanks, Ben et al! (Seriously and sincerely!!! :tu:) We still have about 60 days or so of buffer. As in, we're not currently blocking anything. In fact, we're still also strategically feeding the other Cats optimally. We'll either find some way to pull ahead (a couple of paths are modeled, but they take (my) human cycles), or we'll release as required with the TF'ing sub-optimally done. Please trust me on this: I understand this is GIM[B][I][U]P[/U][/I][/B]S, not GIM[B][I][U]F[/U][/I][/B]S. GPU72 will not block any GIMPS progress nor milestones. Having said all of the above, if anyone has some GPUs they could throw our way doing TF'ing, it would be appreciated... :tu: |
Thanks for fixing the auto-submit.
|
[QUOTE=Chuck;572685]Thanks for fixing the auto-submit.[/QUOTE]
You're welcome... Somewhat ironically, this is an example of where "a human is still in the loop". In this case, me... :smile: Primenet requires that each CPU perform a deep knowledge-exchange interaction with the server at least once every week, or else it won't accept any results. This includes the "Virtual Machines" tied to your GPU72_TF machine. It's on my "to-do list" to automate that. But, unfortunately, my to-do list is rather long, and multidimensional... :wink: |
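The weekly check-in rule described above amounts to a simple staleness test on each machine's last server contact. A hypothetical sketch (the seven-day window comes from the post; the function name and dates are illustrative, not Primenet's actual API):

```python
from datetime import datetime, timedelta

# Per the post: a CPU must complete a server exchange at least weekly.
CHECKIN_WINDOW = timedelta(days=7)

def results_accepted(last_contact: datetime, now: datetime) -> bool:
    """True if the machine's last server contact is recent enough
    for Primenet to keep accepting its results."""
    return now - last_contact <= CHECKIN_WINDOW

now = datetime(2021, 2, 27)
print(results_accepted(datetime(2021, 2, 22), now))  # checked in 5 days ago
print(results_accepted(datetime(2021, 1, 11), now))  # stale: ~6 weeks ago
```

An automated version of this check, run against each virtual machine, is presumably what the "to-do list" item would look like.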
This exponent was automatically submitted with a factor found to Primenet, but it remains in my list of notebook instance results.
[CODE]UID: jaxbuilder/Colab, found 1 factor for M106579087 from 2^75 to 2^76 (partially tested) [mfaktc 0.21 barrett76_mul32_gs][/CODE] |
Why do I have all these repetitive entries on the GIMPS site? I have a bunch of entries showing zero credit for factoring to 76. This is just one example; a lot of exponents look the same way. I think these were before you fixed the auto-submit problem.
[CODE]105455461   No factors below 2^76

Assigned
Assigned     User            Work Type        Stage   % Done   Updated      Expired
2020-03-28   GPU Factoring   trial factoring          0.0 %    2021-02-22

History
Date         User     Type   Result
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2021-02-27   Chuck    NF     no factor from 2^75 to 2^76
2020-10-31   RichD0   NF     no factor from 2^74 to 2^75
2020-04-02   SRBase   NF     no factor from 2^73 to 2^74[/CODE] |
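Duplicate submissions like the ones in that history could be filtered client-side before upload. A minimal, hypothetical sketch (the function and sample strings are illustrative; this is not part of any actual GPU72 or Primenet client) that keeps only the first occurrence of each result line while preserving order:

```python
def dedupe_results(lines):
    """Drop exact duplicate result lines, preserving first-seen order."""
    seen = set()
    out = []
    for line in lines:
        key = line.strip()
        if key and key not in seen:
            seen.add(key)
            out.append(key)
    return out

results = [
    "no factor for M105455461 from 2^75 to 2^76",
    "no factor for M105455461 from 2^75 to 2^76",  # exact duplicate
    "no factor for M105455461 from 2^74 to 2^75",
]
print(dedupe_results(results))  # the duplicate line is dropped
```

Since the duplicates earn zero credit anyway, suppressing them before submission only reduces noise on the exponent's history page.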