[QUOTE=storm5510;529544][U]Off-topic[/U]: My son is a big AMD fan. He recently built a new system. There was a compatibility issue with AMD video cards. He ended up with a GTX-1660. He doesn't understand it, nor does he like it. He'll go back to AMD sometime. I'll see if I can get my hands on his 1660 when he does.[/QUOTE]
I have been a computer professional since the 1970s, and AMD has had that problem [U]for this entire time[/U]. They simply cannot write drivers, period. I would never consciously use an AMD video card for my main system, whether it was running Linux or Windows. Now, if someone just gave me a Radeon VII, I would use it to crunch numbers in a separate box, but that's about it. Just my 2 cents' worth. |
[QUOTE=PhilF;529504]If we take the future into consideration now, we should rename the effort to GPU92. :smile:[/QUOTE]
LOL... Domain registered... :wink: |
Thanks Oliver...
Another person who doesn't get enough credit for this hobby is [URL="https://www.gpu72.com/reports/worker/6e67460a77a11a707a665a6270df1a82/"]Oliver (TheJudger)[/URL].
Not only did he write mfaktc, but he also quite regularly does something like 1 or 2,000 THzD (!) of work in a week. Can't imagine his power bill... :wink: |
[QUOTE=chalsall;529547]And that is exactly why people are allowed to set their own "Pledge" level (and even range, if they so choose)....[/QUOTE]
The pledge setting does not seem to function this way now, so I did some searching on [I]PrimeNet[/I]. Exponents as high as 125-million have been factored to 2[SUP]74[/SUP], and many to 2[SUP]75[/SUP]. 2[SUP]77[/SUP] has been suggested as an end point, for now. The exponents I mention could be run to 2[SUP]78[/SUP] or 2[SUP]79[/SUP]. Time, and technology, will tell. |
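For context on what those deeper bit levels cost: each additional bit doubles the trial-factoring search space, so running 74 through 79 is roughly 15.5 times the work of the 75→76 level alone. A rough scaling sketch (the calibration point is an assumption taken from an mfaktc log later in this thread, which reports 80.12 GHz-days for M95509121 at 75→76; the function name is illustrative):

```python
# Rough TF-effort scaling sketch: the work for one bit level b -> b+1 is
# proportional to 2^b / p (the number of candidate factors), so each extra
# bit doubles the cost. Calibration point (assumption, from a log in this
# thread): M95509121, 75->76 = 80.12 GHz-days.
REF_P, REF_BIT, REF_GHZD = 95509121, 75, 80.12

def tf_ghz_days(exponent, bit_from, bit_to):
    """Approximate GHz-days to TF `exponent` from 2^bit_from to 2^bit_to."""
    return sum(REF_GHZD * 2.0 ** (b - REF_BIT) * (REF_P / exponent)
               for b in range(bit_from, bit_to))

# Each successive bit level doubles the work:
print(round(tf_ghz_days(95509121, 75, 76), 2))  # 80.12 (by construction)
print(round(tf_ghz_days(95509121, 76, 77), 2))  # 160.24
```

The doubling per bit is why "just one more level" is never a small ask at these exponent sizes.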
[QUOTE=storm5510;529620]The pledge setting does not seem to function this way now, so I did some searching on [I]PrimeNet[/I].[/QUOTE]
Not sure what you mean by this. The "Pledge" level should be honored for all of the work types except for "Let GPU72 Decide" (LG72D). "What Makes Sense", and all the other options, will give you something to TF up to the pledge level, but never further. In addition, if you specify a range, the results should be within that. For example, if you choose "Lowest Exponent", 98M Low, 74 Pledge that's what you should get. If you're /not/ seeing this behavior, please let me know. |
[QUOTE=chalsall;529628]Not sure what you mean by this.
The "Pledge" level should be honored for all of the work types except for "Let GPU72 Decide" (LG72D). "What Makes Sense", and all the other options, will give you something to TF up to the pledge level, but never further. In addition, if you specify a range, the results should be within that. For example, if you choose "Lowest Exponent", 98M Low, 74 Pledge that's what you should get. If you're /not/ seeing this behavior, please let me know.[/QUOTE] It's been ignored for several months. When I first started, everything I received was 72 bits. Later, it went to 73. Now at 74. I believe there is nothing available in your allocated area below what I am running now, and I have [U]no problem[/U] with 74. However, if there is a problem, I am sure you would like to find the cause. I just changed it back to option 1, LowestTFLevel. I had it at option 0. All other options, I've never changed. I'll see what happens and report back. |
[QUOTE=storm5510;529636]It's been ignored for several months. When I first started, everything I received was 72 bits. Later, it went to 73. Now at 74. I believe there is nothing available in your allocated area below what I am running now, and I have [U]no problem[/U] with 74. However, if there is a problem, I am sure you would like to find the cause.[/QUOTE]
Ah... I now understand what you're saying. This behavior is nominal. If the pledge is below what is available (within the range being requested), it is "bumped" up to the lowest next bit level. Currently, this is 74. There are some workers who still have their MISFIT config set to get 71 work! (Truly "fire-and-forget".)

The GPU72 sub-project was created to (mostly) help the GIM[U][I][B]P[/B][/I][/U]S project. If people /really/ want to work to lower bit levels (which won't be needed for years), they're available directly from Primenet. :smile: |
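The "bump" behavior described above reduces to a one-liner. A minimal sketch (the helper name is hypothetical, not actual GPU72 code; 74 reflects the current floor mentioned above):

```python
# Minimal sketch of the pledge "bump": if the pledge is below the lowest bit
# level with work actually available, the assignment is issued at that lowest
# level; pledges at or above it are honored as-is.
def effective_bit_level(pledge, lowest_available=74):
    return max(pledge, lowest_available)

print(effective_bit_level(72))  # 74 -- a 72 pledge is bumped up
print(effective_bit_level(75))  # 75 -- pledges above the floor are honored
```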
Breadth Colab vs. Depth Kaggle
With the recent introduction of "LL TF (Breadth First)" and "LL TF (Depth First)" assignments for Colab/Kaggle instances, I have started using one access key to do "LL TF (Breadth First)" assignments on Colab and another access key to do "LL TF (Depth First)" assignments on Kaggle. I shut down my instances at night. I have been getting 73-to-74-bit assignments for Breadth First and 76-to-77-bit assignments for Kaggle. This morning I started up the Colab instance a few hours before I started the Kaggle instance. When I checked on the Colab instance a few hours later, I noticed that it had completed the 76-to-77-bit assignment that Kaggle had been working on the night before, and then went on to work a 73-to-74-bit assignment. Is this the intended reaction?
BTW, I notice that Depth First is being assigned 76 to 77 bits in the 99M range. |
[QUOTE=linament;529650]Is this the intended reaction?[/QUOTE]
Yes. This is "sane" behavior. Once an assignment has had work started, it is the assignee's until completion. Currently, the re-assignment code path doesn't look at the work preference, but instead just asks "anything outstanding I should do?". I don't have the cycles at the moment, but I could look into putting a "weight" on the temporal dimension, and only assigning work which is older than (say) a week.

[QUOTE=linament;529650]BTW, I notice that Depth First is being assigned 76 to 77 bits in the 99M range.[/QUOTE]
Yup... The work Ben et al have been doing has caused the [URL="https://www.mersenne.org/thresholds/"]Cat 4 cut-off[/URL] to climb a lot faster than I had expected. So, for at least a little while, I want to build up a buffer in 99M, and try to fill in below as best we can over the next couple of weeks. Can you say "fun" boys and girls? I can! :chalsall: |
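That "temporal weight" idea could be as simple as an age filter on the reissue query. A sketch under assumed data shapes (the record layout and the `last_update` field are hypothetical, not GPU72's actual schema):

```python
# Sketch of the re-assignment age filter described above: only reissue
# started-but-stalled work whose last update is older than (say) a week.
# (The "last_update"/"exp" record fields are hypothetical.)
from datetime import datetime, timedelta

def reissuable(assignments, now, max_age=timedelta(days=7)):
    return [a for a in assignments if now - a["last_update"] > max_age]

now = datetime(2019, 11, 5)
work = [{"exp": 95497499, "last_update": datetime(2019, 10, 20)},  # stalled
        {"exp": 95509121, "last_update": datetime(2019, 11, 4)}]   # active
print([a["exp"] for a in reissuable(work, now)])  # [95497499]
```

Tuning `max_age` trades promptness of reissue against the risk of duplicating work a slow instance is still doing.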
I was doing a little browsing on [U]GPU72.com[/U]. There is a left-side menu entry marked "Notebook Access Keys" with three options inside. I have been doing some reading about Google's Colab notebook. That is all way over my head and seems like a lot of effort. If this entry on [U]GPU72[/U].com is related to Google's notebook, then it seems extremely simple.
Anybody? :confused: |
[QUOTE=storm5510;529706]If this on [U]GPU72[/U].com is related to Google's notebook then it seems extremely simple.
Anybody? :confused:[/QUOTE]Chris has made it simple. If you have a Google login (like Gmail), go to colab.google . com and, in a separate tab, go to GPU72 and set up a notebook key (copy the code that is shown, not just the key). Then, back at Colab, choose New Python 3 notebook from the menu. You will see [ ]; click inside that and paste the code there. In the menu, set the runtime to a GPU type. Then press the play button. Profit. |
I believe that should be [url]https://colab.research.google.com/[/url]
I'd also seen talk about such things in this thread without knowing what it was about, but given Uncwilly's brief instructions above, it seems to work:[code]Beginning GPU Trial Factoring Environment Bootstrapping...
Please see https://www.gpu72.com/ for additional details.
20191105_150342: GPU72 TF V0.32 Bootstrap starting...
20191105_150342: Working as "f74a87e94f97d3588a5ca5cccfadbe96"...
20191105_150342: Installing needed packages (1/3)
20191105_150344: Installing needed packages (2/3)
20191105_150349: Installing needed packages (3/3)
20191105_150356: Fetching initial work...
20191105_150357: Running GPU type Tesla P100-PCIE-16GB
20191105_150357: running a simple selftest...
20191105_150408: Selftest statistics
20191105_150408: number of tests 107
20191105_150408: successfull tests 107
20191105_150408: selftest PASSED!
20191105_150408: Starting trial factoring M95509121 from 2^75 to 2^76 (80.12 GHz-days)
20191105_150408: Exponent TF Level % Done ETA GHzD/D Itr Time | Class #, Seq # | #FCs | SieveRate | SieveP | Uptime
20191105_150417: 95509121 75 to 76 0.1% 1h42m 1127.38 6.396s | 0/4620, 1/960 | 42.81G | 6693.1M/s | 82485 | 0:01[/code]I'm not sure if it just does a single TF and then exits, or keeps looping... I guess I'll find out in 2 hours. :smile: |
[QUOTE=Uncwilly;529708]Chris has made it simple. If you have a google log in (like gmail, go to colab.google . com and in a separate tab go to GPU72 and setup a notebook key (copy the code (not just the key) that is shown.) Then back at colab, paste the copied code in (chose New Python 3 notebook from the menu. You will see [ ] , click inside that and paste the code there.) In the menu, set runtime to a GPU type. Then press the play button. Profit.[/QUOTE]
I managed to muddle my way through it. It's running. Do I need to leave the browser windows open? If so, it is not a problem. There is a menu entry under "Runtime" called "Interrupt Execution." Is this the proper way to stop the process, in case I need to? [CODE]Beginning GPU Trial Factoring Environment Bootstrapping...
Please see https://www.gpu72.com/ for additional details.
20191105_155730: GPU72 TF V0.32 Bootstrap starting...
20191105_155730: Working as "d395e1d04a122be8365b3727a298c8c0"...
20191105_155730: Installing needed packages (1/3)
20191105_155740: Installing needed packages (2/3)
20191105_155749: Installing needed packages (3/3)
20191105_155820: Fetching initial work...
20191105_155823: Running GPU type Tesla P100-PCIE-16GB
20191105_155823: running a simple selftest...
20191105_155833: Selftest statistics
20191105_155833: number of tests 107
20191105_155833: successfull tests 107
20191105_155833: selftest PASSED!
20191105_155833: Starting trial factoring M95497499 from 2^75 to 2^76 (80.13 GHz-days)
20191105_155833: Exponent TF Level % Done ETA GHzD/D Itr Time | Class #, Seq # | #FCs | SieveRate | SieveP | Uptime
20191105_155846: 95497499 75 to 76 0.1% 1h43m 1116.52 6.459s | 0/4620, 1/960 | 42.81G | 6628.6M/s | 82485 | 0:02
20191105_155948: 95497499 75 to 76 1.0% 1h41m 1123.65 6.418s | 45/4620, 10/960 | 42.81G | 6670.9M/s | 82485 | 0:03
20191105_160048: 95497499 75 to 76 2.3% 1h40m 1116.34 6.460s | 100/4620, 22/960 | 42.81G | 6627.6M/s | 82485 | 0:04
20191105_160157: 95497499 75 to 76 3.3% 1h39m 1117.73 6.452s | 145/4620, 32/960 | 42.81G | 6635.8M/s | 82485 | 0:05
20191105_160257: 95497499 75 to 76 4.4% 1h38m 1118.59 6.447s | 196/4620, 42/960 | 42.81G | 6640.9M/s | 82485 | 0:06
20191105_160403: 95497499 75 to 76 5.4% 1h37m 1122.60 6.424s | 240/4620, 52/960 | 42.81G | 6664.7M/s | 82485 | 0:07
20191105_160506: 95497499 75 to 76 6.5% 1h36m 1118.59 6.447s | 292/4620, 62/960 | 42.81G | 6640.9M/s | 82485 | 0:08
20191105_160610: 95497499 75 to 76 7.5% 1h35m 1115.83 6.463s | 337/4620, 72/960 | 42.81G | 6624.5M/s | 82485 | 0:09
20191105_160715: 95497499 75 to 76 8.5% 1h34m 1118.25 6.449s | 381/4620, 82/960 | 42.81G | 6638.9M/s | 82485 | 0:10
20191105_160819: 95497499 75 to 76 9.6% 1h32m 1122.08 6.427s | 436/4620, 92/960 | 42.81G | 6661.6M/s | 82485 | 0:11
20191105_160923: 95497499 75 to 76 10.6% 1h31m 1121.55 6.430s | 484/4620, 102/960 | 42.81G | 6658.5M/s | 82485 | 0:13
20191105_161028: 95497499 75 to 76 11.7% 1h30m 1121.20 6.432s | 537/4620, 112/960 | 42.81G | 6656.4M/s | 82485 | 0:14
20191105_161132: 95497499 75 to 76 12.7% 1h30m 1110.33 6.495s | 577/4620, 122/960 | 42.81G | 6591.8M/s | 82485 | 0:15
20191105_161236: 95497499 75 to 76 13.8% 1h28m 1122.60 6.424s | 624/4620, 132/960 | 42.81G | 6664.7M/s | 82485 | 0:16
20191105_161341: 95497499 75 to 76 14.8% 1h27m 1122.42 6.425s | 664/4620, 142/960 | 42.81G | 6663.7M/s | 82485 | 0:17
20191105_161445: 95497499 75 to 76 15.8% 1h26m 1118.94 6.445s | 721/4620, 152/960 | 42.81G | 6643.0M/s | 82485 | 0:18
20191105_161549: 95497499 75 to 76 16.9% 1h25m 1121.73 6.429s | 772/4620, 162/960 | 42.81G | 6659.5M/s | 82485 | 0:19
[/CODE]If I want to run this on another machine, do I have to create a different key instance? I am thinking about my older HP. It has a really slow GPU and I generally avoid this type of work on it. [I]I apologize for all these questions. I've tread into an area I have no experience with. :smile:[/I] |
[QUOTE=storm5510;529725]I managed to muddle my way through it. It's running.
...I've tread into an area I have no experience with. :smile:[/QUOTE]I likewise. Perhaps there is another introductory thread that I missed where this is all explained? Some FAQ:[LIST][*]do I need to leave the browser window open, or does it run in the background?[*]it seemed to fetch 3 assignments, does it quit after that and I need to restart it, or does it keep fetching more work?[*]I see manual results on gpu72.com that need to be manually submitted (at least until Chris gets around to automating it)[*]can I run more than one instance (per Google account)?[/LIST] |
[QUOTE=James Heinrich;529731]I likewise. Perhaps there is another introductory thread that I missed where this is all explained?[/QUOTE]
Thanks for the "ping" guys. This is a "hoot", but things have been moving so quickly that there isn't yet a FAQ. Thank you for your questions below: [QUOTE=James Heinrich;529731]Some FAQ:[LIST][*]do I need to leave the browser window open, or does it run in the background?[*]it seemed to fetch 3 assignments, does it quit after that and I need to restart it, or does it keep fetching more work?[*]I see manual results on gpu72.com that need to be manually submitted (at least until Chris gets around to automating it)[*]can I run more than one instance (per Google account)?[/LIST][/QUOTE]
1. Any "interactive" Notebook sessions are shut down shortly after the browser is closed.
1.1. Kaggle lets you "commit" a Notebook, wherein every Section runs, in order, until the last executable cell exits.
1.2. TL;DR: Leave your browser open if possible.

2. The GPU72_TF Notebook fetches three (3) TF assignments initially and then gets to work.
2.1. Assignments are first "reissued" from previous Notebook runs which have been "killed" (RIP), and then new assignments as specified by the AKey's work preference.
2.2. Once an assignment is completed, it is reported back to GPU72, and another assignment is fetched.

3. Yeah... Sorry. I subscribe strongly to "Never send a human to do a machine's job". But often achieving that ideal involves a human. In this case, it involves my time... :wink:
3.1. I have mapped in my head a solution space for this (read: automatically submitting the Instance(s)' results back to Primenet), but things have been a little hectic in the last few weeks.
3.1.1. Still on one of my whiteboards, as well as in my pen-and-paper workbook.

4. Nominally ill-advised, although there could be some workflows where this would make sense (constrained human resources, for example).
4.1. Empirical experimentation suggests that each Colab account gets ~12 to 16 hours of GPU compute per day.
4.2. Kaggle is constrained to ~30 hours of P100 GPU per week per account. If you're creative, you can actually get ~38.99 hours...
4.3. Interestingly, different Google Accounts seem to be thusly individually temporally constrained, even when running within the same browser context (and thus OS fingerprint, IP address, and even MAC address). |
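The fetch/report cycle in point 2 above can be sketched as a small loop. This is a hypothetical illustration, not GPU72's actual code: the `fetch`, `run`, and `report` callables stand in for API calls that are not named in this thread.

```python
# Hypothetical sketch of the Notebook's work loop: start with a buffer of
# three assignments, report each completed one back, then fetch a replacement.
import itertools
from collections import deque

def worker_loop(fetch, run, report, initial=3, max_jobs=5):
    queue = deque(fetch(initial))   # fetch three (3) TF assignments up front
    done = 0
    while queue and done < max_jobs:
        job = queue.popleft()
        report(run(job))            # completed work is reported back
        done += 1
        queue.extend(fetch(1))      # ...and another assignment is fetched
    return done

# Demo with stand-in callables:
supply = iter(range(100))
results = []
n = worker_loop(lambda k: list(itertools.islice(supply, k)),
                run=lambda job: job, report=results.append, max_jobs=4)
print(n, results)  # 4 [0, 1, 2, 3]
```

Keeping a small buffer means the GPU never sits idle waiting on the server between assignments.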
Thanks, that helps.
I also discovered I don't need to copy-paste code per [i]Uncwilly[/i]'s post, I just need to click the magic Colaboratory link on gpu72.com after creating a NAK and copy-paste in the Access Key. I always have two browsers open with my home and work Google accounts signed in, so I fired up a second instance on my other browser and it seems to run fine (except my first attempt got me a "Tesla P100-PCIE-16GB" (1140 GHd/d) and the second a notably slower "Tesla K80" (390 GHd/d), luck of the draw I guess). |
Looking at your charts on [url]www.mersenne.ca[/url] for GPU-TF vs. GPU-LL performance, it seems these Tesla P100 and K80 cards are relatively much better at LL than TF. I assume LL includes P-1.
For this reason, I would prefer to use these GPUs (especially the K80) for P-1 rather than TF. Have people had much luck running CUDA-P1 in CoLab or Kaggle? |
[QUOTE=petrw1;529740]For this reason, I would prefer to use these GPUs (especially the K80) for P-1 rather than TF.[/QUOTE]
As the "owner" of the resources, you're free to do whatever you want with them. Please know, though, that Primenet is not currently lacking in either LL'ing or P-1'ing resources.

[QUOTE=petrw1;529740]Have people had much luck running CUDA-P1 in CoLab or Kaggle?[/QUOTE]
My understanding is that both CUDA P-1 and LL code have been successfully built and run on both Colab and Kaggle. I also (possibly correctly; possibly not) understand that the OpenCL LL implementation is actually more efficient than the native CUDA one. Outside of my experience space to understand why.

To say again what I've said before... The GPU72_TF experiment was a "proof-of-concept". Just seeing if what we thought might be possible actually was. Once that knowledge was established, other things can then be done... |
[QUOTE=chalsall;529742]Please know, though, that Primenet is not currently lacking in either LL'ing nor P-1'ing resources.
[/QUOTE] I'd have to agree. Thx |
I got up early this morning and found my colaboratory instance had stopped. Looking at the details, I saw "spider" so I figured someone had been working on it during the wee hours of the morning. The spider appeared to be functioning properly the last time I checked.
I am still running 2[SUP]74[/SUP] locally. It is getting close to 98-million. I am wondering what happens when the 99's are complete. I changed the "High" value in the [I]GPU72config[/I] file to 110,000,000. However, I do not know if the allocation from [I]PrimeNet[/I] goes that far. If the allocation does not go that far, then I imagine there will be a wrap-around back to smaller exponents running to 2[SUP]75[/SUP]. That will be fine. At 2[SUP]76[/SUP], I will stop because my colab instance can run those quite a bit faster than my 1080. In the interim, something else may come down the road. :popcorn: |
[QUOTE=James Heinrich;529738]my first attempt got me a "Tesla P100-PCIE-16GB" (1140 GHd/d) and the second a notably slower "Tesla K80" (390 GHd/d), luck of the draw I guess.[/QUOTE]I lost my connection to the P100 and I've got a K80 on both accounts now. :sad:
What I noticed is that the K80 is a dual-GPU model and mfaktc is of course using only one GPU, so the throughput is half what is shown on my [url=https://www.mersenne.ca/mfaktc.php?filter=K80]mfaktc table[/url], which makes sense. |
[QUOTE=James Heinrich;529816]I lost my connection to the P100 and I've got a K80 on both accounts now. :sad:
What I noticed is that the K80 is a dual-GPU model and mfaktc is of course using only one GPU, so the throughput is half what is shown on my [url=https://www.mersenne.ca/mfaktc.php?filter=K80]mfaktc table[/url], which makes sense.[/QUOTE] Might be just coincidence, but I tried resetting and restarting the runtimes and after a try or two usually get a P100... |
[QUOTE=kracker;529823]Might be just coincidence, but I tried resetting and restarting the runtimes and after a try or two usually get a P100...[/QUOTE]Lucky you. I tried restarting 5 times each and get nothing but K80. :sad:
|
[QUOTE=James Heinrich;529816]I lost my connection to the P100 and I've got a K80 on both accounts now. :sad:
What I noticed is that the K80 is a dual-GPU model and mfaktc is of course using only one GPU, so the throughput is half what is shown on my [URL="https://www.mersenne.ca/mfaktc.php?filter=K80"]mfaktc table[/URL], which makes sense.[/QUOTE] Perhaps what's going on there is that you have two accounts running on a single public IP address. [I]Colab[/I] probably sees this as a double instance. Therefore, K80.

I switched browsers on my HP earlier today so both my desktops would be using Firefox. They keep each other synced. I also got a K80 on the HP. I ended up deleting my instance on [I]Colab[/I] and recreated it with the same code. P100 on the first try. [U]I only run one computer with it[/U]. So, you would probably have to drop one as well to get a P100 again. |
Is GPU72.com down for maintenance at present?
|
[QUOTE=bayanne;529902]Is GPU72.com down for maintenance at present?[/QUOTE]
It was at the time of this writing. I found my [I]Colab[/I] instance stopped again this morning. Paying more attention, the details read, "Connection timed out." So, I wrote a short batch file to "ping" an external IP address every 60 seconds. Perhaps this will keep it awake. |
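For anyone wanting the same keep-alive trick without a batch file, the idea is just a timed loop. A sketch in Python (the host, interval, and the injectable `pinger` callable are illustrative assumptions; storm5510's version was a Windows batch file using the `ping` command):

```python
# Hypothetical keep-alive sketch: ping a reachable host once a minute,
# analogous to the batch file described above. The `pinger` parameter is
# injectable purely so the loop can be exercised without a network.
import subprocess
import time

def keep_alive(host="8.8.8.8", interval=60, count=None, pinger=None):
    """Ping `host` every `interval` seconds, `count` times (None = forever)."""
    pinger = pinger or (lambda: subprocess.run(
        ["ping", "-c", "1", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL))
    sent = 0
    while count is None or sent < count:
        pinger()
        sent += 1
        if count is None or sent < count:
            time.sleep(interval)
    return sent

# Usage (would ping once a minute until interrupted):
# keep_alive(count=None)
```

Note this keeps the local machine and network link busy; whether it actually prevents Colab from reclaiming an idle session is a separate question, as the posts below suggest.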
[QUOTE=bayanne;529902]Is GPU72.com down for maintenance at present?[/QUOTE]
Not ***scheduled*** maintenance... Another bad SeaCRAP HD... GPU72 is back up for the moment. I need to schedule a swap-out with 1and1. There will be at least one additional offline period, hopefully only a half-hour or so. I will try to give as much notice as I can (1and1 only give about a four-hour window of possibility). Going to be an amusing day... |
[QUOTE=chalsall;529908]Going to be an amusing day...[/QUOTE]
OK... Quick update... Because I've had so much unreliability with this machine's HDs over the years, I've decided to spin up a new dedicated server, and transition over. This is instead of swapping out (yet another) SeaCRAP drive, and having the RAID1 rebuild. TL;DR: GPU72 should remain stable; its IPs will transition over the next few days. I'll let everyone know when that's (temporally) programmed for. |
I'm thinking I might want to document what happens when bringing a "virgin public-facing server" online. Most people have no idea what it's like.
I haven't even spun up the server which answers port 80 requests yet, and this is what I'm currently seeing: [CODE][root@72116a7 ~]# tcpdump -nl port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
21:22:17.852926 IP 185.53.88.39.62644 > 74.208.169.89.http: Flags [S], seq 2761647998, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:20.846592 IP 185.53.88.39.62644 > 74.208.169.89.http: Flags [S], seq 2761647998, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:21.588242 IP 13.57.244.232.53726 > 74.208.27.191.http: Flags [S], seq 1435274884, win 29200, options [mss 1460,sackOK,TS val 1397441 ecr 0,nop,wscale 9], length 0
21:22:21.766143 IP 193.200.164.135.35414 > 74.208.169.89.http: Flags [S], seq 2070858583, win 32120, options [mss 1460,sackOK,TS val 12283524 ecr 1543503872,nop,wscale 0], length 0
21:22:21.839465 IP 13.57.244.232.53728 > 74.208.27.191.http: Flags [S], seq 3925785016, win 29200, options [mss 1460,sackOK,TS val 1397504 ecr 0,nop,wscale 9], length 0
21:22:22.007267 IP 13.52.104.84.50254 > 74.208.27.191.http: Flags [S], seq 1030701951, win 29200, options [mss 1460,sackOK,TS val 324447040 ecr 0,nop,wscale 9], length 0
21:22:22.238599 IP 193.200.164.135.35901 > 50.21.176.101.http: Flags [S], seq 347881487, win 32120, options [mss 1460,sackOK,TS val 13263582 ecr 218103808,nop,wscale 0], length 0
21:22:22.876213 IP 185.53.88.39.65490 > 74.208.169.89.http: Flags [S], seq 652498964, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:25.885735 IP 185.53.88.39.65490 > 74.208.169.89.http: Flags [S], seq 652498964, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:26.294492 IP 51.38.189.70.61000 > 70.35.195.244.http: Flags [S], seq 1925924080, win 1024, length 0
21:22:26.382673 IP 193.200.164.135.idp-infotrieve > 50.21.179.44.http: Flags [S], seq 3123960651, win 32120, options [mss 1460,sackOK,TS val 9465531 ecr 1577058304,nop,wscale 0], length 0
21:22:27.902216 IP 185.53.88.39.51850 > 74.208.169.89.http: Flags [S], seq 303222851, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:28.523814 IP 51.38.189.70.61000 > 70.35.195.244.http: Flags [S], seq 1925924080, win 1024, length 0
21:22:29.120483 IP 193.200.164.135.63547 > 62.151.183.227.http: Flags [S], seq 959754598, win 32120, options [mss 1460,sackOK,TS val 5323616 ecr 973078528,nop,wscale 0], length 0
21:22:30.911449 IP 185.53.88.39.51850 > 74.208.169.89.http: Flags [S], seq 303222851, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:32.923773 IP 185.53.88.39.54571 > 74.208.169.89.http: Flags [S], seq 749253661, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:34.506304 IP 193.200.164.135.35414 > 74.208.169.89.http: Flags [S], seq 2070858583, win 32120, options [mss 1460,sackOK,TS val 12283524 ecr 1543503872,nop,wscale 0], length 0
21:22:35.918028 IP 185.53.88.39.54571 > 74.208.169.89.http: Flags [S], seq 749253661, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:37.947237 IP 185.53.88.39.57365 > 74.208.169.89.http: Flags [S], seq 3087588616, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:39.771201 IP 193.200.164.135.35901 > 50.21.176.101.http: Flags [S], seq 347881487, win 32120, options [mss 1460,sackOK,TS val 13263582 ecr 218103808,nop,wscale 0], length 0
21:22:40.940817 IP 185.53.88.39.57365 > 74.208.169.89.http: Flags [S], seq 3087588616, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:41.034219 IP 193.200.164.135.51870 > 216.250.114.228.http: Flags [S], seq 2348093212, win 32120, options [mss 1460,sackOK,TS val 8450569 ecr 1023410176,nop,wscale 0], length 0
21:22:42.969197 IP 185.53.88.39.60118 > 74.208.169.89.http: Flags [S], seq 1478602993, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:43.349271 IP 193.200.164.135.63547 > 62.151.183.227.http: Flags [S], seq 959754598, win 32120, options [mss 1460,sackOK,TS val 5323616 ecr 973078528,nop,wscale 0], length 0
21:22:43.887361 IP 193.200.164.135.idp-infotrieve > 50.21.179.44.http: Flags [S], seq 3123960651, win 32120, options [mss 1460,sackOK,TS val 9465531 ecr 1577058304,nop,wscale 0], length 0
21:22:45.978833 IP 185.53.88.39.60118 > 74.208.169.89.http: Flags [S], seq 1478602993, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:47.990786 IP 185.53.88.39.63029 > 74.208.169.89.http: Flags [S], seq 2981964487, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:48.843445 IP 193.200.164.135.12072 > 70.35.207.84.http: Flags [S], seq 3654138408, win 32120, options [mss 1460,sackOK,TS val 6646050 ecr 16777216,nop,wscale 0], length 0
21:22:50.026514 IP 193.200.164.135.52503 > 74.208.82.104.http: Flags [S], seq 3524245071, win 32120, options [mss 1460,sackOK,TS val 971533 ecr 1962934272,nop,wscale 0], length 0
21:22:50.483613 IP 13.52.104.84.52384 > 74.208.27.191.http: Flags [S], seq 811130140, win 29200, options [mss 1460,sackOK,TS val 324454159 ecr 0,nop,wscale 9], length 0
21:22:51.000529 IP 185.53.88.39.63029 > 74.208.169.89.http: Flags [S], seq 2981964487, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:51.511316 IP 13.52.104.84.52384 > 74.208.27.191.http: Flags [S], seq 811130140, win 29200, options [mss 1460,sackOK,TS val 324454416 ecr 0,nop,wscale 9], length 0
21:22:52.051318 IP 193.200.164.135.35414 > 74.208.169.89.http: Flags [S], seq 2070858583, win 32120, options [mss 1460,sackOK,TS val 12283524 ecr 1543503872,nop,wscale 0], length 0
21:22:52.136877 IP 193.200.164.135.51870 > 216.250.114.228.http: Flags [S], seq 2348093212, win 32120, options [mss 1460,sackOK,TS val 8450569 ecr 1023410176,nop,wscale 0], length 0
21:22:52.306202 IP 193.200.164.135.35901 > 50.21.176.101.http: Flags [S], seq 347881487, win 32120, options [mss 1460,sackOK,TS val 13263582 ecr 218103808,nop,wscale 0], length 0
21:22:53.014729 IP 185.53.88.39.49467 > 74.208.169.89.http: Flags [S], seq 2751108481, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:53.527195 IP 13.52.104.84.52384 > 74.208.27.191.http: Flags [S], seq 811130140, win 29200, options [mss 1460,sackOK,TS val 324454920 ecr 0,nop,wscale 9], length 0
21:22:53.935789 IP 193.200.164.135.35901 > 50.21.176.101.http: Flags [S], seq 347881487, win 32120, options [mss 1460,sackOK,TS val 13263582 ecr 218103808,nop,wscale 0], length 0
21:22:55.631452 IP 13.57.244.232.53728 > 74.208.27.191.http: Flags [S], seq 3925785016, win 29200, options [mss 1460,sackOK,TS val 1405952 ecr 0,nop,wscale 9], length 0
21:22:55.636659 IP 13.57.244.232.53726 > 74.208.27.191.http: Flags [S], seq 1435274884, win 29200, options [mss 1460,sackOK,TS val 1405953 ecr 0,nop,wscale 9], length 0
21:22:56.023902 IP 185.53.88.39.49467 > 74.208.169.89.http: Flags [S], seq 2751108481, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:57.193133 IP 66.249.88.59.62591 > 216.250.114.228.http: Flags [S], seq 3033941043, win 62920, options [mss 1430,sackOK,TS val 2239715846 ecr 0,nop,wscale 8], length 0
21:22:57.591107 IP 13.52.104.84.52384 > 74.208.27.191.http: Flags [S], seq 811130140, win 29200, options [mss 1460,sackOK,TS val 324455936 ecr 0,nop,wscale 9], length 0
21:22:57.904296 IP 193.200.164.135.idp-infotrieve > 50.21.179.44.http: Flags [S], seq 3123960651, win 32120, options [mss 1460,sackOK,TS val 9465531 ecr 1577058304,nop,wscale 0], length 0
21:22:58.042139 IP 185.53.88.39.52241 > 74.208.169.89.http: Flags [S], seq 3834427809, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:22:58.125704 IP 66.249.88.58.61608 > 216.250.114.228.http: Flags [S], seq 3204750274, win 62920, options [mss 1430,sackOK,TS val 198912902 ecr 0,nop,wscale 8], length 0
21:22:58.193495 IP 66.249.88.59.62591 > 216.250.114.228.http: Flags [S], seq 3033941043, win 62920, options [mss 1430,sackOK,TS val 2239716846 ecr 0,nop,wscale 8], length 0
21:22:59.112125 IP 193.200.164.135.idp-infotrieve > 50.21.179.44.http: Flags [S], seq 3123960651, win 32120, options [mss 1460,sackOK,TS val 9465531 ecr 1577058304,nop,wscale 0], length 0
21:22:59.124828 IP 66.249.88.58.61608 > 216.250.114.228.http: Flags [S], seq 3204750274, win 62920, options [mss 1430,sackOK,TS val 198913902 ecr 0,nop,wscale 8], length 0
21:23:00.193657 IP 66.249.88.59.62591 > 216.250.114.228.http: Flags [S], seq 3033941043, win 62920, options [mss 1430,sackOK,TS val 2239718846 ecr 0,nop,wscale 8], length 0
21:23:00.878451 IP 193.200.164.135.63547 > 62.151.183.227.http: Flags [S], seq 959754598, win 32120, options [mss 1460,sackOK,TS val 5323616 ecr 973078528,nop,wscale 0], length 0
21:23:01.050719 IP 185.53.88.39.52241 > 74.208.169.89.http: Flags [S], seq 3834427809, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
21:23:01.126693 IP 66.249.88.58.61608 > 216.250.114.228.http: Flags [S], seq 3204750274, win 62920, options [mss 1430,sackOK,TS val 198915903 ecr 0,nop,wscale 8], length 0[/CODE] I won't bore you with what's been asked on port 22... :wink: |
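A quick way to see who is doing the knocking in a capture like the one above is to tally the source IPs. A small sketch (parsing assumption: default `tcpdump -n` line format, `HH:MM:SS.usec IP src.port > dst.port: ...`):

```python
# Tally probing source IPs from tcpdump text such as the capture above.
import re
from collections import Counter

def probe_sources(tcpdump_text):
    """Count SYN-probe sources by the IPv4 address before the source port."""
    srcs = Counter()
    for m in re.finditer(r"IP (\d+\.\d+\.\d+\.\d+)\.\S+ >", tcpdump_text):
        srcs[m.group(1)] += 1
    return srcs

sample = """21:22:17.852926 IP 185.53.88.39.62644 > 74.208.169.89.http: Flags [S], length 0
21:22:20.846592 IP 185.53.88.39.62644 > 74.208.169.89.http: Flags [S], length 0
21:22:21.766143 IP 193.200.164.135.35414 > 74.208.169.89.http: Flags [S], length 0"""
print(probe_sources(sample).most_common())
# [('185.53.88.39', 2), ('193.200.164.135', 1)]
```

In the full capture above, a handful of addresses account for nearly all the probes, which is typical of this kind of background scanning.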
[QUOTE=chalsall;529978]...I won't bore you with what's been asked on port 22... :wink:[/QUOTE]
I do not understand much of that, nor do I want any explanation. I will simply add that I am pleased others do, and leave it at that.

[U]A theory[/U]: Some of us get P100's and others get K80's in the Notebook. This may be nothing more than time sharing, equal allotment of resources, whatever.

I am in error about my web page timing out. It happened again. It was a device disconnect in the Notebook and not a web timeout. An option to restart resulted in a K80 GPU where it had been a P100 before. I shut all of it down and logged out of Google. Logging back in later reconnected it to a P100. All this was on my HP.

I created a second Notebook and started it on my primary i7 desktop. It decided on a P100. My HP has been running a P100 instance as I wrote above. I suspect both will be disconnected by morning. If this ends up being the case, then perhaps my theory is at least partially correct. |
Yes, as far as I can tell you randomly get a P100 or a K80, whatever happens to be available. I get a K80 about 80% of the time. The disconnection also appears normal, click reconnect and hit the play button and it should resume.
|
[QUOTE=James Heinrich;529996]Yes, as far as I can tell you randomly get a P100 or a K80, whatever happens to be available. I get a K80 about 80% of the time. The disconnection also appears normal, click reconnect and hit the play button and it should resume.[/QUOTE]
I agree that it is random, and perhaps also based on the amount of time a person has used a particular processor. I realized late last night that it was the page itself I needed to keep active. After some searching, I found a Firefox extension called [I]Session Alive[/I] which does exactly that. It asks for two things: the URL of the web page and the number of minutes to keep it going. Once supplied, a person simply starts it. Other browsers most likely have something similar which can be attached.

I was sitting here early this morning when my HP disconnected at their end. The page was still alive, so all I had to do was clear everything and restart it from the round icon on the left, near the top. I [U]did[/U] [U]not[/U] use "reconnect" in the small dialog that appeared. It went back to the P100 which it had been running. I made a note of the time it stopped so I can get an interval of when this may happen.

============================

[U]Edit/Update[/U]: Both my instances just shut down. Attempting to restart results in a message, "Failed to assign a backend." I'll hazard a guess and say someone is working on something somewhere. |
[QUOTE=storm5510;529993][U]A theory[/U]: Some of us get P100's and others get K80's in the Notebook. This may be nothing more than time sharing, equal allotment of resources, whatever.[/QUOTE]
Interesting... The P100s on Colab are a new offering, if I'm not mistaken. For the last ~8 weeks, I've been getting only K80s. I didn't see a coveted T4 after a couple of days of running scripted mfaktc. The last day or so I've been getting a P100 ~50% of the time. Cool! These are the same as what Kaggle offer, constrained to ~38.9 hours a week... |
No Backends Available....
About one-third of the time.
|
[QUOTE=chalsall;530030]Interesting... The P100s on Colab are a new offering, if I'm not mistaken.
For the last ~8 weeks, I've been getting only K80s. I didn't see a coveted T4 after a couple of days of running scripted mfaktc. The last day or so I've been getting a P100 ~50% of the time. Cool! These are the same as what Kaggle offer, constrained to ~38.9 hours a week...[/QUOTE] I think [B]James Heinrich[/B] was correct in his K80 comment. The output says 393 GHz-d/Day. The progress seems to go much faster. James wrote that these are dual-core GPU's. My HP has been running a K80 since noon at lower TF levels, end bits of 2[SUP]74[/SUP] and 2[SUP]75[/SUP]. My i7 is still running a P100, and also running lower bit levels. Everything previous has been 2[SUP]76[/SUP].

I had seen T4 in the options. I did not know what it was until I did a search. An animation on Nvidia's web site shows a comparison to other models. It really goes!

If Google had written, "Failed to assign a backend" as "No GPU available," it would have been far better. I would have simply stopped and waited several hours instead of trying to figure out what was wrong on my end. |
[QUOTE=storm5510;530063]If Google had written, "Failed to assign a backend" as "No GPU available," it would have been far better. I would have simply stopped and waited several hours instead of trying to figure out what was wrong on my end.[/QUOTE]
If you follow the link in the pop-up it tells you what the deal is. GPU's are available to those who are using them interactively. |
[QUOTE=Uncwilly;530065]If you follow the link in the pop-up it tells you what the deal is. GPU's are available to those who are using them interactively.[/QUOTE]
I have probably seen this but did not pay any attention to it. Live and learn. :smile: |
I was looking at my manual assignments on [I]GPU72[/I] a short time ago. I found four which were two days old. They apparently never made it into [I]mfaktc's[/I] worktodo file. This happens occasionally. I manually added them.
|
For the [url=https://www.gpu72.com/account/instances/results/]result lines[/url] that we're currently copy-pasting to manual_results, is it possible to prepend the user/computer identifier string, using the configured InstanceName, e.g:[quote][COLOR="Red"]UID: JamesHeinrich/ColabA, [/COLOR]no factor for M99330661 from 2^74 to 2^75 [mfaktc 0.21 barrett76_mul32_gs][/quote]I assume GPU72 knows our PrimeNet username somewhere, if not that may need to be configurable from somewhere.
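For what it's worth, the requested transformation is trivial to sketch. Something like this (illustrative only, obviously not GPU72's actual code, and the user/instance names are just the example from above):

```python
# Illustrative sketch: prepend "UID: user/instance, " to an mfaktc result
# line, leaving lines that already carry a UID tag untouched.
def tag_result_line(line, user, instance):
    if line.startswith("UID:"):
        return line  # already tagged; don't double-prefix
    return "UID: {}/{}, {}".format(user, instance, line)

raw = "no factor for M99330661 from 2^74 to 2^75 [mfaktc 0.21 barrett76_mul32_gs]"
print(tag_result_line(raw, "JamesHeinrich", "ColabA"))
# UID: JamesHeinrich/ColabA, no factor for M99330661 from 2^74 to 2^75 [mfaktc 0.21 barrett76_mul32_gs]
```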
|
[QUOTE=James Heinrich;530147]I assume GPU72 knows our PrimeNet username somewhere, if not that may need to be configurable from somewhere.[/QUOTE]
Good idea! Thanks. GPU72 doesn't currently know /everyone's/ PNUN -- only those currently using the Proxy. Being able to set this by way of the Web UI is planned for the near future. This will actually be required for the automatic submission of Notebook results. Importantly, GPU72 does not currently know, nor will it ever need to know, the worker's PN PW.

Quick update: rsyncing of all the data from the old server to the new is almost complete. The next step will be to first have the old server use the MariaDB on the new server, and then I'll update the DNS records (which can take up to 24 hours to propagate fully). Currently programmed for tomorrow around 1400 UTC. |
[QUOTE=James Heinrich;530147]For the [URL="https://www.gpu72.com/account/instances/results/"]result lines[/URL] that we're currently copy-pasting to manual_results, is it possible to prepend the user/computer identifier string, using the configured InstanceName, e.g:I assume GPU72 knows our PrimeNet username somewhere, if not that may need to be configurable from somewhere.[/QUOTE]
I have used this in [I]mfaktc[/I] before. It seems that, if [I]PrimeNet[/I] didn't assign it, then it ignores the user info. That, and the lack of an assignment ID. |
[QUOTE=James Heinrich;530147]For the [URL="https://www.gpu72.com/account/instances/results/"]result lines[/URL] that we're currently copy-pasting to manual_results, is it possible to prepend the user/computer identifier string, using the configured InstanceName, e.g:I assume GPU72 knows our PrimeNet username somewhere, if not that may need to be configurable from somewhere.[/QUOTE]
This is configurable in mfaktX ini file. By the way, James, any issue with the "income" in the last two days for your site? :razz: |
[QUOTE=LaurV;530166]This is configurable in mfaktX ini file.[/QUOTE]For standalone use, of course. But I'm talking about the lines from Colaboratory as reported here:
[url]https://www.gpu72.com/account/instances/results/[/url] [QUOTE=LaurV;530166]By the way, James, any issue with the "income" in the last two days for your site? :razz:[/QUOTE]Income? Not sure what you mean? |
The lines for Colab are reported by the copy of mfaktc that Chris' script uploads into Colab. Chris can change that file and upload the proper one, or you can edit by yourself in the colab folder and add the user/computer lines.
And by "income" I mean me going through your 3G exponents to 69-70 bitlevel, 120 thousand exponents per day for the last two days. The "connection" between the two things is that somewhere in the middle of that process I realized (reading more through your site) that I could also get credit for the roughly 3000 factors, and just added the user/computer to the factor lines, as described above. But you can see from the IP address that it was me before, too. |
[QUOTE=LaurV;530171]or you can edit by yourself in the colab folder and add the user/computer lines.[/quote]I would if I knew how/where to do that, but I don't. Can you point me in the right direction?
[QUOTE=LaurV;530171]And by "income" I mean me going through your 3G exponents to 69-70 bitlevel, 120 thousand exponents per day for the last two days. But you can see from the IP address that it was me before, too.[/quote]No, I hadn't noticed a particular spike. That would be about 5000 GHd/d, or roughly 20% of the project throughput, but the numbers vary a fair bit from day to day (due to the relatively low number of participants, and therefore less smoothing). I do see a minor spike on 08-Nov, but nothing like the massive 250000+ spikes [i]ramgeis[/i] has been known to contribute once a month or so (he missed October, but overall throughput is up, so maybe he's reporting results daily instead of monthly now). But any contributions are welcome. We're on the verge of having everything below 4G cleared to 2[sup]68[/sup]. :smile:

BTW: if you missed getting credit for factors, just email me a list of the factors and your user/comp name and I'll make sure the credit gets attached; it's simple enough. |
2 Attachment(s)
[QUOTE=James Heinrich;530173]
BTW: if you missed getting credit for factors, just email me a list of the factors and your user/comp name and I'll make sure the credit gets attached, it's simple enough.[/QUOTE] I have no idea if I missed any credit, because I don't know where to look. In fact, as explained above, I had no idea that the work is credited in any way before reading your page the night before last, when I changed the ini files to report the user/computer too. Some work was done before that also, from the same computer, and moving about 280000 exponents from 68 to 69/70/72 bits can be clearly seen in the visualization tool.

Regarding how to change the ini files for Colab: you can click on "files" and then click the "up dir" (the dot-dot entry); then you can find the files in the home folder. Assuming you run your own copy (and didn't directly launch the one from Chris' folder), you can edit the ini file [ATTACH]21282[/ATTACH]. [ATTACH]21283[/ATTACH]

However, according to my (long-ago) knowledge, Primenet won't cope well with lines having userid/computer in front. This knowledge of mine may be outdated... We don't use user/comp when we report with MISFIT.

Edit: when you edit the file, you don't necessarily need to change the user name. I won't feel sorry if I receive some credit for the work done by you. That is why I left it in the clear. :razz: |
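Since hand-editing in the Colab file browser is fiddly, the same change can also be scripted from a code cell. A rough sketch (the option names V5UserID/ComputerID are what I recall from mfaktc's stock ini file, so check your own copy; in Colab you would read and write mfaktc.ini rather than the string used here):

```python
# Sketch: set (or append) an option in an ini-style file such as mfaktc.ini.
# Option names V5UserID/ComputerID assumed from mfaktc's stock ini file.
import re

def set_ini_option(text, key, value):
    """Replace 'key=...' in ini-style text, or append it if absent."""
    pattern = re.compile(r"^%s=.*$" % re.escape(key), re.MULTILINE)
    new_line = "%s=%s" % (key, value)
    if pattern.search(text):
        return pattern.sub(new_line, text)  # replace the existing setting
    return text.rstrip("\n") + "\n" + new_line + "\n"  # or append it

# Example on an ini fragment; in Colab, read/write the real mfaktc.ini instead.
ini = "SievePrimes=25000\nV5UserID=\n"
ini = set_ini_option(ini, "V5UserID", "LaurV")
ini = set_ini_option(ini, "ComputerID", "Colab1")
print(ini)
```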
[QUOTE=LaurV;530189]I have no idea if I missed any credit, because I don't know where to look.[/quote]Just send me a list of all the factors you've ever found above 1G and I'll assign credit where missing. Or PM me the IP you would've submitted from and I can probably pull up the list from the logs.
[QUOTE=LaurV;530189]However, according to my (long-ago) knowledge, Primenet won't cope well with lines having userid/computer in front. This knowledge of mine may be outdated...[/QUOTE]Your knowledge is outdated: PrimeNet has no problem with UID lines. [QUOTE=LaurV;530189]Regarding how to change the ini files for Colab: you can click on "files" and then click the "up dir" (the dot-dot entry); then you can find the files in the home folder. Assuming you run your own copy (and didn't directly launch the one from Chris' folder), you can edit the ini file[/QUOTE]I'm not smart enough to run my own copy; I just click Chris' link from GPU72 and click the (>) play button. The /home/ directory is empty, and I can't find anything to edit. I still think it would be trivial for Chris to prepend the UID segment to the result lines displayed if missing from the actual result lines, hence my request. |
1 Attachment(s)
Ok, you are right about my knowledge being outdated.
I just sent this to PrimeNet and it didn't have any problem digesting it. [ATTACH]21286[/ATTACH]

About that credit, there is nothing important; you can forget it. The issue is that the tasks run from ramdisk (you can imagine that at over 200k assignments per day, I won't use the hdd/ssd for it!) and once sent (with curl, and confirmed by your page) the reports are deleted. That is 100 kilobytes every 7 minutes!! But everything I reported recently had the user ID and computer name, and everything I reported before came from the same IP, so I thought you could match them. Anyhow, it is not important; I was just wondering if the traffic causes any issues, and it seems not. That is good; I will let it run for a few more days and move a good chunk of those expos to 70 bits.

I just found that 70 bits is the sweet spot for this activity. Higher bit levels need longer per exponent, and therefore yield fewer factors, which I do not like; lower levels lose too much time to overhead (operating with files and writing to the monitor). In fact, 68 and 69 bits take exactly the same amount of time, which is about 70% of what 70 bits takes, and 71 bits takes double, as is normal. So for the lower bit levels the timing is set by the overhead, not by the effective TF work.

The question of where I could look to see what, if anything, I am or was credited for still stands. |
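The timing pattern described above is consistent with a simple model in which per-exponent time is the larger of a fixed per-assignment overhead and the pure TF work, which doubles with each bit level. A toy sketch with made-up numbers (illustrative only, not measurements):

```python
# Toy model of per-exponent TF time: the larger of a fixed per-assignment
# overhead (files, console output) and the pure TF work, which doubles with
# each bit level. Both constants below are assumed for illustration.
OVERHEAD = 70.0      # seconds of overhead per assignment (assumed)
WORK_AT_70 = 100.0   # seconds of pure TF work for the 69->70 level (assumed)

def seconds_per_exponent(bits):
    work = WORK_AT_70 * 2.0 ** (bits - 70)
    return max(OVERHEAD, work)

for b in (68, 69, 70, 71):
    print(b, seconds_per_exponent(b))
# With these constants, 68 and 69 bits take the same (overhead-bound) time,
# which is 70% of the 70-bit time, and 71 bits takes double the 70-bit time.
```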
[QUOTE=James Heinrich;530193]I still think it would be trivial for Chris to prepend the UID segment to the result lines displayed if missing from the actual result lines, hence my request.[/QUOTE]
Indeed trivial. Probably activated (for those whose PN UN is already known) later today.

SO EVERYONE KNOWS: GPU72 will be going into maintenance mode in about seven (7) minutes. This should only take about twenty (20) minutes or so, but I have to get every step correct... |
Is Ryan AWOL?
He'd be missed.
Or working on something big requiring patience? |
[QUOTE=chalsall;530201]...SO EVERYONE KNOWS: GPU72 will be going into maintenance mode in about seven (7) minutes. This should only take about twenty (20) minutes or so, but I have to get every step correct...[/QUOTE]
Thanks for the info. :smile: [QUOTE=James Heinrich]Your knowledge is outdated, [I]PrimeNet[/I] has no problem with UID lines.[/QUOTE] Then why is everything I run with [I]mfaktc[/I], user ID included, being lumped into "Manual Testing?" It has always been this way. This is not a complaint. Just a curiosity... |
[quote]The server is temporarily offline for maintaince.
Please try again in about an hour.[/quote] You know, this is quite refreshing. It is much better than most sysadmins who don't care if people get a time wasting delay then a 404 error. Kudos! |
[QUOTE=PhilF;530214]You know, this is quite refreshing. It is much better than most sysadmins who don't care if people get a time wasting delay then a 404 error.[/QUOTE]
Thanks. That is very much appreciated. One of the things about this kind of work is most people have no idea what goes on behind the scenes. We're only really noticed when things /aren't/ working, which should be rarely... :wink: The DB transfer is going slower than I had hoped. Because this is a new FS, I can't just rsync over the database files "in situ". I need to export - scp - import. The latter is happening now... |
[QUOTE=LaurV;530198]The question of where I could look to see what, if anything, I am or was credited for still stands.[/QUOTE]I found 39 factors without user identification between 3321M-3738M from your IP (plus another 975 with UID). I have added your name to those factors. There is (currently) no way for you to get a list of factors you've found.
|
[QUOTE=chalsall;530215]The latter is happening now...[/QUOTE]
OK, she's back. Appears "happy" (or, at least, content). Just so everyone knows, what is happening is the old server is using the new server's Maria DB (by way of a secure TCP connection). The next step is for the DNS to be updated and propagated, but you shouldn't notice anything during that process (unless you start "digging" (read: "A" and "AAAA" record lookups) on the domains). As always, new oddities should please be pointed out as noticed. Thanks. |
Thanks Oliver...
That helps, a lot!!! :smile:
|
[QUOTE=petrw1;530202]He'd be missed.
Or working on something big requiring patience?[/QUOTE] This may answer your question: [url]https://mersenneforum.org/showthread.php?p=530126#post530126[/url] |
[QUOTE=James Heinrich;530216]I found 39 factors without user identification between 3321M-3738M from your IP (plus another 975 with UID). I have added your name to those factors. There is (currently) no way for you to get a list of factors you've found.[/QUOTE]
Well, then my suspicion is that something is wrong there. There should have been about three times that amount. Are you sure that you are recording all the submissions? That would also explain why there is no "spike" in the graph: in the first days there was nobody else moving numbers in that range, and there should be about 3000 factors.

I was not sure, therefore after our former discussion I started logging/recording the factors. Since my last post, I have about 5600 factors and moved about 400k exponents (i.e. about 200k, two bit levels) with the same hardware and setup as before. Do your records match? (The "no factor" results I reported, but didn't keep a record of; should I? The table seems not to have moved, but I assume you rebuild it once or twice per day, and that is when I sleep, i.e. in a few minutes... 1:00 AM here...) |
[QUOTE=LaurV;530308]Since my last post, I have about 5600 factors...
I assume you rebuild it once or twice per day, and that is when I sleep[/QUOTE]The number of factors seems approximately right, checking for new factors in 3320M-3739M since 2019-11-10 00:00:00 I get:[code]mysql> SELECT "371x" AS `mrange`, COUNT(*) AS `howmany`, DATE(`timestamp_found`) AS `date` FROM `known_factors_371` WHERE (`timestamp_found` > "2019-11-10") GROUP BY `date` ASC; +--------+---------+------------+ | mrange | howmany | date | +--------+---------+------------+ | 371x | 432 | 2019-11-11 | | 372x | 1251 | 2019-11-10 | | 372x | 2318 | 2019-11-11 | | 373x | 1845 | 2019-11-10 | +--------+---------+------------+[/code]The chart numbers are updated as the results come in, but it takes some time to work through the large number of results so you won't see the chart numbers change immediately; the charts are also rebuilt every night approx 2am UTC. |
[QUOTE=James Heinrich;530147]For the [url=https://www.gpu72.com/account/instances/results/]result lines[/url] that we're currently copy-pasting to manual_results, is it possible to prepend the user/computer identifier string, using the configured InstanceName...[/QUOTE]
Hey. OK, finally had the ninety seconds it took to apply this delta. I copy-and-pasted the results from my own workers into Primenet's manual submission page, and the results were accepted. However, I don't see any indication on any of the views I can access of this knowledge being captured. [CODE]UID: MYPNUN/A3_SS, no factor for M99338087 from 2^76 to 2^77 [mfaktc 0.21 barrett87_mul32_gs] UID: MYPNUN/A3_BU_1, no factor for M99900599 from 2^76 to 2^77 [mfaktc 0.21 barrett87_mul32_gs] [/CODE] |
[QUOTE=chalsall;530483]Hey. OK, finally had the ninety seconds it took to apply this delta.
However, I don't see any indication on any of the views I can access of this knowledge being captured.[/QUOTE]It is being captured, the raw submission text is captured, and is used as the basis for generating the "pretty" public-facing values. Thanks for adding that. |
I'm having more and more times where my Colab instance times out quickly, or I can't connect to a GPU instance, and generally I'm not having much luck getting sustained throughput (but any throughput is more than I would otherwise be able to contribute, so can't really complain).
But I ran into something new today (on both instances):[code]Beginning GPU Trial Factoring Environment Bootstrapping... Please see https://www.gpu72.com/ for additional details. 20191119_150834: GPU72 TF V0.32 Bootstrap starting... 20191119_150834: Working as "10bcb2acdbb748af637902b48f4240e3"... 20191119_150834: Installing needed packages (1/3) 20191119_150843: Installing needed packages (2/3) 20191119_150853: Installing needed packages (3/3) 20191119_150925: Fetching initial work... 20191119_150926: Bootstrap finished. Exiting.[/code] |
I reported the same on the colab thread.
Chris said they made a change on their end and mfaktc is no longer working. [QUOTE=James Heinrich;530991]I'm having more and more times where my Colab instance times out quickly, or I can't connect to a GPU instance, and generally I'm not having much luck getting sustained throughput (but any throughput is more than I would otherwise be able to contribute, so can't really complain). But I ran into something new today (on both instances):[code]Beginning GPU Trial Factoring Environment Bootstrapping... Please see https://www.gpu72.com/ for additional details. 20191119_150834: GPU72 TF V0.32 Bootstrap starting... 20191119_150834: Working as "10bcb2acdbb748af637902b48f4240e3"... 20191119_150834: Installing needed packages (1/3) 20191119_150843: Installing needed packages (2/3) 20191119_150853: Installing needed packages (3/3) 20191119_150925: Fetching initial work... 20191119_150926: Bootstrap finished. Exiting.[/code][/QUOTE] |
[QUOTE=James Heinrich;530991]I'm having more and more times where my Colab instance times out quickly, or I can't connect to a GPU instance, and generally I'm not having much luck getting sustained throughput (but any throughput is more than I would otherwise be able to contribute, so can't really complain).[/QUOTE]
[QUOTE=petrw1;530993]I reported the same on the colab thread. Chris said they made a change on their end and mfaktc is no longer working.[/QUOTE] Please see this post for a workaround. [url]https://www.mersenneforum.org/showpost.php?p=530962&postcount=571[/url] |
[QUOTE=James Heinrich;530991]
20191119_150834: GPU72 TF V0.32 Bootstrap starting... 20191119_150834: Working as [SPOILER]0bcb2acdbb748af637902b48f4240e3[/SPOILER]... 20191119_150834: Installing needed packages (1/3) 20191119_150843: Installing needed packages (2/3) 20191119_150853: Installing needed packages (3/3) 20191119_150925: Fetching initial work... 20191119_150926: Bootstrap finished. Exiting.[/QUOTE] I received the same a few hours ago. My typical instance run time is around nine hours. I have two notebooks which I alternate between. Perhaps leaving each to rest a day helps my run time... |
But the fix that Uncwilly posted does work:[list=1][*]click "+ Code", copy-paste that line into the new box that appears:
[code]!apt-get install cuda-cudart-10-0[/code][*]click (►) to run the code, it takes about 10 seconds (and doesn't show a clear "done", just note when the circle stops spinning). [*]click the (►) on the main section and it will run as it did before.[/list] |
Also, no need to open any +code, every time you run something, there is a small rectangle appearing in your terminal window, that rectangle is an input box where you can write any OS commands.
|
[QUOTE=James Heinrich;531040]But the fix that Uncwilly posted does work:[LIST=1][*]click "+ Code", copy-paste that line into the new box that appears:
[code]!apt-get install cuda-cudart-10-0[/code][*]click (►) to run the code, it takes about 10 seconds (and doesn't show a clear "done", just note when the circle stops spinning).[*]click the (►) on the main section and it will run as it did before.[/LIST][/QUOTE] It works well. I take it that this is a [U]temporary[/U] fix? |
[QUOTE=storm5510;531075]It works well. I take it that this is a [U]temporary[/U] fix?[/QUOTE]
Either mfaktc is recompiled, or the code is added to the main notebook (you don't necessarily have to add a new code section). Even if it's not recompiled, why is it "temporary"?

FYI, there is a Run All option under Runtime, or Ctrl+F9. |
[QUOTE=kracker;531079]Either mfaktc is recompiled, or the code is added to the main notebook (you don't necessarily have to add a new code section). Even if it's not recompiled, why is it "temporary"?
FYI, there is a Run All option under runtime, or Ctrl F9.[/QUOTE] [I]cuda-cudart-10-0[/I] is a DLL (Dynamic Link Library). [I]mfaktc[/I] usually looks for these in its own folder, in a local environment. The output of [I]Colab[/I] looks like [I]mfaktc[/I], but I don't believe it is. It may be a rewrite in a different language based on its original code. It might be [I]Python[/I]. I have seen this mentioned here before. |
[QUOTE=storm5510;531090][I]cuda-cudart-10-0[/I] is a DLL, (Dynamic Link Library). [I]mfaktc[/I] usually looks for these in its own folder, in a local environment. The output of [I]Colab[/I] looks like [I]mfakc[/I], but I don't believe it is. It may be a rewrite in a different language based on its original code. It might be [I]Python[/I]. I have seen this mentioned here before.[/QUOTE]
(google) colab is running a linux/ubuntu OS running jupyter notebook "on top of" python; chalsall uses the main notebook only to download perl scripts, which basically do most of the work.

Maybe someone can explain this to me for linux, since I don't know much about it: is there an inherent reason the package manager can't be used for cuda libraries, other than the effort to install them? Also, I would assume they would come with the drivers... guessing not.

EDIT: DLL is almost always a windows term for a library. |
Hey Guys...
FINALLY back online! I had to replace my main MB, CPU, and RAM. Also lost two of four 2 TB HDs (yes, of course I have backups, but they're mostly "in the cloud", so take a while to download). I would have been back online last night, but BL&P decided to shed load just at the end of a CentOS 7.7 install (please forgive me for this, but FSCK ME!!!)... I've applied the delta; version 0.33 of the payload is now being downloaded, which includes the needed apt install. |
Was there a power surge or something?
Otherwise, a simple outage shouldn't cause damage to hardware. |
[QUOTE=ixfd64;531117]Was there a power surge or something? Otherwise, a simple outage shouldn't cause damage to hardware.[/QUOTE]
Yup... I don't know if a phase got out of sync or just spiked, but the fused power bar feeding my machine (downstream from a UPS) almost caught fire! I'm not joking. |
[QUOTE](please forgive me for this, but FSCK ME!!!)...[/QUOTE]
Say ten WTFs and 10 STFUs and your [STRIKE]sines will be converted to cosines.[/STRIKE] sins will be forgiven. :cmd: |
[QUOTE=chalsall;531121]Yup... I don't know if a phase got out of sync or just spiked, but the fused power bar feeding my machine (downstream from a UPS) almost caught fire!
I'm not joking.[/QUOTE] This raises a couple of questions:[LIST=1][*]How did the UPS fare? [*]Did the fuse in the power bar blow?[/LIST]It also gives "fused power bar" a whole other meaning... |
[QUOTE=Dr Sardonicus;531164]This raises a couple of questions:[/QUOTE]
1. Completely cooked. 2. The fuse blew, and it appears there was an arc across the fuse afterward, which caused a burnt area around the switch. For those who might be interested in wasting a bit of time, I used to hang out on a local Barbadian blog called "Barbados Underground". Yesterday I was asked to speak on the current issue by the Blogmaster. I [URL="https://barbadosunderground.net/2019/11/19/barbados-gone-dark-power-outage/comment-page-2/#comment-1267128"]rendered a brief position[/URL], but as usual there things quickly fell back into visceral partisan politics. Just to share, I really appreciate the Mersenne Forum being here for "weirdos" like me (and, perhaps, a few others). Arguments are healthy, so long as they are respectful, honest, and productive. :tu: |
(my emphasis)[QUOTE=chalsall;531172]Yesterday I was asked to speak on the [b]current[/b] issue by the Blogmaster.[/QUOTE]
:tu: Wow. UPS completely cooked, AND (possibly) an arc across a blown fuse downstream. Sounds like a massive power surge. Hmm. (Reads blog post supplied. Goes to statement from Roger Blackman, Managing Director of BLPC)[quote]The outage events which occurred this week are extraordinary events originating with a switch failure in one of our Spring Garden substations, and during that restoration process, a second event occurred on Tuesday morning with a fault on one of our generating units. In both cases system protection response is being investigated.[/quote]I'm not sure of the timeline WRT your hardware bake-off. I am not sufficiently versed in matters electrical to know whether either occurrence could possibly explain your misfortune. I also don't know how many customers got electrical equipment cooked by the failure. The statement manages to use "events" twice and "event" once in that short excerpt. As I have indicated before, overuse has reduced "event" to its meaning in Relativity -- "a point in the space-time continuum." I also notice the official statement contains the abomination "hone in on," which is an illiterism whose usage in place of "home in on" has been metastasizing. |
[QUOTE=Dr Sardonicus;531183]I also notice the official statement contains the abomination "hone in on," which is an illiterism whose usage in place of "home in on" has been metastasizing.[/QUOTE]
Thanks. I needed that! :smile: |
"Hone" is an interesting word.
[QUOTE]honed; honing Definition of hone (Entry 1 of 3) transitive verb 1 : to sharpen or smooth with a whetstone 2 : to make more acute, intense, or effective : whet helped her hone her comic timing— Patricia Bosworth hone noun Definition of hone (Entry 2 of 3) : whetstone hone verb (2) honed; honing Definition of hone (Entry 3 of 3) intransitive verb 1 [B]dialect : yearn —often used with for or after[/B] 2 dialect : grumble, moan[/QUOTE]I first picked it up in the first sense, immediately above. A line in a folk song actually handed down in my mother's family, (though it is well known,) uses the word in this sense. The song is Barbara Allen. I can only find different versions, thus far which contain fragments of the rather involved tale in the version I learned. In any case, the lines in question are[INDENT]lightly tripped she down the stair he trembled like an aspen tis vain, tis vain, my dear young man to hone for Barbara Allen [/INDENT]This aside, I think an argument can be made that 'honing' as in sharpening is a sort of narrowing down; as in narrowing the options down to a sharp edge of a conclusion or an arrival. This might also be related to the sense of 'honing one's skills.' I posted in Muzak a recording of Barbara Allen very similar to what I know. [URL]https://www.mersenneforum.org/showpost.php?p=531220&postcount=937[/URL] |
1 Attachment(s)
[QUOTE=kladner;531222]...This aside, I think an argument can be made that 'honing' as in sharpening is a sort of narrowing down; as in narrowing the options down to a sharp edge of a conclusion or an arrival. This might also be related to the sense of 'honing one's skills.'
[/QUOTE] [U]There is a third[/U]: Many years ago, I worked in a manufacturing facility which made rear lighting assemblies for automotive and heavy-truck use, including trailers. I worked in injection molding, where all the lenses were made. All the larger lenses had an area made up of concentric circles beginning near the center and extending outwards to the edge. This part of the mold was known as the "honed" area. The molded lenses were tested on an hourly basis, around the clock. Heated "shots" of molten plastic, around 400°F, would tend to polish these rings. If the honing rings became too shiny, the lenses would fail the quality test.

This test was done in a 50-foot tunnel painted flat black on the inside. The illuminated light assembly was on the wall at one end. Light-sensing devices at the opposite end would measure the amount of light scattering at shallow angles relative to 90° from the dead center of the lens. A low reading would indicate too much light was traveling directly from the center of the lens. If this happened, the mold would be disassembled and the individual rings would be honed to the proper dullness. 80,000 to 100,000 shots was typical between tear-downs. Attached is an image of a honed lens.

So, the next time you get close to the rear of a semi-trailer, you will know why the lights are so bright. [I]We seem to have gotten way [U]off-topic[/U][/I]. :smile: |
[QUOTE=storm5510;531266]All the larger lenses had an area made up of concentric circles beginning near the center and extending outwards to the edge.[/QUOTE]Isn't that a [url=https://en.wikipedia.org/wiki/Fresnel_lens]Fresnel lens[/url]?
|
[QUOTE=James Heinrich;531268]Isn't that a [URL="https://en.wikipedia.org/wiki/Fresnel_lens"]Fresnel lens[/URL]?[/QUOTE]
I believe it is. Just on a smaller scale than a lighthouse. LED arrays have made all this quite obsolete now, at least in automotive safety lighting. Lenses are now designed around the arrangement of the LEDs. I loved that job for all the scientific aspects of it, and it gave me a sense of responsibility. I was in my early 20s back then, so having that was unique. |
Just in case anyone is interested in wasting some more time...
The "conversation" on [URL="https://barbadosunderground.net/2019/11/24/power-probe-blp-a-must/"]Barbados Underground[/URL] continued. With a bit of a [URL="https://barbadosunderground.net/2013/01/25/notes-from-a-native-son-an-open-door-immigration-policy-can-also-be-letting-in-trojan-horses/comment-page-1/#comment-1267732"]tangent into immigration policy[/URL]. It's sometimes fun being me... :chalsall: |
[QUOTE=chalsall;531458]Just in case anyone is interested in wasting some more time...
The "conversation" on [URL="https://barbadosunderground.net/2019/11/24/power-probe-blp-a-must/"]Barbados Underground[/URL] continued. With a bit of a [URL="https://barbadosunderground.net/2013/01/25/notes-from-a-native-son-an-open-door-immigration-policy-can-also-be-letting-in-trojan-horses/comment-page-1/#comment-1267732"]tangent into immigration policy[/URL]. It's sometimes fun being me... :chalsall:[/QUOTE] Entertaining.... |
Quick update...
Hey All. Sorry for my "lurking" (including even PMs)...
I'm still "limping" after the power failure... Can't go into all the details, but along with other "fun", the new MB I purchased, an ASRock H310CM-HDV, won't support both the onboard Intel-based video and a discrete GPU card at the same time. It's supposed to, but refuses to "in situ". And, of course, neither supports more than two monitors at a time... I can't work effectively with only two monitors!!! Anyway, nothing I can do about that until tomorrow, so I'm going to try to get some issues out of my queue today. P.S. I find it amusing that I have become so dependent on so much screen real estate. Remember how much we all used to accomplish with only 40 (or if we were lucky, 80) columns of text? :smile: |
[QUOTE=chalsall;531799] Remember how much we all used to accomplish with only 40 (or if we were lucky, 80) columns of text? :smile:[/QUOTE]
Indeed. And 2K of RAM. My first computer, a Southwest Technical Products kit based on the Motorola 6800 processor, came with 2K of RAM standard. When I bought my kit I purchased it with the optional 2K upgrade, so I was really uptown with my 4K of memory. :) |
[QUOTE=chalsall;531799]...I can't work effectively with only two monitors!!!
P.S. I find it amusing that I have become so dependent on so much screen real estate. Remember how much we all used to accomplish with only 40 (or if we were lucky, 80) columns of text? :smile:[/QUOTE] "I can't work effectively with only two monitors." :shock: When I started trade school back in 1987, I saw both 40 and 80 columns. The vast majority were monochrome, and MS-DOS was the rule of the day. It's amazing how some of us got by with only "C:\>" on the screen when we started up, if we had a hard drive inside. Many did not; then it was a startup from a five-and-a-quarter-inch floppy. The full-height floppy drives in the IBMs grunted all the way. :smile: |
I still miss my C>64.
|
I decided to do an experiment with my HP. I added a second hard drive, moved the plugs over to that drive, and installed Ubuntu on it. This kept the Windows drive intact. After some studying here and elsewhere, I have managed to do what I wanted with it. [I]Mprime[/I] seems to do really well.
This machine has a GPU in it, and I saw a lot of references to Nvidia during the OS install. I looked around here today and I saw no mention of any program which could use the GPU in a Linux environment. I found that rather amazing. Perhaps I did not look where I should have. Is there no such animal? |
[QUOTE=storm5510;531824]
This machine has a GPU in it, and I saw a lot of references to Nvidia during the OS install. I looked around here today and I saw no mention of any program which could use the GPU in a Linux environment. I found that rather amazing. Perhaps I did not look where I should have. Is there no such animal?[/QUOTE] What GPU? Every GPU compute program used for GIMPS is available for Linux as well, although you may need to compile some of them yourself. |
[QUOTE=kracker;531827]What GPU? Every gpu compute program used for GIMPS is available for linux as well, although you may need to compile some of them.[/QUOTE]
Nvidia GTX 750Ti. Not the sharpest knife in the drawer, but it is consistent. |
[QUOTE=storm5510;531851]Nvidia GTX 750Ti. Not the sharpest knife in the drawer, but it is consistent.[/QUOTE]
After installing the drivers, I would try [URL="https://mersenneforum.org/mfaktc/mfaktc-0.21/mfaktc-0.21.linux64.cuda65.tar.gz"]this[/URL], or, if that doesn't work, one of the many binaries floating around in the Colab thread. If nothing else, you'll have to compile from source (which I doubt you'll need to). |
[QUOTE=kracker;531895]After installing the drivers, I would try [URL="https://mersenneforum.org/mfaktc/mfaktc-0.21/mfaktc-0.21.linux64.cuda65.tar.gz"]this[/URL], or if that doesn't work the many binaries floating around in the colab thread that might work... if nothing else, you'll have to compile from source(which I doubt).[/QUOTE]
I found the archive above. I have it all unpacked into a folder. I try to run it like this: [CODE]./mfaktc.exe[/CODE]I get this error message: [CODE]./mfaktc.exe: error while loading shared libraries: libcudart.so.6.5: cannot open shared object file: No such file or directory[/CODE]This file is present in a folder called "lib" below the parent, just as it is in the archive. :confused: |
[QUOTE=storm5510;531909]I found the archive above. I have it all unpacked into a folder. I try to run it, like this:
[CODE]./mfaktc.exe[/CODE]I get this error message: [CODE]./mfaktc.exe: error while loading shared libraries: libcudart.so.6.5: cannot open shared object file: No such file or directory[/CODE]This file is present in a folder called "lib" below the parent, just as it is in the archive. :confused:[/QUOTE] I would try moving the libraries from lib to the same folder mfaktc.exe is in. |
[QUOTE=kracker;531913]I would try moving the libraries from lib to the same folder mfaktc.exe is.[/QUOTE]
If that doesn't work, try the executable [URL="https://mersenneforum.org/showpost.php?p=525235&postcount=19"]I posted way back in September[/URL]. This is the same exec I include in the GPU72 TF Colab bootstrap payload. |
[QUOTE=kracker;531913]I would try moving the libraries from lib to the same folder mfaktc.exe is.[/QUOTE]
I tried this earlier. I get icons with a red X and a lock symbol on them. It's trying to create a link instead of moving the file. The properties say, "Link (broken) (inode/symlink)." [QUOTE=chalsall]If that doesn't work, try the executable [URL="https://mersenneforum.org/showpost.php?p=525235&postcount=19"]I posted way back in September[/URL]. This is the same exec I include in the GPU72 TF Colab bootstrap payload.[/QUOTE] I get the same message, but with a different library name: libcudart.so.10.0. I believe this refers to CUDA 10. A few months back, on the Windows drive, the 10 version stopped working. I had to revert back to 8.0 to get it to run again.