mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU to 72 (https://www.mersenneforum.org/forumdisplay.php?f=95)
-   -   GPU to 72 status... (https://www.mersenneforum.org/showthread.php?t=16263)

chalsall 2020-04-19 19:25

Just saw the funniest thing in one of my log files...

[CODE][19/Apr/2020:19:22:34 +0000] gpu72.com 192.71.42.108 - - "GET /robots.txt HTTP/1.1" 200 176 "-" "Go-http-client/1.1"
[19/Apr/2020:19:22:34 +0000] gpu72.com 192.36.53.165 - - "GET /humans.txt HTTP/1.1" 301 240 "-" "Go-http-client/1.1"[/CODE]

Great sense of humor. I wonder if there's an RFC for this. The fetcher has quite a bit of IP space...

James Heinrich 2020-04-19 19:46

[url]https://www.robotstxt.org/[/url]
[url]http://humanstxt.org/[/url]

chalsall 2020-04-19 20:02

[QUOTE=James Heinrich;543193][url]http://humanstxt.org/[/url][/QUOTE]

Cool. Thanks for the knowledge. I'd never heard of this initiative, but I like it. Logical, and potentially useful metadata. Good place for a copyright et al. notice... :smile:
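For anyone else who hadn't seen it: humanstxt.org proposes a simple plain-text file served at /humans.txt, with comment-style section headers. A minimal sketch of the suggested layout (all names and values below are hypothetical placeholders, not the actual gpu72.com file):

```text
/* TEAM */
    Admin: Your Name Here
    Site: https://www.example.com/
    Location: Somewhere, Earth

/* THANKS */
    Name: A Helpful Contributor

/* SITE */
    Last update: 2020/05/09
    Language: English
    Standards: HTML5, CSS3
```

A natural spot for the copyright/credits notice mentioned above, since the file is meant to be read by people rather than crawlers.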

Chuck 2020-04-24 03:48

Are SRBase and BOINC now out of the picture?

Uncwilly 2020-04-24 04:02

[QUOTE=Chuck;543613]Are SRBase and BOINC now out of the picture?[/QUOTE]
SRBase and their BOINC folks are now getting exponents directly from PrimeNet. I recently had a PM exchange with KEP. They are still turning in exponents (you can see this in the recently cleared list). There are other developments that are in the works. I don't want to speak out of school at the moment.

ixfd64 2020-04-24 23:17

I'm disappointed to learn that GPU to 72 no longer supports BOINC due to a service mark dispute. Hopefully this won't mean the end of BOINC integration for GIMPS as a whole.

chalsall 2020-04-24 23:39

[QUOTE=ixfd64;543673]I'm disappointed to learn that GPU to 72 no longer supports BOINC due to a service mark dispute.[/QUOTE]

To be clear, I'm more than happy to continue to support Reb's BOINC efforts. However, GPU72 doesn't have the type of work available which they want (72 to 73 work in the higher ranges, which won't be useful for years...). The specialized API I built for them still exists, if they ever decide they'd like to do deeper work again.

[QUOTE=ixfd64;543673]Hopefully this won't mean the end of BOINC integration for GIMPS as a whole.[/QUOTE]

They're now getting work directly from PrimeNet. And the REAL GPU72 BOINC system is in early alpha...

Mark Rose 2020-04-25 00:45

Exciting times in the TF world!

(I've read the locked thread)

chalsall 2020-05-09 01:01

LMH TF -- Back by popular demand!
 
So, it looks like we're going to be OK with regards to TF'ing firepower. So...

By the request of some (but particularly Uncwilly), I have imported some candidates in the 332M range (AKA the 100M digit candidates) for some quality TF'ing time. Thanks to George for expiring a bunch of /really/ old assignments.

For those who want to work there, the [URL="https://www.gpu72.com/account/getassignments/lltf/"]LLTF Assignment Form[/URL] again has the LMH Bit-first and Depth-first options. The former goes to 76 bits, while the latter goes to 78.

I'm currently testing to ensure that the Colab code-paths will handle this OK. So, currently, a couple of my instances, and all of Uncwilly's, are doing 332M work. Tomorrow I'll update the Instance Assignment form to let people opt in to do this work type.

Which brings up a question: just how deep do people want to take these? 78 isn't "optimal", but a 77 to 78 run will take about two hours (on a P100). Should I have Breadth-first (75), Nominal (78), and Deep (81)?
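For context on why the deeper options cost so much more: each bit level doubles the number of candidate factors to sieve, so TF time roughly doubles per bit. A back-of-the-envelope sketch under that assumption, calibrated to the ~2 hours quoted above for 77 to 78 on a P100 (illustrative estimates, not benchmarks):

```python
# Rough TF cost model: the candidate factors in [2^b, 2^(b+1)) are
# about twice as many as in the previous bit level, so each level
# takes roughly twice as long as the one before it.
# Calibration point (from the post): 77 -> 78 takes ~2 hours on a P100.

HOURS_77_TO_78 = 2.0

def level_hours(bit: int) -> float:
    """Estimated hours for the single bit level ending at `bit`."""
    return HOURS_77_TO_78 * 2.0 ** (bit - 78)

def cumulative_hours(start: int, stop: int) -> float:
    """Estimated hours to take one candidate from `start` to `stop` bits."""
    return sum(level_hours(b) for b in range(start + 1, stop + 1))

# Comparing the three proposed depths, starting from 75 bits:
for target in (76, 78, 81):
    print(f"75 -> {target}: ~{cumulative_hours(75, target):.1f} h")
```

Under this model the Deep (81) option costs roughly nine times the Nominal (78) one per candidate, which is why the choice of cutoff matters.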

Feedback appreciated.

kladner 2020-05-09 01:47

The idea of ripping through TF is attractive, but I'm not really drawn to the upper reaches. On the other hand, I am mostly into diddling Colab for all the GHz-d I can get. :smile: I don't care what GPU72 deems necessary as long as it advances the main project. What makes Sense, or Let GPU Decide. Just say.
I will note the saying, "If it's free, it's not the product. You are."
But, I just signed off my paid account, with 4 P100s, called up a free account and got 2 T4s. This is approximately equal to 3 P100s. I usually stop this kind of run after 5-6 hours, and line up the paid account to run overnight.
Three free accounts and one paid let me keep as much running as I have time to set up. I can always find at least one that will run with GPUs.

Uncwilly 2020-05-09 04:55

[QUOTE=chalsall;544925]I'm currently testing to ensure that the Colab code-paths will handle this OK. So, currently, a couple of my instances, and all of Uncwilly's, are doing 332M work.[/QUOTE]I requested a manual one for my integrated GPU. And I have set MISFIT to target that range (I hope I got it right.) With the T4's I have at the moment, 75->76 is taking ~21 minutes per. Hoping to find a factor or 2 (still running ~9% behind expected).
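The "expected" count here comes from a standard GIMPS rule of thumb: the chance that trial factoring finds a factor of a Mersenne number between 2^b and 2^(b+1) is roughly 1/b. A sketch of that bookkeeping (the sample numbers are made up for illustration):

```python
# GIMPS heuristic: TF from 2^b to 2^(b+1) finds a factor with
# probability of roughly 1/b per exponent, so over many assignments
# the expected factor count is simply assignments / b.

def expected_factors(assignments: int, bit: int) -> float:
    """Expected factors from taking `assignments` candidates through
    the single bit level ending at `bit` + 1, under the ~1/bit rule."""
    return assignments / bit

# e.g. a hypothetical 150 exponents taken through 75 -> 76:
print(expected_factors(150, 75))  # ~2.0 factors expected
```

At these small expected counts, running a single-digit percentage behind (or ahead of) the prediction is well within ordinary statistical noise.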

:chalsall:
:clap:
:ttu:

