mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU to 72 (https://www.mersenneforum.org/forumdisplay.php?f=95)
-   -   GPU to 72 status... (https://www.mersenneforum.org/showthread.php?t=16263)

kladner 2015-09-01 21:42

:smile:Having run tests with CUFFTbench and Threadbench, I am now 4 hours from completing a 36M LLDC on the MSI GTX580. One thing: I picked up this assignment from PrimeNet instead of GPU72. I have checked it in with PrimeNet, but of course it does not show on GPU72.

Chris, may I send you the information privately? I should have had P95 acquire an assignment through the proxy, but did not think it through at the time. Of course, if it doesn't match, I won't want to report it. :razz:

EDIT: This is only a test. I will go back to LLTF on that card in 3h 34m. :smile:

Chuck 2015-09-02 00:22

[QUOTE=chalsall;409354]
I would ask that those who are doing LLTF'ing, please "pledge" to at least 74, or choose the "Let GPU72 Decide" option (which will take candidates to 75 currently). There are many "big guns" who are currently doing "breadth first" (read: TF'ing from 71 to 72); yes, that work has to be done in the future, but we have hungry P-1'ers and LL'ers to feed right now.
[/QUOTE]

GUILTY as charged.... I liked to help empty a box in the "71" column of the available assignments report by working 71->72 and being able to complete 200/day, but I know my time is better spent with "Let GPU72 Decide", so I have gone back to that work.

kladner 2015-09-02 01:43

Sorry for this extended digression.....

[QUOTE=kladner;409364]
[QUOTE]EDIT: This is only a test. I will go back to LLTF on that card in 3h 34m. :smile:[/QUOTE]

Sigh. It did not match the only previous result. I will get back to this after I put some LLTF work through the card. Even though it passed -r 1 at the settings used, I probably need to run with more conservative settings.

flagrantflowers 2015-09-02 01:51

[QUOTE=kladner;409382]
Sigh. It did not match the only previous result.[/QUOTE]


Or it could be correct and the other incorrect. No reason to delete data.

kladner 2015-09-02 01:57

No. I'll submit it.

This is the [URL="http://www.mersenne.org/report_exponent/?exp_lo=36114457&full=1"]item in question[/URL].

wombatman 2015-09-02 02:40

To test the decreased RAM and see if it makes it stable, I'll double-check it.

kladner 2015-09-02 02:48

Cool! Thanks. :smile:

wombatman 2015-09-02 03:25

Just for my reference, how far did you drop your RAM speeds to get stability?

Mark Rose 2015-09-02 04:48

[QUOTE=chalsall;409358]Yes. Sorry, I should have been clearer...

LG72D will balance between going to 75 or 74, depending on our current "real-time" situation. Right now we're really short of candidates without P-1 done at even 74, so the "AI" decided we needed some "to 74" work done.

Thanks for pitching in! :smile:[/QUOTE]

I have a little under three days of DCTF to finish up, which is queued for tomorrow, then I'll let GPU72 decide for a while. I know my contribution won't be enough though.

dragonbud20 2015-09-02 08:35

Is GPU to 72 being really slow to give out assignments for anyone else? I've noticed the stats pages being slow over the past few days and occasionally the assignments page; just now it took over a minute for me to get 30 assignments. Basically any page containing data that changes seems very slow to load.

Mark Rose 2015-09-02 12:55

I've experienced the same for the last week or so.

airsquirrels 2015-09-02 13:06

I have been traveling and was just this morning able to configure a good portion of the system for LL work, should start seeing those results today.

chalsall 2015-09-02 14:01

Thanks for all the resources guys! :smile:

With regards to the slowness being experienced on GPU72... Between approximately 15 and 30 minutes after each hour the system does a bunch of work to update the stats. With the huge amount of results being submitted recently, this has increased the amount of processing going on.

To add to this, Google's spider has been fetching many pages and graphs recently, some of which are very expensive to render.

I'm working on caching some of the more expensive datasets.

kladner 2015-09-02 14:05

[QUOTE=wombatman;409390]Just for my reference, how far did you drop your RAM speeds to get stability?[/QUOTE]

Running at 1600MHz (DDR) should be good.

Mark Rose 2015-09-02 14:59

[QUOTE=chalsall;409411]Thanks for all the resources guys! :smile:

With regards to the slowness being experienced on GPU72... Between approximately 15 and 30 minutes after each hour the system does a bunch of work to update the stats. With the huge amount of results being submitted recently, this has increased the amount of processing going on.

To add to this, Google's spider has been fetching many pages and graphs recently, some of which are very expensive to render.

I'm working on caching some of the more expensive datasets.[/QUOTE]

Perhaps block the URLs with the graphs with robots.txt?

chalsall 2015-09-02 18:19

[QUOTE=Mark Rose;409425]Perhaps block the URLs with the graphs with robots.txt?[/QUOTE]

Good idea.

But, since anyone can access these graphs and reports (and I have found that those URLs mentioned in robots.txt as being off-limits are often of particular interest to non-well-behaved 'bots), I think it would be best to cache the data.

airsquirrels 2015-09-02 19:26

I'll be the first to admit to being an obsessive stat watcher and graph loader. I will try to restrain myself.

chalsall 2015-09-02 22:41

[QUOTE=airsquirrels;409454]I'll be the first to admit to being an obsessive stat watcher and graph loader. I will try to restrain myself.[/QUOTE]

Please don't worry. This is something I need to deal with.

I never imagined GPU72 would last as long as it has; a query which when originally written referenced 100,000 records now references over 2,300,000....

petrw1 2015-09-03 04:01

[QUOTE=chalsall;409189]Just so everyone knows, all DCTF candidates to 70 are now assigned.

I have adjusted the DCTF assignment page such that if MISFIT or mfloop is the requester and the "pledge" is 70, it is automatically bumped up to 71. This is so no workers are left without work.[/QUOTE]

And mine are all completed....on to 70-71

Madpoo 2015-09-03 04:05

[QUOTE=chalsall;409411]Thanks for all the resources guys! :smile:

With regards to the slowness being experienced on GPU72... Between approximately 15 and 30 minutes after each hour the system does a bunch of work to update the stats. With the huge amount of results being submitted recently, this has increased the amount of processing going on.

To add to this, Google's spider has been fetching many pages and graphs recently, some of which are very expensive to render.

I'm working on caching some of the more expensive datasets.[/QUOTE]

Tell Google (and crawlers in general) to avoid those pages. mersenne.org is configured to have crawlers ignore a bunch of stuff that's just related to reports on various things which are computationally expensive if they start crawling, and of little benefit for driving traffic from search engines. :smile:

Things like the /report_exponent/ pages, or anywhere that only works if you're logged in like /results/ etc.

If those are worthwhile to get indexed by a search engine, try setting up the site in Google Webmaster Tools and configuring a custom crawl rate, so it's slow enough not to hammer the site and cause issues.
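A minimal robots.txt along the lines described above might look like this (the paths are the ones mentioned in this post; the actual mersenne.org and GPU72 rules may differ, and Googlebot ignores Crawl-delay in favour of the Webmaster Tools setting):

```
# Discourage crawlers from computationally expensive report pages
User-agent: *
Disallow: /report_exponent/
Disallow: /results/

# Honoured by some crawlers (e.g. Bing); Googlebot ignores it
Crawl-delay: 10
```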

Mark Rose 2015-09-03 07:01

I've always found Google's crawl to be reasonable. I have had issues with Bing and Baidu slamming small sites heavily all at once. That was years ago though.

chalsall 2015-09-03 18:37

[QUOTE=Mark Rose;409481]I've always found Google's crawl to be reasonable. I have had issues with Bing and Baidu slamming small sites heavily all at once. That was years ago though.[/QUOTE]

Agreed. Googlebot is very well behaved.

But, like I said before, I have found that many bad bots love to explore the URLs forbidden in robots.txt. ("Bad bot! Bad!")

It can take a little while for Fail2Ban to trigger in these cases; I've seen 'bots request hundreds of URLs in a few seconds before they're blocked.

dragonbud20 2015-09-04 04:39

[QUOTE=chalsall;409411]I'm working on caching some of the more expensive datasets.[/QUOTE]

I'm just starting to get into coding in general, but I don't know much about websites. What exactly does this mean in this context?

kladner 2015-09-04 05:10

[QUOTE=dragonbud20;409555]I'm just starting to get into coding in general, but I don't know much about websites. What exactly does this mean in this context?[/QUOTE]
I should probably let Chris answer for himself. That said, I think this means storing a pre-generated presentation of a data set, rather than generating the presentation on demand. It allows the data, and presumably graphics, to be updated when other demands are low, gives a much faster response to the inquirer, and lets the server better answer other demands.

wombatman 2015-09-04 05:14

[QUOTE=wombatman;409387]To test the decreased RAM and see if it makes it stable, I'll double-check it.[/QUOTE]

Not sure if you're ktony or Dennis Moran, but mine matched with Dennis Moran.

kladner 2015-09-04 06:04

[QUOTE=wombatman;409557]Not sure if you're ktony or Dennis Moran, but mine matched with Dennis Moran.[/QUOTE]

I'm ktony. Thanks for running that. It lets me know I need to pull back and test some more.

Dubslow 2015-09-04 06:22

[QUOTE=dragonbud20;409555]I'm just starting to get into coding in general, but I don't know much about websites. What exactly does this mean in this context?[/QUOTE]

[QUOTE=kladner;409556]I should probably let Chris answer for himself. That said, I think this means storing a pre-generated presentation of a data set, rather than generating the presentation on demand. It allows the data, and presumably graphics, to be updated when other demands are low, gives a much faster response to the inquirer, and lets the server better answer other demands.[/QUOTE]

More or less.

Currently whenever someone requests a page, all the graphs on that page are rendered (drawn) by the web server specifically for your request. So if 10 people happen to look at the page at the same time, the server will generate the graph ten different times, even though obviously to you or me all 10 people will get the same exact graph, with 9/10 of the CPU time expended being completely wasted.

Caching means when the server renders a graph, it stores that rendered graph -- then when the next person comes asking for the graph, instead of completely regenerating it from scratch, it can just send the saved result from the previous request.

The catch of course is that you don't necessarily get live data. But in this context, as in many others, getting data that's an hour old is no problem at all, so generating the graphs once per hour instead of once per request is a massive time and energy saver on Chris' CPUs, with little or no impact to downstream users like you and me.

As an aside, there are of course better possible strategies than just regenerating after a fixed time period -- large websites like Google or reddit or StackExchange likely have massively complex caching strategies to best serve the needs of webgoers, probably combining many sub-strategies such as fixed-time regeneration, regeneration after a certain number of hits on some data, regeneration only when the underlying data changes, and probably a dozen more advanced strategies as well. In GPU72's case it's probably a fixed-time regeneration (or *maybe* based on when the underlying data changes? Chris?)
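The fixed-interval scheme described above can be sketched in a few lines of Python (a toy illustration, not GPU72's actual code; `get_graph` and `render` are invented names):

```python
import time

_cache = {}   # url -> (rendered_graph, timestamp)
TTL = 3600    # regenerate each graph at most once per hour

def get_graph(url, render):
    """Return a cached rendering of render(url), redrawing it only
    when the saved copy is older than TTL seconds."""
    now = time.time()
    entry = _cache.get(url)
    if entry is not None and now - entry[1] < TTL:
        return entry[0]           # cache hit: no CPU spent rendering
    result = render(url)          # cache miss: render once...
    _cache[url] = (result, now)   # ...and keep it for later requests
    return result
```

With this in place, ten simultaneous requests for the same graph cost one render instead of ten; the trade-off, as noted above, is that the graph can be up to an hour stale.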

Madpoo 2015-09-04 15:12

[QUOTE=Dubslow;409561]As an aside, there are of course better possible strategies to caching than just regenerating after a fixed time period -- large websites like Google or reddit or StackExchange etc likely have massively complex caching strategies to best serve the needs of webgoers, probably combining many sub strategies such as fixed-time cache regenerating, regenerating after a certain number of hits on some data, regenerating only when the underlying data changes, and probably a dozen other more advanced strategies as well. In GPU72's case it's probably a fixed-time regeneration (or *maybe* based on when the underlying data changes? Chris?)[/QUOTE]

Without going into too much boring detail:
It's common to have a tiered architecture where you have the presentation or web server, a middle tier "service" layer, and then a back end data layer (SQL or whatever). The front end web server is obvious... it handles requests from visitors and spits out HTML. Pretty simple. It'll request things from the service layer which in turn will take those requests and turn them into whatever is needed to access the data layer. Could be a sproc, could be an actual select statement, whatever... you get the idea. Maybe there are multiple things on the data layer like SQL, or a search index, or even flat files, whatever.

The websites I admin use a caching layer, Couchbase. It tends to float between the layers... we're using it as a key/value caching solution. You might have heard the term NoSQL.

We cache things between the data and service layers, and each entry in there has an expiration so the web server will know how stale that data is.

As an example we can relate to, if we had this setup on Primenet (we don't), it would look something like this... you request the history for a range of exponents. Right now it has to look at the database for each exponent, pull data from several different tables together, and generate an array of things. Some of it is pretty easy to pull (how high it's been factored) and some of it takes longer (pulling all of the results that have ever been checked in, especially for popular ECM targets like M1277).

What if instead of grabbing that from SQL each time and doing whatever processing, you stuff that into a Couchbase "document" for that exponent and give it an expiration of like an hour or whatever? Then when you have something like the full history of M1277 which can take a while to show all the ECM stuff because it's parsing each and every historical result, it just reads all of that from Couchbase in the blink of an eye?

The trick with it all is to set the expiration so you're not serving stale content when people expect it to be fresh. It also helps if these are popular things being requested; otherwise caching isn't likely to save much time, since nobody uses the entry between that first database hit and when it expires.

To avoid some of that, you can change the service layer to do a dual commit... when it's writing to the database, also update the cache store while you're at it. Let's say someone checks in a new result for M1277. It would update the database and also add a new entry to Couchbase and bump out that expiration time. If you're consistent with that, you can set the expiration to be days or weeks.

It is VERY worthwhile to have caching if your site is heavy on the SQL side of things. You can only optimize a query so much, but at some point it's still going to be slow when it's handling lots of traffic.

The TL;DR is this... caching=good.
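The TTL-plus-dual-commit pattern described above can be sketched in Python (a minimal toy, using a plain dict as a stand-in for the database; none of this is real Couchbase or Primenet code):

```python
import time

class WriteThroughCache:
    """Sketch of the 'dual commit' idea: writes go to the database
    AND refresh the cache entry, so the TTL can be long (days or
    weeks) without serving stale data."""

    def __init__(self, db, ttl=3600):
        self.db, self.ttl, self.cache = db, ttl, {}

    def write(self, key, value):
        self.db[key] = value                    # commit to the database...
        self.cache[key] = (value, time.time())  # ...and refresh the cache

    def read(self, key):
        entry = self.cache.get(key)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]                     # fresh cached copy
        value = self.db[key]                    # slow path: hit the database
        self.cache[key] = (value, time.time())
        return value
```

On a read miss the value is fetched from the backing store and cached; on every write both stores are updated, which is what allows a long expiration without stale results.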

chalsall 2015-09-05 20:42

[QUOTE=Madpoo;409592]The TL;DR is this... caching=good.[/QUOTE]

I'm currently lurking guys. Having to deal with Atoms (shudder) and Humans (double shudder).

The Pool Plumbing guys managed to route a drain such that there is an airlock (I told them I thought there was an error but they said "Everything is fine"), and the Diamond Bright guys applied the render when it was too dry.

chalsall 2015-09-07 23:33

[QUOTE=chalsall;409704]I'm currently lurking guys. Having to deal with Atoms (shudder) and Humans (double shudder).[/QUOTE]

A quick update... Spent the last two days pumping 38K litres of water back into the pool (good thing it rained heavily today).

Should have "Start Up" tomorrow.

Mark Rose 2015-09-08 18:10

A breaker tripped Friday night, so I missed 4 THzd of work over the weekend. I'll have to rearrange the power configuration tonight. I'll also be running at reduced throughput during the day for the next few days. I should be back to full tilt Friday evening.

airsquirrels 2015-09-10 12:12

It must be calamity time. I had a main water pump fail and have only some servers running until I can do some maintenance. Busy work week.

dragonbud20 2015-09-10 23:44

[QUOTE=airsquirrels;410019]It must be calamity time.[/QUOTE]
Looks like something's up... SoCal's in a heatwave, so I can't run above about 50% capacity without cooking either myself or my computer.

chalsall 2015-09-10 23:50

[QUOTE=dragonbud20;410055]Looks like something's up... SoCal's in a heatwave, so I can't run above about 50% capacity without cooking either myself or my computer.[/QUOTE]

LOL... Don't cook either. :smile:

So everyone knows, LG72D is currently sometimes assigning TF'ing work to 74 to hand out to P-1'ers.

Not optimal, but better than at 73.

Madpoo 2015-09-11 04:04

[QUOTE=dragonbud20;410055]Looks like something's up... SoCal's in a heatwave, so I can't run above about 50% capacity without cooking either myself or my computer.[/QUOTE]

I'm in LA myself right now, installing some stuff into a datacenter.

Now, I've been in colocations around the country and have some experience in the cold/warm aisles. I have to say though that the [B]humidity[/B] in this LA location is frustrating!

I like to wrap my cables with id labels that have a number on them, so I can easily trace what goes where. Well, the darn labels get all gummy from the high humidity and peel off after a few hours. It's crazy. I don't know how people here can stand it. :smile:

That said, at least the ambient temps are kept to 70'ish F. But that RH must be pretty high... seems like > 50% in there, maybe > 60% outside? Good gravy.

I know... the tropically located folks in here are going to one-up those figures with their tales of 99% humidity, and now I can feel your pain a little bit more. :smile:

frmky 2015-09-11 06:09

It's not normally THIS humid here. But this week and especially today have been miserable.

Mark Rose 2015-09-11 13:05

[QUOTE=chalsall;410056]So everyone knows, LG72D is currently sometimes assigning TF'ing work to 74 to hand out to P-1'ers.

Not optimal, but better than at 73.[/QUOTE]

What's the best way to tell where the P-1 wave front is? I think it would be super useful if there were a page that showed where the LL and P-1 wave fronts are according to GPU72, including perhaps the buffer.

I'm also curious how far behind LL is from P-1; then we could figure out the amount of work needed to catch up to the P-1 wave, which would allow going to 75 before P-1'ing.

James Heinrich 2015-09-11 14:05

[QUOTE=Madpoo;410073]at least the ambient temps are kept to 70'ish F. But that RH must be pretty high... seems like > 50% in there, maybe > 60% outside? Good gravy.[/QUOTE]Sounds like a normal summer 1000+km north in Ontario: ~27°C (80F) with 75% humidity, feeling like 36°C (97F), is a perfectly normal August day. And six months later it's 70°C (126F) colder :rajula:

dragonbud20 2015-09-11 14:14

Yeah the humidity this week has been pretty bad. Normally it's around 20% so this is rather high especially for this time of year.

chalsall 2015-09-11 14:25

[QUOTE=James Heinrich;410092]Sounds like a normal summer 1000+km north in Ontario, ~27x°C (80F) with 75% humidity feels like 36°C (97F) is a perfectly normal August day.[/QUOTE]

LOL... The other day here it was 31 degrees with 86% humidity... "Felt like" 43 degrees! My workmen and I were melting!!!

chalsall 2015-09-11 14:40

[QUOTE=Mark Rose;410090]What's the best way to tell where the P-1 wave front is? I think it would be super useful if there were a page that showed where the LL and P-1 wave fronts are according to GPU72, including perhaps the buffer.[/QUOTE]

Probably the best report to look at would be the [URL="https://www.gpu72.com/reports/available/"]Available Assignments[/URL] page; you can see that the P-1 "wave back" is at 72M. If you click on the "No P-1" column header you can see we don't have much at 75 nor 74. Keep in mind that GPU72 releases some candidates at 75 (or 74 if we're "starved") without P-1 done for Primenet to assign to P-1'ers.

[QUOTE=Mark Rose;410090]I'm also curious how far behind LL is from P-1, then we could figure out the amount of necessary work to catch up to the P-1 wave, which would allow going to 75 before P-1'ing.[/QUOTE]

For that, take a look at the [URL="https://www.gpu72.com/reports/estimated_completion/primenet/"]Estimated Completion[/URL] report. The P-1'ers are about 430 days ahead of the LL'ers.

Mark Rose 2015-09-11 15:11

[QUOTE=chalsall;410095]If you click on the "No P-1" column header you can see we don't have much at 75 nor 74. Keep in mind that GPU72 releases some candidates at 75 (or 74 if we're "starved") without P-1 done for Primenet to assign to P-1'ers.[/quote]

I never thought to click on those. I always thought they were just sorting headers, not links to new pages :)

[quote]For that, take a look at the [URL="https://www.gpu72.com/reports/estimated_completion/primenet/"]Estimated Completion[/URL] report. The P-1'ers are about 430 days ahead of the LL'ers.[/QUOTE]

Thanks :)

dragonbud20 2015-09-12 19:37

[QUOTE=chalsall;410095]Keep in mind that GPU72 releases some candidates at 75 (or 74 if we're "starved") without P-1 done for Primenet to assign to P-1'ers.[/QUOTE]

Why is it done like this? I assume it's because, from a computing power/time standpoint, it's easier to trial factor below a certain point and easier to P-1 past that, and at the current exponents the dividing line is around 75 bits? Did I get any of that right?

chalsall 2015-09-12 19:55

[QUOTE=dragonbud20;410163]Did I get any of that right?[/QUOTE]

Not exactly... :smile:

The reason we try to go to 75 "bits" currently is that is where the "cross over point" is. That is, where it is more efficient to TF on a GPU (based on the statistical probability of finding a factor) than running *two* LL's on the _same_ GPU. This is based on an extensive analysis that James did for us many moons ago.

The reason GPU72 releases some candidates without P-1 done is many do P-1'ing directly through Primenet. Ideally we'd only ever release at 75 bits, but sometimes we're a bit short, so we release at 74 instead. This isn't the end of the world; any such candidates are then recaptured for TF'ing to 75 bits if the P-1 run didn't find a factor. In fact, except for very unusual situations, no candidates are assigned for LL'ing at less than 75 bits nor without P-1 being done.

One last point -- my understanding is it is better for the P-1'ers to work with higher TF'ed candidates because it allows them to search a higher range. Someone with a better understanding of the math would have to explain exactly why.

Edit: Should have been clearer. No candidates are released for LL'ing at less than appropriate TF'ing; below 70M that's 74, above 70M that's 75.

Edit 2: Sorry... After far too busy a day dealing with Humans dealing with Atoms (badly)... My above edit should have been "below 65M that's 74, above 65M that's 75.
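The "cross over point" reasoning above can be sketched with the standard GIMPS heuristic that a Mersenne number has a factor between 2^b and 2^(b+1) with probability roughly 1/b (the timings below are invented for illustration; James's actual analysis is far more detailed):

```python
# Sketch of the TF-vs-LL crossover logic (illustrative only).
# Heuristic: chance of a factor between 2^b and 2^(b+1) is ~1/b.

def tf_worthwhile(tf_hours, ll_hours, bit_level):
    """Taking one more bit level pays off if the expected LL time
    saved (an LL test plus its double-check) exceeds the TF cost."""
    p_factor = 1.0 / bit_level            # rough chance of finding a factor
    expected_saving = p_factor * 2 * ll_hours
    return tf_hours < expected_saving

# Hypothetical GPU timings for a ~70M exponent:
print(tf_worthwhile(tf_hours=4, ll_hours=200, bit_level=74))  # True
print(tf_worthwhile(tf_hours=8, ll_hours=200, bit_level=76))  # False
```

Each successive bit level roughly doubles the TF cost while the expected saving barely changes, which is why the crossover sits at a specific bit depth for a given exponent range.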

kladner 2015-09-15 16:12

This morning, one of my GTX 580s completed a DC assignment, which matched the existing result. I submitted it on the Manual Results page about 4.5 hours ago, and it still has not shown as completed in GPU72. I obtained the assignment via the P95 proxy, and moved it to the CuLu worktodo.txt. PrimeNet shows it as successfully completed.

Have I bollixed things in some way?

chalsall 2015-09-15 16:26

[QUOTE=kladner;410346]Have I bollixed things in some way?[/QUOTE]

Please PM me the candidate in question and I'll drill down.

Completed LL/DC work which goes through the proxy will be seen immediately, but there is a secondary spider which checks for such completions for cases where the candidate is checked out via the proxy but submitted manually; that can take a bit longer.

I'd be interested in knowing exactly what is happening in this particular case, to ensure the system is sane.

kladner 2015-09-15 16:33

[QUOTE=chalsall;410347]Please PM me the candidate in question and I'll drill down.

.....[/QUOTE]
Done.

EDIT: And the Eagle has landed! Thanks!

chalsall 2015-09-15 16:50

[QUOTE=kladner;410348]Done.[/QUOTE]

OK, thanks.

I reran the secondary script with a larger number of queries, and it found your completion without any further modifications to the script. As in, the code was sane, but it might have taken up to six hours to find this.

As always, if anyone sees anything strange, please bring it to our attention.

kladner 2015-09-15 18:11

[QUOTE=chalsall;410349]OK, thanks.

I reran the secondary script with a larger number of queries, and it found your completion without any further modifications to the script. As in, the code was sane, but it might have taken up to six hours to find this.

As always, if anyone sees anything strange, please bring it to our attention.[/QUOTE]

Thanks for the explanation. Even six hours would not be disturbing if the possibility of the delay is known.

VictordeHolland 2015-09-16 00:01

It might not be so bad to run P-1 before TF74-75.
If P-1 (before TF75) finds a factor you 'save' the TF74-75 test.
If you do TF74-75 first and find a factor you 'save' the P-1 test.

The question is: which one would you rather save? A day of CPU core time (P-1) or a couple of hours on a GPU (TF74-75)?

Prime95 2015-09-16 00:21

[QUOTE=VictordeHolland;410397]
The question is: which one would you rather save? A day of CPU core time (P-1) or a couple of hours on a GPU (TF74-75)?[/QUOTE]

That's comparing apples and oranges. Better would be to compare GPU TF74-75 to GPU P-1. I don't know if we have benchmarks for a GPU P-1 program.

On the other hand, it may be a bit much for GPU72 to manage releasing the exponent for P-1 and reacquiring it for TF75.

chalsall 2015-09-16 14:44

[QUOTE=Prime95;410398]On the other hand, it may be a bit much for GPU72 to manage releasing the exponent for P-1 and reacquiring it for TF75.[/QUOTE]

Actually, that isn't a problem at all. In fact, it's already being done -- if "Spidy" sees a candidate about to be assigned for LL'ing at 74 it grabs it to take to 75.

But ideally the sequence would be TF'ing to 75, then P-1, then LL. airsquirrels has agreed to move a lot of his firepower over to LLTF'ing for a while, so we should be good.

dragonbud20 2015-09-18 02:04

I was thinking about the predicament we had the past few days with the lack of TF power, and I thought of something that might be useful for figuring out what type of work to do. Would it be possible to have some sort of indicator on the assignment page, or anywhere on the GPU72 site really, that showed when TF or DC needed more work done, or when we don't have enough buffer for one or the other? I do all my assignments manually, so I know it could help people who do the same, although I'm not sure what could be done for people who get all of their work using MISFIT or spiders or whatever (I'm not familiar with automating mfaktc at all, so I don't know if those terms are right). I don't know if this has been considered, but it would be a nice little feature.

LaurV 2015-09-18 03:43

[QUOTE=chalsall;410473]But ideally the sequence would be TF'ing to 75, then P-1, then LL.[/QUOTE]
Actually, I think the ideal sequence for an nvidia gpu would be "TF to 73, P-1, TF to 74, forget about 75" :razz: (well, I haven't switched to the new cards like the 9xx yet, which seem to be more efficient at TF, but at least this is the case for 580s and Titans, which are very efficient at LL, and I'm almost sure that is the case under 70M exponents). What you say may still be ok for an amd gpu, which can't do efficient LL.

Mark Rose 2015-09-18 04:21

[QUOTE=dragonbud20;410720]I was thinking about the predicament we had the past few days with the lack of TF power, and I thought of something that might be useful for figuring out what type of work to do. Would it be possible to have some sort of indicator on the assignment page, or anywhere on the GPU72 site really, that showed when TF or DC needed more work done, or when we don't have enough buffer for one or the other? I do all my assignments manually, so I know it could help people who do the same, although I'm not sure what could be done for people who get all of their work using MISFIT or spiders or whatever (I'm not familiar with automating mfaktc at all, so I don't know if those terms are right). I don't know if this has been considered, but it would be a nice little feature.[/QUOTE]

DCTF isn't needed for another year or so. But some of us want to finish it all right now, and at our current rate that will be done in about eight months.

LLTF is needed. There are two main "waves" in the LLTF exponents: one for being released for a full LL test, the other for P-1 factoring. Ideally, for current exponents, everything should be factored to 75 bits. This is largely happening before exponents are released for a full LL, but not before P-1. Ideally it would be done before both, but it's more important that 75 bits be reached before LL, so P-1 gets released early. If you get exponents with "Let GPU72 Decide", it will take care of handing out the most important work first. If you pick "What Makes Sense", a bit level of 74 or 75 is the most helpful.

LaurV 2015-09-18 04:55

[QUOTE=dragonbud20;410720]Would it be possible to have some sort of indicator on the assignment page or anywhere really on the gpu72 site that indicated when TF or DC needed more work done or we don't have enough buffer for one or the other?[/QUOTE]
This "feature" exists already for automatic assignments; it is called "What Makes Sense" or "Let GPU72 Decide".
MISFIT is bliss, unless you use linux, and even on linux there is a good script to fetch exponents from gpu72, so why do "manual" assignments?

dragonbud20 2015-09-18 05:20

:grin:I really should figure out how to get MISFIT working. From what everyone says, I'm guessing it would make keeping my cards fed a lot easier. Honestly, I've downloaded it, but I'm not really sure how to get it working.

EDIT: Spent a few minutes cracking at it, and I think I have all my GPU work-fetching and result-submitting automated now. Thanks, guys, for pushing me up off my ass for a bit.

Walt 2015-09-19 02:56

Chalsall,

You have asked people to move to LLTF over DCTF to help keep ahead of the P-1 wave, but the Chris Halsall account is doing DCTF. Are you doing some of the triple checking that has been discussed and that's why you aren't taking your own advice? :smile:

-Walt

LaurV 2015-09-19 03:34

Haha, that is a good one, and I would like to back you up and hit this guy in the nose :razz: but actually he took the advice; if you look at his assignment queue (first table), he has 3100++ LLTF expos and only 14 DCTF. What you see now is him liquidating the DCTF queue. I mean, if he is like me, we queue work for at least 3-4-5 days or a week.

chalsall 2015-09-19 17:48

[QUOTE=Walt;410817]You have asked people to move to LLTF over DCTF to help keep ahead of the P-1 wave, but the Chris Halsall account is doing DCTF. Are you doing some of the triple checking that has been discussed and that's why you aren't taking your own advice? :smile:[/QUOTE]

LOL... I was wondering when someone was going to call me out on that... :smile:

The reason is (ironically) that I don't have a running GPU myself (well, actually that's not entirely true: I have a 580 kindly donated by Jerry and a 480 I purchased here in Bim, but I use those for private, sensitive computer-vision compute; electricity is simply too expensive here to run such hardware locally unless there's a privacy reason to do so).

The little (~500 GHd/d) I contribute is via a rented EC2 "cg1.4xlarge" spot instance which I constrain to be at no more than $0.145 an hour -- rather unpredictable as to its availability. For the most part I tend to use this to "clean up" certain ranges; make things "round", finish work abandoned by others, etc.

Ever heard the saying, "Do as I say, not as I do"? I hope this doesn't come across as disingenuous.

Walt 2015-09-19 18:39

Thanks. I just wondered. It seemed strange. I assumed since you have a good view of the big picture, you were doing specific tasks that needed to be done. Sounds like that's the case.

-Walt

VBCurtis 2015-09-19 20:51

[QUOTE=chalsall;410855]LOL... I was wondering when someone was going to call me out on that...

Ever heard the saying "Do what I say, not what I do."? I hope this doesn't come across as disingenuous.[/QUOTE]

Your labor and brainpower are vastly more valuable than your silicon or electronpower. Thank you!

chalsall 2015-09-19 21:32

[QUOTE=VBCurtis;410864]Your labor and brainpower are vastly more valuable than your silicon or electronpower. Thank you![/QUOTE]

You are very kind. Thank you.

airsquirrels 2015-09-20 20:10

I have betrayed AMD and added an NVIDIA rig to the mix with an array of cards (a 690, two 980s and a Titan X). Eventually these will be doing CudaLucas LL work, but I have them contributing to LLTF for now. Should add some extra boost.

flashjh 2015-09-20 22:26

[QUOTE=airsquirrels;410908]I have betrayed AMD and added an NVIDIA Rig to the mix with an array of cards (690, two 980s and a Titan X). Eventually these will be doing CudaLucas LL work but I have them also contributed to LLTF for now. Should add some extra boost.[/QUOTE]

What system can host that many cards?

James Heinrich 2015-09-20 23:58

[QUOTE=flashjh;410918]What system can host that many cards?[/QUOTE]There's a fair number of motherboards that will support quad video cards. [URL="http://www.geforce.com/whats-new/guides/how-to-build-a-quad-sli-system#2"]This page[/URL] has a more restrictive list of boards that support 4-way SLI (as in 4 identical cards working together) as opposed to just getting 4 random cards working in the same system. Major considerations include a case that supports (at least) 8 slots (rather than the usual 7), and a very hefty power supply.

flashjh 2015-09-21 00:48

[QUOTE=James Heinrich;410926]There's a fair number of motherboards that will support quad video cards. [URL="http://www.geforce.com/whats-new/guides/how-to-build-a-quad-sli-system#2"]This page[/URL] has a more restrictive list of boards that support 4-way SLI (as in 4 identical cards working together) as opposed to just getting 4 random cards working in the same system. Major considerations include a case that supports (at least) 8 slots (rather than the usual 7), and a very hefty power supply.[/QUOTE]

Indeed I've done it before, but cooling is the biggest problem. Just wondering what the configuration is, but the added TF or LL capability is surely a good thing.

airsquirrels 2015-09-21 00:51

For 8 cards, I use SuperMicro's dual Xeon E5 v3 GPU server systems, which use 16x PCIe lane switches.

For this system, and most 4-card systems, I use i7-5930Ks (40 PCIe lanes, more cost-effective than the 8-core 5960X). This particular system is a Gigabyte X99-UD3P, which gives me four PCIe 3.0 x8 slots. For most work this is OK, but I'm still researching why mfaktc/o slow down with fewer lanes despite not needing much data-transfer bandwidth. I use a case that has space below the 4th slot for a double-slot card, but in reality many cards only need one slot once they're on water blocks. Corsair AX 1200 or 1500 power supplies round out the requirements.

I think it would be impossible to get decent performance on air, but with liquid cooling, card density is less of an issue. The random cards come from what I have been able to pick up cheap on clearance from the local MicroCenter.

LaurV 2015-09-21 06:39

[QUOTE=airsquirrels;410908]I have betrayed AMD and added an NVIDIA Rig to the mix with an array of cards (690, two 980s and a Titan X). Eventually these will be doing CudaLucas LL work but I have them also contributed to LLTF for now. Should add some extra boost.[/QUOTE]
Ha! Now you have 5 GPUs (because the 690 has two) and you can try CRAPLLA, or whatever it was called! :razz: :razz:

flashjh 2015-09-21 12:37

[QUOTE=LaurV;410954]Ha! Now you have 5 GPUs (because 690 has two) and you can try CRAPLLA, or how it was called! :razz: :razz:[/QUOTE]

:smile: :tu:

cuBerBruce 2015-09-21 19:14

User dh1 has found a factor! Hooray!

This means I have regained my status as the top TFer on GPU72, excluding everyone who has found at least one factor. :smile:

flashjh 2015-09-21 19:17

[QUOTE=cuBerBruce;410982]User dh1 has found a factor! Hooray!

This means I have regained my status as the top TFer on GPU72, excluding everyone who has found at least one factor. :smile:[/QUOTE]

Congrats! Thanks for the help :smile: (^^ that's good stuff right there)

LaurV 2015-09-22 10:20

You only have ~150 exponents, when you are expected to find a factor roughly every ~73 attempts at this bit level. You are still "in the distribution". Be patient; if your hardware is not in the weeds (i.e. giving random results), then the factors will come... will come...
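The "still in the distribution" point is easy to check with a quick sketch, assuming (as a rough model) that each exponent at this bit level independently yields a factor with probability ~1/73:

```python
# Chance of a dry streak: probability of finding no factor in n
# consecutive TF attempts, each with independent success probability p.
def dry_streak_probability(n, p=1/73):
    return (1 - p) ** n

# ~150 attempts with no factor happens about 1 time in 8 -- unlucky, not broken.
print(dry_streak_probability(150))
```

So a 0-for-150 start is well within normal luck under this model.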

cuBerBruce 2015-09-22 12:33

[QUOTE=LaurV;411036]You only have ~150 expos, when it is supposed that you find a factor every ~73. You are still "in the distribution". Be patient, if your hardware is not in the weeds (i.e. giving random results), then the factors will come... will come...[/QUOTE]

A few days ago, I had my teeny tiny GPU do [url=http://www.mersenne.ca/exponent/73380271]M73380271[/url] as a sanity check. I wasted 5 hours, but the known factor for that exponent was found, so I believe the hardware is fine. (And of course, I had run the mfaktc self-test before doing any assignments.) I think Chris just keeps giving me bad assignments.

Madpoo 2015-09-22 15:29

[QUOTE=cuBerBruce;411041]I think Chris just keeps giving me bad assignments.[/QUOTE]

Well, you know he's hogging the good ones for himself, right? :smile:

cuBerBruce 2015-09-25 19:19

I'm now 0 for 200 in factoring assignments. 188 TF assignments (177 from GPU72, 11 from Primenet), 12 P-1 assignments.

petrw1 2015-09-25 19:34

[QUOTE=cuBerBruce;411273]I'm now 0 for 200 in factoring assignments. 188 TF assignments (177 from GPU72, 11 from Primenet), 12 P-1 assignments.[/QUOTE]

I've gone as many as 430 TF assignments between factors in the current DCTF and LLTF ranges ... and I'm sure others can top this.

Mark Rose 2015-09-25 20:28

I've got a streak of 383 no factors found results in my last 1000 assignments.

cuBerBruce 2015-09-25 20:41

Just in case I wasn't clear, I'm referring to a streak before finding my [b]first[/b] factor, not merely a streak [b]between[/b] factors. Still, it wouldn't surprise me if others have had longer streaks before finding their first factor.

chalsall 2015-09-25 21:15

[QUOTE=cuBerBruce;411277]Still, it wouldn't surprise me if others have had longer streaks before finding their first factor.[/QUOTE]

Statistics has no memory.

Patience grasshopper.
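The "no memory" remark is the memorylessness of a geometric process: with independent trials, a long dry streak says nothing about the next attempt. A minimal sketch, reusing the rough 1-in-73 per-exponent factor probability from earlier in the thread:

```python
# Memorylessness: the conditional probability of success on the next
# trial, given a dry streak of any length, is still just p.
def p_next_given_streak(streak_len, p=1/73):
    p_streak = (1 - p) ** streak_len     # no factor in streak_len trials
    p_streak_then_hit = p_streak * p     # ...then a factor on the next one
    return p_streak_then_hit / p_streak  # algebraically, just p again

# Same odds whether you are 0-for-10 or 0-for-200.
print(p_next_given_streak(10), p_next_given_streak(200))
```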

cuBerBruce 2015-09-28 15:06

[QUOTE=chalsall;411280]Patience grasshopper.[/QUOTE]

Well, my waiting is over. In the first TF attempt that I started after the supermoon total eclipse, I finally found my first factor for GIMPS. :big grin:

:faf:

So when is the next total eclipse?

James Heinrich 2015-09-28 15:14

[QUOTE=cuBerBruce;411472]So when is the next total eclipse?[/QUOTE][url=https://en.wikipedia.org/wiki/January_2018_lunar_eclipse]2018-Jan-31[/url]. Although there are some penumbral eclipses and a partial eclipse [url=https://en.wikipedia.org/wiki/List_of_21st-century_lunar_eclipses]before then[/url].

Uncwilly 2015-09-28 16:02

[QUOTE=cuBerBruce;411472]So when is the next total eclipse?[/QUOTE]
[QUOTE=James Heinrich;411473][url=https://en.wikipedia.org/wiki/January_2018_lunar_eclipse]2018-Jan-31[/url]. Although there are some penumbral and a partial eclipse [url=https://en.wikipedia.org/wiki/List_of_21st-century_lunar_eclipses]before then[/url].[/QUOTE]

Maybe this [URL="http://www.eclipse2017.org/2017/path_through_the_US.htm"]one (2017)[/URL] will lead to the next prime.

science_man_88 2015-09-28 16:13

[QUOTE=Uncwilly;411480]Maybe this [URL="http://www.eclipse2017.org/2017/path_through_the_US.htm"]one (2017)[/URL] will lead to the next prime.[/QUOTE]

Maybe, but: [url]http://heavy.com/news/2015/09/next-lunar-eclipse-supermoon-super-blood-moon-schedule-time-date-2018-2033-prediction-nasa-scientists-north-america-calendar/[/url] suggests the next time the circumstances of this particular combination come up is 2033.

chalsall 2015-09-28 16:39

[QUOTE=cuBerBruce;411472]Well, my waiting is over. In the first TF attempt that I started after the supermoon total eclipse, I finally found my first factor for GIMPS. :big grin:[/QUOTE]

So the "big event" during the eclipse took place. :wink:

It is rarely a bad idea to keep non-perishables on hand.

Uncwilly 2015-09-28 17:49

[QUOTE=science_man_88;411481]maybe but: [url]http://heavy.com/news/2015/09/next-lunar-eclipse-supermoon-super-blood-moon-schedule-time-date-2018-2033-prediction-nasa-scientists-north-america-calendar/[/url] suggest the next time the circumstances of this particular combination comes up is 2033.[/QUOTE]I watched the last one.

Madpoo 2015-09-28 19:46

[QUOTE=chalsall;411487]So the "big event" during the eclipse took place. :wink:

It is rarely a bad idea to keep non-perishables on hand.[/QUOTE]

I guess the people predicting some sort of event during the lunar eclipse must have had something far more spectacular in mind than cuberbruce finally factoring something. :smile: They should have set their sights much *much* lower.

kladner 2015-09-28 22:20

[QUOTE=cuBerBruce;411472]Well, my waiting is over. In the first TF attempt that I started after the supermoon total eclipse, I finally found my first factor for GIMPS. :big grin:

So when is the next total eclipse?[/QUOTE]

Happy congratulations are in order! May you see MANY MORE, and soon!

cuBerBruce 2015-09-29 02:36

[QUOTE=kladner;411518]Happy congratulations are in order! May you see MANY MORE, and soon![/QUOTE]

Thanks, Carnac(?). When you said "soon," it seems you weren't kidding! My 2nd factor only took 3 attempts. :grin:

kladner 2015-09-29 03:11

[QUOTE=cuBerBruce;411538]Thanks, Carnac(?). When you said "soon," it seems you weren't kidding! My 2nd factor only took 3 attempts. :grin:[/QUOTE]
Congratulations and Felicitations! :party:
Maybe it [U]is[/U] Super Moon Eclipse fallout. Today has been a two-factor day (so far!) for me, as well. This after scattered single factors every three or four weeks for months. Perhaps it was just a good wish which rewarded us both. :smile:

Kieren

VictordeHolland 2015-10-07 00:30

According to the work distribution ([URL]http://www.mersenne.org/primenet/[/URL]), there are now 2000+ P-1 assignments assigned in the 80-81M range (most of them not even TFed beyond 71 bits yet). Maybe GPU72 should hand back more assignments to Primenet for P-1, even if they are 'only' TFed to 74 bits?

chalsall 2015-10-07 02:18

[QUOTE=VictordeHolland;412109]According to the work distribution ( [URL]http://www.mersenne.org/primenet/[/URL][B] ) [/B]There are now 2000+ P-1 assignments assigned in the 80-81M range (most of them are not even TFed beyond 71 bits yet), maybe GPU72 should hand back more assignments to Primenet for P-1 even if they are 'only' TFed to 74bits ??[/QUOTE]

Yup. Have you looked at who "owns" those assignments above 80M?

kladner 2015-10-07 02:30

[QUOTE=chalsall;412116]Yup. Have you looked at who "own" those assignments above 80M?[/QUOTE]
With all sincerity, please explain.

chalsall 2015-10-07 02:45

[QUOTE=kladner;412117]With all sincerity, please explain.[/QUOTE]

I've tried to manage TF'ing to optimal levels. But then, suddenly, and randomly, there is a great demand for P-1 candidates by only a few people.

My spider only pulls its "ripcord" every five minutes or so, to lessen the load on Primenet during its observation.

As I've said before, it's not the end of the day if a candidate is released to P-1'ers before optimal TF'ing.

But, it might be nice if the P-1'ers took candidates in small batches. I hope that makes sense. And I seriously need to get out more....

chalsall 2015-10-07 02:53

[QUOTE=kladner;412117]With all sincerity, please explain.[/QUOTE]

A follow up.

Just look at [URL="http://www.mersenne.org/assignments/?exp_lo=80000000&exp_hi=90000000"]this query[/URL] to get an idea what we're dealing with.

No disingenuous assumption intended, but a stupid AI has a bit of a problem dealing with that, given a scarce resource, on a five-minute timescale.

LaurV 2015-10-07 03:55

What do you mean by "small batches"? Say I have a computer with 4 GPUs doing P-1, and I queue my work for a week, or say 10 days (I don't do P-1 right now, nor do I have 4 GPUs in a single rig anymore, but this is just an example, and I sometimes queue TF work for that many days, so it is not exaggerated). With the current ranges and bounds, I would need about 50 minutes for stage 1 and 40 for stage 2; that is an hour and a half for one assignment.

I know it sounds like "a hen and a half lays an egg and a half in a day and a half; how many eggs do nine hens lay in nine days?", but the math is simple: I can do 24 hours / 1.5 hours per exponent = 16 exponents per day, times 4 GPUs, times 10 days, which makes 640 exponents for just one poor little system :smile:... How many assignments would a guy get if he has 5 systems like that? (And from his TF results, he has!)
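The hens-and-eggs arithmetic above, written out (all figures are LaurV's estimates, not measured timings):

```python
# Queue sizing for a hypothetical 4-GPU P-1 system, using the
# estimates above: ~50 min stage 1 + ~40 min stage 2 per exponent.
minutes_per_expo = 50 + 40                           # 1.5 hours per assignment
expos_per_gpu_per_day = 24 * 60 / minutes_per_expo   # = 16
queue_size = expos_per_gpu_per_day * 4 * 10          # 4 GPUs, 10-day queue
print(int(queue_size))                               # 640 exponents for one system
```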

I think let them be; nobody blames you for it. He is finding his share of factors, no foul play detected.

LaurV 2015-10-07 13:05

There might be another reason for that, too. Remember my computer called "pinch", which used to get LLDC assignments over the GPU72 proxy? Since you started feeding me P-1 assignments instead, I quit and started getting PrimeNet assignments directly. There were a couple of posts about this in the past, in this very thread. PrimeNet was giving me the right assignments I asked for, i.e. DC. Today, after this discussion, I enabled the proxy again. Note that the computer is set to get First Time Tests. Guess what kind of assignments I got? Of course, you guessed right: I just got 22 Pfactor lines in the last hour, and they keep coming... (I will do them, don't worry; I transferred them to one GTX 580 card.)

