mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU to 72 (https://www.mersenneforum.org/forumdisplay.php?f=95)
-   -   GPU to 72 status... (https://www.mersenneforum.org/showthread.php?t=16263)

James Heinrich 2013-01-08 23:41

[QUOTE=swl551;324090]74 is the new 72![/QUOTE]Starting at about 57M, I'd say that's true.

According to my chart:
46M-56M = 2[sup]73[/sup]
57M-72M = 2[sup]74[/sup]
73M-90?M = 2[sup]75[/sup]
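
The chart above maps wavefront ranges to target TF depths. As a quick sketch (my own illustration with the chart's ranges hard-coded; GPU72's real assignment logic is more involved than a lookup):

```python
def target_tf_bits(exponent_in_millions):
    """Suggested trial-factoring depth from the chart above (illustrative only)."""
    if 46 <= exponent_in_millions <= 56:
        return 73
    if 57 <= exponent_in_millions <= 72:
        return 74
    if 73 <= exponent_in_millions <= 90:
        return 75
    raise ValueError("exponent outside the charted 46M-90M window")

print(target_tf_bits(57))  # -> 74: "74 is the new 72" starts here
```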

swl551 2013-01-08 23:56

[QUOTE=James Heinrich;324092]Starting at about 57M, I'd say that's true.

According to my chart:
46M-56M = 2[sup]73[/sup]
57M-72M = 2[sup]74[/sup]
73M-90?M = 2[sup]75[/sup][/QUOTE]

Another item to consider: prior to 0.20, most people ran multiple instances of mfaktc on one card to max out the card. The aggregated throughput went up, but the time to factor each assignment went up too. With 0.20 you only run one instance, so factoring, say, "77M 73,74" might take around 80 minutes instead of around 140 minutes with 0.19 running 4 instances.

When you look at the cut-off for running LL tests, this reduction in processing time widens the gap, making higher factoring more viable.

(I'm sure you already know all this....)

James Heinrich 2013-01-09 00:40

[QUOTE=Prime95;324091]Question: Look at row 47M, the cyan color indicates we should TF to 2^73, but the 2LL column indicates the TF breakeven is 72.3 bits. Am I missing something?[/QUOTE]I have changed how the last-two-columns breakeven points are calculated and displayed. Do they make more sense now?

kracker 2013-01-09 00:50

I agree, it shouldn't be 72... it should be higher, maybe only a little (at least for mfaktc).

But for mfakto, they stay the same, for reasons easily known. :smile:

chalsall 2013-01-09 14:45

[QUOTE=kracker;324100]I agree, it shouldn't be 72... it should be higher, maybe only a little (at least for mfaktc).

But for mfakto, they stay the same, for reasons easily known. :smile:[/QUOTE]

Another thing to consider is that we currently are only [I]just[/I] keeping (slightly) ahead of the LL wavefront. I don't think it makes sense to change the "release level" until and unless we pull further ahead. And certainly we should not be pulling in candidates below 60M for further TFing.

Having said that, anyone who wants to can request candidates which are currently held for P-1'ing and take them up to 74. Or simply pledge to take regular TF candidates to 74 instead of 73.

James Heinrich 2013-01-09 14:54

[QUOTE=chalsall;324146]Another thiing to consider is that we currently are only [I]just[/I] keeping (slightly) ahead of the LL wafe-front.[/QUOTE]It's easy to forget that for every exponent you take 2[sup]73[/sup]-2[sup]74[/sup] you could take [I]two[/I] from 2[sup]72[/sup]-2[sup]73[/sup] or [I]four[/I] from 2[sup]71[/sup]-2[sup]72[/sup]. Extra TF is nice and all that, but not if we fall behind the wavefront.
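
Each extra bit level roughly doubles the number of candidate factors to sieve, which is where the two-for-one and four-for-one trade-off comes from. A minimal sketch (units are arbitrary; real mfaktc timings also vary with the exponent):

```python
def relative_tf_cost(from_bits, to_bits):
    """Relative work to TF one exponent from 2^from_bits up to 2^to_bits,
    in units of a single 2^71 -> 2^72 pass (cost roughly doubles per bit)."""
    return sum(2 ** (b - 71) for b in range(from_bits, to_bits))

print(relative_tf_cost(73, 74))  # one 73->74 pass costs 4 units...
print(relative_tf_cost(71, 72))  # ...the same as four 71->72 passes at 1 unit each
```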

ixfd64 2013-01-09 17:57

I've noticed that most assignments in the 60M range are released after they're factored to 73 bits, yet others are still reserved for trial factoring after being factored to that level. Is there any reason for this inconsistency?

flashjh 2013-01-09 18:05

[QUOTE=ixfd64;324173]Is there any reason for this inconsistency?[/QUOTE]

The ones held back are for P-1; the system releases some because only about 1000 are kept at a time.

chalsall 2013-01-09 18:08

[QUOTE=ixfd64;324173]I've noticed that most assignments in the 60M range are released after they're factored to 73 bits, yet others are still reserved for trial factoring after being factored to that level. Is there any reason for this inconsistency?[/QUOTE]

The system keeps a cache of 1000 candidates TFed to 63 (or higher) for P-1 assignment.

davieddy 2013-01-09 19:06

[QUOTE=chalsall;324146]Another thing to consider is that we currently are only [I]just[/I] keeping (slightly) ahead of the LL wavefront. I don't think it makes sense to change the "release level" until and unless we pull further ahead. And certainly we should not be pulling in candidates below 60M for further TFing.
[/QUOTE]
That's the most sensible suggestion I've heard from you (which isn't saying much).

Now how about fast reliable LL testers allocated 60M expos, leaving the tail to new/slower participants?

D

c10ck3r 2013-01-09 20:17

[QUOTE=chalsall;324175]The system keeps a cache of 1000 candidates TFed to 63 (or higher) for P-1 assignment.[/QUOTE]
63?

chalsall 2013-01-09 21:14

[QUOTE=c10ck3r;324196]63?[/QUOTE]

s/63/73/ ....

chalsall 2013-01-10 00:16

[QUOTE=davieddy;324187]Now how about fast reliable LL testers allocated 60M expos, leaving the tail to new/[B][I][U]slower[/U][/I][/B] participants?[/QUOTE]

Like you, pray tell?

ET_ 2013-01-10 08:46

[QUOTE=davieddy;324187]That's the most sensible suggestion I've heard from you (which isn't saying much).

Now how about fast reliable LL testers allocated 60M expos, leaving the tail to new/slower participants?

D[/QUOTE]

I've been working on it since June 2012...

Luigi

Aramis Wyler 2013-01-10 22:56

Visualizing the Wavefronts.
 
While I've been contributing to Prime95 for more years than I can remember, I've been mostly oblivious to which numbers are assigned to me, where the current wavefront is, or whether potential LL numbers have been sufficiently pre-factored.

While I still run first time LL tests on my cpus, I've been trying lately to do trial factoring on my GPU. I was getting them straight from prime95 until I realised that being a part of GPU to 72 didn't mean joining a prime95 team, per se. Since then (the other day) I've been getting my numbers from them.

Still, I'm having trouble working out where the work is most needed. I queued up a week or so of TF'ing 60M-range exponents to 73, but despite using the visualization tool I'm having a hard time following exactly where the wavefront is, and whether I should be factoring a larger set to 71, then 72, then maybe 73, rather than factoring a number up to 73 and then moving on to the next.

Also, it seems like new 100M-digit candidates (the next prize) aren't being factored in the GPU to 72 project, or if they are, I can't parse it out.

Basically, I'm looking for help interpreting the relevant graphs: where the LL wavefront is, how fast it's moving, and where the TF wavefront is and how fast it's moving. I suspect there may be multiple LL wavefronts, one in the 60M range (?) and one in the 100M range.

chalsall 2013-01-10 23:37

[QUOTE=Aramis Wyler;324322]Still, I'm having trouble working out where the work is most needed. I queued up a week or so of TF'ing 60M-range exponents to 73, but despite using the visualization tool I'm having a hard time following exactly where the wavefront is, and whether I should be factoring a larger set to 71, then 72, then maybe 73, rather than factoring a number up to 73 and then moving on to the next.[/QUOTE]

The most important work, at the moment, is trial factoring (TFing) to 73 in the 61M range. If you (or anyone) doesn't take the candidates to 73 there, then someone else will have to.

[QUOTE=Aramis Wyler;324322]Also it seems like new 100m digit primes (the next prize) aren't being factored in the gpu to 72 project, or if they are I can't parse it out.[/QUOTE]

You are correct -- candidates above 65M are not available from the GPU72 project currently. The "100m" factoring project you are probably thinking about is coordinated by Uncwilly [URL="http://www.mersenneforum.org/forumdisplay.php?f=46"]here[/URL].

[QUOTE=Aramis Wyler;324322]Basically, I'm looking for help interpreting the relevant graphs: where the LL wavefront is, how fast it's moving, and where the TF wavefront is and how fast it's moving. I suspect there may be multiple LL wavefronts, one in the 60M range (?) and one in the 100M range.[/QUOTE]

It's a little complicated... But basically there's really only one LL wavefront -- currently at just above 60M. [URL="http://www.mersenne.info/trial_factored_tabular_delta_7/1/0/"]This chart[/URL], vs [URL="http://www.mersenne.info/exponent_status_tabular_delta_7/1/0/"]this[/URL] will (hopefully) help you.

(Yes, the next prize is in the 332M range, but very few people expend the months or years required to do a single LL test up there -- our goal is to find the next biggest known prime, not to win money.)

Aramis Wyler 2013-01-10 23:46

Thank you very much!

Prime95 2013-01-10 23:51

[QUOTE=Aramis Wyler;324322]Still, I'm having trouble working out where the work most needs done.[/QUOTE]

The short answer (chalsall will correct me if I'm wrong): It doesn't matter -- don't sweat it.

Ideally, all 60M exponents will be TF'ed to at least 2^73 by a GPU. GPU72 will make sure this happens. If you only TF to 2^71, then GPU72 will reassign the number to someone else to take it the rest of the way. Either way (doing it in stages or all at once), the work is needed and will get done.

If I were you, I'd either a) go straight to 2^73 to reduce the headache of getting assignments and reporting results, or b) only go to the next bit level to maximize the "thrill" of finding more factors per unit of GPU time. Your personal preferences will dictate your choice.

The 100M exponents are mostly for the foolhardy. I'd only TF a 100M exponent if requested by a user with a respected reputation.

Uncwilly 2013-01-11 02:53

[QUOTE=Prime95;324330]The 100M exponents are mostly for the foolhardy. I'd only TF a 100M exponent if requested by a user with a respected reputation.[/QUOTE]Thanks for that ringing endorsement.:sirrobin:

I think that running LLs on 100M exponents is not wise ATM. I am trying to make sure that those who do start them have exponents with enough TF done to ensure they are not squandering their effort (when the number could have been factored reasonably).

lycorn 2013-01-12 22:21

GPUto72 has just found its 10,000th factor!
Congrats to all involved.

chalsall 2013-01-14 21:02

Should we start taking DCTF to 71 instead of 70?
 
Just putting this out there for discussion...

A user e-mailed me asking why we're not taking DCTF to 77 instead of 70. I pointed them to James' analysis and (hopefully) explained that even with the new GPU sieving it would only make sense to go to 71 in the current range we're working. And, also, that the LLTF work is really the most important at the moment.

However, it raises the question: for those who are doing DCTF work (where we're currently over 500 days ahead of the wave), should we bump the release level to 71, and perhaps bring back in some candidates in the 30M, 31M and 32M regions to go from 70 to 71?

While I personally don't think this is the best thing for GIMPS, I also think that people should be able to do what they want to do.

Is this wanted by anyone?

If so, I'd suggest we simply increase the current release level for DCTF, bring in a few candidates at a time at the top of 30M for processing, and work down until we meet the wavefront, then start working upwards from 31M.

Thoughts?

Prime95 2013-01-14 22:09

[QUOTE=chalsall;324713] should we bump the release level to 71, and perhaps bring back in some candidates in the 30M, 31M and 32M regions to go from 70 to 71?[/QUOTE]

No. James' table improperly estimates the DCTF crossover. Since these exponents have already had P-1 done, TF to 2^71 will find fewer factors than LLTF. James should be able to compute a "proper" crossover based on finding approximately 1 factor per 100 DCTF exponents (GPU72 should be able to refine this approximation by calculating how many factors are being found DCTFing to 2^70).
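
For context, the usual GIMPS heuristic is that a Mersenne number has roughly a 1/b chance of having a factor between 2^(b-1) and 2^b. A hedged sketch of the estimate (the 1/b rule is standard; the P-1 retention factor is my assumption, standing in for the factors P-1 has already removed):

```python
def expected_dctf_factors(n_exponents, bit_level, p1_retention=1.0):
    """Expected factors from TFing n_exponents one bit level deeper.
    Uses the ~1/bit_level heuristic; p1_retention < 1 models factors
    already found by prior P-1 (its value is an assumption, not measured)."""
    return n_exponents * p1_retention / bit_level

# Without P-1, 100 exponents taken through 2^71 would yield ~1.4 factors;
# with P-1 already done the real yield is lower, hence the ~1-in-100 figure.
print(round(expected_dctf_factors(100, 71), 2))
```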

chalsall 2013-01-14 22:15

[QUOTE=Prime95;324719]Since these exponents have already had P-1 done, TF to 2^71 will find fewer factors than LLTF.[/QUOTE]

Very good point. Supported by [URL="https://www.gpu72.com/reports/factor_percentage/"]the statistics[/URL].

ckdo 2013-01-14 22:39

As stated previously, I advocate taking the remains of 30M-32M to 2^70 (even though I'll continue DCTF-69 for the time being).

While we are here, gpu72.com's SSL certificate apparently doesn't cover the language specific subdomains. :no:

chalsall 2013-01-14 23:07

[QUOTE=ckdo;324724]While we are here, gpu72.com's SSL certificate apparently doesn't cover the language specific subdomains. :no:[/QUOTE]

Yes, I know. And I warned people about this when I first enabled SSL. I'm using a free "Class 1" cert from StartSSL which is only for a single domain name on a single IP.

I don't feel like spending the $6 a month for six different IPs, or the $60 a year for a Class 2 cert, which would be required. I figure the $50 a month I already spend on the server is enough... :wink:

But I also mentioned before that when accessing the different language subdomains over SSL, while most browsers will present a warning, all traffic will still be encrypted if you say "I know what I'm doing".

bcp19 2013-01-15 04:44

[QUOTE=chalsall;324713]Just putting this out there for discussion...

A user e-mailed me asking why we're not taking DCTF to 77 instead of 70. I pointed them to James' analysis and (hopefully) explained that even with the new GPU sieving it would only make sense to go to 71 in the current range we're working. And, also, that the LLTF work is really the most important at the moment.

However, it raises the question: for those who are doing DCTF work (where we're currently over 500 days ahead of the wave), should we bump the release level to 71, and perhaps bring back in some candidates in the 30M, 31M and 32M regions to go from 70 to 71?

While I personally don't think this is the best thing for GIMPS, I also think that people should be able to do what they want to do.

Is this wanted by anyone?

If so, I'd suggest we simply increase the current release level for DCTF, bring in a few candidates at a time at the top of 30M for processing, and work down until we meet the wavefront, then start working upwards from 31M.

Thoughts?[/QUOTE]
Personally, I think we are in a grey area here since CPU usage is no longer a factor... I had switched my systems around and wanted to make sure there were no 'lost' exponents from the move, so I only added DCTF work to my GPUs until I was sure all the old LLTFs were completed. At 32M, the 480 was pumping out approx 104 DCTF a day. CUDALucas ("Culu") data shows it would take ~37 hours to run a 35M exp on a GTX 480; I figure this means ~34 hours for a 32M exp. That would equal around 145 DCTF per Culu run, which starts to get borderline for bumping up a bit level.

Using the same calculations, my 560 would take ~63 hours for that 35M exp and does ~60 32M exp per day. The math works out to roughly 57.5 hours for a 32M exp, or 144 exponents per Culu run. Looks like it works out basically the same.

So, with the new .20 it looks like 33M might be the new rollover to 71 bits on GPUs, and definitely at 34M.

On the other end of things, a 61M exp would take ~120 Culu hours (240 for the 2 LLs) versus 1 hr 5 min for 2^72-2^73, so technically 2^74 could be done on those, IF we had more GPUs running to keep ahead of the wavefront.
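
bcp19's breakeven arithmetic can be sketched as follows. This is my own framing, not an official GPU72 formula: one more DCTF bit level pays off when the expected double-check time saved per day of TFing exceeds one day. The P-1 retention factor is an assumed guess, since these exponents have already had P-1 done.

```python
def dc_days_saved_per_tf_day(tf_per_day, dc_hours, bit_level, p1_retention=0.65):
    """Expected days of double-check LL work saved per day spent taking
    tf_per_day exponents through bit_level. Each factor found saves one
    DC test of dc_hours. p1_retention (assumed) discounts factors that
    P-1 would already have found. Values > 1 favor the deeper TF."""
    factors_per_day = tf_per_day * p1_retention / bit_level
    return factors_per_day * dc_hours / 24

# bcp19's GTX 480 at 32M: ~104 DCTF/day, a 32M DC at ~34 hours
print(round(dc_days_saved_per_tf_day(104, 34, 71), 2))  # ~1.35: just over the line
```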

chalsall 2013-01-15 13:50

[QUOTE=bcp19;324773]So, with the new .20 it looks like 33M might be the new rollover to 71 bits on GPUs, and definitely at 34M.[/QUOTE]

Thanks for that information. Interesting.

So, a question directly to you (as our largest DCTF producer by far)... Do you want to start taking 33Ms up to 71?

ET_ 2013-01-15 13:53

[QUOTE=chalsall;324799]Thanks for that information. Interesting.

So, a question directly to you (as our largest DCTF producer by far)... Do you want to start taking 33Ms up to 71?[/QUOTE]

If it may be of any use for your decision, I am taking a small batch of DCTF for every big batch of LLTF...

Luigi :rolleyes:

chalsall 2013-01-15 13:57

[QUOTE=ET_;324800]If it may be of any use for your decision, I am taking a small batch of DCTF for every big batch of LLTF...[/QUOTE]

My decision is largely based on what people want to do. We have [I][U]lots[/U][/I] of lead-time on the DC wavefront. Would you like to take 33Ms to 71 as well?

ET_ 2013-01-15 15:54

[QUOTE=chalsall;324801]My decision is largely based on what people want to do. We have [I][U]lots[/U][/I] of lead-time on the DC wavefront. Would you like to take 33Ms to 71 as well?[/QUOTE]

Well, I usually take what makes sense... (I feel like playing tennis...)

Luigi

chalsall 2013-01-15 16:10

[QUOTE=ET_;324807]Well, I usually take what makes sense... (I feel like playing tennis...)[/QUOTE]

Well... What makes sense for the GIMPS project at the moment is for all GPU resources to be focused on LLTFing.

But... some like to do other things. And I subscribe to the GIMPS community's philosophy that anyone should be able to do with their own hardware, electricity and time whatever they want so long as it doesn't negatively impact the project nor other participants.

This is the reason for the question(s) above...

And, while I'm "talking", another thing possibly worth doing is taking the DC P-1 wavefront (high 45M to 50M) from the current 71 to 72. They haven't had P-1 done (by definition), so James' analysis holds as is there.

ET_ 2013-01-15 16:56

[QUOTE=chalsall;324808]Well... What makes sense for the GIMPS project at the moment is for all GPU resources to be focused on LLTFing.

But... some like to do other things. And I subscribe to the GIMPS community's philosophy that anyone should be able to do with their own hardware, electricity and time whatever they want so long as it doesn't negatively impact the project nor other participants.

This is the reason for the question(s) above...

And, while I'm "talking", another thing possibly worth doing is taking the DC P-1 wavefront (high 45M to 50M) from the current 71 to 72. They haven't had P-1 done (by definition), so James' analysis holds as is there.[/QUOTE]

Doing some P-1 too. :flex:

Luigi

chalsall 2013-01-15 17:07

[QUOTE=ET_;324810]Doing some P-1 too. :flex:[/QUOTE]

Excellent!!! :smile:

It can be argued that LL P-1'ing is currently more important than LLing (as long as enough memory is available and S2 is done).

If only we had a GPU P-1 program.... :wink:

firejuggler 2013-01-15 17:27

I find it strange that the TF assignments proposed in manual assignment are in the 79M-80M range while so much work has to be done before we reach those heights.

ET_ 2013-01-15 17:28

[QUOTE=chalsall;324811]Excellent!!! :smile:

It can be argued that LL P-1'ing is currently more important than LLing (as long as enough memory is available and S2 is done).

If only we had a GPU P-1 program.... :wink:[/QUOTE]

We have a GPU ECM program... maybe in the next months we'll have some extensions. :wink:

Luigi

chalsall 2013-01-15 17:34

[QUOTE=firejuggler;324812]I find it strange that the TF assignments proposed in manual assignment are in the 79M-80M range while so much work has to be done before we reach those heights.[/QUOTE]

You're talking Primenet assignments. Those usually end up going to CPUs, which almost never finish them.

chalsall 2013-01-15 17:34

[QUOTE=ET_;324813]We have a GPU ECM program... maybe in the next months we'll have some extensions. :wink:[/QUOTE]

Let us pray... :smile:

kladner 2013-01-15 17:34

[QUOTE=firejuggler;324812]I find it strange that the TF assignments proposed in manual assignment are in the 79M-80M range while so much work has to be done before we reach those heights.[/QUOTE]

Wow! It's true! And only doing 71-72, to boot.

firejuggler 2013-01-15 17:48

Yup, Chalsall. My bad, I shouldn't have posted this here. I just got an assignment in the 78.8M range, from 69 to 70 bits. Those take about eleven and a half minutes with mfaktc 0.20.
Taking one of the GPU to 72 assignments from 71 to 73 bits takes me 3 hours, that's 180 minutes.
So...
69 to 70: 11.5 minutes
70 to 71: 23 minutes
71 to 72: 46 minutes
72 to 73: 92 minutes
73 to 74: 184 minutes
Should I "push" those manual Primenet assignments to 74 bits or to 73 (which would take approximately the same amount of time)?

chalsall 2013-01-15 17:59

[QUOTE=firejuggler;324818]Should I "push" those manual Primenet assignments to 74 bits or to 73 (which would take approximately the same amount of time)?[/QUOTE]

Your choice. If you take them further you'll get the credit on Primenet.

Primenet still has many (CPU) clients asking for TF work. It hands them out as requested from much further above the wavefront, but the "close work" (some might understand the term "wet work") is being left to GPU72....

firejuggler 2013-01-15 18:26

cookie cutter, bleeding edge (work)?

chalsall 2013-01-15 18:32

[QUOTE=firejuggler;324822]cookie cutter, bleeding edge (work)?[/QUOTE]

The latter works better than the former.... :smile:

kracker 2013-01-15 19:14

[QUOTE=chalsall;324824]The latter works better than the former.... :smile:[/QUOTE]

Anyone got cookies? I'm hungry!

On a more serious note... IMHO 70 is enough on DCTF; I don't think going further is worth it, since you're only saving one "[SIZE=1]short[/SIZE]" test. But that's just my personal opinion....

chalsall 2013-01-15 19:25

[QUOTE=kracker;324833]... imho I think 70 is enough on DC TF, I don't think it's worth it, you're only saving one "[SIZE=1]short[/SIZE]" test, but just my personal opinion....[/QUOTE]

This is not a matter of opinion. This is a matter of fact.

The problem is we're not yet sure exactly where the evidence tells us the curves cross....

kracker 2013-01-15 19:39

[QUOTE=chalsall;324838]This is not a matter of opinion. This is a matter of fact.

The problem is we're not yet sure exactly where the evidence tells us the curves cross....[/QUOTE]

I see. :smile: But over 70 is, I believe, almost useless.

EDIT: Meh, stupid me, I just about said that in my previous post.

kladner 2013-01-15 19:40

[QUOTE=chalsall;324811]Excellent!!! :smile:

It can be argued that LL P-1'ing is currently more important than LLing (as long as enough memory is available and S2 is done).

If only we had a GPU P-1 program.... :wink:[/QUOTE]

+1! However, with 2 GPUs running mfaktc 0.20, I still need to do something with the CPU. Hence, I'm now running all six cores on P-1. With 4 HighMem workers I've seen Relative Primes drop to 432 at times, instead of 480. I still get B-S (Brent-Suyama) kicking in at the E=6 level.

This will all come to a screeching halt for ten days, starting in a day and a half. Thanks to the generosity of friends we are going to the Philippines. I have never been outside of North America before. (That includes pretty far down in Mexico, but Mexico is still North America, geologically.)

chalsall 2013-01-15 19:42

[QUOTE=kracker;324841]I see. :smile: but over 70 I believe, is almost useless.[/QUOTE]

Don't believe.

Know.

kracker 2013-01-15 19:49

[QUOTE=kladner;324842]
This will all come to a screeching halt for ten days, starting in a day and a half. Thanks to the generosity of friends we are going to the Philippines. I have never been outside of North America before. (That includes pretty far down in Mexico, but Mexico is still North America, geologically.)[/QUOTE]

Damn lucky you...

Hmmm... does Canada count when you're in the US? :missingteeth:

kladner 2013-01-15 19:52

[QUOTE=kracker;324845]
Hmmm... does Canada count when you're in the US? :missingteeth:[/QUOTE]

What? You mean the 51st state? :sirrobin:

chalsall 2013-01-15 19:53

[QUOTE=kracker;324845]Hmmm... does Canada count when you're in the US? :missingteeth:[/QUOTE]

No.

kracker 2013-01-15 20:04

[QUOTE=chalsall;324847]No.[/QUOTE]

Crap.

chalsall 2013-01-15 20:07

[QUOTE=kracker;324849]Crap.[/QUOTE]

Indeed....

kracker 2013-01-15 20:21

[QUOTE=chalsall;324850]Indeed....[/QUOTE]

Must be nice, living in Barbados... Hmm...

chalsall 2013-01-15 20:57

[QUOTE=kracker;324851]Must be nice, living in Barbados... Hmm...[/QUOTE]

Not so very much.

We've been asking for a Freedom of Information Act for years.

It's always promised, but somehow never delivered...

Since 1966....

Aramis Wyler 2013-01-15 21:28

[QUOTE=kladner;324842]+1! However, with 2 GPUs running mfaktc 0.20, I still need to do something with the CPU. Hence, I'm now running all six cores on P-1. With 4 HighMem workers I've seen Relative Primes drop to 432 at times, instead of 480. I still get B-S kicking in at the E=6 level.
[/QUOTE]

Damn, that's a lot of RAM. I only have 1 core doing P-1, but that 1 core uses up all of my memory. All the other cores do LL work, and the GPU does TF.

James Heinrich 2013-01-15 21:31

[QUOTE=Aramis Wyler;324853]Damn, that's a lot of RAM. I only have 1 core doing P-1, but that 1 core uses up all of my memory. All the other cores do LL work, and the GPU does TF.[/QUOTE]Current range of work should use up to about 12-13GB of RAM per stage2 exponent. So if you have a 6-core, 64GB system, 4x high-mem workers is a very reasonable setting.
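
James' sizing advice reduces to quick arithmetic: four high-mem stage 2 workers at ~13GB each fit comfortably in 64GB. A sketch (the per-worker figure is his; the headroom value and helper function are my illustration):

```python
def max_highmem_workers(total_ram_gb, per_worker_gb=13, headroom_gb=8):
    """How many stage 2 P-1 workers fit in RAM, leaving headroom for the OS
    and other workers (per_worker_gb from James' estimate; headroom assumed)."""
    return max(0, (total_ram_gb - headroom_gb) // per_worker_gb)

print(max_highmem_workers(64))  # -> 4, matching the suggested setting
```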

kladner 2013-01-15 21:40

[QUOTE=Aramis Wyler;324853]Damn, that's a lot of RAM. I only have 1 core doing P-1, but that 1 core uses up all of my memory. All the other cores do LL work, and the GPU does TF.[/QUOTE]

This system is an X6 1090T with 16GB RAM. It now seems it can handle 480 Rel. Primes with 4 HighMem workers out of 6. I have 12.5GB allowed for P95. It is set to pause if I run Adobe Bridge or Photoshop.

c10ck3r 2013-01-16 01:57

Wha...?
 
:tantrum:Strange DCTF results?
[URL]http://mersenne.info/trial_factored_tabular_delta_30/2/30000000/[/URL]

petrw1 2013-01-16 02:07

1 Attachment(s)
[QUOTE=kracker;324845]Damn lucky you...

Hmmm... does Canada count when you're in the US? :missingteeth:[/QUOTE]

Oh yeah!!! :razz:

LaurV 2013-01-16 03:31

[QUOTE=c10ck3r;324886]:tantrum:Strange DCTF results?
[URL]http://mersenne.info/trial_factored_tabular_delta_30/2/30000000/[/URL][/QUOTE]
In the left columns, if you click "tabular data" and play with the timeline, you will see that they come from exponents brought up from 67 bits, whose column is no longer on the "changes" table (why? no idea), and that confuses the calculation, since the totals have to "match" (sum of positives plus sum of negatives). It can therefore look like some 34M exponents have "disappeared" (a minus at 69 but no plus after it; they went nowhere). It seems the numbers are right but wrongly placed: a display-only problem.

bcp19 2013-01-16 04:28

[QUOTE=chalsall;324799]Thanks for that information. Interesting.

So, a question directly to you (as our largest DCTF producer by far)... Do you want to start taking 33Ms up to 71?[/QUOTE]
Having the 33M at 71 will work for me, as that is where I am currently working. Adding that bit level may also 'shorten' the DC wavefront as it were.

c10ck3r 2013-01-16 04:55

[QUOTE=LaurV;324893]In the left columns, if you click "tabular data" and play with the timeline, you will see that they come from exponents brought up from 67 bits, whose column is no longer on the "changes" table (why? no idea), and that confuses the calculation, since the totals have to "match" (sum of positives plus sum of negatives). It can therefore look like some 34M exponents have "disappeared" (a minus at 69 but no plus after it; they went nowhere). It seems the numbers are right but wrongly placed: a display-only problem.[/QUOTE]

I'm aware; I was just laughing at the supposed increase in unfactored exponents for the 30-40M range as a whole :razz:

chalsall 2013-01-16 14:31

[QUOTE=bcp19;324899]Having the 33M at 71 will work for me, as that is where I am currently working. Adding that bit level may also 'shorten' the DC wavefront as it were.[/QUOTE]

OK, great. I've adjusted the DCTF Get Assignment form, all the reports, and the release spider.

Also, if anyone wants to, I've also adjusted the "No P-1 Done" option to take the DC P-1 candidates from 71 to 72 (working from the top down) for anyone who's interested in doing that work.

flashjh 2013-01-16 19:05

Maybe this should be in the Unhappy Me Thread, but anyway.

Some not so good news is that my father-in-law is not doing well, but we knew this was coming for some time.

Unfortunately, we are the only ones capable of helping him now. So, after a good run of ~13 months on G72, I'm going to have to sell off the systems and relocate. It could be as soon as 2 weeks or as long as a couple of months. I'll keep the systems up and running for as long as I can. Hopefully, once we get settled in our new place I can get something put back together.

If anyone is interested in some great TF GTX 580s or a whole i7 system, PM me.

chalsall 2013-01-16 19:25

[QUOTE=flashjh;324932]Some not so good news is that my father-in-law is not doing well, but we knew this was coming for some time. Unfortunately, we are the only ones capable of helping him now.[/QUOTE]

I'm so sorry.

To share... I lost my father at the beginning of this year.

Even though we weren't very close, and it was expected (he was 87, and sick for a long time) it was still a deep shock.

Some good advice I received from an old family friend -- drink lots of water. And feel free to cry... :cry:

[QUOTE=flashjh;324932]So, after a good run of ~13 months on G72, I'm going to have to sell off the systems and relocate.[/QUOTE]

Do what you need to do. And thanks for everything you've done.

LaurV 2013-01-17 02:03

Jerry, thank you for everything you've done around here. We hope your father-in-law gets well soon, and that we'll see you back on the barricades, maybe even sooner, with a small "put back together" setup.

Your "windows builds" for different tools helped me a lot, personally, and saved me a lot of time.

FWIW, I lost my father-in-law in December, my wife was very sad and desperate, but the sadness is slowly passing, and in time we only remember good things.

kladner 2013-01-17 03:25

Best wishes for all concerned, Jerry. You (and your throughput!) will be missed.
Take care,
Kieren

flashjh 2013-01-17 03:49

Thanks all. I'll still be able to compile.

nucleon 2013-01-17 09:20

So that's the top 3 TFers who have left the project (well, three of the current top 4).

Sorry to hear about everyone's loss. I lost my last grandparent (grandmother) early 2011.

-- Craig

nucleon 2013-01-17 09:23

[QUOTE=chalsall;324811]

If only we had a GPU P-1 program.... :wink:[/QUOTE]

Yes, I'm definitely keen for this, and have been for some time.

-- Craig

swl551 2013-01-17 13:01

[QUOTE=flashjh;324932]Maybe this should be in the Unhappy Me Thread, but anyway.

Some not so good news is that my father-in-law is not doing well, but we knew this was coming for some time......
If anyone is interested in some great TF GTX 580s or a whole i7 system, PM me.[/QUOTE]


Jerry, I'm sorry to hear this. Your participation with the MISFIT project was fantastic. I hope this is not the last we see of FLASHJH.


thx

Scott

kracker 2013-01-17 16:28

@flashjh: Sorry to hear that... best of wishes to you and your father-in-law.

garo 2013-01-21 23:06

[QUOTE=chalsall;324713]Just putting this out there for discussion...

However, it raises the question: for those who are doing DCTF work (where we're currently over 500 days ahead of the wave), should we bump the release level to 71, and perhaps bring back in some candidates in the 30M, 31M and 32M regions to go from 70 to 71?
If so, I'd suggest we simply increase the current release level for DCTF, bring in a few candidates at a time at the top of 30M for processing, and work down until we meet the wavefront, then start working upwards from 31M.

Thoughts?[/QUOTE]

Unless my eyes are deceiving me, isn't most of 31M still stuck at 69? Shouldn't we be taking that up to 70 before thinking of taking other stuff up to 71?

James Heinrich 2013-01-21 23:19

[QUOTE=garo;325398]Unless my eyes are deceiving me, isn't most of 31M still stuck at 69?[/QUOTE]Yes, most of it is at 2[sup]69[/sup]. Perhaps a picture shows that most clearly (each pixel column is 0.1M):
[url]http://mersenne.ca/graphs/factor_bits_100M/[/url]

chalsall 2013-01-21 23:26

[QUOTE=garo;325398]Unless my eyes are deceiving me, isn't most of 31M still stuck at 69?[/QUOTE]

Language can be so important...

The DC candidates at 31M are not "stuck" at 69. That's where we took them before moving on. And before we found ourselves with new tech....

[QUOTE=garo;325398]Shouldn't we be taking that up to 70 before thinking of taking other stuff up to 71?[/QUOTE]

Are you volunteering? Knowing there's more important work (LLTFing) to do?

chalsall 2013-01-21 23:32

[QUOTE=James Heinrich;325400]Yes, it is most at 2[sup]69[/sup]. Perhaps a picture shows that most clearly (each pixel column is 0.1M):
[url]http://mersenne.ca/graphs/factor_bits_100M/[/url][/QUOTE]

Or, as a complement, the [URL="http://www.mersenne.info/trial_factored_bar_graph_7/2/30000000/"]30-40M TF level[/URL].

VictordeHolland 2013-01-21 23:49

[QUOTE=chalsall;324146]Another thing to consider is that we currently are only [I]just[/I] keeping (slightly) ahead of the LL wavefront.[/QUOTE]
[QUOTE=chalsall;325401]Knowing there's more important work (LLTFing) to do?[/QUOTE]
Speaking of LL-TFing, how many exponents/days is GPU72 ahead or behind the LL frontline? I'm thinking about doing some LL-TFing once I complete the 131-132M range to 2^70 (probably in ~2 weeks) :smile:.

chalsall 2013-01-22 00:07

[QUOTE=VictordeHolland;325403]Speaking of LL-TFing, how many exponents/days is GPU72 ahead or behind the LL frontline? I'm thinking about doing some LL-TFing once I complete the 131-132M range to 2^70 (probably in ~2 weeks) :smile:.[/QUOTE]

Very sophisticated....

VictordeHolland 2013-01-22 00:56

[QUOTE=chalsall;325405]Very sophisticated....[/QUOTE]
A rough indication would suffice.
What I found so far: ~10,700 LL tests are assigned in the 60M-65M range (PrimeNet Work Distribution Map). Compare that to the TF Tabular Data for the 60-70M range, which shows ~28,300 exponents TF'ed to 73 bits.
So GPU72 is about 28,300-10,700=17,600 exponents ahead, right? But I've got no idea of the number of exponents that are assigned for LL testing each day, or the number that are TF'ed to 73 bits each day.
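The back-of-envelope arithmetic above, as a quick Python sketch (the counts are the approximate figures quoted in the post; the missing daily rates are exactly what the follow-up posts try to supply):

```python
# VictordeHolland's lead estimate: exponents in 60-70M already
# TF'ed to 2^73, minus LL tests currently assigned in 60-65M
# (both figures approximate, read off the PrimeNet reports).
tf_done_to_73 = 28_300
ll_assigned   = 10_700

lead_in_exponents = tf_done_to_73 - ll_assigned
print(lead_in_exponents)  # 17600

# Converting the lead to days needs the net burn rate
# (LL assignments/day minus TF completions/day); as the post
# says, those daily rates were the unknown part.
```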

chalsall 2013-01-22 01:38

[QUOTE=VictordeHolland;325413]But I've got no idea of the number of exponents that are assigned for LL testing each day, or the number that are TF'ed to 73 bits each day.[/QUOTE]

You do actually have access to that information....

petrw1 2013-01-22 04:22

[QUOTE=VictordeHolland;325413]But I've got no idea of the number of exponents that are assigned for LL testing each day, or the number that are TF'ed to 73 bits each day.[/QUOTE]

Does this help?
The number of exponents in the 60-70M range either LL'd or factored in the last month:
[url]http://www.mersenne.info/exponent_status_tabular_delta_30/2/60000000/[/url]

petrw1 2013-01-22 04:26

[QUOTE=VictordeHolland;325403]Speaking of LL-TFing, how many exponents/days is GPU72 ahead or behind the LL frontline? I'm thinking about doing some LL-TFing once I complete the 131-132M range to 2^70 (probably in ~2 weeks) :smile:.[/QUOTE]

Not much .... and more is needed. Thx

garo 2013-01-24 20:01

[QUOTE=chalsall;325401]Language can be so important...

The DC candidates at 31M are not "stuck" at 69. That's where we took them before moving on. And before we found ourselves with new tech....

Are you volunteering? Knowing there's more important work (LLTFing) to do?[/QUOTE]

I am doing LLTF almost exclusively. My point - lost in translation perhaps - was that I think there is more utility in taking 31M 69->70 than taking 33M 70->71. This was in response to your query whether we should take 33M to 71 (and then 32M and then 31M).

chalsall 2013-01-24 20:23

[QUOTE=garo;325694]My point - lost in translation perhaps - was that I think there is more utility in taking 31M 69->70 than taking 33M 70->71. This was in response to your query whether we should take 33M to 71 (and then 32M and then 31M).[/QUOTE]

Sorry... I was having a bad day...

George doesn't think coming back down is worth it, but Pete does think it's worth going a bit further up. That's what we're now doing.

Since the DC wavefront is now in 31M, I think it would make sense for the time being to just deal with 33M, and then if we have time come back down into the high 32Ms.

I'm working on a report which will show us all just how many days we are ahead of each wavefront, so we can make a more informed decision as to where we transition.

chalsall 2013-01-24 20:33

New "temporal" Workers' reports...
 
At the suggestion of kracker, I've created some new reports...

Please see the [URL="https://www.gpu72.com/reports/workers/day/"]Workers' Overall Progress for the last Day[/URL], [URL="https://www.gpu72.com/reports/workers/week/"]Week[/URL], [URL="https://www.gpu72.com/reports/workers/month/"]Month[/URL] and [URL="https://www.gpu72.com/reports/workers/quarter/"]Quarter[/URL].

These can be accessed as sub-menus on the Workers' Progress -> Overall Work menu.

Once on the time-constrained page, you can then drill down to the different work-types to see, for example, how much [URL="https://www.gpu72.com/reports/workers/dctf/71/month/"]DC TFing has been done to 71 in the last month[/URL].

Please let me know if anyone sees any SPEs....

Edit: Funny... Within five minutes of my posting this here GoogleBot is busy indexing the several hundred "new" pages which resulted from this.... :smile:

Prime95 2013-01-24 21:04

[QUOTE=chalsall;325696]George doesn't think coming back down is worth it....[/QUOTE]

I'm not sure what you mean. Perhaps, something I wrote was worded poorly. Garo's statement makes sense.

In general:

1) I'm in favor of any TF work that eliminates exponents faster than the same GPU card can eliminate exponents with CUDALucas testing.
2) In prioritizing the TF work, I favor assignments that save the most LL work. That is, TFing from 2^69 to 2^70 is more important than TFing from 2^70 to 2^71 since it eliminates exponents faster. When TFing to the same bit level, TFing a larger exponent is better since it is faster to TF and saves more LL time when a factor is found.
3) All the prioritization is moot if you have enough GPU resources and are ahead of the wavefront. Prioritization is only a consideration when catching up to the wavefront or when we don't have enough resources to stay ahead of the wavefront.

Hope that makes sense!
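Point 2 above can be sketched numerically, using the standard GIMPS rule of thumb that the chance of a factor between 2^b and 2^(b+1) is roughly 1/b, while the TF work for bit level b is proportional to 2^b (illustrative only; real GHzDay credit also depends on the exponent being factored):

```python
# Why TF'ing 69->70 bits eliminates exponents faster than 70->71:
# the chance of a factor in the next bit level (~1/b) shrinks only
# slowly, while the work to search it (~2^b) doubles each level.

def factors_per_unit_work(b: int, work_unit: float = 2.0**69) -> float:
    """Expected factors found per unit of TF work at bit level b."""
    chance = 1.0 / b             # ~P(factor in [2^b, 2^(b+1)])
    work = 2.0**b / work_unit    # relative cost of this bit level
    return chance / work

r69 = factors_per_unit_work(69)  # taking an exponent 69 -> 70
r70 = factors_per_unit_work(70)  # taking an exponent 70 -> 71
print(round(r69 / r70, 2))  # 2.03: the lower bit level pays ~2x better
```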

chalsall 2013-01-24 21:20

[QUOTE=Prime95;325705]I'm not sure what you mean. Perhaps, something I wrote was worded poorly. Garo's statement makes sense.[/QUOTE]

I'm sorry if I misinterpreted you George. I was going by this:

[QUOTE=Prime95]No. James' table improperly estimates the DCTF crossover. Since these exponents have already had P-1 done, TF to 2^71 will find fewer factors than LLTF. James should be able to compute a "proper" crossover based on finding approximately 1 factor per 100 DCTF exponents (GPU72 should be able refine this approximation by calculating how many factors are being found DCTFing to 2^70).[/QUOTE]

[QUOTE=Prime95;325705]3) All the prioritization is moot if you have enough GPU resources and are ahead of the wavefront. Prioritization is only a consideration when catching up to the wavefront or when we don't have enough resources to stay ahead of the wavefront.[/QUOTE]

But, at the same time, it doesn't make sense right now to take the lowest available 31M candidates to 70 if that means that higher candidates will be assigned for DCing which are only at 69.

[QUOTE=Prime95;325705]Hope that makes sense![/QUOTE]

Yes.

May I suggest, then, that I start bringing in some candidates in the 31M range to take to 70, and work down until we meet the wavefront? 32M is already at 70, so we can then repeat the process -- work down from the top and take as much of it as we can to 71.

Does that make sense to everyone? (This assumes, of course, that we have people interested in doing this work.)

James Heinrich 2013-01-24 21:20

[QUOTE=chalsall;325698]GoogleBot is busy indexing the several hundred "new" pages which resulted from this.... :smile:[/QUOTE]Just be glad [i]your[/i] server doesn't have details pages for more than 200-million exponents... :max:

chalsall 2013-01-24 21:30

[QUOTE=James Heinrich;325711]Just be glad [i]your[/i] server doesn't have details pages for more than 200-million exponents... :max:[/QUOTE]

LOL... That's what "robots.txt" and ".htaccess" are for (and, for really nasty and/or stupid spiders when you have root access, iptables).... :wink:

Prime95 2013-01-24 23:08

[QUOTE=chalsall;325710]But, at the same time, it doesn't make sense right now to take the lowest available 31M candidates to 70 if that means that higher candidates will be assigned for DCing which are only at 69.[/QUOTE]

I agree. Your statement implies that GPU72 is not ahead of the DC wavefront and is in "catch-up" mode. Your TF to 2^70 working downward until you meet the wavefront is consistent with my suggested rules.

The alternative strategy (and maybe better strategy?) would be for GPU72 to grab all the 31M exponents so that the server hands out 32M exponents that are already TFed to 2^70. This assumes there are enough 32M exponents TF'ed to 2^70 for Primenet to hand out while GPU72 catches up in the 31M area.

chalsall 2013-01-24 23:33

[QUOTE=Prime95;325717]Your statement implies that GPU72 is not ahead of the DC wavefront and is in "catch-up" mode.[/QUOTE]

We are no longer ahead of the wave. Not since the release of mfaktc version 0.20; thanks to Oliver, you and rcv. New tech changes the game.

And please don't forget that we haven't yet heard from those running mfakto, those who are running CC1.x, and those who we hope might be interested in doing this work.

Pete et al, what say you? Should we take all of 31M to 70 first? We can release it immediately upon completion. It would make sense.

petrw1 2013-01-25 03:39

[QUOTE=chalsall;325718]We are no longer ahead of the wave.[/QUOTE]

This:
[url]http://www.mersenne.info/exponent_status_tabular_delta_7/2/30000000/[/url]
tells me that in the last week DC was working only in the 30M range; while TF was in the 33 and 34M ranges. Looks ahead to me???


[QUOTE=chalsall;325718]Pete et al, what say you? Should we take all of 31M to 70 first? We can release it immediately upon completion. It would make sense.[/QUOTE]

If you are only talking about DC, then I (who has an opinion but no real power, without a GPU) say: TF DC to 70 only where/if you are ahead.

As far as GPU72 project factoring as a whole goes; my "opinion" is more LL-TF.

chalsall 2013-01-25 04:00

[QUOTE=petrw1;325739]This: [url]http://www.mersenne.info/exponent_status_tabular_delta_7/2/30000000/[/url] tells me that in the last week DC was working only in the 30M range; while TF was in the 33 and 34M ranges. Looks ahead to me???[/QUOTE]

You also have to consider the [URL="http://www.mersenne.org/primenet/"]PrimeNet Activity Summary[/URL] report. Please note the 636 current DC assignments in the 31M range.

[QUOTE=petrw1;325739]As far as GPU72 project factoring as a whole goes; my "opinion" is more LL-TF.[/QUOTE]

Indeed. That is optimal.

flashjh 2013-01-25 04:21

I'm going to stick to WMS LL-TF 7x-73 unless you need a lot of DC done quickly.

LaurV 2013-01-25 05:42

Talking from my angle of view only, and related to "high end" Nvidia cards (this excludes mfakto, CUDA sm older than 1.3, etc., which can't do GPU-LL):

I'll take the opportunity to reaffirm what I have been saying here for years: this DC-TF you are arguing about, like 30M to 70 or 33M to 71, for exponents with P-1 done? C'mon! Neither of those makes sense!

One LL at 30M takes under 20 hours on a GTX 580, and a bit longer on a 570. To find a factor at this range/bit level you need 25-30 hours on average.

Which one is better?

And this, of course, assumes you don't hit a dry patch (like I just did: over 40 hours without any factor, on the LMH range/bit level where I am supposed to find a factor every 2 hours or so). Of course, one can try forcing one's luck.

But high-bit GPU TF for the DC range is not worth it, even if you are "ahead".

For the LL-front range the story is different, as TF is still finding factors (eliminating exponents) much faster than TWO LL tests will. Even if you find only one factor per week, you are still eliminating exponents faster than two LLs can.

But for DC, you compete against only ONE LL, so half the tests, each of which is also much shorter, as the exponents are lower. You need to find factors about 3-4 times faster to "justify" the TF. Which you never will, especially for exponents which survived P-1: the chance that they have factors in your TF range that were missed by P-1 is micro-thin...
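LaurV's break-even argument, sketched with his own numbers (the 20-hour and 25-30-hour figures are from the post; the one-vs-two LL multiplier is the argument itself):

```python
# Break-even sketch for DC-TF vs. just running the double-check:
# ~20 h for one 30M double-check (CUDALucas on a GTX 580), and
# ~25-30 h of TF per factor found at these ranges/bit levels.
# A factor at the LL wavefront kills TWO future LL tests; at the
# DC wavefront it kills only ONE (the first test is already done).

hours_per_dc        = 20.0   # one CUDALucas DC in the 30M range
hours_per_tf_factor = 27.5   # midpoint of the 25-30 h estimate

ll_tests_saved = 1           # DC front: only the second test remains
ratio = ll_tests_saved * hours_per_dc / hours_per_tf_factor
print(round(ratio, 2))  # 0.73: DC-TF clears exponents slower than DC

# With ll_tests_saved = 2 (the LL wavefront) and longer tests, the
# same arithmetic flips in favour of TF, as the post says.
```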

LaurV 2013-01-25 06:46

Backing it up with numbers: (from the GPU72's "[URL="http://www.gpu72.com/reports/factoring_cost/"]factoring cost[/URL]" report)

[CODE]
              70 bit              71 bit
expo      trials  factors     trials  factors
---------------------------------------------
30M       10,117      115         27        0
31M           83        0         53        0
32M       20,716      241        264        5
33M        9,949      108        422        8

total     40,865      464        766       13
[/CODE]On a rough calculation, that comes to about [B]420 GHzDays per factor[/B], which is consistent with the "factoring cost" table if you account for the "infinities" that appear there (shown here as "0" factors found). That is a full day (and a bit more) of work for a GTX 580. Therefore you can clear one exponent per day doing TF here, [B]if you are lucky[/B]; or more, or fewer, if you jinx it.
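As a cross-check on the table, the overall hit rate matches George's earlier estimate of roughly one factor per 100 DCTF exponents:

```python
# Totals from the table above (30-33M, 70- and 71-bit columns).
trials  = 10_117 + 83 + 20_716 + 9_949 + 27 + 53 + 264 + 422
factors = 115 + 0 + 241 + 108 + 0 + 0 + 5 + 8

print(trials, factors)                     # 41631 477
print(round(100.0 * factors / trials, 2))  # 1.15 factors per 100 trials
```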

Finding factors might be fun, but doing DC-LL is the "safe" process (not affected by probability/luck/jinx)*: you clear one expo every ~20 hours.
And you are a few hours faster than doing TF.

My advice would be that GPU72 keeps a small pool of 30-33M TF assignments to 70-71 bits for the people who might like such assignments (mfakto users, whatever), and rotates them regularly as the DC front progresses, but does not make such a big deal of it. There is no gain in TF-ing here for the "heavy" GTX users; they are better off doing CuLu-DC: it's faster, and they still have a slim chance of finding a missed prime, which would INDEED be a wonderful hit!

*apart from the situation when your computer catches fire :D

ckdo 2013-01-25 09:30

You know, you can basically read anything into numbers. Relevant example:

"GPU72 has thus far completed 4,607 DCs. On the other hand we have only found 2,748 factors by means of DCTF. Evidently we are doing way too much DCTF already."

On the other hand, I myself have saved 26,157 GHzd (or around 60 GHzd/d) worth of DC tests using a single mid-range GPU. That's around 25% of GPU72's total DC(!) throughput, and I'm not going to get anywhere near that throughput by actually doing those DCs on all the hardware I have available (18 cores and the GPU).

But this is getting off topic. The question at hand was whether we should take 30-32M to 70 or skip that and take everything to 71 starting at 34M. My vote is on the first option.


All times are UTC.

Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.