
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU to 72 (https://www.mersenneforum.org/forumdisplay.php?f=95)
-   -   GPU to 72 status... (https://www.mersenneforum.org/showthread.php?t=16263)

chalsall 2013-04-25 19:22

[QUOTE=James Heinrich;338299]And I'm pretty sure that's already part of the GPU72 assignment strategy.[/QUOTE]

It is.

GPU72's assignment strategy is based on your empirical analysis as to where the cross-over points are (taking into consideration Primenet's integer bit level convention) and the resources and candidates available for each work type.
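The cross-over logic described here can be sketched numerically. The following is a back-of-envelope model using the standard GIMPS heuristics, not the actual GPU72 strategy: the chance of a factor in bit level b is roughly 1/b, trial-factoring a bit level on exponent p costs roughly 2^b / p, and an LL test costs roughly p² · log₂(p). The `scale` parameter and both function names are illustrative inventions; in practice a fudge factor like it absorbs the relative throughput of GPU TF versus CPU LL hardware.

```python
# Back-of-envelope cross-over estimate (illustrative heuristics only,
# not the real GPU72 model):
#   P(factor in bit level b) ~ 1/b            (standard Mersenne heuristic)
#   TF cost of bit level b   ~ 2**b / p       (candidates 2kp+1 below 2^b)
#   LL test cost             ~ p**2 * log2(p) (FFT multiplication)
from math import log2

def tf_worthwhile(p, b, tests_saved=2.0, scale=1.0):
    """True if taking exponent p from bit b-1 to b is expected to save
    more LL work than the TF itself costs.  `scale` is a hypothetical
    fudge factor for relative TF (GPU) vs LL (CPU) throughput."""
    tf_cost = scale * 2**b / p
    expected_saving = (1.0 / b) * tests_saved * p**2 * log2(p)
    return expected_saving > tf_cost

def crossover_bit(p, tests_saved=2.0, scale=1.0):
    """Highest bit level still worth taking exponent p to."""
    b = 60
    while tf_worthwhile(p, b + 1, tests_saved, scale):
        b += 1
    return b
```

With `scale = 1.0` and two tests saved, a 33M exponent crosses over around 74 bits in this toy model; dropping to one test saved (a DC candidate) lowers the cross-over by a bit level, which is the integer-bit-level effect mentioned above.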

chalsall 2013-04-25 19:28

[QUOTE=James Heinrich;338299]You could use 10 tests saved to improve the odds even further, but the overall idea is to make most [i]efficient[/i] use of computing resources to clear exponents. Spending more time on TF and/or P-1 will find more factors, but the optimal balance of factoring effort vs probability will clear exponents (either by factor or by two matching LL tests) fastest.[/QUOTE]

Or one could TF everything to 90 bits. Wouldn't make sense, but one could (eventually) do it.

The DC P-1 work was made available at the request of a few Workers. This is why the [URL="https://www.gpu72.com/account/getassignments/dcp-1/"]DC P-1 manual assignment page[/URL] has the "Effort" option. The default is 2.0, 1.0 is available, and then "Custom".
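The "Effort" setting can be read as the tests-saved value fed into a P-1 bound optimizer: bounds are chosen to maximize expected saving, so halving the payoff shrinks the optimal bounds. The sketch below uses a toy success model (logarithmic in B1) and toy cost constants, purely to show the scaling; it is not the real GPU72 or Prime95 optimizer.

```python
from math import log

# Toy model (assumptions, not the real bound optimizer):
#   P(P-1 success) ~ 0.01 * ln(B1 / 1e4)  -- grows slowly with B1
#   P-1 cost       ~ cost_per_b1 * B1     -- stage 1 is linear in B1

def expected_saving(b1, tests_saved, test_cost=1.0, cost_per_b1=1e-7):
    """Expected net benefit of running P-1 to stage-1 bound b1."""
    return 0.01 * log(b1 / 1e4) * tests_saved * test_cost - cost_per_b1 * b1

def best_b1(tests_saved):
    """Brute-force the optimal stage-1 bound over a grid.
    Setting the derivative to zero gives B1* = 0.01*tests_saved/cost_per_b1,
    so the optimal bound scales linearly with the Effort setting."""
    return max(range(10_000, 1_000_000, 1_000),
               key=lambda b1: expected_saving(b1, tests_saved))
```

In this toy model `best_b1(2.0)` is exactly twice `best_b1(1.0)`: an Effort of 1.0 (one test saved, as for a DC candidate) halves the optimal B1 relative to the default 2.0.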

James Heinrich 2013-04-25 20:10

[QUOTE=chalsall;338304]GPU72's assignment strategy is based on your empirical analysis as to where the cross-over points are[/QUOTE]An analysis which may soon require some revisiting when CUDAPm1 goes beyond alpha. :smile:

chalsall 2013-04-25 20:26

[QUOTE=James Heinrich;338309]An analysis which may soon require some revisiting when CUDAPm1 goes beyond alpha. :smile:[/QUOTE]

Indeed.

"Real time" is always very interesting.... :smile:

petrw1 2013-04-25 21:17

[QUOTE=chalsall;338304]It is.

GPU72's assignment strategy is based on your empirical analysis as to where the cross-over points are (taking into consideration Primenet's integer bit level convention) and the resources and candidates available for each work type.[/QUOTE]

But when I look at the estimated completion charts for the 45M-49M ranges, both DC and LL show them going to 72 bits.

chalsall 2013-04-25 21:27

[QUOTE=petrw1;338316]But when I look at the estimated completion charts for the 45M-49M ranges, both DC and LL show them going to 72 bits.[/QUOTE]

Good point.

LLTF is where the focus is at the moment. And it's currently optimal for the firepower we have available.

DCTF is currently working at 33M (and 36M for those who only want to go a single bit level).

We have lots of time to refine DCTF to be optimal.

bcp19 2013-04-25 22:34

[QUOTE=chalsall;338319]Good point.

LLTF is where the focus is at the moment. And it's currently optimal for the firepower we have available.

DCTF is currently working at 33M (and 36M for those who only want to go a single bit level).

We have lots of time to refine DCTF to be optimal.[/QUOTE]
When that was set up, weren't we still using the old mfaktc/o that required the use of CPU cores? With the release of .20 the bit depth changed, which is why we went back and took some 30-31M exponents to 70.

c10ck3r 2013-04-27 13:09

[QUOTE=davieddy;338511]Which would be zilch IMAO.[/QUOTE]
Are you meaning that there should be no further DCTF? Just want to make sure I correctly understand your belief before trying to crunch the numbers...

davieddy 2013-05-02 03:58

[QUOTE=c10ck3r;338512]Are you meaning that there should be no further DCTF? Just want to make sure I correctly understand your belief before trying to crunch the numbers...[/QUOTE]
Yes, for the time being anyway.

We have effectively TFed between 30M and 34M to 70 bits.
As far as saving LL work goes, this is equivalent to taking 60M to 68M to 74 bits. (Convince yourself of this.)
Current firepower is succeeding in TF to 74 nearly as fast as LLs are being completed.
As Chris has said, we can reappraise the state of play in a year or so.
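The claimed equivalence can be sanity-checked with the same rough heuristics used elsewhere in this thread (P(factor in bit level b) ≈ 1/b, TF cost ≈ 2^b/p, LL cost ≈ p²·log₂(p), one test saved per DC factor versus two per first-time factor). This is an illustrative reconstruction, not davieddy's actual arithmetic:

```python
from math import log2

# Sanity check of "30-34M to 70 bits ~ 60-68M to 74 bits" under the
# usual heuristics (assumptions, not the poster's actual arithmetic):
#   TF cost of bit level b on p  ~ 2**b / p
#   P(factor in bit level b)     ~ 1 / b
#   LL test cost                 ~ p**2 * log2(p)
#   tests saved: 1 for a DC candidate, 2 for a first-time LL candidate

def saving_per_tf_cost(p, b, tests_saved):
    """Expected LL work saved per unit of TF work at bit level b."""
    tf_cost = 2**b / p
    expected_saving = (1 / b) * tests_saved * p**2 * log2(p)
    return expected_saving / tf_cost

dc_side = saving_per_tf_cost(32_000_000, 70, tests_saved=1)  # DC range
ll_side = saving_per_tf_cost(64_000_000, 74, tests_saved=2)  # first-time range
```

Under these assumptions the two ratios come out within a couple of percent of each other: doubling the exponent roughly quadruples the LL cost while each of the four extra bit levels multiplies the TF cost by two, and the factor-of-two difference in tests saved closes the remaining gap.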

In my book, there is another sound reason not to overcook DCTF:
the DC checks the residue from the first test.

David

owftheevil 2013-05-03 11:09

Isn't knowing a factor of a number more interesting than knowing two tests gave the same result (or not)?

bcp19 2013-05-03 18:48

[QUOTE=davieddy;338986]Yes, for the time being anyway.

We have effectively TFed between 30M and 34M to 70 bits.
As far as saving LL work goes, this is equivalent to taking 60M to 68M to 74 bits. (Convince yourself of this.)
Current firepower is succeeding in TF to 74 nearly as fast as LLs are being completed.
As Chris has said, we can reappraise the state of play in a year or so.

In my book, there is another sound reason not to overcook DCTF:
the DC checks the residue from the first test.

David[/QUOTE]
You are making no sense. Comparing 30M-34M ^70 to 60M-68M ^74 is ludicrous. Every LLTF factor saves 2 tests, every DCTF saves 1, so you would be more correct saying 60M-62M ^74, though you would still make no sense.

Your last statement though highlights your lack of understanding. You are basically saying we should let the DC's run, even though it takes less computational time to find a DCTF factor simply because there is already a residue. If I were still running my GPUs, I could find a DC factor faster than I could match a residue, which means I could clear more exponents with TF than I can with DC. Less time spent is always better. Period.
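The efficiency argument here can be put in rough numbers with the same heuristics used earlier in the thread. This is an illustrative sketch, not a statement of actual GHz-day figures; `scale` and both function names are assumptions standing in for real hardware throughput:

```python
from math import log2

# Rough comparison (illustrative heuristics, arbitrary units):
#   TF cost of bit level b on p    ~ 2**b / p
#   P(that level yields a factor)  ~ 1 / b
#   cost of one LL/DC test         ~ p**2 * log2(p)
# `scale` is an assumed fudge factor for GPU-TF vs CPU-LL speed.

def cost_per_factor(p, b, scale=1.0):
    """Expected TF work per exponent cleared *by a factor* at bit level b."""
    return scale * (2**b / p) * b

def dc_cost(p):
    """Work for one double-check (one LL test)."""
    return p**2 * log2(p)
```

At p around 33M and bit level 70, `cost_per_factor` comes out roughly an order of magnitude below `dc_cost` in these units, which is the quantitative shape of "I could find a DC factor faster than I could match a residue", subject to the assumed `scale`.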

