mersenneforum.org  

mersenneforum.org > Great Internet Mersenne Prime Search > PrimeNet > GPU to 72

Old 2013-04-25, 19:22   #2201
chalsall
If I May
 
 
"Chris Halsall"
Sep 2002
Barbados

9,767 Posts

Quote:
Originally Posted by James Heinrich View Post
And I'm pretty sure that's already part of the GPU72 assignment strategy.
It is.

GPU72's assignment strategy is based on your empirical analysis as to where the cross-over points are (taking into consideration Primenet's integer bit level convention) and the resources and candidates available for each work type.
Old 2013-04-25, 19:28   #2202
chalsall
If I May
 
 
"Chris Halsall"
Sep 2002
Barbados

9,767 Posts

Quote:
Originally Posted by James Heinrich View Post
You could use 10 tests saved to improve the odds even further, but the overall idea is to make most efficient use of computing resources to clear exponents. Spending more time on TF and/or P-1 will find more factors, but the optimal balance of factoring effort vs probability will clear exponents (either by factor or by two matching LL tests) fastest.
Or one could TF everything to 90 bits. Wouldn't make sense, but one could (eventually) do it.

The DC P-1 work was made available at the request of a few Workers. This is why the DC P-1 manual assignment page has the "Effort" option. The default is 2.0, 1.0 is available, and then "Custom".
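The balance James describes in the quote above amounts to an expected-value check: extra factoring effort (TF or P-1) pays off only while the chance of finding a factor, times the testing time a factor would save, exceeds the factoring time spent. A minimal sketch of that criterion — all costs and probabilities here are invented for illustration, not taken from the thread:

```python
def factoring_is_worthwhile(prob_factor, tests_saved, test_cost, factoring_cost):
    """Expected testing time saved by a factor vs. time spent looking for one."""
    return prob_factor * tests_saved * test_cost > factoring_cost

# One extra TF bit level finds a factor roughly 1/b of the time (b = bit level).
# Hypothetical numbers: 500 h per LL/DC test, 10 h per bit level of TF.
print(factoring_is_worthwhile(1/72, 2.0, 500.0, 10.0))  # LL-wavefront: saves 2 tests
print(factoring_is_worthwhile(1/72, 1.0, 500.0, 10.0))  # DC-wavefront: saves 1 test
```

With these invented numbers the same bit level is worthwhile at the LL wavefront but not at the DC wavefront, which is why the DCTF crossover sits lower and why the "Effort" option (tests saved) shifts the P-1 bounds.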
Old 2013-04-25, 20:10   #2203
James Heinrich
 
 
"James Heinrich"
May 2004
ex-Northern Ontario

11·311 Posts

Quote:
Originally Posted by chalsall View Post
GPU72's assignment strategy is based on your empirical analysis as to where the cross-over points are
An analysis which may soon require some revisiting when CUDAPm1 goes beyond alpha.
Old 2013-04-25, 20:26   #2204
chalsall
If I May
 
 
"Chris Halsall"
Sep 2002
Barbados

9,767 Posts

Quote:
Originally Posted by James Heinrich View Post
An analysis which may soon require some revisiting when CUDAPm1 goes beyond alpha.
Indeed.

"Real time" is always very interesting....
Old 2013-04-25, 21:17   #2205
petrw1
1976 Toyota Corona years forever!
 
 
"Wayne"
Nov 2006
Saskatchewan, Canada

2²×3×17×23 Posts

Quote:
Originally Posted by chalsall View Post
It is.

GPU72's assignment strategy is based on your empirical analysis as to where the cross-over points are (taking into consideration Primenet's integer bit level convention) and the resources and candidates available for each work type.
But when I look at the estimated completion charts for the 45M-49M ranges, both DC and LL show them going to 72 bits.
Old 2013-04-25, 21:27   #2206
chalsall
If I May
 
 
"Chris Halsall"
Sep 2002
Barbados

9,767 Posts

Quote:
Originally Posted by petrw1 View Post
But when I look at the estimated completion charts for the 45M-49M ranges, both DC and LL show them going to 72 bits.
Good point.

LLTF is where the focus is at the moment. And it's currently optimal for the firepower we have available.

DCTF is currently working at 33M (and 36M for those who only want to go a single bit level).

We have lots of time to refine DCTF to be optimal.
Old 2013-04-25, 22:34   #2207
bcp19
 
 
Oct 2011

1010100111₂ Posts

Quote:
Originally Posted by chalsall View Post
Good point.

LLTF is where the focus is at the moment. And it's currently optimal for the firepower we have available.

DCTF is currently working at 33M (and 36M for those who only want to go a single bit level).

We have lots of time to refine DCTF to be optimal.
When that was set up, weren't we still using the old mfaktc/o that required the use of CPU cores? With the release of .20 the bit depth changed, which is why we went back and took some 30-31M exponents to 70.
Old 2013-04-27, 13:09   #2208
c10ck3r
 
 
Aug 2010
Kansas

547 Posts

Quote:
Originally Posted by davieddy View Post
Which would be zilch IMAO.
Are you meaning that there should be no further DCTF? Just want to make sure I correctly understand your belief before trying to crunch the numbers...
Old 2013-05-02, 03:58   #2209
davieddy
 
 
"Lucan"
Dec 2006
England

2×3×13×83 Posts

Quote:
Originally Posted by c10ck3r View Post
Are you meaning that there should be no further DCTF? Just want to make sure I correctly understand your belief before trying to crunch the numbers...
Yes, for the time being anyway.

We have effectively TFed between 30M and 34M to 70 bits.
As far as saving LL work goes, this is equivalent to taking 60M to 68M to 74 bits. (Convince yourself of this.)
Current firepower is succeeding in TF to 74 nearly as fast as LLs are being completed.
As Chris has said, we can reappraise the state of play in a year or so.

In my book, there is another sound reason not to overcook DCTF:
the DC checks the residue from the first test.

David
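David's equivalence can be checked with a quick back-of-envelope calculation. This is my own sketch, not from the thread; it assumes LL/DC time scales roughly as p² (FFT length ~ p, ~p iterations) and TF work at the top bit level scales as 2^bits / p (trial factors have the form 2kp+1), ignoring log factors and the slowly varying ~1/b odds of a factor per bit level. A factor at the LL wavefront saves two tests; one at the DC wavefront saves one:

```python
def tf_cost(p, bits):
    # Candidates 2*k*p + 1 below 2^bits: work at the top bit level ~ 2^bits / p.
    return 2**bits / p

def ll_time_saved(p, tests_saved):
    # One LL/DC test ~ p^2, so a factor saves tests_saved * p^2 of testing time.
    return tests_saved * p**2

# DC-wavefront exponent (30M, saves 1 test) TFed to 70 bits, versus an
# LL-wavefront exponent at twice the size (60M, saves 2 tests) TFed to 74 bits.
cost_ratio = tf_cost(60e6, 74) / tf_cost(30e6, 70)
saved_ratio = ll_time_saved(60e6, 2) / ll_time_saved(30e6, 1)
print(cost_ratio, saved_ratio)  # both 8.0
```

Under these assumptions both the extra TF effort and the expected payoff scale by the same factor of 8 when the exponent doubles and the bit level rises by 4 — that is the sense in which the two ranges are equivalent in LL work saved per unit of TF effort.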
Old 2013-05-03, 11:09   #2210
owftheevil
 
 
"Carl Darby"
Oct 2012
Spring Mountains, Nevada

3²·5·7 Posts

Isn't knowing a factor of a number more interesting than knowing two tests gave the same result (or not)?

Old 2013-05-03, 18:48   #2211
bcp19
 
 
Oct 2011

7×97 Posts

Quote:
Originally Posted by davieddy View Post
Yes, for the time being anyway.

We have effectively TFed between 30M and 34M to 70 bits.
As far as saving LL work goes, this is equivalent to taking 60M to 68M to 74 bits. (Convince yourself of this.)
Current firepower is succeeding in TF to 74 nearly as fast as LLs are being completed.
As Chris has said, we can reappraise the state of play in a year or so.

In my book, there is another sound reason not to overcook DCTF:
the DC checks the residue from the first test.

David
You are making no sense. Comparing 30M-34M ^70 to 60M-68M ^74 is ludicrous. Every LLTF factor saves 2 tests, every DCTF saves 1, so you would be more correct saying 60M-62M ^74, though you would still make no sense.

Your last statement, though, highlights your lack of understanding. You are basically saying we should let the DCs run, even though it takes less computational time to find a DCTF factor, simply because there is already a residue. If I were still running my GPUs, I could find a DC factor faster than I could match a residue, which means I could clear more exponents with TF than I can with DC. Less time spent is always better. Period.
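The clearing-rate argument here can be put in the same expected-value terms: TF clears an exponent with some probability per unit time, while a double-check clears one with certainty but slowly. All numbers below are invented for illustration (none come from the thread):

```python
# Hypothetical per-exponent costs on the same hardware:
tf_hours = 2.0        # one TF bit level on a GPU
p_factor = 1 / 70     # rough odds that bit level yields a factor (~1/b)
dc_hours = 200.0      # one full double-check run

rate_tf = p_factor / tf_hours  # expected exponents cleared per hour via TF
rate_dc = 1.0 / dc_hours       # exponents cleared per hour via DC
print(rate_tf > rate_dc)
```

When the TF clearing rate exceeds the DC clearing rate, as with these invented numbers, spending the time on factoring clears more exponents than spending it matching residues.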
