mersenneforum.org > Great Internet Mersenne Prime Search > PrimeNet > GPU to 72
Old 2012-01-10, 23:37   #89
Brian-E
 
 
"Brian"
Jul 2007
The Netherlands

7×467 Posts

Quote:
Originally Posted by Xyzzy
In our opinion, this statement is not true:

Code:
You can increase your chances of finding a Mersenne prime very slightly if you let the program occasionally use more memory.
You don't find Mersenne primes doing P-1 work, and the time spent doing P-1 work could be spent doing more LL work.

Xyzzy may be doing his subtle behind-the-scenes thread steering by throwing in a question - or, in this case, statement - about something with hidden relevance to the discussion.
The statement is not true for people who only do LL testing on numbers whose P-1 has already been done by other users.
In the past it was more common than it is now for LL assignments to be given out without their P-1 having been done, in which case the user would do the P-1 first.
Nowadays that is not normally the case, so machines doing LL work don't do P-1, don't need the extra memory, and changing the default memory allowed won't have any effect. So isn't the whole idea of increasing the default memory an irrelevance? (Anyone specifically taking on P-1 work will obviously need to change the default themselves anyway, otherwise they cannot do their work.)
Old 2012-01-11, 00:20   #90
diamonddave
 
 
Feb 2004

2⁵·5 Posts

Quote:
Originally Posted by Brian-E
So isn't the whole idea of increasing the default memory an irrelevance? (Anyone specifically taking on P-1 work will obviously need to change the default themselves anyway, otherwise they cannot do their work.)
They might be two different issues?

Perhaps LL tasks shouldn't be given exponents that haven't had P-1 done.

But that doesn't make the default any more helpful, since the readme.txt file points the user in the wrong direction if he wants to do P-1 work. None of the settings suggested in readme.txt will help the user get P-1 work.

In reality I don't know why people are so against changing a default value. But like I said, putting the value at 0 would at least not lull the user into thinking the specified value (8 MB) is any help. At least let's HELP them find a proper value with a chart or something.
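A sketch of the kind of chart diamonddave asks for, generated mechanically. The "offer roughly half of physical RAM" rule of thumb is an illustrative assumption for this sketch, not an official GIMPS recommendation:

```python
# Suggest a prime95 "Memory=" value from installed RAM.
# The half-of-RAM heuristic is hypothetical; the point is only that
# any RAM-based suggestion beats the flat 8 MB default for P-1 stage 2.
def suggest_memory_mb(installed_ram_mb):
    # Never go below the historical 8 MB default.
    return max(8, installed_ram_mb // 2)

for ram_mb in (512, 1024, 2048, 4096, 8192):
    print(f"{ram_mb:5d} MB installed -> suggested Memory={suggest_memory_mb(ram_mb)} MB")
```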
Old 2012-01-11, 04:01   #91
Dubslow
Basketry That Evening!
 
 
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88

3×29×83 Posts

Quote:
Originally Posted by Brian-E
Nowadays that is not normally the case, so machines doing LL work don't do P-1, don't need the extra memory, and changing the default memory allowed won't have any effect.
Not NORMALLY the case, but unfortunately it still happens a lot, and far too often on machines that haven't allocated memory or don't have any to spare. Raising the default would be aimed specifically at those doing LL who incidentally have to do P-1 before the LL; they are not prepared for it, and so a bad job gets done.

Quote:
Originally Posted by diamonddave

Perhaps LL tasks shouldn't be given exponents that haven't had P-1 done.
That would be ideal, but unfortunately the LL wavefront is moving faster than the P-1 wavefront, so if such a rule were enforced, it wouldn't be long before it was impossible to get an LL. Better an LL with no P-1 than nothing (but better nothing than a B1=B2 P-1 without an LL).

The problem is that those who only want LLs ignore all the messages about P-1 and ECM at setup, since they believe the messages don't apply to them, when unfortunately they do. That's why we should raise the default: to target those users who believe the memory settings don't apply to them, through no fault of their own.

Old 2012-01-11, 04:05   #92
Dubslow
Basketry That Evening!
 
 
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88

1110000110101₂ Posts

The other thought I just had (which is sort of a counter-argument to one of my conclusions above):

That this thread exists is a sign that someone out there is doing dedicated P-1 work (not P-1-before-LL work) without sufficient memory. Proof: there are 100+ exponents in chalsall's system that have had B1=B2 P-1 done, with no LL tests. So those users are not doing LL work that requires P-1 first; they are requesting P-1 work only. How many exponents out there have had a B1=B2 P-1 with an LL completed by somebody else?
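The B1=B2 signature being counted here can be checked mechanically: B1 equal to B2 means the stage 2 bound never advanced past the stage 1 bound, i.e. stage 2 was skipped for lack of allocated memory. A minimal sketch over hypothetical result records (the field names and exponents are made up for illustration):

```python
# Flag P-1 results where stage 2 was skipped (B1 == B2).
# Records are hypothetical stand-ins for server result rows.
records = [
    {"exponent": 50000017, "B1": 500000, "B2": 500000},    # stage 2 skipped
    {"exponent": 50000021, "B1": 500000, "B2": 10000000},  # proper stage 2
    {"exponent": 50000047, "B1": 600000, "B2": 600000},    # stage 2 skipped
]

stage2_skipped = [r["exponent"] for r in records if r["B1"] == r["B2"]]
print(len(stage2_skipped), "of", len(records), "results had B1 = B2")
```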
Old 2012-01-11, 04:23   #93
diamonddave
 
 
Feb 2004

240₈ Posts

Quote:
Originally Posted by Dubslow
[...] if such a rule was enforced, it wouldn't be long before it's impossible to get an LL.
There is no shortage of DC work!
Old 2012-01-11, 04:33   #94
chalsall
If I May
 
 
"Chris Halsall"
Sep 2002
Barbados

9,767 Posts

Quote:
Originally Posted by Dubslow
That this thread exists is a sign that someone out there is doing dedicated P-1 work (not P-1-before-LL work) without sufficient memory. Proof: there are 100+ exponents in chalsall's system that have had B1=B2 P-1 done, with no LL tests. So those users are not doing LL work that requires P-1 first; they are requesting P-1 work only. How many exponents out there have had a B1=B2 P-1 with an LL completed by somebody else?
You misunderstand the data I provided.

The (quick) query I ran was simply to answer what Spidy had seen, not what Spidy had facilitated. All of the candidates where B1 == B2 were "thrown back into the pool" unless they hadn't yet been TFed to 72.

I (once again) tend to agree with garo et al that until we have no first-time candidates left which have never had any P-1 work done, redoing P-1 work where B1 == B2 would not be an optimal use of the P-1 firepower we find ourselves with.
Old 2012-01-11, 04:41   #95
Dubslow
Basketry That Evening!
 
 
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88

3×29×83 Posts

Quote:
Originally Posted by chalsall
You misunderstand the data I provided.

The (quick) query I ran was simply to answer what Spidy had seen, not what Spidy had facilitated. All of the candidates where B1 == B2 were "thrown back into the pool" unless they hadn't yet been TFed to 72.
That means that even more have gone through the system in the last month, right? So there's way more than 100 exponents with B1=B2 that have gone in and out of the system?
Quote:
Originally Posted by chalsall
I (once again) tend to agree with garo et al that until we have no first-time candidates left which have never had any P-1 work done, redoing P-1 work where B1 == B2 would not be an optimal use of the P-1 firepower we find ourselves with.
I'm pretty sure I agree here. However, the more I think about it, the more systemic inefficiencies I see; i.e. instead of treating the symptoms like I originally suggested, let's see if we can do anything to stop these B1=B2 P-1s from appearing, or get them a proper stage 2 the first time somehow, or bladeeblabla. Perhaps much of this thread should be moved to the P-1 thread in PrimeNet's proper forum.
Old 2012-01-11, 04:51   #96
chalsall
If I May
 
 
"Chris Halsall"
Sep 2002
Barbados

9,767 Posts

Let's be a little bit more empirical (this is a query against the G72 database of those candidates G72 currently "owns"):

Code:
mysql> select count(*) from GPU where Status<3;
+----------+
| count(*) |
+----------+
|    38595 | 
+----------+
1 row in set (0.01 sec)

mysql> select count(*) from GPU where Status<3 and P1=0;
+----------+
| count(*) |
+----------+
|     9676 | 
+----------+
1 row in set (0.03 sec)

mysql> select count(*) from GPU where Status<3 and P1=1 and B1=B2;
+----------+
| count(*) |
+----------+
|     6614 | 
+----------+
1 row in set (0.04 sec)
Since doing a "virgin" P-1 test (both stage 1 and stage 2) takes about the same amount of time as redoing a P-1 test that had only stage 1 done, but has something like twice the probability of finding a factor, what do you think would be the most useful application of our available resources?
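A back-of-the-envelope version of that comparison, using the pool sizes from the query above. The per-test factor probabilities are illustrative assumptions (not GIMPS statistics); only their roughly 2:1 ratio, as stated above, matters:

```python
# Compare expected factors found per unit of effort for the two pools.
# Assumptions (illustrative): a virgin P-1 (stage 1 + 2) finds a factor
# with probability ~4%; redoing a B1=B2 candidate only adds the ~2%
# contribution of stage 2; both take about the same wall-clock time.
virgin_pool = 9676   # Status<3 and P1=0
redo_pool = 6614     # Status<3 and P1=1 and B1=B2

p_virgin = 0.04      # assumed probability, fresh stage 1 + stage 2
p_redo = 0.02        # assumed extra probability from adding stage 2

tests = 1000         # same effort spent either way
print("expected factors, virgin pool:", tests * p_virgin)
print("expected factors, redo pool:  ", tests * p_redo)
```

Under these assumptions the virgin pool yields about twice the factors for the same effort, which is chalsall's point.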
Old 2012-01-11, 04:54   #97
petrw1
1976 Toyota Corona years forever!
 
 
"Wayne"
Nov 2006
Saskatchewan, Canada

11124₈ Posts

Quote:
Originally Posted by chalsall
I (once again) tend to agree with garo et al that until we have no first-time candidates left which have never had any P-1 work done, redoing P-1 work where B1 == B2 would not be an optimal use of the P-1 firepower we find ourselves with.
Agreed
Old 2012-01-11, 05:05   #98
chalsall
If I May
 
 
"Chris Halsall"
Sep 2002
Barbados

9,767 Posts

Quote:
Originally Posted by Dubslow
I'm pretty sure I agree here. However, the more I think about it, the more systemic inefficiencies I see; i.e. instead of treating the symptoms like I originally suggested, let's see if we can do anything to stop these B1=B2 P-1s from appearing, or get them a proper stage 2 the first time somehow, or bladeeblabla.
Let's argue this from another angle...

Perhaps those who ask for LL work should only do LL work.

Let the P-1 work be left for those who understand what they're getting themselves into (knowing they won't find a MP with this method), and let those who get LL work assigned without P-1 done ("properly" or not) take the "exposure" of having a slightly lower chance of finding the next MP.
Old 2012-01-11, 20:49   #99
cheesehead
 
 
"Richard B. Woods"
Aug 2002
Wisconsin USA

2²·3·641 Posts

Quote:
Originally Posted by axn
All three scenarios are trivially invalid. Prime95 has a working-memory requirement of about 50 MB, so merely running P95 will cause thrashing.
Let me clarify.

My three scenarios were each intended to have enough free memory so that prime95 could run without thrashing _unless_ it did P-1 stage 2. I thought that intent was obvious, but it wasn't. I should have explained the purpose of the figure instead of just throwing in the 40MB/50MB figure without explaining the significance it had for me.

I apologize for not recognizing earlier that that was the basis of your objection -- that I hadn't explained that the 40MB/50MB was not to be taken literally, but was intended to represent an amount in which prime95 could run without thrashing.

(However, I thought that this:
Quote:
Originally Posted by axn
But I'll assume that there is a certain usage level at which stage 2 will cause thrashing, and it is this that you're interested in (not the specific number 40).
indicated that you DID understand what I meant!!!!)


So, will you please answer the following reworded questions, first with the assumption that the 90% allocation limit is NOT relevant, and second without that assumption?

Suppose that Xmb denotes the amount of memory in which prime95 can run LL, TF, P-1 stage 1, or P-1 stage 2 with "available memory" = no more than 8 MB, without causing any thrashing.

Scenario 1) Suppose a minimum system has applications that use all but Xmb of RAM without thrashing. Then Prime95 is introduced and, except for P-1 stage 2 with "available memory" > 8 MB, exists peacefully without causing any thrashing. But stage 2 with "available memory" > 8 MB does cause thrashing, noticeably slowing other applications. Is that acceptable, in your opinion?

Scenario 2) Suppose a typical system has applications that use all but Xmb of RAM without thrashing. Then Prime95 is introduced and, except for P-1 stage 2 with "available memory" > 8 MB, exists peacefully without causing any thrashing. But stage 2 with "available memory" > 8 MB does cause thrashing, noticeably slowing other applications. Is that acceptable, in your opinion?

Scenario 3) Suppose a maximum system has applications that use all but Xmb of RAM without thrashing. Then Prime95 is introduced and, except for P-1 stage 2 with "available memory" > 8 MB, exists peacefully without causing any thrashing. But stage 2 with "available memory" > 8 MB does cause thrashing, noticeably slowing other applications. Is that acceptable, in your opinion?
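The three scenarios differ only in how much RAM the rest of the system leaves free; their shared structure can be sketched as follows. All the numbers here (RAM sizes, the 50 MB working set, the 256 MB stage 2 allocation) are hypothetical placeholders for Xmb and the stage 2 request, not measured values:

```python
# Model of the scenarios: prime95 thrashes only when its P-1 stage 2
# allocation exceeds the free headroom left by other applications.
def thrashes(total_ram_mb, other_apps_mb, stage2_alloc_mb):
    free_mb = total_ram_mb - other_apps_mb
    return stage2_alloc_mb > free_mb

X_MB = 50  # assumed prime95 working set with "available memory" <= 8 MB

# minimum / typical / maximum systems, each leaving exactly X_MB free
for label, total in (("minimum", 512), ("typical", 2048), ("maximum", 8192)):
    other = total - X_MB
    ok_default = not thrashes(total, other, 8)   # 8 MB default fits
    bad_stage2 = thrashes(total, other, 256)     # large stage 2 swaps
    print(f"{label}: default ok: {ok_default}, 256 MB stage 2 thrashes: {bad_stage2}")
```

In this model the answer is the same for all three systems, which is why the scenarios read identically apart from the system size.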
