mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Lounge (https://www.mersenneforum.org/forumdisplay.php?f=7)
-   -   TF: A job half done? (https://www.mersenneforum.org/showthread.php?t=13948)

davieddy 2010-09-22 13:29

TF: A job half done?
 
As ckdo just pointed out, the TF "wavefront" has just passed 77M.

It's good to know that there is enthusiasm for finding factors
above 2^64, but before an LL test is performed, it is worthwhile to TF from
2^x to 2^(x+1), which takes about as long as all the TFing up to 2^x combined.
Assuming that the machines/participants involved are relatively
more efficient at factoring than LL-testing, wouldn't it be sensible
to ask them to go the last mile?
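The doubling argument can be made concrete with a few lines. This is an illustrative sketch, not Prime95 code; the 1/x per-bit-level factor probability is the heuristic commonly cited in GIMPS discussions, stated here as an assumption:

```python
# TF cost roughly doubles with each bit level: taking level 1 as cost 1,
# level k costs about 2^(k-1) units of work.

def tf_cost_through(bits):
    """Relative cost of TF through every bit level up to `bits`."""
    return sum(2 ** (k - 1) for k in range(1, bits + 1))

def factor_chance(x):
    """Heuristic: a factor lies between 2^x and 2^(x+1) with probability ~1/x."""
    return 1.0 / x

# The single level 67 -> 68 costs 2^67 units; all levels up to 67
# combined cost 2^67 - 1, i.e. essentially the same amount of work.
assert 2 ** 67 == tf_cost_through(67) + 1

print(f"{factor_chance(67):.4f}")  # prints 0.0149
```

The ~1.5% per-level chance this heuristic gives agrees with the figures petrw1 quotes below.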

I think that doing P-1 before the last bit of TF isn't worth the
candle/hassle. I think it may well be a deterrent to the needed
P-1 effort.



David

petrw1 2010-09-22 15:06

[QUOTE=davieddy;230890]As ckdo just pointed out, the TF "wavefront" has just passed 77M.

It's good to know that there is enthusiasm for finding factors
above 2^64, but before an LL test is performed, it is worthwhile to TF from
2^x to 2^(x+1), which takes about as long as all the TFing up to 2^x combined.
Assuming that the machines/participants involved are relatively
more efficient at factoring than LL-testing, wouldn't it be sensible
to ask them to go the last mile?

I think that doing P-1 before the last bit of TF isn't worth the
candle/hassle. I think it may well be a deterrent to the needed
P-1 effort.
David[/QUOTE]

I believe the reasoning for doing that last bit AFTER P-1 was that the P-1 had a better chance of finding a factor per unit of time than the last bit of TF.

My rough example using exponents and settings I have been using:
[CODE]
Exponent: 53,000,001
TF 67-68 : 0.56 GhzDays : about 1.5% chance of finding a factor
P-1 B1=650,000 / B2=17,700,000 : 4.82 GhzDays : 6.8% chance
TF 68-69 : 1.13 GhzDays : 1.47% chance[/CODE]

Using a theoretical PC that does 1 GhzDay of work each day:
TF 67-68 : about 53.5 attempts per month : about 0.8 factors found
P-1 : about 6.25 attempts per month : about 0.42 factors found
TF 68-69 : about 26.5 attempts per month : about 0.39 factors found

My rough calculations give a slight edge to P-1.
I suspect that, depending on the RAM allocated to P-1 and the resulting B1 & B2 bounds, this edge could be larger or smaller.
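For what it's worth, the arithmetic above can be checked in a few lines. The costs and per-attempt chances are the figures quoted in this post; 30 GHz-days per month is just the theoretical 1-GhzDay-per-day machine:

```python
# Expected factors per month for each job type, assuming a machine
# producing 1 GHz-day of work per day (30 GHz-days per month).

ghz_days_per_month = 30.0

jobs = {
    # name: (cost in GHz-days, chance of finding a factor per attempt)
    "TF 67-68": (0.56, 0.015),
    "P-1":      (4.82, 0.068),
    "TF 68-69": (1.13, 0.0147),
}

for name, (cost, chance) in jobs.items():
    attempts = ghz_days_per_month / cost
    print(f"{name}: {attempts:.1f} attempts/month, "
          f"{attempts * chance:.2f} expected factors")

# Prints roughly: TF 67-68 -> 0.80 factors, P-1 -> 0.42, TF 68-69 -> 0.39,
# matching the slight edge for P-1 noted above.
```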

chalsall 2010-09-22 15:38

[QUOTE=petrw1;230907]My rough calculations give a slight edge to P-1. I suspect depending on the RAM you allocated to P-1 and the resulting B1&B2 this could make the edge more or less.[/QUOTE]

But I think this might be part of the issue -- P-1 requires a LOT of RAM to be significantly more effective than TFing.

Meanwhile, the default memory setting for Prime95 is only 8MB....

Personally, I give Prime95 / mprime 64MB. As an example, on one of my quads mprime is currently consuming 73 MB virtual, but only 2.8 MB resident (real) doing TF work. This is easy to donate (along with the CPU usage, of course).

Donating hundreds of MBs or even GBs of RAM [I]isn't[/I] as easy for most people, who use their machines for other purposes.

So, based on this, I agree with davieddy that perhaps P-1 shouldn't hold back the last step of TF work.

Uncwilly 2010-09-22 16:16

One further bit of info. IIRC, George also reasoned that P-1 will capture some of the factors in the range of the last bit level. This gives it a bit more of an edge.

I would suggest that, in the full release of v26, the default daytime memory for P-1 be set to 40MB or better. On a system with 2GB, that is nothing. And maybe the set-up program could ask when "night" should start and end, and whether it may use X MB then. The X could be 100 MB, or 10% of total RAM, or the like.

cheesehead 2010-09-22 17:17

[QUOTE=davieddy;230890]I think that doing P-1 before the last bit of TF isn't worth the candle/hassle.[/QUOTE]There is a sound basis, not mere guesswork, for doing it that way.

There was specific investigation before this change was made. Analysis plus experimentation showed that it was more efficient -- for the overall project, not just for the P-1 step -- to do P-1 before the last bit of TF than to do it after the last bit of TF.

[QUOTE=chalsall;230913]P-1 requires a LOT of RAM to be significantly more effective than TFing.[/QUOTE]No, it doesn't. TF and P-1 search different mathematical spaces. While there is some overlap there, your general statement is not justified.

Saying that P-1 finds more factors if it's given more RAM (all else being equal, and using the GIMPS algorithm for choosing B1/B2 bounds) is valid.

Saying that P-1 with minimum RAM is not as effective as TF is not valid. (One clue to its invalidity is that that claim omits mentioning any effect of setting different search bounds on P-1 and TF. One can construct cases in which P-1 is far more effective than TF, and other cases in which the reverse is true.)

[quote]Meanwhile, the default settings for Prime95 is only 8MB....[/quote]That was taken into account during the analysis.

[quote]Donating hundreds of MBs or even GBs of RAM [I]isn't[/I] as easy to most people who use the machines for other purposes.[/quote]That's why such allocations are not required in order for P-1 assignments to be useful to GIMPS.

The bounds selection algorithm takes "available memory" into account as one of its parameters. For low RAM, it selects higher B1 (and lower B2, or omitting Stage 2) than for high RAM, all else being equal.

[quote]So, based on this, I agree with davieddy that perhaps P-1 shouldn't hold back the last step of TF work.[/quote]Might that be based on incomplete understanding of the whole situation?

chalsall 2010-09-22 18:34

[QUOTE=cheesehead;230941]Might that be based on incomplete understanding of the whole situation?[/QUOTE]

Perhaps. But is that an incomplete understanding on my part? Or yours? (I have learnt in my old age that there is an important difference between the theoretical optimal given unlimited resources, and the optimal which can be achieved in real life with limited resources.)

Let us please examine the empirical. Please look at the Primenet summary at [url]http://www.mersenne.org/primenet/[/url]

Please note that the (non-LMH) factoring wavefront is currently up at 76,000,000.

Please note that the P-1 factoring wavefront is currently up at 53,000,000.

Please note that the LL wavefront is currently up at 50,000,000.

Thus, I conclude that those wishing to do First Time LL tests are being *forced* to do P-1 testing even if they don't want to (unless they understand the undoc.txt parameters to add to their .txt files).

And that is so even if their machines have not been configured with enough memory for P-1 to find factors more effectively than those willing to do TF work can.

QED.

axn 2010-09-22 18:40

[QUOTE=chalsall;230952]Thus, I conclude that those wishing to do First Time LL tests are being *forced* to do P-1 testing even if they don't want to (unless they understand the undoc.txt parameters to add to their .txt files).[/QUOTE]

How would that change if the last bit of TF was done before P-1?

chalsall 2010-09-22 18:52

[QUOTE=axn;230955]How would that change if the last bit of TF was done before P-1?[/QUOTE]

It would mean that the client requesting LL work would understand that it can and should immediately start on LL work, rather than wasting its time on P-1.

Edit: Actually, you raise a very interesting point. Unless I'm wrong (definitely a possibility) if the Prime95 / mprime client is given an exponent to LL test which has only been trial factored below a certain level, it will do TF work to a certain level, then P-1 work, then some more TF work, before actually beginning the LL work it was assigned.

Shouldn't the client instead do [I]exactly[/I] and [B]only[/B] the work it was assigned?

davieddy 2010-09-22 19:18

[QUOTE=chalsall;230913]
So, based on this, I agree with davieddy that perhaps P-1 shouldn't hold back the last step of TF work.[/QUOTE]

[QUOTE=cheesehead;230941]
Might that be based on incomplete understanding of the whole situation?[/QUOTE]

No.
I think it was based on reading posts #1 and 2 more
carefully than you seem to have done.

David

Uncwilly 2010-09-22 19:46

[QUOTE=chalsall;230956]Unless I'm wrong (definitely a possibility) if the Prime95 / mprime client is given an exponent to LL test which has only been trial factored below a certain level, it will do TF work to a certain level, then P-1 work, then some more TF work, before actually beginning the LL work it was assigned.

Shouldn't the client instead do [I]exactly[/I] and [B]only[/B] the work it was assigned?[/QUOTE]The client is doing exactly what it was assigned: test the number. It is given information on what has been done, and thus also on what remains to be done. It does not make sense to waste an LL test on a number that hasn't been TF'ed to a reasonable level.

The separation of work is nice because some, like you, want a lot of smaller work units, and some have slower machines that serve some other function, where LMH-TF helps them earn their keep. While this is nice, the separation of work is not a requirement for achieving the ends of the project. A factor found by TF or P-1 eliminates the need for two LL tests, so these steps save cycles overall. So if someone doing first-time LLs has to do P-1 and one bit of TF first, they are quite likely to get one or more "kills" over the course of their machine's life. Before I went all 100M-TF, I had some. It is part of testing (trying the qualities of) a number.
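The cycle-saving point can be put as a simple break-even test. A hedged sketch: the 40 GHz-day LL cost is an assumed, illustrative figure (not from the thread); the "two tests saved" reflects a first-time LL plus its double-check; the TF/P-1 costs and chances are petrw1's figures from earlier in the thread:

```python
# A factoring step pays off when the LL work it is expected to save
# exceeds its own cost. A found factor removes a first-time LL test and
# its double-check, i.e. roughly two tests.

def factoring_worthwhile(step_cost, factor_chance, ll_cost, tests_saved=2.0):
    """True if the expected LL GHz-days saved exceed the factoring cost."""
    return factor_chance * tests_saved * ll_cost > step_cost

LL_COST = 40.0  # assumed GHz-days for one LL test at this exponent size

print(factoring_worthwhile(1.13, 0.0147, LL_COST))  # last TF bit -> True
print(factoring_worthwhile(4.82, 0.068, LL_COST))   # P-1 -> True
```

Under these assumed numbers both steps clear the bar, which is consistent with the project doing both before every first-time LL; the disagreement in this thread is about the order, not about whether to do them.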

Prime95 2010-09-22 19:47

[QUOTE=cheesehead;230941]There is a sound basis, not mere guesswork, for doing it that way.

There was specific investigation before this change was made. Analysis plus experimentation showed that it was more efficient -- for the overall project, not just for the P-1 step -- to do P-1 before the last bit of TF than to do it after the last bit of TF.[/QUOTE]

You and davieddy are both correct. If GIMPS had infinite resources, it would be best to do the P-1 step before the last TF bit.

However, GIMPS has a surplus of TF resources and a deficiency in P-1 resources. Under GIMPS' current scheme, when the LL wavefront passes the P-1 wavefront, LL testers will be assigned exponents that need P-1, TF, and then LL testing. This increases our surplus of TF resources.

What to do?

You could argue that since there are an infinite number of exponents then a surplus of TF resources is no big deal -- we need that work done someday anyway. Or you could argue that doing TF on exponents that won't be needed for many years slows down our discovery of new Mersenne primes. There is no right or wrong answer.

I have, on occasion, had the server send out TF assignments for the last TF bit before P-1 has been done. I'd rather get away from this kind of manual intervention. The extra throughput from doing the last TF bit after P-1 may not be worth the hassle.

