mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   PrimeNet (https://www.mersenneforum.org/forumdisplay.php?f=11)
-   -   P-1 factoring anyone? (https://www.mersenneforum.org/showthread.php?t=11101)

Prime95 2009-02-24 02:13

[QUOTE=sichase;163702]As old assignments (presumably from the V4 server) have aged out, many exponents 40M+ have appeared as needing assignment for P-1. But the server is only assigning exponents in the 49M-50M range. [/QUOTE]

The server is preferring to hand out P-1 assignments where the final two bits of trial factoring have not been completed. This will allow the TF'ers time to do the final two bits after P-1 and before the exponents are handed out for LL testing. This is best for overall throughput of GIMPS.

James Heinrich 2009-02-24 03:13

[QUOTE=Prime95;163751]...P-1 assignments where the final two bits of trial factoring have not been completed.... This is best for overall throughput of GIMPS.[/QUOTE]Hi George, welcome back. Have you had a chance to ponder the consequences of the question that was presented a little while back ([url=http://www.mersenneforum.org/showpost.php?p=154223&postcount=75]post 75 of this thread[/url]):[quote]...since Prime95 now does the last 2 bitdepths of TF after P-1, [b]does it make sense for P-1 to pick bounds based on where the number should/will be TF'd to, as opposed to where it actually is?[/b] For example:
>> [color=blue]Assuming no factors below 2^75 and 2 primality tests saved if a factor is found.[/color]
this exponent will be TF'd to 2^77 after I've done P-1 (assuming no factor is found) -- would it make sense for P-1 to assume there are no factors <2^77 instead, if that would let it spend more effort elsewhere? I guess I'm asking if there's overlap between what factors P-1 could find, and those that the last 2 stages of TF could find?[/quote]
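The size of the overlap James asks about can be roughed out with the usual GIMPS rule of thumb that a factor of 2^p-1 falls in [2^b, 2^(b+1)] with probability about 1/b. This sketch is mine, not from the thread, and the 1/b figure is a heuristic, not an exact result:

```python
# Rule-of-thumb estimate (a heuristic assumption, not exact theory):
# the chance that 2^p - 1 has a prime factor in [2^b, 2^(b+1)] is ~1/b.
def prob_factor_in_bits(lo_bits, hi_bits):
    """Heuristic chance of a factor between 2^lo_bits and 2^hi_bits."""
    return sum(1.0 / b for b in range(lo_bits, hi_bits))

# Factors that the last two TF bits (2^75 -> 2^77) could still find,
# i.e. the window where P-1's "no factors below" assumption would
# change if it assumed 2^77 instead of 2^75:
overlap = prob_factor_in_bits(75, 77)
print(f"~{overlap:.1%} of exponents have a factor in 2^75..2^77")  # ~2.6%
```

So only a couple of percent of candidates even have a factor in that window, which bounds how much difference the assumed TF depth can make to the P-1 bounds.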

Prime95 2009-02-24 04:15

[QUOTE=James Heinrich;163761]Hi George, welcome back. [/QUOTE]

I'm not back yet! I'm in Uluru, a.k.a. Ayers Rock. Uluru is the Aboriginal name - loosely translated it means: "Hot land of a billion flies that want to swarm around your head".

Prime95 2009-02-24 04:34

As to #75, ask akruppa. It is a complicated feedback optimization problem.

Also, over-optimizing is pointless. Perfect optimization for a Core 2 is wrong for a Phenom and wrong for a P4 and wrong for an i7. Worse still is that the TF, P-1, LL, and double-check are likely to be done on completely different architectures. The best we can ever hope for is "good enough" optimization, not "perfect" optimization.

jmb1982 2009-04-08 17:47

Hi. Is P-1 pushing still needed? LL or DC tests take too long on my laptop, so I thought about switching to P-1.

Greetings

Jens

James Heinrich 2009-04-08 20:33

P-1 will undoubtedly [i]always[/i] need more "pushers", and it benefits most from those with a generous amount of available RAM (for current P-1 assignments I'd say, per worker: 512MB at the low end, 1GB is good, 2GB+ is more than plenty). You can get by with less assigned RAM, but at lower efficiency.
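For reference, the stage-2 memory allowance is set in Prime95's local.txt. A sketch (the exact day/night syntax follows my recollection of the Prime95 documentation and may differ by version; check readme.txt/undoc.txt for your build):

```ini
; local.txt -- megabytes Prime95 may use for P-1 stage 2
Memory=1024
; or separate day/night allowances (syntax may vary by version):
; Memory=512 during 7:30-23:30 else 1024
```

The same allowance can also be set from the GUI via Options > CPU in most versions.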

garo 2009-04-08 20:58

Yes, P-1 is severely underpowered and is soon going to be a bottleneck for the project. So any help is appreciated.

cheesehead 2009-04-08 23:31

[quote=garo;168536]Yes P-1 is severely underpowered and is soon going to be a bottleneck for the project.[/quote]If a bottleneck occurs, PrimeNet will simply have to assign L-Ls of exponents that haven't been P-1ed, so that assignees do the P-1 before the L-L. It may help to explain to them that if they skip/abort the P-1 they risk having their L-L credit suddenly disappear later.

I can see that that will result in lots of stage-2-less P-1. But that's what will have to be done, unless PrimeNet assigns P-1-only to folks ("whatever makes sense") hoping for an L-L instead.

- - -

Hey! Here's an idea -- Come up with a way to give L-L credit to those who agree to do P-1 as the initial step of an L-L assignment. After all, they're both FFT-heavy.

Maybe: Add a PrimeNet option to convert P-1 credit earned for P-1 work done immediately prior to an L-L on the same exponent. After the P-1/LL combination assignment is successfully completed, the assignee has the privilege of going to a page that converts the P-1 credit to an equivalent L-L credit.

ckdo 2009-04-09 06:51

[quote=cheesehead;168551]If a bottleneck occurs, PrimeNet will simply have to assign L-Ls of exponents that haven't been P-1ed, so that assignees do the P-1 before the L-L.[/quote]

As I see it, P-1 already is a bottleneck to the project, but mainly for the TF-LMH folks.

Looking at the exponent status distribution, there are fewer than 1,000 P-1 tests assigned in the 44M-51M range and 100,000+ available. Most of these will need the last 2 bits of TF to be done as well.

On the other hand, the TF(-LMH) wave is about to hit 70M rather soon, leaving little room to maneuver for "classic" LMH.

Now if PrimeNet would assign the last two bits of TF even on exponents which haven't been P-1'd yet, that would (a) remove the need for a lot of P-1 and LL tests, (b) allow for better P-1 bounds on the remaining exponents, removing the need for even more LL tests, (c) let LL assignments finish more quickly and finally (d) leave more room for "classic" LMH.

Looks like the way to go, at least to me. BTW in my tests P-1 took about 5 times as long as the last 2 bits of TF with a chance of finding a factor less than 3 times as high...
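ckdo's closing comparison works out as a simple yield calculation. The numbers below are the illustrative figures from the post itself (5x the time, at most 3x the chance), not measurements of mine:

```python
# Back-of-envelope from the figures in the post (illustrative only):
# P-1 took ~5x as long as the last 2 bits of TF, for at most ~3x the
# chance of finding a factor.
p = 1 / 75 + 1 / 76                 # ~2.6% chance for 2 TF bits (1/b heuristic)
tf_time, tf_prob = 1.0, p
p1_time, p1_prob = 5.0, 3 * p       # "less than 3 times as high" -> upper bound

tf_yield = tf_prob / tf_time        # factors found per unit of work
p1_yield = p1_prob / p1_time
print(f"TF yields ~{tf_yield / p1_yield:.2f}x the factors per unit time")
```

By these numbers the last two TF bits are at least ~1.7x more cost-effective than the P-1 run, which is the point being made (the real ratio would be higher, since 3x was an upper bound on the P-1 chance).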

Mr. P-1 2009-04-09 14:39

[QUOTE=cheesehead;168551]If a bottleneck occurs, PrimeNet will simply have to assign L-Ls of exponents that haven't been P-1ed, so that assignees do the P-1 before the L-L.[/QUOTE]

That's what happens now.

[QUOTE]I can see that that will result in lots of stage-2-less P-1.[/QUOTE]

That's also what happens now. I get a lot of my factors from stage 2 and very few of them would have been discovered by the deeper stage 1 that low memory machines do. This is a real loss to the project.

I recommend anyone able to commit a significant amount of memory to devote a core or two to P-1. Those with little or no memory should consider doing doublechecks, which have all been P-1ed, so as not to miss factors that another assignee might find.

petrw1 2009-04-09 14:49

[QUOTE=ckdo;168591]As I see it, P-1 already is a bottleneck to the project, but mainly for the TF-LMH folks.

Looking at the exponent status distribution, there are fewer than 1,000 P-1 tests assigned in the 44M-51M range and 100,000+ available. Most of these will need the last 2 bits of TF to be done as well....[/QUOTE]

I believe the reasoning for this is that a P-1 test is more efficient/effective than the last 2 bits of TF. As I understand it, if P-1 does NOT find a factor, THEN the last 2 bits of TF will be assigned before the LL test.:question:

[QUOTE]On the other hand, the TF(-LMH) wave is about to hit 70M rather soon, leaving little room to maneuvre for "classic" LMH.[/QUOTE]

I have one PC doing TF-LMH and it is getting assignments in the 358M range.
Another machine that is doing TF is getting assignments in the 67M range.

