mersenneforum.org  

mersenneforum.org > Great Internet Mersenne Prime Search > PrimeNet

Old 2009-02-24, 02:13   #144
Prime95
Quote:
Originally Posted by sichase View Post
As old assignments (presumably from the V4 server) have aged out, many exponents 40M+ have appeared as needing assignment for P-1. But the server is only assigning exponents in the 49M-50M range.
The server is preferring to hand out P-1 assignments where the final two bits of trial factoring have not been completed. This will allow the TF'ers time to do the final two bits after P-1 and before the exponents are handed out for LL testing. This is best for overall throughput of GIMPS.
Old 2009-02-24, 03:13   #145
James Heinrich
Quote:
Originally Posted by Prime95 View Post
...P-1 assignments where the final two bits of trial factoring have not been completed.... This is best for overall throughput of GIMPS.
Hi George, welcome back. Have you had a chance to ponder the consequences of the question that was presented a little while back (post 75 of this thread):
Quote:
...since Prime95 now does the last 2 bitdepths of TF after P-1, does it make sense for P-1 to pick bounds based on where the number should/will be TF'd to, as opposed to where it actually is? For example:
>> Assuming no factors below 2^75 and 2 primality tests saved if a factor is found.
this exponent will be TF'd to 2^77 after I've done P-1 (assuming no factor is found) -- would it make sense for P-1 to assume there are no factors <2^77 instead, if that would let it spend more effort elsewhere? I guess I'm asking if there's overlap between what factors P-1 could find and those that the last 2 bits of TF could find?

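The overlap question can be made concrete. P-1 stage 1 finds a factor q of 2^p-1 precisely when q-1 is B1-smooth, while TF finds factors purely by size, so the two overlap only on small factors whose q-1 happens to be smooth. A toy stage-1 sketch (illustrative only -- not Prime95's implementation, which adds a stage 2 and many optimizations; note every factor of 2^p-1 has the form 2kp+1, so p always divides q-1 and is folded into the exponent):

```python
from math import gcd

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def pm1_stage1(p, B1):
    """Pollard P-1 stage 1 on N = 2^p - 1.

    Finds any factor q whose q-1 is composed only of primes <= B1
    (times p itself, which divides q-1 for every Mersenne factor).
    Returns a gcd in (1, N) on success, 1 on failure, or N if the
    bound was so generous that every factor was caught at once.
    """
    N = (1 << p) - 1
    a = pow(3, p, N)            # base 3; base 2 is degenerate mod 2^p - 1
    for q in primes_up_to(B1):
        e = q
        while e * q <= B1:      # raise q to its largest power <= B1
            e *= q
        a = pow(a, e, N)
    return gcd(a - 1, N)
```

For example, with p = 29 (2^29-1 = 233 * 1103 * 2089), B1 = 10 returns 233 * 2089 = 486737: both 232 = 2^3*29 and 2088 = 2^3*3^2*29 are 10-smooth apart from p, but 1102 = 2*19*29 needs B1 >= 19, so 1103 is exactly the kind of factor that only deeper TF (or larger bounds / stage 2) would catch.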
Old 2009-02-24, 04:15   #146
Prime95
Quote:
Originally Posted by James Heinrich View Post
Hi George, welcome back.
I'm not back yet! I'm in Uluru, a.k.a. Ayers Rock. Uluru is the Aboriginal name - loosely translated it means: "Hot land of a billion flies that want to swarm around your head".
Old 2009-02-24, 04:34   #147
Prime95
As to #75, ask akruppa. It is a complicated feedback optimization problem.

Also, over-optimizing is pointless. Perfect optimization for a Core 2 is wrong for a Phenom, wrong for a P4, and wrong for an i7. Worse still, the TF, P-1, LL, and double-check are likely to be done on completely different architectures. The best we can ever hope for is "good enough" optimization, not "perfect" optimization.
Old 2009-04-08, 17:47   #148
jmb1982
Hi. Is P-1 pushing still needed? LL or DC tests take too long on my laptop, so I thought about switching to P-1.

Greetings

Jens
Old 2009-04-08, 20:33   #149
James Heinrich
P-1 will undoubtedly always need more "pushers", and it benefits most from machines with a generous amount of available RAM: for current P-1 assignments I'd say (per worker) 512MB is the low end, 1GB is good, and 2GB+ is more than plenty. You can get by with less assigned RAM, but at lower efficiency.
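For context, the RAM Prime95 may use for P-1 stage 2 is set with the Memory= line in local.txt (value in megabytes). A sketch of the day/night split form -- this syntax is from memory of v25-era clients, so check your client's readme.txt for the exact format:

```
Memory=512 during 8:00-17:00 else 1536
```

This would cap stage 2 at 512MB during working hours and allow 1536MB overnight, which is when most large stage-2 runs get their best bounds.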
Old 2009-04-08, 20:58   #150
garo
Yes, P-1 is severely underpowered and will soon be a bottleneck for the project, so any help is appreciated.
Old 2009-04-08, 23:31   #151
cheesehead ("Richard B. Woods")
Quote:
Originally Posted by garo View Post
Yes P-1 is severely underpowered and is soon going to be a bottleneck for the project.
If a bottleneck occurs, PrimeNet will simply have to assign L-Ls of exponents that haven't been P-1ed, so that assignees do the P-1 before the L-L. It may help to explain to them that if they skip/abort the P-1 they risk having their L-L credit suddenly disappear later.

I can see that that will result in lots of stage-2-less P-1. But that's what will have to be done, unless PrimeNet assigns P-1-only to folks ("whatever makes sense") hoping for an L-L instead.

- - -

Hey! Here's an idea -- Come up with a way to give L-L credit to those who agree to do P-1 as the initial step of an L-L assignment. After all, they're both FFT-heavy.

Maybe: Add a PrimeNet option to convert P-1 credit earned immediately prior to an L-L on the same exponent. After the P-1/L-L combination assignment is successfully completed, the assignee has the privilege of going to a page that converts the P-1 credit into equivalent L-L credit.

Old 2009-04-09, 06:51   #152
ckdo
Quote:
Originally Posted by cheesehead View Post
If a bottleneck occurs, PrimeNet will simply have to assign L-Ls of exponents that haven't been P-1ed, so that assignees do the P-1 before the L-L.
As I see it, P-1 already is a bottleneck to the project, but mainly for the TF-LMH folks.

Looking at the exponent status distribution, there are fewer than 1,000 P-1 tests assigned in the 44M-51M range and 100,000+ available. Most of these will need the last 2 bits of TF to be done as well.

On the other hand, the TF(-LMH) wave is about to hit 70M rather soon, leaving little room to maneuver for "classic" LMH.

Now if PrimeNet would assign the last two bits of TF even on exponents which haven't been P-1'd yet, that would (a) remove the need for a lot of P-1 and LL tests, (b) allow for better P-1 bounds on the remaining exponents, removing the need for even more LL tests, (c) let LL assignments finish more quickly and finally (d) leave more room for "classic" LMH.

Looks like the way to go, at least to me. BTW, in my tests P-1 took about 5 times as long as the last 2 bits of TF, with a chance of finding a factor less than 3 times as high...
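ckdo's cost/benefit claim can be put into rough numbers with the standard GIMPS rule of thumb that 2^p-1 has a factor between 2^b and 2^(b+1) with probability about 1/b. Taking his figures (P-1 costs ~5x the time of the last two TF bits, for a chance of a factor less than 3x as high) purely as assumptions:

```python
def tf_chance(b, bits=2):
    """Heuristic chance that trial factoring from 2^b up to 2^(b+bits)
    finds a factor of 2^p - 1: roughly 1/b per bit level."""
    return sum(1.0 / (b + i) for i in range(bits))

tf = tf_chance(75)        # last two bits, 2^75 -> 2^77: about 2.65%
p1_chance = 3 * tf        # ckdo's upper bound on the P-1 chance
p1_cost = 5               # P-1 took ~5x as long in his tests
print(f"TF  yield per unit time: {tf:.4f}")
print(f"P-1 yield per unit time: {p1_chance / p1_cost:.4f}")
```

On those assumptions TF returns roughly 0.0265 factors per unit of effort versus about 0.0159 for P-1, which is ckdo's point; the real tradeoff also depends on how many LL tests each found factor actually saves.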
Old 2009-04-09, 14:39   #153
Mr. P-1
Quote:
Originally Posted by cheesehead View Post
If a bottleneck occurs, PrimeNet will simply have to assign L-Ls of exponents that haven't been P-1ed, so that assignees do the P-1 before the L-L.
That's what happens now.

Quote:
I can see that that will result in lots of stage-2-less P-1.
That's also what happens now. I get a lot of my factors from stage 2 and very few of them would have been discovered by the deeper stage 1 that low memory machines do. This is a real loss to the project.

I recommend anyone able to commit a significant amount of memory to devote a core or two to P-1. Those with little or no memory should consider doing doublechecks, which have all been P-1ed, so as not to miss factors that another assignee might find.
Old 2009-04-09, 14:49   #154
petrw1 ("Wayne")
Quote:
Originally Posted by ckdo View Post
As I see it, P-1 already is a bottleneck to the project, but mainly for the TF-LMH folks.

Looking at the exponent status distribution, the are less than 1,000 P-1 tests assigned in the 44M-51M range and 100,000+ available. Most of these will need the last 2 bits of TF to be done as well....
I believe the reasoning for this is that a P-1 test is more efficient and effective than the last 2 bits of TF. As I understand it, if P-1 does NOT find a factor, the server will then assign the last 2 bits of TF before the LL test.

Quote:
On the other hand, the TF(-LMH) wave is about to hit 70M rather soon, leaving little room to maneuvre for "classic" LMH.
I have one PC doing TF-LMH and it is getting assignments in the 358M range.
Another machine that is doing TF is getting assignments in the 67M range.


Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ.